
add list of abbreviations

Carsten Porth committed 5 years ago
commit b5b2a677fe
29 changed files with 178 additions and 153 deletions
  1. +2 -0   thesis/Masterthesis - Hybrid Online Social Networks - Carsten Porth.tex
  2. +28 -0  thesis/abbreviations.tex
  3. +1 -1   thesis/content/02-background.tex
  4. +9 -9   thesis/content/02-background/dapps.tex
  5. +11 -11 thesis/content/02-background/p2p.tex
  6. +5 -11  thesis/content/02-background/software-system-architecture.tex
  7. +3 -3   thesis/content/03-related-work.tex
  8. +8 -8   thesis/content/03-related-work/activitypub.tex
  9. +2 -2   thesis/content/03-related-work/akasha.tex
  10. +3 -3  thesis/content/03-related-work/diaspora.tex
  11. +2 -2  thesis/content/03-related-work/facecloak.tex
  12. +4 -4  thesis/content/03-related-work/peepeth.tex
  13. +4 -4  thesis/content/03-related-work/twitterize.tex
  14. +1 -1  thesis/content/04-concept.tex
  15. +1 -1  thesis/content/04-concept/introduction.tex
  16. +6 -6  thesis/content/04-concept/requirements.tex
  17. +7 -7  thesis/content/04-concept/restrictions.tex
  18. +7 -7  thesis/content/04-concept/solution-strategy-architecture.tex
  19. +4 -4  thesis/content/04-concept/solution-strategy-client.tex
  20. +4 -4  thesis/content/04-concept/stakeholders.tex
  21. +19 -19 thesis/content/05-proof-of-concept/building-block-view.tex
  22. +3 -3  thesis/content/05-proof-of-concept/insights.tex
  23. +1 -1  thesis/content/05-proof-of-concept/introduction.tex
  24. +2 -2  thesis/content/05-proof-of-concept/objective.tex
  25. +13 -13 thesis/content/05-proof-of-concept/osn-selection.tex
  26. +6 -6  thesis/content/05-proof-of-concept/runtime-view.tex
  27. +5 -5  thesis/content/05-proof-of-concept/security.tex
  28. +16 -16 thesis/content/06-discussion/threat-model.tex
  29. +1 -0  thesis/header.tex

+ 2 - 0
thesis/Masterthesis - Hybrid Online Social Networks - Carsten Porth.tex

@@ -57,6 +57,8 @@
   %Run bibtex to generate bibliography
   \bibliography{bib/bibliography}{}
   \bibliographystyle{plain}
+  
+  \input{abbreviations}
 
 \end{document}
 

+ 28 - 0
thesis/abbreviations.tex

@@ -0,0 +1,28 @@
+\chapter*{Acronyms}
+\addcontentsline{toc}{chapter}{Acronyms}
+
+\begin{acronym}[JSON-LD]
+	\acro{AES}{Advanced Encryption Standard}
+	\acro{API}{Application Programming Interface}
+	\acro{AWS}{Amazon Web Services}
+	\acro{CBC}{Cipher Block Chaining}
+	\acro{CPU}{Central Processing Unit}
+	\acro{CSS}{Cascading Style Sheets}
+	\acro{dApp}{Decentralized Application}
+	\acro{DB}{Database}
+	\acro{DHT}{Distributed Hash Table}
+	\acro{DOM}{Document Object Model}
+	\acro{dWeb}{Decentralized Web}
+	\acro{GCM}{Galois/Counter Mode}
+	\acro{HTML}{Hypertext Markup Language}
+	\acro{HTTP}{Hypertext Transfer Protocol}
+	\acro{IP}{Internet Protocol}
+	\acro{IPFS}{InterPlanetary File System}
+	\acro{JSON}{JavaScript Object Notation}
+	\acro{JSON-LD}{JavaScript Object Notation for Linked Data}
+	\acro{NFC}{Near Field Communication}
+	\acro{OSN}{Online Social Network}
+	\acro{P2P}{Peer-to-Peer}
+	\acro{TTL}{Time to Live}
+	\acro{W3C}{World Wide Web Consortium}
+\end{acronym}
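For reference, the \acro entries above only define the long forms; the \ac, \acf, and \acp macros used throughout the rest of this commit come from the acronym package, which has to be loaded in the preamble. A minimal usage sketch, assuming the one-line addition to thesis/header.tex (listed in the summary above but not shown in this diff) is the corresponding \usepackage line:

\usepackage{acronym}  % in the preamble, presumably the +1 line in thesis/header.tex

% in the body text:
\ac{OSN}   % full form on first use, "OSN" afterwards
\acf{OSN}  % always the full form: "Online Social Network (OSN)"
\acs{OSN}  % always the short form: "OSN"
\acp{OSN}  % plural variant of \ac

The optional argument [JSON-LD] of the acronym environment sets the width of the label column to the longest short form so that the printed list stays aligned.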

+ 1 - 1
thesis/content/02-background.tex

@@ -11,7 +11,7 @@
 \label{sec:p2p}
 \input{content/02-background/p2p}
 
-\section{Web 3.0 - Distributed Apps (dApps)}
+\section{Web 3.0 - Decentralized Applications (dApps)}
 \label{sec:dapps}
 \input{content/02-background/dapps}
 

+ 9 - 9
thesis/content/02-background/dapps.tex

@@ -2,32 +2,32 @@ The term Web 1.0 refers to the beginnings of the Internet, which consisted of si
 
 With the next version 3.0 of the web, more transparency, security, and fairness should be created. However, while there is broad agreement on what is meant by the terms Web 1.0 and Web 2.0, there is no uniform definition of Web 3.0 that has prevailed to date. There are many ideas, but no final solution yet.
 
-An understanding of what Web 3.0 is, is all about decentralization, which is why it is also called the decentralized Web (dWeb). In this context, Web 3.0 is considered an umbrella term for a group of emerging technologies such as blockchain, crypto currencies, and distributed systems that are interconnected to create novel applications, so-called dApps (decentralized apps). Although decentralized applications have existed for a long time (e.g., BitTorrent), these applications do not meet the criteria of a dApp (see next section).
+One common understanding of Web 3.0 revolves around decentralization, which is why it is also called the \acf{dWeb}. In this context, Web 3.0 is considered an umbrella term for a group of emerging technologies such as blockchain, cryptocurrencies, and distributed systems that are interconnected to create novel applications, so-called \acfp{dApp}. Although decentralized applications have existed for a long time (e.g., BitTorrent), these applications do not meet the criteria of a \ac{dApp} (see next section).
 
-In the following, the characteristics of a dApp are described and individual, essential components are examined in more detail.
+In the following, the characteristics of a \ac{dApp} are described and individual, essential components are examined in more detail.
 
-\subsection{Characteristics of a dApp}
+\subsection{Characteristics of a \ac{dApp}}
 \label{sec:dapp-characterisitics}
-Just as with client-server applications, a dApp is also divided into front and back end. The main difference is that the back end is not represented by a centralized server and thus a single point of failure, but by code that is executed in a decentralized P2P network. In the back end so-called smart contracts are used to perform logical operations, and instead of a database, a blockchain is used. There are no special requirements for the front end so that it can be displayed as an app or website. It must only be possible to execute calls from the front end to the back end.
+Just as with client-server applications, a \ac{dApp} is divided into front end and back end. The main difference is that the back end is not represented by a centralized server, and thus a single point of failure, but by code that is executed in a decentralized \ac{P2P} network. In the back end, so-called smart contracts are used to perform logical operations, and instead of a database, a blockchain is used. There are no special requirements for the front end; it can be an app or a website. It must only be possible to execute calls from the front end to the back end.
 
-Johnston et al. and Siraj Raval name the following four criteria for a dApp \cite{johnston2015dapp,raval2016decentralized}:
+Johnston et al. and Siraj Raval name the following four criteria for a \ac{dApp} \cite{johnston2015dapp,raval2016decentralized}:
 
 \begin{itemize}
 	\item \textbf{Open Source}: Trust and transparency are created by the disclosure of the source code. This also enables improvement through contributions from other developers.
 	\item \textbf{Blockchain based}: The data of the application is stored cryptographically in a public, decentralized blockchain. The immutability of the blockchain can be used in conjunction with smart contracts to build consensus and trust.
 	\item \textbf{Cryptographic token}: Certain actions in the network can only be performed by paying with a token. This token can either be purchased or given to the user in exchange for data, storage space or similar. Especially as a reward for positive actions, users should be rewarded with tokens. However, no entity should have control over the majority of the tokens.
-	\item \textbf{Token generation}: The application must generate tokens according to a standard cryptographic algorithm. (e.g. Proof of Work Algorithm in Bitcoin)
+	\item \textbf{Token generation}: The application must generate tokens according to a standard cryptographic algorithm (e.g., the Proof of Work algorithm in Bitcoin).
 \end{itemize}
 
-In addition to the criteria of a dApp, Johnston et al. also describe a classification system for dApps \cite{johnston2015dapp}:
+In addition to the criteria of a \ac{dApp}, Johnston et al. also describe a classification system for \acp{dApp} \cite{johnston2015dapp}:
 
 \begin{itemize}
 	\item \textbf{Type 1}: Use their own blockchain (e.g. Bitcoin, Ethereum, EOS).
 	\item \textbf{Type 2}: Protocols, to use another blockchain of type 1 with own tokens (e.g. Omni Protocol).
-	\item \textbf{Type 3}: Protocols with own tokens, which in turn use protocols of a dApp of type 2.
+	\item \textbf{Type 3}: Protocols with own tokens, which in turn use protocols of a \ac{dApp} of type 2.
 \end{itemize}
 
-The advantages of a dApp include not only reliability but also the avoidance of censorship and manipulation by design. Furthermore, there are no dependencies towards a service provider. However, the development of a dApp is complex and it is difficult to install updates and bugfixes. Currently, there are still very few dApps, so interactions between different dApps are not able to create synergy effects.
+The advantages of a \ac{dApp} include not only reliability but also the avoidance of censorship and manipulation by design. Furthermore, there are no dependencies on a service provider. However, the development of a \ac{dApp} is complex, and it is difficult to roll out updates and bugfixes. Currently, there are still very few \acp{dApp}, so interactions between different \acp{dApp} cannot yet create synergy effects.
 
 \subsection{Ethereum}
 \label{sec:ethereum}

+ 11 - 11
thesis/content/02-background/p2p.tex

@@ -1,34 +1,34 @@
-The distinctive feature of peer to peer (P2P) systems is that each participant has the role of both a server and a client. The participants are therefore equal with each other and provide each other with services, which is reflected in the naming. P2P networks are usually characterized as overlay networks over the Internet. Concerning the structure of the overlay network, a distinction is made between structured and unstructured networks. The P2P principle became mainly well known in 1999 with Napster. With the file-sharing application Napster, it was possible to exchange (mainly copyrighted) songs among the participants without having to offer them from a central server.
+The distinctive feature of \ac{P2P} systems is that each participant has the role of both a server and a client. The participants are therefore equal to one another and provide each other with services, which is reflected in the naming. \ac{P2P} networks are usually characterized as overlay networks over the Internet. Concerning the structure of the overlay network, a distinction is made between structured and unstructured networks. The \ac{P2P} principle became widely known in 1999 with Napster. With the file-sharing application Napster, it was possible to exchange (mainly copyrighted) songs among the participants without having to offer them from a central server.
 
-Popular applications of P2P networks are file sharing (e.g., BitTorrent), instant messaging (e.g., Skype) and blockchain technology (e.g., Bitcoin).
+Popular applications of \ac{P2P} networks are file sharing (e.g., BitTorrent), instant messaging (e.g., Skype) and blockchain technology (e.g., Bitcoin).
 
-Their independence particularly characterizes P2P networks: there are no control points and not necessarily a fixed infrastructure. This also has a positive effect on operating costs. Besides, P2P networks are self-organized and self-scaling, as each additional user contributes its resources.
+\ac{P2P} networks are particularly characterized by their independence: there are no control points and not necessarily a fixed infrastructure. This also has a positive effect on operating costs. Besides, \ac{P2P} networks are self-organized and self-scaling, as each additional user contributes his or her own resources.
 
-However, there are also some challenges in P2P networks that need to be solved for successful operation. These include finding peers in the network (peer discovery) and finding resources (resource discovery). Especially in file sharing networks, solutions have to be found how to motivate users to upload data and not only use the download one-sidedly. The replication of data and the associated availability must also be taken into account in solutions. Another critical issue is the Internet connection of individual participants, which may not be powerful or permanent.
+However, there are also some challenges in \ac{P2P} networks that need to be solved for successful operation. These include finding peers in the network (peer discovery) and finding resources (resource discovery). Especially in file-sharing networks, solutions have to be found for motivating users to upload data instead of only downloading. The replication of data and the associated availability must also be taken into account. Another critical issue is the Internet connection of individual participants, which may not be powerful or permanent.
 
-\subsection{Unstructured P2P}
+\subsection{Unstructured \ac{P2P} Networks}
 \label{sec:unstructured-p2p}
-In unstructured P2P networks there are no specifications for the overlay network, so the peers are only loosely connected. Due to the loose structures, the failure of one peer has no significant influence on the function of the rest of the network. Another advantage of the unstructured topology is the lower vulnerability.
+In unstructured \ac{P2P} networks there are no specifications for the overlay network, so the peers are only loosely connected. Due to the loose structures, the failure of one peer has no significant influence on the function of the rest of the network. Another advantage of the unstructured topology is the lower vulnerability.
 
-While such loose structures are easy to create, the performance of the entire network suffers. A multicast request is sent to all connected peers, who forward the request again and flood the entire network. If a peer can respond to the request, it responds by the same route that the request used to reach it. Each request has a validity time (time to live, TTL) before it is discarded. Popular files with wide distribution can thus be found quickly. However, rare files are difficult to find because the TTL may have been reached before. A flooding search is not efficient and provides a large amount of signaling traffic. An example of this approach is Gnutella.
+While such loose structures are easy to create, the performance of the entire network suffers. A multicast request is sent to all connected peers, who forward the request again and flood the entire network. If a peer can respond to the request, it responds by the same route that the request used to reach it. Each request has a validity time (\ac{TTL}) before it is discarded. Popular files with wide distribution can thus be found quickly. However, rare files are difficult to find because the \ac{TTL} may be reached before they are found. A flooding search is not efficient and produces a large amount of signaling traffic. An example of this approach is Gnutella.
 
 In order to counteract the problem of inefficient and complicated searching for resources, in other network implementations, central points are created to answer the search requests. These include Napster, FastTrack, and BitTorrent. Figure \ref{fig:unstructured-p2p} shows a comparison of the search process in the respective networks.
 
 \begin{figure}[h!]
 	\centering
 	\includegraphics[width=0.85\textwidth]{unstructured-p2p-networks}
-	\caption{Search process in unstructured P2P systems \cite{moltchanov2014p2p-networks}}
+	\caption{Search process in unstructured \ac{P2P} systems \cite{moltchanov2014p2p-networks}}
 	\label{fig:unstructured-p2p}
 \end{figure}
 
-\subsection{Structured P2P}
+\subsection{Structured \ac{P2P} Networks}
 \label{sec:structured-p2p}
 By defining a particular structure, for example a circle or a tree, search processes in the overlay network can be designed efficiently and deterministically. In such structured networks, compliance with the structure is strictly controlled. Routing algorithms determine how a node is arranged in the overlay network. The performance of the entire network depends directly on the arrangement of the nodes and how quick changes (joining and leaving nodes) are detected.
 
-Usually, the routing algorithms are based on a Distributed Hash Table (DHT). Hash tables are data structures in which key-value pairs are stored, whereby the key must be unique. The corresponding value can then be queried via the key. The keys are ids, which are generated with a hash function (e.g., SHA-1). For the addresses of the nodes and the files, ids are created equally, so that they lie in the same address space. For finding a file, it is searched at the node with the same or the next larger id. If it is not available there, it does not exist on the network.
+Usually, the routing algorithms are based on a \ac{DHT}. Hash tables are data structures in which key-value pairs are stored, whereby the key must be unique. The corresponding value can then be queried via the key. The keys are IDs generated with a hash function (e.g., SHA-1). IDs are created in the same way for node addresses and for files, so that both lie in the same address space. To find a file, it is looked up at the node with the same or the next larger ID; if it is not available there, it does not exist in the network.
 
 For joining a network, either one or more peers must be known as the entry point, or this information must be obtained from a bootstrap server. When entering a structured network, the joining node is assigned a unique id and thus positions itself in the structure. The routing tables of the nodes affected by the structural change must then be updated.
 
 When leaving a network, this happens either gracefully, and all affected nodes are informed to update their routing tables, or unexpectedly. Therefore, nodes must always check the correctness of their routing tables.
 
-Known routing algorithms that use DHTs include Chrod\cite{stoica2003chord}, CAN\cite{ratnasamy2001scalable}, Pastry\cite{rowstron2001pastry}, Tapestry\cite{zhao2004tapestry} and Kademlia\cite{maymounkov2002kademlia}. Among other things, they differ in their distinct structure and the hash functions used.
+Known routing algorithms that use \acp{DHT} include Chord\cite{stoica2003chord}, CAN\cite{ratnasamy2001scalable}, Pastry\cite{rowstron2001pastry}, Tapestry\cite{zhao2004tapestry} and Kademlia\cite{maymounkov2002kademlia}. Among other things, they differ in their distinct structure and the hash functions used.
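The lookup rule described above ("the node with the same or the next larger id") can be stated compactly in Chord-style notation; a sketch, assuming SHA-1 identifiers arranged on a ring modulo $2^{160}$, which is one common choice rather than something this section prescribes:

\[
\operatorname{succ}(k) = \min \{\, n \in N \mid \operatorname{id}(n) \geq \operatorname{id}(k) \,\}
\]

with a wrap-around to the node with the smallest identifier if no such $n$ exists. A file with key $k$ is stored at and looked up from $\operatorname{succ}(k)$; if it is not found there, it does not exist in the network.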

+ 5 - 11
thesis/content/02-background/software-system-architecture.tex

@@ -1,5 +1,7 @@
 The software system architecture describes the relationships and properties of individual software components. It is a model that describes a software on a high-level design. The structure of an architecture can be represented mathematically as a graph, with the nodes representing the individual software components and the edges their relationships to each other. Although the individual components can be executed on the same computer, they are usually interconnected via networks. In general, a distinction is made between centralized, decentralized and distributed architectures as shown in Figure \ref{fig:software-system-architecture}.
 
+In the following, the characteristics and peculiarities of the different architectures are described in detail.
+
 \begin{figure}[h!]
 	\centering
 	\includegraphics[width=1.0\textwidth]{network-architectures}
@@ -7,14 +9,6 @@ The software system architecture describes the relationships and properties of i
 	\label{fig:software-system-architecture}
 \end{figure}
 
-%\begin{itemize}
-%	\item Centralized Applications
-%	\item Decentralized Applications
-%	\item Distributed Applications
-%\end{itemize}
-
-In the following, the characteristics and peculiarities of the different architectures are described in detail.
-
 \subsection{Centralized Applications}
 \label{sec:centralized-applications}
 In a centralized application, the software essentially runs on a central node with a static address with which all other nodes communicate. This central node is called a server and the nodes connected to it are called clients. The clients use the services of the server.
@@ -33,15 +27,15 @@ Thus, the advantages of decentralized applications include, in addition to the a
 
 The drawbacks include the difficulty of finding data because they are spread across multiple servers. Search functionalities are thus difficult to implement. If a server is taken offline, the data is no longer available, even if the system itself remains functional. Since there is no longer a central point that is managed by an operator, rolling out updates is difficult. This raises the challenge that server nodes of different versions can still work together.
 
-Examples of decentralized applications are the social networks Diaspora and Mastodon. Each user can operate his own server in these networks. Unlike Facebook, the decision on the existence of the service is therefore not in the control of the operator, but the user. Bitcoin is also a decentralized application architecture. In doing so, transactions are carried out in a decentralized database (blockchain).
+Examples of decentralized applications are the social networks diaspora* and Mastodon. Each user can operate his own server in these networks. Unlike with Facebook, the decision about the existence of the service therefore lies not with the operator but with the users. Bitcoin also follows a decentralized application architecture: transactions are carried out in a decentralized database (blockchain).
 
 \subsection{Distributed Applications}
 \label{sec:distributed-applications}
 A feature of distributed applications is the distributed computation across all nodes. At the same time, a single task is executed in parallel on several nodes of a system, and the entire system delivers the result in total.
 
-In distributed applications, there is no hierarchy with servers or clients. Each node is equal to the other nodes with everyone performing the same tasks. For this reason, a node in this architecture is called a peer. The structure is then referred to as peer to peer (P2P). With such structures, there are no scaling problems, since each node contributes the required resources itself. Thus, the resources of the entire system grow with each new node added.
+In distributed applications, there is no hierarchy with servers or clients. Each node is equal to the other nodes with everyone performing the same tasks. For this reason, a node in this architecture is called a peer. The structure is then referred to as \ac{P2P}. With such structures, there are no scaling problems, since each node contributes the required resources itself. Thus, the resources of the entire system grow with each new node added.
 
-BitTorrent is an application that belongs to this architecture type. It solves the problem of downloading large files efficiently. When downloading a file, it is offered by several nodes so that different pieces can be loaded in parallel from different sources in difference to HTTP where only one source provides the file. Each node not only downloads but also provides itself to other files. In order to avoid that nodes download only (so-called Leechers) but also contribute, they are penalized with slow download speeds. This principle is known as \textit{tit for tat}.
+BitTorrent is an application that belongs to this architecture type. It solves the problem of downloading large files efficiently. When downloading a file, it is offered by several nodes so that different pieces can be loaded in parallel from different sources, in contrast to \ac{HTTP}, where only one source provides the file. Each node not only downloads files but also provides them to other nodes. To ensure that nodes do not only download (so-called leechers) but also contribute, such nodes are penalized with slow download speeds. This principle is known as \textit{tit for tat}.
 
 \subsection{Comparison}
 \label{sec:architecture-comparison}

+ 3 - 3
thesis/content/03-related-work.tex

@@ -1,6 +1,6 @@
 \chapter{Related Work}
 \label{ch:related-work}
-This chapter gives a comprehensive overview of different projects trying to protect the users' personal data in online social networks. Hereby, six different approaches are presented. First, extensions are considered that make the use of established social networks more secure. Then, alternative social networks will be presented that have placed the protection of personal data at the center. After that two next-generation social networks will be considered, which take advantage of the blockchain technology and belong to the group of dApps. Finally, the ActivityPub protocol is presented, which maps the communication in decentralized platforms. The chapter concludes with a summary of the related work.
+This chapter gives a comprehensive overview of different projects trying to protect the users' personal data in \acp{OSN}. Six different approaches are presented. First, extensions are considered that make the use of established social networks more secure. Then, alternative social networks are presented that place the protection of personal data at the center. After that, two next-generation social networks are considered, which take advantage of blockchain technology and belong to the group of \acp{dApp}. Finally, the ActivityPub protocol is presented, which maps the communication in decentralized platforms. The chapter concludes with a summary of the related work.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 % For each project, write about
@@ -29,7 +29,7 @@ Existing connections to other people and already created content can bind users
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 \section{Privacy-protecting Social Networks}
 \label{sec:privacy-protecting-social-networks}
-In the business models of the large, popular OSNs, user data plays an essential role. The data is evaluated and used to make a profit, for example through personalized advertising. Anonymity and the protection of privacy are not among the overriding objectives. 
+In the business models of the large, popular \acp{OSN}, user data plays an essential role. The data is evaluated and used to make a profit, for example through personalized advertising. Anonymity and the protection of privacy are not among the overriding objectives. 
 
 In the following, two social networks, diaspora* and LifeSocial, are presented which have placed the protection of data at the center.
 
@@ -43,7 +43,7 @@ In the following, two social networks, diaspora* and LifeSocial, are presented w
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 % dApps
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\section{dApps - The Next Generation Social Networks}
+\section{\acp{dApp} - The Next Generation Social Networks}
 
 \subsection{Akasha}
 \label{sec:akasha}

+ 8 - 8
thesis/content/03-related-work/activitypub.tex

@@ -1,10 +1,10 @@
-ActivityPub is a protocol published by the World Wide Web Consortium (W3C) in January 2018 as an official standard\footnote{https://www.w3.org/TR/activitypub}. The protocol regulates communication within an open, decentralized social network. There are two levels: client to server (Social API) and server to server (Federation Protocol). The two protocols are designed in such a way that they can be used independently of each other. If one of them is implemented, it is easy to implement the other. The Activity Streams\footnote{https://www.w3.org/TR/activitystreams-core} data format is used to describe activities in JSON-LD format. This data format is also an official W3C standard with the aim to record meta data of an action in a human-friendly but machine-processable syntax.
+ActivityPub is a protocol published by the \ac{W3C} in January 2018 as an official standard\footnote{https://www.w3.org/TR/activitypub}. The protocol regulates communication within an open, decentralized social network. There are two levels: client to server (Social \ac{API}) and server to server (Federation Protocol). The two protocols are designed in such a way that they can be used independently of each other. If one of them is implemented, it is easy to implement the other. The Activity Streams\footnote{https://www.w3.org/TR/activitystreams-core} data format is used to describe activities in \ac{JSON-LD} format. This data format is also an official \ac{W3C} standard with the aim of recording the metadata of an action in a human-friendly but machine-processable syntax.
 
-The principle behind Activity Pub is similar to that of e-mail. Servers can be uniquely identified via the domain. Within a server, each mailbox is accessible via a unique name. Thus, users can communicate with each other via different servers by having their messages forwarded to their mailbox.
+The principle behind ActivityPub is similar to that of e-mail. Servers can be uniquely identified via the domain. Within a server, each mailbox is accessible via a unique name. Thus, users can communicate with each other via different servers by having their messages forwarded to their mailbox.
 
-\subsubsection{Client to Server (Social API)}
+\subsubsection{Client to Server (Social \ac{API})}
 \label{sec:social-api}
-Users are called actors in ActivityPub and are represented by an associated account on the server. Since there can be several servers, it is important to emphasize that an account is only ever located on one server and user names must always be unique only within a server. Each actor has an inbox and an outbox on the server on which he is registered with his account. These are the two endpoints with which the client application communicates via HTTP requests. More is not necessary, because the server ensures that all messages and information for the user end up in his inbox and that the messages in his outbox are forwarded to the desired recipients (see \ref{sec:federation-protocol}). For ensuring that only authorized clients store content in an outbox, the sender must sign the posts.
+Users are called actors in ActivityPub and are represented by an associated account on the server. Since there can be several servers, it is important to emphasize that an account is only ever located on one server and user names must always be unique only within a server. Each actor has an inbox and an outbox on the server on which he is registered with his account. These are the two endpoints with which the client application communicates via \ac{HTTP} requests. More is not necessary, because the server ensures that all messages and information for the user end up in his inbox and that the messages in his outbox are forwarded to the desired recipients (see \ref{sec:federation-protocol}). For ensuring that only authorized clients store content in an outbox, the sender must sign the posts.
 
 \begin{figure}[h]
 	\includegraphics[width=1.0\textwidth]{activitypub-communication}
@@ -14,7 +14,7 @@ Users are called actors in ActivityPub and are represented by an associated acco
 
 The outbox of an actor holds all his published posts. When accessing the outbox of an actor without authorization, the server delivers all public posts of the actor. If access with authorization occurs, the explicitly shared content is also transmitted.
 
-The corresponding actors can only access their own inbox. On access, the messages are downloaded from the server. New messages get in the inbox per HTTP POST request from another server.
+Each actor can only access his own inbox. On access, the messages are downloaded from the server. New messages arrive in the inbox via \ac{HTTP} POST requests from other servers.
 
 Each actor has some so-called collections. To these collections, objects can be added and removed. The collections are used to store information related to an actor. These are the collections each actor has:
 
@@ -29,14 +29,14 @@ The following interactions are defined between the client and the server, but so
 \subsubsection{Server to Server (Federation Protocol)}
 \label{sec:federation-protocol}
 
-This protocol defines the exchange of activities between actors of different servers. The network between the servers with all actors is called a social graph. The interactions defined in the Social API must be implemented by the server so that they reach the addressed actors. The starting point is always the outbox of an actor. If a new activity is created using an HTTP POST request and, for example, the follower collection of the actor is selected as the addressee, the server must ensure that every actor in the follower collection receives the activity in its inbox. The recipients and their addresses can be found by following the links in the activities. It is also up to the server to ensure that there is no duplication of content.
+This protocol defines the exchange of activities between actors of different servers. The network between the servers with all actors is called a social graph. The interactions defined in the Social \ac{API} must be implemented by the server so that they reach the addressed actors. The starting point is always the outbox of an actor. If a new activity is created using an \ac{HTTP} POST request and, for example, the follower collection of the actor is selected as the addressee, the server must ensure that every actor in the follower collection receives the activity in its inbox. The recipients and their addresses can be found by following the links in the activities. It is also up to the server to ensure that there is no duplication of content.
 
 \subsubsection{Application Examples}
 \label{sec:mastodon}
 
-The most popular application example is the social network Mastodon\footnote{https://joinmastodon.org/}. Mastodon is a decentralized network based on free and open source software. Everyone is invited to host their own platform. With currently 1.6 million users it is the most significant implementation of the ActivityPub protocol. The Federation Protocol is used for communication between the individual servers. The Social API is not used, instead Mastodon offers its own API\footnote{https://docs.joinmastodon.org/api/guidelines/} for communication with the client.
+The most popular application example is the social network Mastodon\footnote{https://joinmastodon.org/}. Mastodon is a decentralized network based on free and open source software. Everyone is invited to host their own platform. With currently 1.6 million users it is the most significant implementation of the ActivityPub Protocol. The Federation Protocol is used for communication between the individual servers. The Social \ac{API} is not used, instead Mastodon offers its own \ac{API}\footnote{https://docs.joinmastodon.org/api/guidelines/} for communication with the client.
 
-Networking with other users is not limited to mastodon instances. Each service that has implemented the ActivityPub protocol allows its actors to network with actors of entirely different applications because the communication is standardized. So it is possible without problems to follow an actor of the video platform PeerTube\footnote{https://joinpeertube.org/en/} as Mastodon actor and be notified when he uploads a new video there.
+Networking with other users is not limited to Mastodon instances. Each service that has implemented the ActivityPub Protocol allows its actors to network with actors of entirely different applications because the communication is standardized. So it is possible without problems to follow an actor of the video platform PeerTube\footnote{https://joinpeertube.org/en/} as a Mastodon actor and be notified when he uploads a new video there.
 
 In addition to cross-platform networking, one big advantage is that it has no significant impact no matter what happens to Mastodon. The network can still exist and thanks to the open protocols and open source software it can be developed and used without restrictions. If Facebook were to go offline, all contacts would be lost, and the platform would never be accessible again.
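To make the inbox/outbox mechanics described above more concrete, here is an illustrative Activity Streams activity of the kind a client might POST to its actor's outbox, rendered as a LaTeX verbatim listing; the host name and actor are invented placeholders, not taken from the thesis:

\begin{verbatim}
POST /users/alice/outbox HTTP/1.1
Host: social.example.org
Content-Type: application/ld+json; profile="https://www.w3.org/ns/activitystreams"

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "actor": "https://social.example.org/users/alice",
  "to": ["https://social.example.org/users/alice/followers"],
  "object": { "type": "Note", "content": "Hello, federated world!" }
}
\end{verbatim}

After accepting the request, the server takes over delivery via the Federation Protocol: it resolves the followers collection and POSTs the activity to each follower's inbox on the respective remote server.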
 

+ 2 - 2
thesis/content/03-related-work/akasha.tex

@@ -1,4 +1,4 @@
-In early 2015, Mihai Alisie (co-founder of Ethereum) had the idea for AKASHA. AKASHA is a social network that differs from other known social networks mainly in its decentralization. The absence of a central server meant that censorship was ruled out by design alone. This is realized by the two technologies Ethereum and the Inter-Planetary File System (IPFS). Electron, React, Redux, and NodeJS complete the technology stack so that the primary programming language is JavaScript. In addition to Mihai Alisie, 12 other employees now work at AKASHA. Furthermore, the founders of Ethereum (Vitalik Buterin) and IPFS (Juan Benet) advise the project.
+In early 2015, Mihai Alisie (co-founder of Ethereum) had the idea for AKASHA. AKASHA is a social network that differs from other known social networks mainly in its decentralization. The absence of a central server means that censorship is ruled out by design. This is realized by the two technologies Ethereum and the \acf{IPFS}. Electron, React, Redux, and NodeJS complete the technology stack so that the primary programming language is JavaScript. In addition to Mihai Alisie, 12 other employees now work at AKASHA. Furthermore, the founders of Ethereum (Vitalik Buterin) and \ac{IPFS} (Juan Benet) advise the project.
 
 Alisie sees AKASHA as \enquote{the missing puzzle piece that will enable us to tackle two of the most critical challenges we face today as a modern information-based society: freedom of expression and creative perpetuity}. The central goal is therefore to prevent censorship and to obtain information over a long period.
 
@@ -12,7 +12,7 @@ With the announcement of the web version, the team behind AKASHA also released p
 	\item \textbf{AETH} is a transferable, ERC 20 compatible token, living on the Rinkeby test network
 	\item \textbf{Mana} is non-transferable and is obtained by locking AETH for X time at Y ratio (Manafied AETH). The Mana amount regenerates every day for as long as AETH remains locked, in a \enquote{Manafied} state.
 	\item \textbf{Essence} is non-transferable and is obtained through positive contributions. It can be burned to mint new AETH into existence. When people use their Mana to vote on artifacts, the authors can collect the burned Mana as Essence.
-	\item \textbf{Karma} is not a state, but rather a score tracking user contributions. For every unit of Essence collected, the user receives also Karma. Karma is used for defining milestones, thresholds and unlocking functionality within the dapp.
+	\item \textbf{Karma} is not a state, but rather a score tracking user contributions. For every unit of Essence collected, the user also receives Karma. Karma is used for defining milestones, thresholds and unlocking functionality within the \ac{dApp}.
 \end{itemize}
 
 \begin{figure}[h!]

+ 3 - 3
thesis/content/03-related-work/diaspora.tex

@@ -10,10 +10,10 @@ For funding the development of diaspora*, \$ 10,000 should be crowdfunded on Kic
 
 The diaspora* back end is written in Ruby, the front end to the user is a website. A server running diaspora* is called pod. Each pod has its own domain, so users of a pod have a username similar to an e-mail address (for example, username@podname.org). Diaspora* has the typical functionalities of a social network (hashtags, @ mentions, likes, comments, private messages). What marked a peculiarity at the time of diaspora*'s appearance are so-called aspects. Aspects are groupings of contacts that can be specified as a target audience when posting content. Only the contacts associated with the aspect can see the post.
 
-For staying in contact with friends on other platforms like social networks (Facebook, Twitter) or blogs (Tumblr, Wordpress), the initial idea was to connect these platforms. Data exchange should work both ways. Posts published on diaspora* should also appear on other platforms at the same time. Also, posts from the other networks should be viewed in diaspora*. Diaspora* should play the role of a social media hub. Unfortunately, the APIs of some platforms have become increasingly limited as instances of misuse of the interfaces have become public.
+For staying in contact with friends on other platforms like social networks (Facebook, Twitter) or blogs (Tumblr, Wordpress), the initial idea was to connect these platforms. Data exchange should work both ways. Posts published on diaspora* should also appear on other platforms at the same time. Also, posts from the other networks should be viewed in diaspora*. Diaspora* should play the role of a social media hub. Unfortunately, the \acp{API} of some platforms have become increasingly limited as instances of misuse of the interfaces have become public.
 
-The data of the users are unencrypted on a pod so that someone having access to the database can see them\cite{diasporaXXXXfaq-users}. In order to protect his own data in the best possible way, the operation of a separate diaspora* instance is necessary. The communication between the pods is encrypted with SSL\cite{diasporaXXXXfaq-users}. Furthermore, the exchanged messages are first signed (Salmon Magic Signatures), then symmetrically encrypted with AES-256-CBC\cite{diasporaXXXXmagic-signatures}. The AES key is encrypted with the public key of the recipient and sent together with the encrypted message.
+The users' data is stored unencrypted on a pod, so anyone with access to the database can see it\cite{diasporaXXXXfaq-users}. To protect one's own data in the best possible way, it is necessary to operate a separate diaspora* instance. The communication between the pods is encrypted with SSL\cite{diasporaXXXXfaq-users}. Furthermore, the exchanged messages are first signed (Salmon Magic Signatures), then symmetrically encrypted with \ac{AES}-256-\ac{CBC}\cite{diasporaXXXXmagic-signatures}. The \ac{AES} key is encrypted with the public key of the recipient and sent together with the encrypted message.
 
-Diaspora* does not use the ActivityPub protocol, but its own diaspora* federation protocol\cite{diasporaXXXXprotocol}. Other platforms such as Friendica\footnote{https://friendi.ca/}, Hubzilla\footnote{https://zotlabs.org/page/hubzilla/hubzilla-project} or Socialhome\footnote{https://socialhome.network/} can also communicate via the diaspora* federation protocol. There is no official API, which makes app development difficult. Diaspora* points out that the website is also usable on mobile devices, so there is no need for a native application\cite{diasporaXXXXfaq-users}.
+Diaspora* does not use the ActivityPub protocol, but its own diaspora* federation protocol\cite{diasporaXXXXprotocol}. Other platforms such as Friendica\footnote{https://friendi.ca/}, Hubzilla\footnote{https://zotlabs.org/page/hubzilla/hubzilla-project} or Socialhome\footnote{https://socialhome.network/} can also communicate via the diaspora* federation protocol. There is no official \ac{API}, which makes app development difficult. Diaspora* points out that the website is also usable on mobile devices, so there is no need for a native application\cite{diasporaXXXXfaq-users}.
 
 According to the statistics of the-federation.info\footnote{https://the-federation.info/diaspora} on February 24, 2019, 679723 users were registered on a total of 251 pods. Over the last 12 months, 19591 new users have joined the network. In January 2019, only 4.4\% of all users were active with 30042 users. However, the numbers are incomplete, as some pods do not share information and there may be more than the 251 listed pods.
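The sign-then-encrypt scheme just described can be summarized in two lines; a simplified sketch of the stated steps, not the exact diaspora* federation message format:

\[
c = \operatorname{Enc}^{\text{AES-256-CBC}}_{k}\bigl(\operatorname{Sign}_{sk_A}(m)\bigr),
\qquad
\tilde{k} = \operatorname{Enc}_{pk_B}(k)
\]

where $m$ is the message of sender $A$, $k$ a fresh symmetric key, $pk_B$ the recipient's public key, and the pair $(c, \tilde{k})$ is what travels between the pods over the SSL-protected connection.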

+ 2 - 2
thesis/content/03-related-work/facecloak.tex

@@ -33,13 +33,13 @@ In addition to adhering to the above design principles, the proposed architectur
 \subsubsection{FaceCloak for Facebook}
 To protect the privacy of Facebook users, Luo, Xiu, and Hengartner have developed a Firefox browser extension according to the previously described architecture, as well as a server application for storing encrypted real data.\footnote{Download: https://crysp.uwaterloo.ca/software/facecloak/download.html}
 
-The extension uses AES and a key length of 128 bits to encrypt the data. The indices for the encrypted data are calculated using SHA-1. The authors propose an e-mail for the key exchange. For this purpose, the browser extension automatically generates e-mail texts and recipient lists and forwards them to the standard e-mail program. The recipients then have to store the received keys in the extension manually.
+The extension uses \ac{AES} with a key length of 128 bits to encrypt the data. The indices for the encrypted data are calculated using SHA-1. The authors propose e-mail for the key exchange. For this purpose, the browser extension automatically generates e-mail texts and recipient lists and forwards them to the standard e-mail program. The recipients then have to store the received keys in the extension manually.
 
 In order to protect data with FaceCloak, the prefix @@ must be added to the information in a text field. For other form elements such as dropdowns, radio buttons or checkboxes, the extension creates additional options that also start with @@. When submitting the form, the extension intervenes and replaces the data marked with @@ with fake data. The data to be protected is encrypted with the stored keys and transferred as a key-value pair to the third party server where it is stored. FaceCloak can protect all profile information, but only for name, birthday, and gender algorithms for the meaningful creation of fake data are implemented.
 
 In addition to profile information, the extension can also protect Facebook Wall and Facebook Notes data. To avoid attracting attention with random, unusual character strings, the contents of random Wikipedia articles are transmitted as fake data.
 
-When loading a profile page that contains protected data, the extension with asynchronous HTTP requests retrieves the information from the third party server, decrypts it, and replaces the fake data. A large part of the replacement can thus be performed during the load process so that the user does not see the fake data. However, since Facebook also loads content asynchronously, some replacements can only be performed with a time delay and the fake data is shortly visible.
+When loading a profile page that contains protected data, the extension retrieves the information from the third-party server with asynchronous \ac{HTTP} requests, decrypts it, and replaces the fake data. A large part of the replacement can thus be performed during the load process so that the user does not see the fake data. However, since Facebook also loads content asynchronously, some replacements can only be performed with a time delay, and the fake data is briefly visible.
 
 To use the same account on multiple devices, the keys must be transferred to all devices and stored in the extension. It is not possible to use multiple accounts with the same Firefox profile, as all data is stored in the extension and these are always bound to exactly one Facebook account.
 

+ 4 - 4
thesis/content/03-related-work/peepeth.tex

@@ -2,11 +2,11 @@ Peepeth is a microblogging platform that is very similar to Twitter in functiona
 
 From Peepeth's point of view, social media is broken. The fault lies with the operators of social networks, who control the online identities of users, sell their data and violate their privacy. The news feeds are manipulated to drive the user to a higher level of interaction at any price. Besides, the platforms are teeming with trolls, bullying and flame wars. Barton wants to counter these grievances. Therefore there is no advertising on Peepeth. 
 
-The website Peepeth.com is the front end of a decentralized app (dApp), which uses the Eteherum blockchain and the Inter-Planetary File System (IPFS). This front end can theoretically be exchanged arbitrarily and Peepeth's data can be read and written because the blockchain protocol is open. No Ethereum test network is used, but the main network. The execution of transactions on the Ethereum blockchain is associated with costs. Peepeth bears the costs for its users. The necessary capital was collected via a crowdfunding campaign. However, when accounts distribute spam, Peepeth no longer bears the cost of writing to the blockchain. The resulting costs should make spamming unattractive and reduce it to a minimum because technically it is still possible. Peepeth requires a dApp browser (e.g., Opera) or a browser that has been extended by a wallet (e.g., using MetaMask extension) for use. Although Peepeth covers the costs, the user has to sign the transactions, which is why the browser has to contain the corresponding extension.
+The website Peepeth.com is the front end of a \ac{dApp}, which uses the Ethereum blockchain and \ac{IPFS}. This front end can theoretically be exchanged arbitrarily, and Peepeth's data can be read and written because the blockchain protocol is open. Not an Ethereum test network but the main network is used. The execution of transactions on the Ethereum blockchain is associated with costs. Peepeth bears the costs for its users. The necessary capital was collected via a crowdfunding campaign. However, when accounts distribute spam, Peepeth no longer bears the cost of writing to the blockchain. The resulting costs should make spamming unattractive and reduce it to a minimum because technically it is still possible. Peepeth requires a \ac{dApp} browser (e.g., Opera) or a browser that has been extended with a wallet (e.g., using the MetaMask extension). Although Peepeth covers the costs, the user has to sign the transactions, which is why the browser has to contain the corresponding extension.
 
-In order to keep transaction fees low, the actions executed on Peepeth are collected on the server hosting the front end and written to the blockchain in batches every hour. Several actions are bundled in one file and transaction. The actual contents end up as a JSON file in IPFS and only the reference hash in the blockchain.
+In order to keep transaction fees low, the actions executed on Peepeth are collected on the server hosting the front end and written to the blockchain in batches every hour. Several actions are bundled in one file and transaction. The actual contents end up as a \ac{JSON} file in \ac{IPFS} and only the reference hash in the blockchain.
 
-While the smart contracts are open source, the front end is closed source. So it is impossible to understand what is happening on the server hosting the front end Peepeth.com. Image files are not only stored in IPFS but also mirrored at Amazon AWS to provide a better user experience. The client does not communicate directly with IPFS, but the server behind the front end communicates with the two back end technologies IPFS and Ethereum Blockchain, as shown in Figure \ref{fig:peepeth-architecture}.
+While the smart contracts are open source, the front end is closed source. So it is impossible to understand what is happening on the server hosting the front end Peepeth.com. Image files are not only stored in \ac{IPFS} but also mirrored at \ac{AWS} to provide a better user experience. The client does not communicate directly with \ac{IPFS}, but the server behind the front end communicates with the two back end technologies \ac{IPFS} and Ethereum blockchain, as shown in Figure \ref{fig:peepeth-architecture}.
 
 \begin{figure}[h!]
 	\centering
@@ -20,6 +20,6 @@ The data written to the Ethereum blockchain cannot be deleted or modified. For B
 In addition to writing short messages, it is possible to like posts. However, posts cannot be liked infinitely. There is only one like per day available, a so-called \textit{Ensō}. The resulting rarity should express the particular appreciation of a contribution. \enquote{Ensō (Japanese for \enquote{circle}) is a circle that is hand-drawn in one uninhibited brushstroke. It represents creativity, freedom of expression, and unity}.
 Furthermore, good content from other users can be rewarded with a tip. 10\% of the tip goes to Peepeth and serve to finance. Also for the verification of an account and special badges go 10\% of the fees to Peepeth.
 
-On 29th January 2019, Peepeth had 4055 users who posted a total of 66262 Peeps. For an account, future users have to apply first and receive a sign-up link by email to join the platform after some time. On invitation of an active user, new users can join directly without waiting time. Users can verify themselves with their existing Github and Twitter accounts. In the future, it will be possible to use further platforms for the verification of an account. In order to verify an account, the user must post a \enquote{special message}, which also contains his Ethereum address. The link to this post must then be handed over to a Smart Contract, which confirms the ownership of the account.
+On 29th January 2019, Peepeth had 4055 users who posted a total of 66262 Peeps. To get an account, prospective users first have to apply and, after some time, receive a sign-up link by e-mail to join the platform. On invitation of an active user, new users can join directly without waiting time. Users can verify themselves with their existing GitHub and Twitter accounts. In the future, it will be possible to use further platforms for the verification of an account. In order to verify an account, the user must post a \enquote{special message}, which also contains his Ethereum address. The link to this post must then be handed over to a smart contract, which confirms the ownership of the account.
 
 Peepeth communicated the next milestones to increase the user experience as part of the crowdfunding campaign. The first milestone has already been reached: the requirement to possess a cryptocurrency in order to use the platform, and the difficulty of procuring such a currency, were eliminated, as Peepeth bears the costs for its users. The next steps are the use without special software requirements (renouncing particular browsers or MetaMask) and the development of an iOS app. However, only 140.56 ETH could be collected of the required 1000 ETH. It is unclear to what extent the desired goals will now be achieved.
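The hourly batching described above boils down to storing only a content hash on-chain; a simplified sketch (the file name batch.json is just a placeholder, the exact batch format is not given in this section):

\[
h = \operatorname{H}_{\mathrm{IPFS}}(\text{batch.json})
\]

where batch.json bundles the actions collected during the last hour, $\operatorname{H}_{\mathrm{IPFS}}$ denotes the content hash under which the file is addressable in IPFS, and only $h$ is written to the Ethereum blockchain in a single transaction.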

+ 4 - 4
thesis/content/03-related-work/twitterize.tex

@@ -28,7 +28,7 @@ In order to establish communication within a group, the three following phases m
 
 First, a user must define the hashtag and generate a key for symmetric encryption. This key is used later to encrypt the messages as well as the hashtag itself. From the encrypted hashtag, only a hash value is used (so-called pseudonym) so that the actual hashtag stays private.
 
-In order to join the group, other users need the key and knowledge of the name of the hashtag. However, the key exchange should not be carried out via the social network used. An exchange via e-mail, Near Field Communication (NFC), QR codes is conceivable.
+In order to join the group, other users need the key and knowledge of the name of the hashtag. However, the key exchange should not be carried out via the social network used. An exchange via e-mail, \ac{NFC}, or QR codes is conceivable.
 
 If the users of a group with a common hashtag are all aware of the key and the hashtag, the second phase of building an overlay network begins. Here the private flooding mechanism is used \cite{daubert2013distributed-anonymous-pubsub}. A publisher creates an advertisement tweet consisting of a pseudonym and the first element of a hash chain and posts it in the underlay network. This tweet appears in the timelines of his followers. If not already done, each follower distributes the advertisement tweet on his timeline and thus reaches his followers. However, this tweet differs from the original tweet by extending the hash chain. It generates a hash of the previous hash chain and thus receives the next element of the chain. According to this principle, the entire network is flooded, and as a result, each user saves a triplet consisting of the pseudonym, the hash chain and the user previous to him in the chain in a database table.
 
@@ -46,10 +46,10 @@ In the third phase, users can exchange messages using the previously shared hash
 \subsubsection{Proof of Concept}
 As proof of concept, Daubert et al. implemented Twitterize as an app for Android. The app was written in Java, the code remained closed source, and the app is not available from the Google Play Store. The representation of the data in the app takes place on different timelines. In addition to the standard home and user timelines, each hashtag gets its timeline. Each timeline is accessible via a tab.
 
-For encryption 128bit keys with AES CBC is used. The keys are exchanged between the users via NFC or QR codes. Since data structures, such as JSON, are too verbose, the tweets are encoded using Base64. To make message types distinguishable, rarely used UTF-8 symbols are used to give the messages a rough structure. The formation of the overlay network and the exchange of messages then works as described above.
+For encryption, 128-bit keys with \ac{AES} in \ac{CBC} mode are used. The keys are exchanged between the users via \ac{NFC} or QR codes. Since data structures such as \ac{JSON} are too verbose, the tweets are encoded using Base64. To make message types distinguishable, rarely used UTF-8 symbols give the messages a rough structure. The formation of the overlay network and the exchange of messages then work as described above.
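The kind of encryption described can be sketched as follows. The snippet uses Node's crypto module in TypeScript for brevity, although Twitterize itself was implemented in Java; prepending the IV to the ciphertext is an assumption about the message layout.

\begin{lstlisting}[caption={Sketch: AES-128-CBC encryption of a tweet with Base64 encoding (TypeScript)}]
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Encrypts a tweet with AES-128-CBC using the shared 16-byte group key
// and returns a compact Base64 string (IV prepended to the ciphertext).
function encryptTweet(plaintext: string, key: Buffer): string {
  const iv = randomBytes(16); // fresh initialization vector per message
  const cipher = createCipheriv("aes-128-cbc", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, ciphertext]).toString("base64");
}

// Counterpart: splits off the IV and decrypts the Base64 payload.
function decryptTweet(encoded: string, key: Buffer): string {
  const raw = Buffer.from(encoded, "base64");
  const decipher = createDecipheriv("aes-128-cbc", key, raw.subarray(0, 16));
  return Buffer.concat([decipher.update(raw.subarray(16)), decipher.final()]).toString("utf8");
}
\end{lstlisting}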
 
-The design of the Twitterize architecture meets the privacy requirements. The other requirements for implementation were also well received during the development of the app. With the exchange of keys via NFC or QR codes an easy way for key management was found and implemented. Just a few seconds of physical proximity is enough to form a group.
+The design of the Twitterize architecture meets the privacy requirements. The other implementation requirements were also satisfied during the development of the app. With the exchange of keys via \ac{NFC} or QR codes, an easy way for key management was found and implemented: just a few seconds of physical proximity are enough to form a group.
 
-The avoidance of overheads was also successful, although the timeline is updated in the background every minute. CPU usage and power consumption were only slightly above the values of the original Twitter app. For the storage of the additional data, only a little space is necessary, since each hashtag occupies only 48 bytes. Assuming that in the Twitter Social Graph two arbitrary users are connected via 4.71 following users in between, an average delivery time of 142 seconds was calculated for a message.
+The avoidance of overhead was also successful, although the timeline is updated in the background every minute. \ac{CPU} usage and power consumption were only slightly above the values of the original Twitter app. Storing the additional data requires only little space, since each hashtag occupies only 48 bytes. Assuming that two arbitrary users in the Twitter social graph are connected via an average of 4.71 intermediate users, an average delivery time of 142 seconds was calculated for a message.
 
 Restrictions arise on the one hand due to limits on the use of the Twitter API and on the other hand because the application must always be online to get the best user experience.

+ 1 - 1
thesis/content/04-concept.tex

@@ -2,7 +2,7 @@
 \label{ch:concept}
 \input{content/04-concept/introduction}
 
-\section{Requirements to the Hybrid OSN}
+\section{Requirements for the Hybrid \ac{OSN}}
 \label{sec:requirements}
 \input{content/04-concept/requirements}
 

+ 1 - 1
thesis/content/04-concept/introduction.tex

@@ -7,4 +7,4 @@ Although the problems and dangers are known for a long time and new scandals reg
 
 If switching to another platform is not an alternative, it is necessary to look for ways to better protect users and their data on existing platforms. The Researcher Training Group (RTG) \enquote{Privacy and Trust for Mobile Users}\footnote{https://www.informatik.tu-darmstadt.de/privacy-trust/privacy\_and\_trust/index.en.jsp} in research area B \enquote{Privacy and Trust in Social Networks} is dealing with problems like this. Subarea B.2 deals specifically with the protection of privacy in hybrid social networks.
 
-In the following, a concept for a hybrid social network will be developed that tries to take into account the interests of the different stakeholders. Besides, functionality requirements and potential limitations are listed. Finally, a solution strategy is shown, and a possible architecture is presented. The concept should apply to all social networks. A specific implementation will be presented in the next chapter for Twitter.
+In the following, a concept for a hybrid \ac{OSN} is developed that aims to take the interests of the different stakeholders into account. In addition, functional requirements and potential limitations are listed. Finally, a solution strategy is shown, and a possible architecture is presented. The concept should apply to all social networks; a specific implementation for Twitter is presented in the next chapter.

+ 6 - 6
thesis/content/04-concept/requirements.tex

@@ -2,13 +2,13 @@ The solution should meet specific functional and non-functional requirements so
 
 \textbf{Functional requirements}:
 \begin{itemize}
-	\item The standard functionality of the OSN is usable without restrictions.
-	\item The user can see which way (private P2P or public OSN) the data comes from and where it goes.
+	\item The standard functionality of the \ac{OSN} is usable without restrictions.
+	\item The user can see which way the data takes (private \ac{P2P} or public \ac{OSN}), i.e., where it comes from and where it goes.
 	\item Data access should only be possible for authorized users.
 	\item The data exchange should be automatically encrypted so that the data is worthless for third parties.
-	\item The data format is flexible in order to map changes and new OSN functionalities.
-	\item The solution is client-side since there is no control over the OSN server.
-	\item The OSN Service Provider can retrieve anonymized data relevant to him.
+	\item The data format is flexible in order to map changes and new \ac{OSN} functionalities.
+	\item The solution is client-side since there is no control over the \ac{OSN} server.
+	\item The \ac{OSN} Service Provider can retrieve anonymized data relevant to him.
 	\item The solution is platform independent.
 \end{itemize}
 
@@ -17,7 +17,7 @@ The solution should meet specific functional and non-functional requirements so
 	\item Data exchange over the private network is fast and secure.
 	\item The user interface is simple and understandable.
 	\item The design of the app is modern and appealing.
-	\item No violation of the guidelines and terms of use/service of the OSN.
+	\item No violation of the guidelines and terms of use/service of the \ac{OSN}.
 	\item No restrictions for standard users who do not use the hybrid solution.
 	\item The additional effort for the user when using the hybrid solution should be minimal.
 \end{itemize}

+ 7 - 7
thesis/content/04-concept/restrictions.tex

@@ -1,12 +1,12 @@
-When designing the hybrid OSN, there are a few limitations that need to be considered, for which appropriate solutions have to be found. These restrictions include:
+When designing the hybrid \ac{OSN}, there are a few limitations that need to be considered, for which appropriate solutions have to be found. These restrictions include:
 
 \begin{itemize}
-	\item \textbf{Interfaces of the OSN}: Ideally the OSN offers a public API with full functionality. Since the user's data should be protected as best as possible, access via the API is usually restricted. The restriction may be due to a limited number of requests per time interval or a limited range of offered functions.
-	\item \textbf{Crawling the OSN web pages}: If there is no official API or if it is sharply restricted, the contents can theoretically also be extracted by crawling. However, this brings with it several challenges. Modern web pages load many contents asynchronously so that the initial HTML does not yet contain these contents. Furthermore, there are sophisticated mechanisms that notice crawling and lock out crawlers. Likewise, it may be difficult to add data to the OSN. For security reasons, in most cases, special tokens are sent along with each request to detect and prevent abuse and fake requests.
+	\item \textbf{Interfaces of the \ac{OSN}}: Ideally the \ac{OSN} offers a public \ac{API} with full functionality. Since the user's data should be protected as best as possible, access via the \ac{API} is usually restricted. The restriction may be due to a limited number of requests per time interval or a limited range of offered functions.
+	\item \textbf{Crawling the \ac{OSN} web pages}: If there is no official \ac{API} or if it is sharply restricted, the contents can theoretically also be extracted by crawling. However, this brings with it several challenges. Modern web pages load many contents asynchronously so that the initial \ac{HTML} does not yet contain these contents. Furthermore, there are sophisticated mechanisms that notice crawling and lock out crawlers. Likewise, it may be difficult to add data to the \ac{OSN}. For security reasons, in most cases, special tokens are sent along with each request to detect and prevent abuse and fake requests.
 	\item \textbf{Development, operation and licensing costs}: Costs for the development, operation and licensing of third-party software may be incurred. Ideally, conscious decisions avoid such expenses.
-	\item \textbf{Operating system or runtime environment}: Nowadays OSNs can be used on almost all devices; independent of their operating system. In order to achieve the same user experience, the hybrid OSN should be usable on the same platforms. Any restrictions imposed by the operating system (user and application rights, connectivity) must be taken into account during development.
-	\item \textbf{Resources}: The devices running the hybrid OSN may have limited resources (storage space, processing power, Internet connection/data volume, battery). When making design decisions, it is important to plan as resource-conserving as possible and to find scalable solutions. Overall, the overhead for the hybrid extension should be as low as possible compared to the original application.
-	\item \textbf{Availability of data}: The data that is exchanged securely and not via the OSN's servers must always be available. Whether a user is offline or how old the data is must not affect its availability.
+	\item \textbf{Operating system or runtime environment}: Nowadays \acp{OSN} can be used on almost all devices, independent of their operating system. In order to achieve the same user experience, the hybrid \ac{OSN} should be usable on the same platforms. Any restrictions imposed by the operating system (user and application rights, connectivity) must be taken into account during development.
+	\item \textbf{Resources}: The devices running the hybrid \ac{OSN} may have limited resources (storage space, processing power, Internet connection/data volume, battery). When making design decisions, it is important to be as resource-conserving as possible and to find scalable solutions. Overall, the overhead of the hybrid extension compared to the original application should be as low as possible.
+	\item \textbf{Availability of data}: The data that is exchanged securely and not via the \ac{OSN}'s servers must always be available. Whether a user is offline or how old the data is must not affect its availability.
 \end{itemize}
 
-While the restrictions on the hybrid client itself can be actively influenced and resolved, the restrictions on the OSN cannot be controlled. If the OSN does not provide any interfaces and the hurdle of data exchange with the servers is insurmountable, this can completely prevent the development of a hybrid client.
+While the restrictions on the hybrid client itself can be actively influenced and resolved, the restrictions on the \ac{OSN} cannot be controlled. If the \ac{OSN} does not provide any interfaces and the hurdle of data exchange with the servers is insurmountable, this can completely prevent the development of a hybrid client.

+ 7 - 7
thesis/content/04-concept/solution-strategy-architecture.tex

@@ -1,18 +1,18 @@
-Various models can be used to implement a secure data exchange between the users of an OSN via an add-on. The solution strategies shown below differ primarily in the question of where data is stored and how it can be found.
+Various models can be used to implement a secure data exchange between the users of an \ac{OSN} via an add-on. The solution strategies shown below differ primarily in the question of where data is stored and how it can be found.
 
 \begin{figure}[h!]
 	\centering
 	\includegraphics[width=1.0\textwidth]{solution-strategy-architecture}
-	\caption{Different architectures: a) Use of a central server to which all hybrid OSN users connect to, b) Creation of a P2P network among the users for data exchange.}
+	\caption{Different architectures: a) Use of a central server to which all hybrid \ac{OSN} users connect, b) Creation of a \ac{P2P} network among the users for data exchange.}
 	\label{fig:solution-strategy-architecture}
 \end{figure}
 
-One possibility is to use an extra infrastructure to store the data, as shown in Figure \ref{fig:solution-strategy-architecture}.a. Additional servers are used to store and distribute the private data to be protected. Using additional servers has the advantage that the data is always available and there are no dependencies to other hybrid OSN users. Furthermore, resources must only be available centrally and not locally for every user. At the central location, the data can be indexed and explicitly queried. However, the operation and maintenance of one or more such servers are problematic. In principle, the question of the operators must be clarified, because the infrastructure must function reliably. An architecture based on this proposal was used by FaceCloak.
+One possibility is to use an extra infrastructure to store the data, as shown in Figure \ref{fig:solution-strategy-architecture}.a. Additional servers are used to store and distribute the private data to be protected. Using additional servers has the advantage that the data is always available and there are no dependencies on other hybrid \ac{OSN} users. Furthermore, resources only have to be available centrally and not locally for every user. At the central location, the data can be indexed and explicitly queried. However, the operation and maintenance of one or more such servers are problematic. In particular, the question of who operates them must be clarified, because the infrastructure must function reliably. An architecture based on this approach was used by FaceCloak.
 
 In contrast, a decentralized or distributed solution strategy would create a network among users of the hybrid application. This strategy is depicted in Figure \ref{fig:solution-strategy-architecture}.b. No extra infrastructure would have to be operated. The users would then have a typical peer role. With this model, solutions must be found for how data is always available and can be found, even if a user is temporarily or permanently offline.
-Furthermore, the resources on the end devices are limited, so that effective, economical solutions are needed. Another challenge is the addressing of peers. Since they typically do not have a static IP address, but the IP address changes frequently, solutions must be found for accessibility. Since there is no central, global index, finding data is even more difficult.
+Furthermore, the resources on the end devices are limited, so effective, economical solutions are needed. Another challenge is the addressing of peers: since they typically do not have a static \ac{IP} address but one that changes frequently, solutions must be found to keep them reachable. Since there is no central, global index, finding data is even more difficult.
 
-An interim solution is also conceivable, in which an existing infrastructure, e.g., an already existing P2P network or the blockchain, is used for storing and exchanging data. Since no influence can be exerted on existing infrastructure, its use entails further restrictions and potential risks.
+An intermediate solution is also conceivable, in which an existing infrastructure, e.g., an already existing \ac{P2P} network or a blockchain, is used for storing and exchanging data. Since no influence can be exerted on existing infrastructure, its use entails further restrictions and potential risks.
 
 Table \ref{tab:solution-strategy-architecture-comparison} lists the advantages and disadvantages of the different strategies.
 
@@ -22,7 +22,7 @@ Table \ref{tab:solution-strategy-architecture-comparison} lists the advantages a
 			\item Availability of data
 			\item Finding the data
 			\item Resources only have to be available centrally
-			\item No dependencies among hybrid OSN users
+			\item No dependencies among hybrid \ac{OSN} users
 		\end{itemize}
 		\hspace{1mm}
 \end{minipage}}
@@ -78,6 +78,6 @@ Table \ref{tab:solution-strategy-architecture-comparison} lists the advantages a
 		\multicolumn{1}{|l|}{\textbf{\begin{tabular}[c]{@{}l@{}}Own network\\ (decentralized/distributed)\end{tabular}}} & \advantageon                    & \disadvantageon                       \\ \hline
 		\multicolumn{1}{|l|}{\textbf{External infrastructure}}                                                           & \advantageei                    & \disadvantageei                       \\ \hline
 	\end{tabular}
-	\caption{Advantages and disadvantages of the different solution strategies for the hybrid OSN architecture.}
+	\caption{Advantages and disadvantages of the different solution strategies for the hybrid \ac{OSN} architecture.}
 	\label{tab:solution-strategy-architecture-comparison}
 \end{table}

+ 4 - 4
thesis/content/04-concept/solution-strategy-client.tex

@@ -1,7 +1,7 @@
-Concerning the implementation of the hybrid approach, two possibilities are conceivable. On the one hand, the extension of the original OSN client (app or web front end) as an addon. On the other hand, the development of an entirely new client.
+Concerning the implementation of the hybrid approach, two possibilities are conceivable: extending the original \ac{OSN} client (app or web front end) with an add-on, or developing an entirely new client.
 
-When an addon extends the OSN, there is no need to take care of the standard functionality of the OSN. Therefore, the development can entirely focus on the addon and the private extension. At crucial points, the add-on extends the interface by additional elements that enable secure data exchange. The service providers usually do not offer developers an interface to extend the OSN with such own functionalities. With web front ends it is possible to manipulate the website content using browser add-ons. One example for doing so is FaceCloak. Since such browser extensions manipulate the Document Object Model (DOM), knowledge about the document structure is necessary for the successful function in order to make changes in the right places. The short release cycles of OSNs and the associated frequent changes to this DOM structure make it difficult to keep up with the changes. When consuming the OSN via the official apps on mobile devices, an extension or manipulation is not possible.
+When an add-on extends the \ac{OSN}, there is no need to take care of the standard functionality of the \ac{OSN}. Therefore, the development can focus entirely on the add-on and the private extension. At crucial points, the add-on extends the interface with additional elements that enable secure data exchange. However, service providers usually do not offer developers an interface to extend the \ac{OSN} with such custom functionality. With web front ends it is possible to manipulate the website content using browser add-ons; one example of this is FaceCloak. Since such browser extensions manipulate the \ac{DOM}, knowledge of the document structure is necessary to make changes in the right places. The short release cycles of \acp{OSN} and the associated frequent changes to this \ac{DOM} structure make it difficult to keep up. When consuming the \ac{OSN} via the official apps on mobile devices, an extension or manipulation is not possible.
 
-The alternative to the extension approach described above is a new hybrid client app. The entire functional range of the OSN must be implemented and kept up to date. As already mentioned with the restrictions (see \ref{sec:requirements}), the functions are usually not entirely provided via an API, and the crawling of the content also brings with it some challenges. By having complete control over the development of the client, the additional protected, secure communication can be added at the appropriate points. In the best-case scenario, at least one hybrid app is available for every operating system for which an official OSN app exists.
+The alternative to the extension approach described above is a new hybrid client app. The entire functional range of the \ac{OSN} must be implemented and kept up to date. As already mentioned with the restrictions (see \ref{sec:requirements}), the functions are usually not entirely provided via an \ac{API}, and the crawling of the content also brings with it some challenges. By having complete control over the development of the client, the additional protected, secure communication can be added at the appropriate points. In the best-case scenario, at least one hybrid app is available for every operating system for which an official \ac{OSN} app exists.
 
-Both approaches can be combined by displaying the (mobile) web page of the OSN in a WebView in a separate app and executing DOM manipulations via injected JavaScript code. For example, there are some alternative clients for Facebook (e.g. \enquote{Friendly for Facebook}\footnote{https://play.google.com/store/apps/details?id=io.friendly} \footnote{https://itunes.apple.com/de/app/friendly-for-facebook/id400169658}, \enquote{Metal Pro}\footnote{https://play.google.com/store/apps/details?id=com.nam.fbwrapper.pro}) that use this approach.
+Both approaches can be combined by displaying the (mobile) web page of the \ac{OSN} in a WebView in a separate app and executing \ac{DOM} manipulations via injected JavaScript code. For example, there are some alternative clients for Facebook (e.g. \enquote{Friendly for Facebook}\footnote{https://play.google.com/store/apps/details?id=io.friendly} \footnote{https://itunes.apple.com/de/app/friendly-for-facebook/id400169658}, \enquote{Metal Pro}\footnote{https://play.google.com/store/apps/details?id=com.nam.fbwrapper.pro}) that use this approach.
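To illustrate the combined approach, the following sketch shows the kind of \ac{DOM} manipulation that injected JavaScript could perform inside the WebView; the selectors, element names, and the bridge object are purely hypothetical and would have to match the \ac{OSN}'s actual markup and the app's interface.

\begin{lstlisting}[caption={Sketch: DOM manipulation by JavaScript injected into a WebView (TypeScript)}]
// Hypothetical script injected into the WebView after the page has loaded.

// Hide elements that the hybrid client does not want to show.
document.querySelectorAll<HTMLElement>("div[data-sponsored='true']")
  .forEach((el) => { el.style.display = "none"; });

// Add a button next to the (assumed) post form that hands the text
// over to the hybrid extension for private publishing.
const form = document.querySelector<HTMLFormElement>("form#composer");
if (form) {
  const btn = document.createElement("button");
  btn.type = "button";
  btn.textContent = "Post privately";
  btn.addEventListener("click", () => {
    const text = (form.querySelector("textarea") as HTMLTextAreaElement)?.value;
    // window.HybridOSN stands for a hypothetical bridge object exposed by the app.
    (window as any).HybridOSN?.postPrivately(text);
  });
  form.appendChild(btn);
}
\end{lstlisting}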

+ 4 - 4
thesis/content/04-concept/stakeholders.tex

@@ -11,14 +11,14 @@ Even if the stakeholders are not necessarily directly involved, it is still esse
 
 \newcommand{\interestsOSN}{\begin{minipage} [t] {0.4\textwidth} 
 		\begin{itemize}
-			\item Unrestricted use of the OSN
-			\item No disadvantages due to hybrid OSN users 
+			\item Unrestricted use of the \ac{OSN}
+			\item No disadvantages due to hybrid \ac{OSN} users 
 		\end{itemize}
 \end{minipage}}
 
 \newcommand{\interestsHOSN}{\begin{minipage} [t] {0.4\textwidth} 
 		\begin{itemize}
-			\item Unrestricted use of the OSN
+			\item Unrestricted use of the \ac{OSN}
 			\item Decision-making sovereignty over what happens with data 
 		\end{itemize}
 \end{minipage}}
@@ -114,7 +114,7 @@ Even if the stakeholders are not necessarily directly involved, it is still esse
 		\centering
 		\begin{tabular}{|l|l|l|l|}
 			\hline
-			\textbf{Steakholder}                                           & \textbf{Interests} & \textbf{Attitude towards hybrid OSN} & \textbf{Influence on hybrid OSN} \\ \hline
+			\textbf{Stakeholder}                                           & \textbf{Interests} & \textbf{Attitude towards hybrid \ac{OSN}} & \textbf{Influence on hybrid \ac{OSN}} \\ \hline
 			\begin{tabular}[c]{@{}l@{}}Service\\ Provider\end{tabular}     & \interestsSP       & \attitudeSP                          & \influenceSP                     \\ \hline
 			\begin{tabular}[c]{@{}l@{}}OSN\\ User\end{tabular}             & \interestsOSN      & \attitudeOSN                         & \influenceOSN                    \\ \hline
 			\begin{tabular}[c]{@{}l@{}}Hybrid\\ OSN User\end{tabular}      & \interestsHOSN     & \attitudeHOSN                        & \influenceHOSN                   \\ \hline

+ 19 - 19
thesis/content/05-proof-of-concept/building-block-view.tex

@@ -1,17 +1,17 @@
-In this section, the context in which Hybrid OSN is located is first considered, and then a breakdown into the individual components is carried out. The function of the respective blocks is then described in more detail. Finally, the function of individual components in interaction is explained using the examples of displaying the home timeline and posting a new tweet.
+In this section, the context in which Hybrid \ac{OSN} is located is first considered, and then a breakdown into the individual components is carried out. The function of the respective blocks is then described in more detail. Finally, the function of individual components in interaction is explained using the examples of displaying the home timeline and posting a new tweet.
 
 \subsection{Scope and Context}
 \label{sec:scope-and-context}
-Figure \ref{fig:building-block-view} shows a black box view of which other systems Hybrid OSN communicates with via interfaces. The systems are:
+Figure \ref{fig:building-block-view} shows a black box view of which other systems Hybrid \ac{OSN} communicates with via interfaces. The systems are:
 
 \begin{itemize}
-	\item Twitter API
+	\item Twitter \ac{API}
 	\item Gun
-	\item IPFS via Infura
+	\item \ac{IPFS} via Infura
 	\item User 
 \end{itemize}
 
-Infura\footnote{https://infura.io/} is a service that provides access to Ethereum and IPFS via a simple interface. Communication with the API happens using HTTP requests. The connection of IPFS in Hybrid OSN can thus be carried out in an uncomplicated way. The use of an additional system entails an additional risk typically. However, there is a JavaScript client for IPFS, which can be integrated into Hybrid OSN and thus the dependency on Infura would be omitted. At present and for the development of the prototype, the decision was made to use Infura for reasons of simplicity. Infura can be used for IPFS free of charge and without registration.
+Infura\footnote{https://infura.io/} is a service that provides access to Ethereum and \ac{IPFS} via a simple interface. Communication with the \ac{API} happens via \ac{HTTP} requests, so connecting \ac{IPFS} to Hybrid \ac{OSN} is uncomplicated. The use of an additional system typically entails an additional risk. However, there is a JavaScript client for \ac{IPFS} that could be integrated into Hybrid \ac{OSN}, which would remove the dependency on Infura. At present and for the development of the prototype, the decision was made to use Infura for reasons of simplicity. Infura can be used for \ac{IPFS} free of charge and without registration.
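The interaction with Infura essentially consists of two \ac{HTTP} calls against the public \ac{IPFS} \ac{API}. The following sketch assumes the endpoints \texttt{/api/v0/add} and \texttt{/api/v0/cat} as offered by Infura at the time of writing and uses the browser's fetch \ac{API}; error handling is omitted.

\begin{lstlisting}[caption={Sketch: storing and loading data in IPFS via Infura (TypeScript)}]
const INFURA_BASE = "https://ipfs.infura.io:5001/api/v0"; // assumed base URL

// Stores (already encrypted) data in IPFS via Infura and returns its hash.
async function ipfsAdd(payload: string): Promise<string> {
  const body = new FormData();
  body.append("file", new Blob([payload]));
  const res = await fetch(`${INFURA_BASE}/add`, { method: "POST", body });
  const json = await res.json();
  return json.Hash; // content hash that addresses the data in IPFS
}

// Loads the data addressed by a hash from IPFS via Infura.
async function ipfsCat(hash: string): Promise<string> {
  const res = await fetch(`${INFURA_BASE}/cat?arg=${hash}`);
  return res.text();
}
\end{lstlisting}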
 
 \begin{figure}[h!]
 	\centering
@@ -23,30 +23,30 @@ Infura\footnote{https://infura.io/} is a service that provides access to Ethereu
 \subsection{White Box View}
 \label{sec:white-box}
 
-The used Ionic framework uses Angular in the core, in the concrete case of Hybrid OSN Angular 5.2 is used. Accordingly, the Hybrid OSN app is in principle an Angular application. The essential building blocks are components, pages, and providers. In the following, these components are described in detail and examples are given of where they are used in Hybrid OSN.
+The Ionic framework is built on Angular at its core; in the concrete case of Hybrid \ac{OSN}, Angular 5.2 is used. Accordingly, the Hybrid \ac{OSN} app is in principle an Angular application. The essential building blocks are components, pages, and providers. In the following, these building blocks are described in detail, and examples are given of where they are used in Hybrid \ac{OSN}.
 
 \subsubsection{Providers}
 \label{providers}
-Data access is performed using providers (known as services in Angular). For the external services (Twitter API, P2P database, P2P storage), there is one provider each to handle the communication. Besides, providers are used as helper classes that provide specific functionality that is used several times. This functionality includes, for example, encryption and decryption and the compilation of aggregated timelines. Providers are injected into components using the constructor. Table \ref{tab:providers} lists all providers used in Hybrid OSN and their functional descriptions.
+Data access is performed using providers (known as services in Angular). For each of the external services (Twitter \ac{API}, \ac{P2P} database, \ac{P2P} storage), there is one provider that handles the communication. In addition, providers are used as helper classes that offer functionality needed in several places, for example encryption and decryption or the compilation of aggregated timelines. Providers are injected into components using the constructor. Table \ref{tab:providers} lists all providers used in Hybrid \ac{OSN} and their purpose.
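Constructor injection of providers follows the usual Angular pattern, as the following sketch indicates; the class names and comments are chosen for illustration and are not necessarily identical to the Hybrid \ac{OSN} source code.

\begin{lstlisting}[caption={Sketch: providers and constructor injection in Angular (TypeScript)}]
import { Injectable } from "@angular/core";

// Stubs standing in for the real providers (names and roles are assumptions).
@Injectable() export class TwitterApiProvider { /* wraps the Twitter API calls */ }
@Injectable() export class CryptoProvider { /* encryption, decryption, key generation */ }

// A provider (or page) receives its dependencies via constructor injection;
// Angular resolves the parameters from its injector.
@Injectable()
export class FeedProvider {
  constructor(
    private twitterApi: TwitterApiProvider,
    private crypto: CryptoProvider
  ) {}
}
\end{lstlisting}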
 
 \begin{table}[h!]
 	\begin{tabularx}{\textwidth}{|l|X|}
 		\hline
-		\textbf{Provider} & \textbf{Purpose}                                                                             \\ \hline
-		Auth              & Manage and perform authentication against the Twitter API. Responsible for login and logout. \\ \hline
-		Crypto            & Provides methods for encryption, decryption, and key generation                              \\ \hline
-		Feed              & Aggregation of private (P2P) and public (Twitter) tweets to compose a chronological timeline \\ \hline
-		P2P-Database-Gun  & Interface for data exchange with Gun                                                         \\ \hline
-		P2P-Storage-IPFS  & Interface for data exchange with IPFS via Infura                                             \\ \hline
-		Twitter-API       & Interface to use the Twitter API using the Twit package                                      \\ \hline
+		\textbf{Provider}      & \textbf{Purpose}                                                                                  \\ \hline
+		Auth                   & Manage and perform authentication against the Twitter \ac{API}. Responsible for login and logout. \\ \hline
+		Crypto                 & Provides methods for encryption, decryption, and key generation                                   \\ \hline
+		Feed                   & Aggregation of private (\ac{P2P}) and public (Twitter) tweets to compose a chronological timeline \\ \hline
+		\ac{P2P}-Database-Gun  & Interface for data exchange with Gun                                                              \\ \hline
+		\ac{P2P}-Storage-IPFS  & Interface for data exchange with \ac{IPFS} via Infura                                             \\ \hline
+		Twitter-API            & Interface to use the Twitter \ac{API} using the Twit package                                      \\ \hline
 	\end{tabularx}
-	\caption{Providers used in the Hybrid OSN app in alphabetical order with their purpose.}
+	\caption{Providers used in the Hybrid \ac{OSN} app in alphabetical order with their purpose.}
 	\label{tab:providers}
 \end{table}
 
 \subsubsection{Components}
 \label{sec:components}
-Components are the basic building blocks of a user interface. Figure \ref{fig:component-example} shows an example of the representation of a tweet in Hybrid OSN using various components. A component consists of an HTML template, CSS styling, and JavaScript logic, whereby the logic is typically limited to a minimum. Components can be used as elements in other components or pages. A component receives the data it is supposed to visualize. Furthermore, components can process events or return them to parent components for handling.
+Components are the basic building blocks of a user interface. Figure \ref{fig:component-example} shows an example of the representation of a tweet in Hybrid \ac{OSN} using various components. A component consists of an \ac{HTML} template, \ac{CSS} styling, and JavaScript logic, whereby the logic is typically limited to a minimum. Components can be used as elements in other components or pages. A component receives the data it is supposed to visualize. Furthermore, components can process events or return them to parent components for handling.
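A strongly simplified tweet component could look as follows; selector, template, and event names are illustrative rather than the actual Hybrid \ac{OSN} code.

\begin{lstlisting}[caption={Sketch: a simplified tweet component with input data and output event (TypeScript)}]
import { Component, Input, Output, EventEmitter } from "@angular/core";

@Component({
  selector: "tweet",
  template: `
    <p>{{ tweet?.full_text }}</p>
    <button (click)="like.emit(tweet)">Like</button>
  `,
})
export class TweetComponent {
  @Input() tweet: any;                      // data handed in by the parent
  @Output() like = new EventEmitter<any>(); // event bubbles up for handling
}
\end{lstlisting}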
 
 \begin{figure}[h!]
 	\centering
@@ -67,7 +67,7 @@ Table \ref{tab:pages} lists all pages and their purpose. When the app is opened,
 		\textbf{Page}                 & \textbf{Purpose}                                                                                                                         \\ \hline
 		About                         & Information about the app, which can be accessed via the login page to get basic information about the app before logging in             \\ \hline
 		Home                          & Chronological view of the latest tweets from Twitter and the private network                                                             \\ \hline
-		Login                         & Authentication against Twitter to use the Twitter API                                                                                    \\ \hline
+		Login                         & Authentication against Twitter to use the Twitter \ac{API}                                                                               \\ \hline
 		Profile                       & Presentation of a user profile consisting of the user data (profile picture, brief description, location, website) and the user timeline \\ \hline
 		Search                        & Container page for searching for tweets and users, where tweets are also divided into popular and recent (see Search-Results-Tweets-Tab) \\ \hline
 		Search-Results-Tweets-Popular & Search results of currently popular tweets for a given keyword                                                                           \\ \hline
@@ -77,10 +77,10 @@ Table \ref{tab:pages} lists all pages and their purpose. When the app is opened,
 		Settings                      & Configuration of keywords that trigger the private mode and settings regarding encryption                                                \\ \hline
 		Write-Tweet                   & Form for writing a tweet                                                                                                                 \\ \hline
 	\end{tabularx}
-	\caption{Pages used in the Hybrid OSN app in alphabetical order with their purpose.}
+	\caption{Pages used in the Hybrid \ac{OSN} app in alphabetical order with their purpose.}
 	\label{tab:pages}
 \end{table}
 
 \subsubsection{Local Storage}
 \label{sec:local-storage}
-As the name suggests, this is a local storage that is accessible by the app. With Hybrid OSN, this memory is used to store essential information for usage. These include the Twitter user id, the two tokens for accessing the Twitter API, the keywords that trigger the private mode, and private and public keys for encryption. Log out completely deletes the local storage.
+As the name suggests, this is a local storage that is accessible by the app. Hybrid \ac{OSN} uses this storage to keep the essential information needed for operation: the Twitter user id, the two tokens for accessing the Twitter \ac{API}, the keywords that trigger the private mode, and the private and public keys for encryption. Logging out completely deletes the local storage.
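A minimal sketch of this persistence is given below, here using the browser's localStorage directly; the app may use Ionic's storage abstraction instead, and the key names are assumptions.

\begin{lstlisting}[caption={Sketch: persisting session data locally and wiping it on logout (TypeScript)}]
// Persist the values that must survive app restarts (key names assumed).
function saveSession(userId: string, accessToken: string, tokenSecret: string): void {
  localStorage.setItem("twitter_user_id", userId);
  localStorage.setItem("twitter_access_token", accessToken);
  localStorage.setItem("twitter_access_token_secret", tokenSecret);
}

function savePrivateModeKeywords(keywords: string[]): void {
  localStorage.setItem("private_mode_keywords", JSON.stringify(keywords));
}

// Logging out wipes the entire local storage, including the key material.
function logout(): void {
  localStorage.clear();
}
\end{lstlisting}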

+ 3 - 3
thesis/content/05-proof-of-concept/insights.tex

@@ -1,4 +1,4 @@
-The requirements mentioned in \ref{sec:requirements} also include the provision of anonymized data for the OSN service provider. Since the business model of Twitter is based on personal data, and therefore the interests of hybrid OSN are contrary to those of Twitter, the fulfillment of this requirement is extremely complex.
+The requirements mentioned in \ref{sec:requirements} also include the provision of anonymized data for the \ac{OSN} service provider. Since Twitter's business model is based on personal data, the interests of Hybrid \ac{OSN} run contrary to those of Twitter, which makes fulfilling this requirement extremely complex.
 
 A prominent feature of Twitter is the analysis and promotion of trends (see Figure \ref{fig:twitter-trends}). The trends are identified through frequently used hashtags and presented in a ranking. Such data can also be collected and evaluated in the private network without the entries being attributable to individual users.
 
@@ -13,10 +13,10 @@ To collect this information, when a new tweet is posted to the private network,
 	\end{subfigure}
 	\begin{subfigure}[b]{0.49\textwidth}
 		\includegraphics[width=\textwidth]{hybrid-osn-trends}
-		\caption{Hybrid OSN trends}
+		\caption{Hybrid \ac{OSN} trends}
 		\label{fig:hybrid-osn-trends}
 	\end{subfigure}
 	\caption{Trending hashtags in Twitter and the private network side by side}
 \end{figure}
 
-Because Gun is JavaScript-based and therefore executable in the web browser, access to the data from a simple HTML web page can be performed using JavaScript code. The raw data is loaded and then aggregated and displayed.
+Because Gun is JavaScript-based and therefore executable in the web browser, access to the data from a simple \ac{HTML} web page can be performed using JavaScript code. The raw data is loaded and then aggregated and displayed.
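A dashboard page could aggregate the hashtags stored in Gun roughly as follows; the peer address and the node name \texttt{hashtags} are assumptions about how Hybrid \ac{OSN} writes its entries.

\begin{lstlisting}[caption={Sketch: aggregating hashtag trends from Gun in the browser (TypeScript)}]
declare const Gun: any; // provided by the Gun client script included in the page

const gun = Gun(["https://example-gun-peer.invalid/gun"]); // hypothetical peer
const counts: Record<string, number> = {};
const seen = new Set<string>();

// Iterate over all entries below the (assumed) top-level "hashtags" node
// and count how often each hashtag occurs.
gun.get("hashtags").map().on((entry: any, key: string) => {
  if (!entry || !entry.hashtag || seen.has(key)) return;
  seen.add(key);
  counts[entry.hashtag] = (counts[entry.hashtag] || 0) + 1;
  render();
});

// Render a simple top-10 ranking, comparable to Twitter's trend list.
function render(): void {
  const ranking = Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 10)
    .map(([tag, n]) => `#${tag} (${n})`)
    .join("\n");
  document.getElementById("trends")!.innerText = ranking;
}
\end{lstlisting}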

+ 1 - 1
thesis/content/05-proof-of-concept/introduction.tex

@@ -1 +1 @@
-After working out the concept of a hybrid OSN in the previous chapter \ref{ch:concept}, this chapter presents a proof of concept in the form of an individual Twitter client for Android. The previously described solution strategies, as well as functional and non-functional requirements, actively influenced the development of the client, and attention was also paid to compliance with the defined quality goals. In the following, the decisions made are explained, and the resulting architecture is presented.
+After working out the concept of a hybrid \ac{OSN} in the previous chapter \ref{ch:concept}, this chapter presents a proof of concept in the form of an individual Twitter client for Android. The previously described solution strategies, as well as functional and non-functional requirements, actively influenced the development of the client, and attention was also paid to compliance with the defined quality goals. In the following, the decisions made are explained, and the resulting architecture is presented.

+ 2 - 2
thesis/content/05-proof-of-concept/objective.tex

@@ -1,7 +1,7 @@
-With the proof of concept, the basic feasibility of the idea of an extension of an established OSN by a secure data exchange should be proven. Within the framework of this thesis, the task was to provide the proof of concept as a native Android app. Concerning the architecture, the focus from the beginning was on a P2P solution, which is why a solution with its additional servers was not pursued further in the development of the prototype. Furthermore, an interface should be available for the service provider of the OSN, through which anonymized information can be obtained from the privately exchanged data.
+With the proof of concept, the basic feasibility of extending an established \ac{OSN} with secure data exchange should be demonstrated. Within the framework of this thesis, the task was to provide the proof of concept as a native Android app. Concerning the architecture, the focus was on a \ac{P2P} solution from the beginning, which is why a solution with additional servers was not pursued further in the development of the prototype. Furthermore, an interface should be available for the service provider of the \ac{OSN}, through which anonymized information can be obtained from the privately exchanged data.
 
 Even though the implementation as an add-on, shown in the previous chapter as a possible solution strategy, was thus fundamentally ruled out, it nevertheless influenced the decisions made. It was always considered how the architecture could be kept open and flexible enough to enable all kinds of extensions and clients.
 
-Since it is only a proof of concept, the mapping of the complete functionality of the OSN was not the highest priority. However, again, this was taken into account by considerations and decisions, how for example data formats can be arranged so flexible that every function can be mapped.
+Since it is only a proof of concept, mapping the complete functionality of the \ac{OSN} was not the highest priority. However, this was again taken into account in the decisions made, for example by designing the data formats flexibly enough that every function can be mapped.
 
 Also, the focus was on compliance with the quality goals and implementation of the functional and non-functional requirements as defined in the previous chapter. How well this has been achieved will be evaluated in the following chapter for evaluation and discussion.

+ 13 - 13
thesis/content/05-proof-of-concept/osn-selection.tex

@@ -1,24 +1,24 @@
-When selecting a suitable OSN for the development of a hybrid client, Facebook was the obvious first choice due to the numerous negative headlines about data protection. With over two billion users per month, it is currently the most widely used social network in the world. In the recent past, it has often been criticized for its handling of its users' data. In particular, the scandal surrounding the data analysis company Cambridge Analytica, which had access to the data of up to 87 million users, hit Facebook hard. As a result, CEO Mark Zuckerberg had to face the US Congress and the EU Parliament in question rounds and did not leave a good impression by avoiding many questions. As a result of this scandal, there were further restrictions to the Facebook API.
+When selecting a suitable \ac{OSN} for the development of a hybrid client, Facebook was the obvious first choice due to the numerous negative headlines about data protection. With over two billion users per month, it is currently the most widely used social network in the world. In the recent past, it has often been criticized for its handling of its users' data. In particular, the scandal surrounding the data analysis company Cambridge Analytica, which had access to the data of up to 87 million users, hit Facebook hard. As a result, CEO Mark Zuckerberg had to face the US Congress and the EU Parliament in question rounds and did not leave a good impression by avoiding many questions. As a result of this scandal, there were further restrictions to the Facebook \ac{API}.
 
-However, the Facebook API is not suitable for developing a new client. The functionalities offered by the API offer the possibility to develop an app that can be used within Facebook, for example for a game. So, it is not possible to make a like for a post through this API, which is part of the core functionality of a Facebook client. As discussed in Chapter \ref{ch:concept}, it is possible to access the data through crawling. However, the constant and rapid development would make this an arduous undertaking. Facebook writes in a blog post\cite{facebook2017release} that the code changes every few hours. Therefore it is almost impossible to adjust the crawler fast enough and roll out the adjusted code.
+However, the Facebook \ac{API} is not suitable for developing a new client. The functionality offered by the \ac{API} makes it possible to develop an app that is used within Facebook, for example a game. It is, for instance, not possible to like a post through this \ac{API}, although this is part of the core functionality of a Facebook client. As discussed in Chapter \ref{ch:concept}, it is possible to access the data through crawling. However, the constant and rapid development would make this an arduous undertaking. Facebook writes in a blog post\cite{facebook2017release} that the code changes every few hours. It is therefore almost impossible to adjust the crawler fast enough and roll out the adjusted code.
 
 Even the mixed approach of displaying and manipulating the mobile website in a WebView inside a container app does not seem to be an option due to the short release cycles and frequent changes. Apps like \enquote{Friendly for Facebook} do not manage to keep up with the changes, as reported in various user ratings on the Google Play Store. The faulty rendering and bugs worsen the user experience and frustrate users.
 
-For this number of reasons, Facebook dropped out as an OSN candidate for the prototype despite the particular interest. As a further candidate, the OSN Google Plus was dropped, as Google announced in October 2018 that it would discontinue its OSN\cite{google-plus2018shutdown}.
+For these reasons, Facebook was dropped as an \ac{OSN} candidate for the prototype despite the particular interest in it. As a further candidate, the \ac{OSN} Google Plus was dropped, as Google announced in October 2018 that it would discontinue its \ac{OSN}\cite{google-plus2018shutdown}.
 
-In the end, Twitter was chosen for the prototype. With 336 million active users per month, it is one of the largest social networks. It is particularly well suited for the development of a hybrid client for two reasons: on the one hand, it has a comprehensive API that provides almost full functionality free of charge, and on the other hand, compared to Facebook, it offers only a few simple functions. These are the ideal prerequisites for the first proof of concept.
+In the end, Twitter was chosen for the prototype. With 336 million active users per month, it is one of the largest social networks. It is particularly well suited for the development of a hybrid client for two reasons: on the one hand, it has a comprehensive \ac{API} that provides almost full functionality free of charge, and on the other hand, compared to Facebook, it offers only a few simple functions. These are the ideal prerequisites for the first proof of concept.
 
-Twitter offers different APIs for developers that serve different purposes. The current APIs are:
+Twitter offers different \acp{API} for developers that serve different purposes. The current \acp{API} are:
 \begin{itemize}
-	\item \textbf{Standard API}: the free and public API offering basic query functionality and foundational access to Twitter data.
-	\item \textbf{Premium API}: introduced in November 2017 to close the gap between Standard and Entrprise API. Improvements over the Standard API: \enquote{more Tweets per request, higher rate limits, a counts endpoint that returns time-series counts of Tweets, more complex queries and metadata enrichments, such as expanded URLs and improved profile geo information}\footnote{https://blog.twitter.com/developer/en\_us/topics/tools/2017/introducing-twitter-premium-apis.html}. Prices to use this API start at 149\$/month.
-	\item \textbf{Enterprise API}: tailored packages with annual contracts for those who depend on Twitter data.
-	\item \textbf{Ads API}: this API is only of interest for creating and managing ad campaigns.
-	\item \textbf{Twitter for websites}: this is more a suite of tools than an API. It's free to use and enables people to embed tweets and tweet buttons on their website.
+	\item \textbf{Standard \ac{API}}: the free and public \ac{API} offering basic query functionality and foundational access to Twitter data.
+	\item \textbf{Premium \ac{API}}: introduced in November 2017 to close the gap between the Standard and Enterprise \ac{API}. Improvements over the Standard \ac{API}: \enquote{more Tweets per request, higher rate limits, a counts endpoint that returns time-series counts of Tweets, more complex queries and metadata enrichments, such as expanded URLs and improved profile geo information}\footnote{https://blog.twitter.com/developer/en\_us/topics/tools/2017/introducing-twitter-premium-apis.html}. Prices to use this \ac{API} start at \$149 per month.
+	\item \textbf{Enterprise \ac{API}}: tailored packages with annual contracts for those who depend on Twitter data.
+	\item \textbf{Ads \ac{API}}: this \ac{API} is only of interest for creating and managing ad campaigns.
+	\item \textbf{Twitter for websites}: this is more a suite of tools than an \ac{API}. It's free to use and enables people to embed tweets and tweet buttons on their website.
 \end{itemize}
 
-In the case of the hybrid client, the standard API is the right one. For using the API, first, the registration of a \enquote{Twitter App} is necessary to receive a consumer key and access token. These two authentication tokens are required to log in users via the hybrid app and successfully communicate with the API.
+In the case of the hybrid client, the standard \ac{API} is the right one. For using the \ac{API}, first, the registration of a \enquote{Twitter App} is necessary to receive a consumer key and access token. These two authentication tokens are required to log in users via the hybrid app and successfully communicate with the \ac{API}.
 
-Twitter offers almost the entire range of functions via the API. The missing functionality (e.g., the targeted retrieval of replies to a tweet) is not so critical for building a client app. A significant limitation is a restriction on the number of requests. Twitter argues that this restriction is necessary to avoid the exposure of the system to too much load. It also aims to prevent bots from abusing Twitter. The exact limits can be found on a help page\footnote{https://developer.twitter.com/en/docs/basics/rate-limits}. In the app stores of Google and Apple, there are a number of alternative Twitter clients (Twitterific\footnote{https://itunes.apple.com/de/app/twitterrific-5-for-twitter/id580311103?mt=8}, Talon for Twitter\footnote{https://play.google.com/store/apps/details?id=com.klinker.android.twitter\_l}, Fenix 2 for Twitter\footnote{https://play.google.com/store/apps/details?id=it.mvilla.android.fenix2}), which are also subject to these restrictions in terms of functionality and request restrictions.
+Twitter offers almost the entire range of functions via the \ac{API}. The missing functionality (e.g., the targeted retrieval of replies to a tweet) is not so critical for building a client app. A significant limitation is a restriction on the number of requests. Twitter argues that this restriction is necessary to avoid the exposure of the system to too much load. It also aims to prevent bots from abusing Twitter. The exact limits can be found on a help page\footnote{https://developer.twitter.com/en/docs/basics/rate-limits}. In the app stores of Google and Apple, there are a number of alternative Twitter clients (Twitterific\footnote{https://itunes.apple.com/de/app/twitterrific-5-for-twitter/id580311103?mt=8}, Talon for Twitter\footnote{https://play.google.com/store/apps/details?id=com.klinker.android.twitter\_l}, Fenix 2 for Twitter\footnote{https://play.google.com/store/apps/details?id=it.mvilla.android.fenix2}), which are also subject to these restrictions in terms of functionality and request restrictions.
 
-The API can be accessed using HTTP requests. The data exchanged are in JSON format. Furthermore, there are also various libraries (e.g., Twit\footnote{https://github.com/ttezel/twit}), some of which are developed directly by Twitter (see Twitter Kit for Android\footnote{https://github.com/twitter/twitter-kit-android} or iOS\footnote{https://github.com/twitter/twitter-kit-ios}) and simplify the use of the API.
+The \ac{API} can be accessed using \ac{HTTP} requests. The data exchanged are in \ac{JSON} format. Furthermore, there are also various libraries (e.g., Twit\footnote{https://github.com/ttezel/twit}), some of which are developed directly by Twitter (see Twitter Kit for Android\footnote{https://github.com/twitter/twitter-kit-android} or iOS\footnote{https://github.com/twitter/twitter-kit-ios}) and simplify the use of the \ac{API}.
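With the Twit package, the tokens obtained during app registration and user login are passed once; afterwards, the endpoints of the Standard \ac{API} can be addressed directly. A brief sketch with placeholder credentials follows.

\begin{lstlisting}[caption={Sketch: calling the Twitter Standard API with Twit (TypeScript)}]
import Twit = require("twit");

const T = new Twit({
  consumer_key: "...",           // of the registered Twitter App
  consumer_secret: "...",
  access_token: "...",           // of the logged-in user
  access_token_secret: "...",
});

// Example call: fetch the latest 20 tweets of the home timeline as JSON.
T.get("statuses/home_timeline", { count: 20 }, (err, data) => {
  if (!err) {
    console.log(data);
  }
});
\end{lstlisting}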

+ 6 - 6
thesis/content/05-proof-of-concept/runtime-view.tex

@@ -23,17 +23,17 @@ On the write-tweet page, new tweets can be written and posted using a form. In a
 
 After entering the message in the input field, determining the destination network, and pressing the \enquote{TWEET!} button, processing starts in the WriteTweetPage class. If the message is destined for Twitter, the Twitter API provider sends an HTTP POST request with the data to the \texttt{statuses/update}\footnote{https://developer.twitter.com/en/docs/tweets/post-and-engage/api-reference/post-statuses-update} interface. In this case, nothing more needs to be done, as the Twitter API takes over the preparation of the data and extracts, for example, hashtags, mentions, and URLs.
 
-When publishing in the private network, the system first checks whether the public key has already been published. The Crypto provider performs this check using the Twitter API provider (for further information about the handling of public keys see the section about security \ref{sec:security}). If the public key has not yet been published, the user receives a warning, and the posting process is aborted. Otherwise, the private tweet will be constructed. The entered text is converted into a simplified tweet object (see Twitter documentation for original tweet object\footnote{https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object.html}) that contains the essential information.
+When publishing in the private network, the system first checks whether the public key has already been published. The Crypto provider performs this check using the Twitter \ac{API} provider (for further information about the handling of public keys see the section about security \ref{sec:security}). If the public key has not yet been published, the user receives a warning, and the posting process is aborted. Otherwise, the private tweet will be constructed. The entered text is converted into a simplified tweet object (see Twitter documentation for original tweet object\footnote{https://developer.twitter.com/en/docs/tweets/data-dictionary/overview/tweet-object.html}) that contains the essential information.
 
 Besides the message (\texttt{full\_text}), the Twitter user id (\texttt{user\_id}) and the timestamp (\texttt{created\_at}) are set. In addition, there is a flag (\texttt{private\_tweet: true}) to mark the private tweet, which later influences how it is rendered. The \texttt{display\_text\_range} indicates the beginning and end of the relevant part of \texttt{full\_text}. For private tweets, this is always the entire text; for tweets from the Twitter API, an additional, unneeded URL may be appended, which is cut off by this range. Furthermore, the tweet entities are extracted. The extraction includes URLs, hashtags and user mentions. An example of a private tweet is shown in Listing \ref{listing:private-tweet}.
 
-\lstinputlisting[label=listing:private-tweet, caption=Private Tweet in JSON format]{listings/private-tweet.json}
+\lstinputlisting[label=listing:private-tweet, caption=Private Tweet in \ac{JSON} format]{listings/private-tweet.json}
 
-The crypto provider performs the encryption of the private tweet data. For asymmetrical encryption, the RSA algorithm is used. The P2P storage provider sends the encrypted data via an HTTP POST request to Infura for storage in IPFS. The response contains the hash which addresses the data in IPFS. This hash is stored in Gun together with the timestamp and the author's Twitter user id. For saving to Gun, the P2P DB provider is used. Besides, the previously extracted hashtags with the timestamp are also stored in Gun with the same provider so that the data in the dashboard is accessible to the service provider without having to conclude individual users.
+The crypto provider performs the encryption of the private tweet data. For asymmetrical encryption, the RSA algorithm is used. The \ac{P2P} storage provider sends the encrypted data via an \ac{HTTP} POST request to Infura for storage in \ac{IPFS}. The response contains the hash that addresses the data in \ac{IPFS}. This hash is stored in Gun together with the timestamp and the author's Twitter user id; for saving to Gun, the \ac{P2P} \ac{DB} provider is used. In addition, the previously extracted hashtags are stored in Gun with their timestamps by the same provider, so that the data in the dashboard is accessible to the service provider without allowing conclusions to be drawn about individual users.
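Writing the reference to Gun then only involves a small record per tweet. The following sketch assumes a node layout of \texttt{tweets} keyed by the Twitter user id and a separate \texttt{hashtags} node; the actual structure used by Hybrid \ac{OSN} may differ.

\begin{lstlisting}[caption={Sketch: storing the IPFS hash and the extracted hashtags in Gun (TypeScript)}]
declare const Gun: any; // Gun client used by the P2P DB provider

const gun = Gun(["https://example-gun-peer.invalid/gun"]); // hypothetical peer

// Store the IPFS hash of an encrypted private tweet under the author's user id.
function publishPrivateTweetRef(userId: string, ipfsHash: string, createdAt: number): void {
  gun.get("tweets").get(userId).set({ hash: ipfsHash, created_at: createdAt });
}

// Store the extracted hashtags separately, without any reference to the author,
// so the service provider can aggregate them anonymously.
function publishHashtag(hashtag: string, createdAt: number): void {
  gun.get("hashtags").set({ hashtag, created_at: createdAt });
}
\end{lstlisting}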
 
 \subsection{Load the Home Timeline}
 \label{sec:load-home-timeline}
-When opening the home page, the logged in user gets the latest tweets of the accounts he follows chronologically presented. The tweets are loaded in batches of 20 tweets from the Twitter API and enriched with the private tweets for this period. If the user scrolls to the end of the feed, the reloading of the next batch is triggered and automatically inserted at the end. At the top of the feed, a \enquote{pull to refresh} action intents the feed reloading. The loading process is shown in Figure \ref{fig:home-timeline-flow-chart} as a flow chart and in Figure \ref{fig:home-timeline-building-block-view} as a building block view of the interacting components.
+When opening the home page, the logged-in user is presented with the latest tweets of the accounts he follows in chronological order. The tweets are loaded in batches of 20 tweets from the Twitter \ac{API} and enriched with the private tweets for this period. If the user scrolls to the end of the feed, the next batch is loaded and automatically appended to the end. At the top of the feed, a \enquote{pull to refresh} gesture triggers a reload of the feed. The loading process is shown in Figure \ref{fig:home-timeline-flow-chart} as a flow chart and in Figure \ref{fig:home-timeline-building-block-view} as a building block view of the interacting components.
 
 \begin{figure}[h!]
 	\centering
@@ -49,8 +49,8 @@ When opening the home page, the logged in user gets the latest tweets of the acc
 	\label{fig:home-timeline-building-block-view}
 \end{figure}
 
-The starting point is the home page, which is accessed by the user. Several components display the data obtained from the feed provider. Using the Twitter API provider, the feed provider loads the latest 20 timeline tweets via the corresponding interface (\texttt{statuses/home\_timeline}\footnote{https://developer.twitter.com/en/docs/tweets/timelines/yapi-reference/get-statuses-home\_timeline.html}) via an HTTP GET request.
+The starting point is the home page, which is accessed by the user. Several components display the data obtained from the feed provider. Using the Twitter \ac{API} provider, the feed provider loads the latest 20 timeline tweets from the corresponding endpoint (\texttt{statuses/home\_timeline}\footnote{https://developer.twitter.com/en/docs/tweets/timelines/yapi-reference/get-statuses-home\_timeline.html}) via an \ac{HTTP} GET request.
 
-The next step is to load the private tweets that match the period marked by the current timestamp and timestamp of the 20th (the oldest) tweet. Furthermore, the user ids of the users the user follows (so-called friends) are required. These must initially be requested from the Twitter API (\texttt{friends/list}\footnote{https://developer.twitter.com/en/docs/accounts-and-users/follow-search-get-users/api-reference/get-friends-list}) via the Twitter Feed provider using an HTTP GET request. The loaded user ids are cached in order to keep the number of requests to the Twitter API to a minimum for later loading processes. For each user id, a lookup for private tweets in the given period is performed. The P2P DB provider queries Gun. If there are private tweets, the hashes for IPFS are returned together with the \texttt{created\_at} timestamp. If no private tweets are available for the period, the feed provider returns the data of the public tweets to the home page. Otherwise, next, the private tweets are loaded and decrypted. First, the P2P storage provider is used to load the data behind the hash addresses from IPFS via Infura. For this purpose, the hash is transferred to Infura with an HTTP GET request, and the data is received from IPFS as the response. The author's public key, which can be obtained from the user's public key history, is needed for decryption. The public key history is loaded and decrypted via the crypto provider, which in turn uses the Twitter API provider. Afterward, the private tweet is decrypted.
+The next step is to load the private tweets that fall into the period between the current timestamp and the timestamp of the 20th (the oldest) tweet. Furthermore, the user ids of the accounts the user follows (so-called friends) are required. These must initially be requested from the Twitter \ac{API} (\texttt{friends/list}\footnote{https://developer.twitter.com/en/docs/accounts-and-users/follow-search-get-users/api-reference/get-friends-list}) via the Twitter Feed provider using an \ac{HTTP} GET request. The loaded user ids are cached in order to keep the number of requests to the Twitter \ac{API} to a minimum for later loading processes. For each user id, a lookup for private tweets in the given period is performed. The \ac{P2P} \ac{DB} provider queries Gun. If there are private tweets, the \ac{IPFS} hashes are returned together with the \texttt{created\_at} timestamps. If no private tweets are available for the period, the feed provider returns the data of the public tweets to the home page. Otherwise, the private tweets are loaded and decrypted next. First, the \ac{P2P} storage provider is used to load the data behind the hash addresses from \ac{IPFS} via Infura. For this purpose, the hash is transferred to Infura with an \ac{HTTP} GET request, and the data is received from \ac{IPFS} as the response. The author's public key, which can be obtained from the user's public key history, is needed for decryption. The public key history is loaded and decrypted via the crypto provider, which in turn uses the Twitter \ac{API} provider. Afterward, the private tweet is decrypted.
 
 Finally, the private and public tweets are merged and sorted according to their \texttt{created\_at} timestamp in descending order. This data is returned to the home page. If the user triggers a reload by reaching the end of the feed or by \enquote{pull to refresh}, the previously described process starts again.
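+
+The complete loading flow can be summarized in a short TypeScript sketch. The four helper functions stand for the providers described above; their names and signatures are assumptions and are not taken from the actual Hybrid \ac{OSN} code base.
+
+\begin{lstlisting}[label=listing:sketch-timeline-flow, caption={Illustrative sketch: loading and merging public and private tweets for the home timeline}]
+// Sketch of the timeline loading flow; the helper functions stand for
+// the providers described in the text and are assumed, not actual code.
+declare function twitterGet(path: string, params: object): Promise<any[]>;     // Twitter API provider
+declare function loadGunEntries(userId: string):
+  Promise<{ hash: string; created_at: string }[]>;                             // P2P DB provider
+declare function fetchFromIpfs(hash: string): Promise<string>;                 // P2P storage provider (via Infura)
+declare function decryptPrivateTweet(cipher: string, authorId: string,
+  createdAt: string): Promise<any>;                                            // crypto provider
+
+async function loadHomeTimeline(): Promise<any[]> {
+  // 1. Latest 20 public tweets from the Twitter API
+  const publicTweets = await twitterGet('statuses/home_timeline', { count: 20 });
+  const oldest = new Date(publicTweets[publicTweets.length - 1].created_at);
+
+  // 2. Friends (cached in the real app) and their private tweets
+  const friends = await twitterGet('friends/list', {});
+  const tweets = [...publicTweets];
+  for (const friend of friends) {
+    const entries = (await loadGunEntries(friend.id_str))
+      .filter(e => new Date(e.created_at) >= oldest);   // only the visible period
+    for (const entry of entries) {
+      const cipher = await fetchFromIpfs(entry.hash);   // encrypted payload from IPFS
+      tweets.push(await decryptPrivateTweet(cipher, friend.id_str, entry.created_at));
+    }
+  }
+
+  // 3. Merge and sort by created_at, newest first
+  return tweets.sort((a, b) =>
+    new Date(b.created_at).getTime() - new Date(a.created_at).getTime());
+}
+\end{lstlisting}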

+ 5 - 5
thesis/content/05-proof-of-concept/security.tex

@@ -4,7 +4,7 @@ Tweets are usually posted publicly on Twitter. Only those who explicitly set the
 
 The following three requirements apply to encryption:
 \begin{enumerate}
-	\item The author is verifiable. It is not possible to distribute tweets on behalf of another user on the P2P network.
+	\item The author is verifiable. It is not possible to distribute tweets on behalf of another user on the \ac{P2P} network.
 	\item A private tweet should have the same visibility to other users as a standard tweet on Twitter.
 	\item The service provider (Twitter) must not be able to decrypt private tweets or associate them with a user.
 \end{enumerate}
@@ -13,17 +13,17 @@ Concrete actions can be concluded to meet these requirements:
 \begin{enumerate}
 	\item Private tweets must be signed or asymmetrically encrypted so that the author is identifiable.
 	\item Distribution of the public key for decryption must take place via the user's profile. The profile is the only place that guarantees that only authorized users can access the key and that the public key belongs to a specific user without any doubt.
-	\item The Hybrid OSN application must encrypt the public keys so that Twitter cannot read them and therefore cannot decrypt the private tweets.
+	\item The Hybrid \ac{OSN} application must encrypt the public keys so that Twitter cannot read them and therefore cannot decrypt the private tweets.
 \end{enumerate}
 
 \subsection{Realization}
 \label{sec:security-relaization}
-In the app settings, an asymmetric key pair can be stored or generated, which is used to encrypt the private tweets. The RSA-OAEP algorithm is used here. Furthermore, by clicking on a particular button, the public key is published. The new key together with the current timestamp is recorded in the public key history of the user and stored in IPFS. Listing \ref{listing:public-key-history} shows an example for the public key history. In JSON format, public key and validity start are stored in an array. The address at which the public key history can be retrieved is posted as a regular tweet in the user's timeline during publication. So that this tweet can be found quickly and easily; the id of this tweet is saved in the profile description of the user.
+In the app settings, an asymmetric key pair can be stored or generated, which is used to encrypt the private tweets. The RSA-OAEP algorithm is used here. Furthermore, by clicking a dedicated button, the public key is published. The new key, together with the current timestamp, is recorded in the user's public key history and stored in \ac{IPFS}. Listing \ref{listing:public-key-history} shows an example of the public key history. In \ac{JSON} format, the public key and the start of its validity are stored in an array. The address at which the public key history can be retrieved is posted as a regular tweet in the user's timeline during publication. So that this tweet can be found quickly and easily, its id is saved in the user's profile description.
 
-\lstinputlisting[label=listing:public-key-history, caption=Public key history in JSON format. The file is symmetrically encrypted before storing on IPFS.]{listings/key-history.json}
+\lstinputlisting[label=listing:public-key-history, caption=Public key history in \ac{JSON} format. The file is symmetrically encrypted before storing on \ac{IPFS}.]{listings/key-history.json}
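+
+A minimal sketch of the key generation with the Web Crypto \ac{API} is given below. The field names of the history entries are illustrative assumptions; the actual structure is the one shown in Listing \ref{listing:public-key-history}.
+
+\begin{lstlisting}[label=listing:sketch-keygen, caption={Illustrative sketch: generating an RSA-OAEP key pair and extending the public key history}]
+// Sketch: generate an RSA-OAEP key pair with the Web Crypto API and
+// append the new public key to the key history. Field names are
+// illustrative; see the listing above for the actual structure.
+interface KeyHistoryEntry { key: string; validFrom: string }
+
+async function addKeyToHistory(history: KeyHistoryEntry[]):
+  Promise<{ keyPair: CryptoKeyPair; history: KeyHistoryEntry[] }> {
+  const keyPair = await crypto.subtle.generateKey(
+    {
+      name: 'RSA-OAEP',
+      modulusLength: 2048,
+      publicExponent: new Uint8Array([0x01, 0x00, 0x01]),
+      hash: 'SHA-256',
+    },
+    true,                        // extractable, so the public key can be exported
+    ['encrypt', 'decrypt']
+  );
+
+  // Export the public key (SPKI, base64-encoded) for publication
+  const spki = await crypto.subtle.exportKey('spki', keyPair.publicKey);
+  const publicKeyB64 = btoa(String.fromCharCode(...new Uint8Array(spki)));
+
+  return {
+    keyPair,
+    history: [...history, { key: publicKeyB64, validFrom: new Date().toISOString() }],
+  };
+}
+\end{lstlisting}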
 
 By saving this information on the user's profile, it is ensured that only users who can read the user's regular tweets have access to the user's public keys and hence to his private tweets. The history allows the key pair to be changed while ensuring that older private tweets remain decryptable.
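+
+How the matching key can be selected from such a history is sketched below; the field names are again illustrative assumptions rather than the actual implementation.
+
+\begin{lstlisting}[label=listing:sketch-key-selection, caption={Illustrative sketch: selecting the public key that was valid when a private tweet was created}]
+// Sketch: pick the public key that was valid at the creation time of a
+// private tweet, based on the validity start recorded in the history.
+interface KeyHistoryEntry { key: string; validFrom: string } // illustrative names
+
+function keyForTimestamp(history: KeyHistoryEntry[], createdAt: string): string | undefined {
+  const created = new Date(createdAt).getTime();
+  return history
+    .filter(e => new Date(e.validFrom).getTime() <= created)  // keys already valid then
+    .sort((a, b) => new Date(b.validFrom).getTime()
+                  - new Date(a.validFrom).getTime())[0]?.key; // newest of those
+}
+\end{lstlisting}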
 
-Since Twitter has access to all the data stored on its servers, it can also find the link to the public key history. Therefore, it is necessary to prevent Twitter from decrypting the private tweets of the user by retrieving the public key history. For this reason, the public key history is additionally encrypted symmetrically with the AES-GCM algorithm. The key is stored in the Hybrid OSN app and therefore unknown to Twitter.
+Since Twitter has access to all the data stored on its servers, it can also find the link to the public key history. Therefore, it must be prevented that Twitter retrieves the public key history and thereby decrypts the user's private tweets. For this reason, the public key history is additionally encrypted symmetrically with the \ac{AES}-\ac{GCM} algorithm. The key is stored in the Hybrid \ac{OSN} app and therefore unknown to Twitter.
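+
+The symmetric protection of the key history could look as follows with the Web Crypto \ac{API}. \texttt{APP\_SECRET} is a placeholder for the key material embedded in the app, and the function name is an assumption.
+
+\begin{lstlisting}[label=listing:sketch-aes-history, caption={Illustrative sketch: AES-GCM encryption of the public key history}]
+// Sketch: symmetrically encrypt the JSON-serialized public key history
+// with AES-GCM before storing it on IPFS. APP_SECRET is a placeholder
+// for the key material embedded in the app, NOT the real key.
+const APP_SECRET = new Uint8Array(32);
+
+async function encryptKeyHistory(history: object):
+  Promise<{ iv: Uint8Array; data: ArrayBuffer }> {
+  const key = await crypto.subtle.importKey(
+    'raw', APP_SECRET, 'AES-GCM', false, ['encrypt']);
+  const iv = crypto.getRandomValues(new Uint8Array(12));  // fresh nonce per encryption
+  const plaintext = new TextEncoder().encode(JSON.stringify(history));
+  const data = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext);
+  return { iv, data };
+}
+\end{lstlisting}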
 
 When writing a new private tweet, the system checks whether the public key has been published before posting. Only if this is the case is the private tweet encrypted with the private key and posted.

+ 16 - 16
thesis/content/06-discussion/threat-model.tex

@@ -1,45 +1,45 @@
-In the threat model of Hybrid OSN the potential threats for different sub-areas are shown, and the particular risk discussed. The worst would be if private data could be decrypted and assigned to a user or if identity abuse were possible. However, other dangers such as identification by the service provider or manipulation of data must also be analyzed.
+In the threat model of Hybrid \ac{OSN}, the potential threats to the different sub-areas are presented, and the respective risks are discussed. The worst case would be if private data could be decrypted and assigned to a user, or if identity abuse were possible. However, other dangers, such as identification by the service provider or manipulation of data, must also be analyzed.
 
 \subsection{Service Provider – Twitter}
 \label{sec:threat-model-service-provider}
-Hybrid OSN users can be easily identified by the service provider Twitter, even if they only use Hybrid OSN passively to read private tweets of other users and do not write private tweets themselves.
+Hybrid \ac{OSN} users can be easily identified by the service provider Twitter, even if they only use Hybrid \ac{OSN} passively to read private tweets of other users and do not write private tweets themselves.
 
-For using the Twitter API, it is essential to register an app to get an app token. This app token is attached to all requests sent to the Twitter API. When logging in on Hybrid OSN for the first time, the user accepts to use the app to access Twitter.
+To use the Twitter \ac{API}, it is necessary to register an app in order to obtain an app token. This app token is attached to all requests sent to the Twitter \ac{API}. When logging in to Hybrid \ac{OSN} for the first time, the user authorizes the app to access Twitter on his behalf.
 
-So far not implemented, but theoretically possible is that each user creates an app for the use of the API on their own. The obtained app token could then be stored in the Hybrid OSN app, and the use of the application could be obscured. In this case, the identification possibility via the Hybrid OSN app token is omitted, and the passive use would be possible without danger.
+Not yet implemented, but theoretically possible, is that each user registers his own app for \ac{API} access. The obtained app token could then be stored in the Hybrid \ac{OSN} app, and the use of the application could thus be obscured. In this case, identification via the shared Hybrid \ac{OSN} app token would no longer be possible, and passive use would be possible without danger.
 
-Active use requires a public tweet and a reference in the profile description for the distribution of the public key history. Although the contents are inconspicuous, they are still sufficient for the identification of a Hybrid OSN user.
+Active use requires a public tweet and a reference in the profile description for the distribution of the public key history. Although the contents are inconspicuous, they are still sufficient for the identification of a Hybrid \ac{OSN} user.
 
 \subsection{Gun}
 \label{sec:threat-model-gun}
-In Hybrid OSN, Gun takes the role of a database shared by the peers. The dashboard also establishes a direct connection. The data is publicly accessible and editable.
+In Hybrid \ac{OSN}, Gun takes the role of a database shared by the peers. The dashboard also establishes a direct connection to Gun. The data is publicly accessible and editable.
 
-The stored data is a combination of hashtag and timestamp, which serve as information for the trends in the Hybrid OSN dashboard. For every private tweet of a user, there is an entry consisting of Twitter user id, IPFS address hash, and timestamp. Also, there are the private likes, for which there is a counter to the tweet id.
+The stored data is a combination of hashtag and timestamp, which serves as the basis for the trends in the Hybrid \ac{OSN} dashboard. For every private tweet of a user, there is an entry consisting of the Twitter user id, the \ac{IPFS} address hash, and a timestamp. In addition, there are the private likes, for which a counter is kept per tweet id.
 
 To prevent the hashtag timestamps from being linked to the private tweet timestamps, the time of the hashtag timestamp is set to 00:01. The trends in the dashboard are aggregated by day, so the exact time is not essential.
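+
+A possible implementation of this coarsening is sketched below; the function name and the use of UTC are assumptions for illustration.
+
+\begin{lstlisting}[label=listing:sketch-hashtag-timestamp, caption={Illustrative sketch: coarsening a hashtag timestamp to 00:01}]
+// Sketch: coarsen a hashtag timestamp to 00:01 of the same day so it
+// cannot be linked to the exact posting time of the private tweet.
+function coarsenHashtagTimestamp(createdAt: string): string {
+  const d = new Date(createdAt);
+  d.setUTCHours(0, 1, 0, 0); // 00:01; day-level precision suffices for the trends
+  return d.toISOString();
+}
+\end{lstlisting}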
 
-The greatest threat is that an attacker may modify or delete data. By deleting entries, private tweets would no longer be found and thus no longer displayed. Changing the IPFS hash would mean that the data could not be found and would also not be displayed. Manipulation of the timestamp would result in private tweets being loaded at the wrong time interval when the feed is loaded and thus positioned at the wrong place. Furthermore, the timestamp in Gun is used to use the appropriate public key from the public key history for decryption. Under certain circumstances, the wrong key would be selected and the private tweet could not be decrypted.
+The greatest threat is that an attacker may modify or delete data. By deleting entries, private tweets would no longer be found and thus no longer displayed. Changing the \ac{IPFS} hash would mean that the data could not be found and would likewise not be displayed. Manipulation of the timestamp would result in private tweets being loaded in the wrong time interval when the feed is loaded and thus being positioned in the wrong place. Furthermore, the timestamp in Gun is used to select the appropriate public key from the public key history for decryption. Under certain circumstances, the wrong key would be selected and the private tweet could not be decrypted.
 
 Creating fake entries for private tweets does not have a significant effect because the associated content stored in IPFS must be encrypted with the author's private key, which is unknown to a third party. Adding fake hashtag entries or modifying existing ones for trend detection is also possible and poses a significant risk, as it allows manipulation of the trends. Ultimately, it is not possible to verify which hashtags were used and how often. The same applies to private likes. Since in this case the complete information is stored in Gun and can be changed, it is not possible to determine whether data has been manipulated.
 
-\subsection{IPFS}
+\subsection{\ac{IPFS}}
 \label{sec:threat-model-ipfs}
-Since IPFS is publicly accessible, anyone can add and retrieve data. However, it is not possible to change or delete data. A hash of the content addresses the data stored in IPFS. Since the content is entirely unknown (especially by encrypting the plaintext content), it is not possible to conclude the hash. A targeted search for private data in IPFS is therefore impossible. The encrypted data also does not contain any clues that allow conclusions to be drawn about Hybrid OSN.
+Since \ac{IPFS} is publicly accessible, anyone can add and retrieve data. However, it is not possible to change or delete data. The data stored in \ac{IPFS} is addressed by a hash of its content. Since the content is entirely unknown (especially because the plaintext content is encrypted), it is not possible to infer the hash. A targeted search for private data in \ac{IPFS} is therefore impossible. The encrypted data also does not contain any clues that allow conclusions to be drawn about Hybrid \ac{OSN}.
 
-In combination with the publicly available information from Gun, all private tweet data could be found in IPFS. However, because of the encryption of the content, the data is worthless.
+In combination with the publicly available information from Gun, all private tweet data could be found in \ac{IPFS}. However, because of the encryption of the content, the data is worthless.
 
-Due to the decentralization of the system, the availability of IPFS is always guaranteed. However, only as long as there are peers who make the service possible. If a peer leaves the network, its data is also lost if not reproduced beforehand. Therefore, there is no guarantee for the permanent availability of data.
+Due to the decentralization of the system, the availability of \ac{IPFS} is guaranteed, but only as long as there are peers that provide the service. If a peer leaves the network, its data is lost unless it has been replicated beforehand. Therefore, there is no guarantee of the permanent availability of data.
 
 \subsection{Encryption – Leakage of Keys}
 \label{sec:threat-model-encryption}
-On the one hand, the public key history is symmetrically encrypted; on the other hand, the private tweets are asymmetrically encrypted. The keys for asymmetric encryption are generated independently by each user and are therefore individual for each user. With symmetric encryption, only one key is used, which is stored in the source code of Hybrid OSN. In this way, only the Hybrid OSN app can decrypt the public key history of a user and therefore decrypt its private tweets.
+On the one hand, the public key history is symmetrically encrypted; on the other hand, the private tweets are asymmetrically encrypted. The keys for asymmetric encryption are generated independently by each user and are therefore individual for each user. With symmetric encryption, only one key is used, which is stored in the source code of Hybrid \ac{OSN}. In this way, only the Hybrid \ac{OSN} app can decrypt the public key history of a user and therefore decrypt his private tweets.
 
 Disclosure of the source code would reveal the symmetric key. The service provider would then have all the necessary information and access to all data to read private tweets and assign them to users.
 
 \subsection{Authorized Access}
 \label{sec:threat-model-authorized-access}
-A user's private tweets should be readable by all users who can also read the public tweets - except for the service provider Twitter. Therefore, a user posts the IPFS hash that leads to the public key history on his timeline.
+A user's private tweets should be readable by all users who can also read his public tweets, except for the service provider Twitter. Therefore, a user posts the \ac{IPFS} hash that leads to his public key history as a tweet on his timeline.
 
-There is a threat that authorized users may create a copy of the decrypted public key history and pass it on to third parties. Since the data in IPFS is permanent and therefore not erasable, it can be decrypted at any time later with the appropriate public key.
+There is a threat that authorized users may create a copy of the decrypted public key history and pass it on to third parties. Since the data in \ac{IPFS} is permanent and therefore not erasable, it can be decrypted at any time later with the appropriate public key.
 
-If a user decides to change his profile to \enquote{private} in the account settings, the profile will no longer be publicly accessible. Only accepted followers should then be able to read public and private tweets. A non-approved twitter user is still able to fetch the encrypted private tweets from IPFS. However, since the link to the public key history is no longer accessible, the private tweets decryption is not possible. If non-approved users or third parties already have the link to or a backup of the public key history from the past, all private tweets of the past can still be decrypted. Whenever a profile is changed to \enquote{private} a new pair of keys should be generated to ensure that future private tweets are only readable to approved users.
+If a user decides to change his profile to \enquote{private} in the account settings, the profile will no longer be publicly accessible. Only accepted followers should then be able to read public and private tweets. A non-approved Twitter user is still able to fetch the encrypted private tweets from \ac{IPFS}. However, since the link to the public key history is no longer accessible, decrypting the private tweets is not possible. If non-approved users or third parties already have the link to, or a backup of, the public key history from the past, all past private tweets can still be decrypted. Whenever a profile is changed to \enquote{private}, a new key pair should be generated to ensure that future private tweets are only readable by approved users.

+ 1 - 0
thesis/header.tex

@@ -29,6 +29,7 @@
 \usepackage{csquotes}
 \usepackage{lscape}
 \usepackage{subcaption}
+\usepackage{acronym}
 
 % llt: Define a global style for URLs, rather that the default one
 \makeatletter