
backup latex

Jia, Jingyi 2 years ago
parent
commit
d6e1ef2108
90 changed files with 190 additions and 7 deletions
  1. BIN
      .DS_Store
  2. BIN
      Concepts for operating ground based rescue robots using virtual reality/.DS_Store
  3. BIN
      Concepts for operating ground based rescue robots using virtual reality/Concepts_for_operating_ground_based_rescue_robots_using_virtual_reality.pdf
  4. 3 0
      Concepts for operating ground based rescue robots using virtual reality/Thesis_Jingyi.tex
  5. 76 4
      Concepts for operating ground based rescue robots using virtual reality/chapters/implementation.tex
  6. 111 1
      Concepts for operating ground based rescue robots using virtual reality/chapters/result.tex
  7. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/I found it easy to concentrate on controlling the robot.jpg
  8. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/I found it easy to move robot in desired position.jpg
  9. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/I found it easy to perceive the details of the environment.jpg
  10. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/Rescue situation.png
  11. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/Robot Performance.png
  12. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/handle1.png
  13. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/handle2.png
  14. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/lab1.png
  15. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/lab3.png
  16. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/lab4.png
  17. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/remote1.png
  18. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/remote2.png
  19. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/remote3.png
  20. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/remote4.png
  21. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/robot1.png
  22. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/robot2.png
  23. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/robot3.png
  24. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/robot4.png
  25. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/summary.jpg
  26. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/testCollider.png
  27. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/testCollider2.png
  28. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/testVictim1.png
  29. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/testVictim2.png
  30. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/testVictim3.png
  31. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/testVictim4.png
  32. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/total.png
  33. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/ui1.png
  34. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/ui2.png
  35. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/ui3.png
  36. BIN
      User Study/.DS_Store
  37. BIN
      User Study/Einverstaendnis.pdf
  38. BIN
      User Study/Photo/.DS_Store
  39. BIN
      User Study/Photo/Snipaste_2021-07-10_14-59-39.png
  40. BIN
      User Study/Photo/Snipaste_2021-07-10_16-08-44.png
  41. BIN
      User Study/Photo/Snipaste_2021-07-10_17-03-50.png
  42. BIN
      User Study/Photo/Snipaste_2021-07-10_17-24-14.png
  43. BIN
      User Study/Photo/Snipaste_2021-07-10_17-24-36.png
  44. BIN
      User Study/Photo/Snipaste_2021-07-10_17-24-58.png
  45. BIN
      User Study/Photo/Snipaste_2021-07-10_17-36-48.png
  46. BIN
      User Study/Photo/Snipaste_2021-07-10_17-37-54.png
  47. BIN
      User Study/Photo/Snipaste_2021-07-10_17-44-25.png
  48. BIN
      User Study/Photo/Snipaste_2021-07-10_17-44-30.png
  49. BIN
      User Study/Photo/Snipaste_2021-07-10_17-44-58.png
  50. BIN
      User Study/Photo/Snipaste_2021-07-10_17-45-05.png
  51. BIN
      User Study/Photo/Snipaste_2021-07-10_17-46-06.png
  52. BIN
      User Study/Photo/Snipaste_2021-07-10_17-52-09.png
  53. BIN
      User Study/Photo/Snipaste_2021-07-17_18-30-39.png
  54. BIN
      User Study/Photo/Snipaste_2021-07-17_18-33-13.png
  55. BIN
      User Study/Photo/Snipaste_2021-07-17_18-33-35.png
  56. BIN
      User Study/Photo/Snipaste_2021-07-17_18-34-20.png
  57. BIN
      User Study/Photo/Snipaste_2021-07-17_18-34-40.png
  58. BIN
      User Study/Photo/Snipaste_2021-07-17_18-35-37.png
  59. BIN
      User Study/Photo/Snipaste_2021-07-17_18-46-13.png
  60. BIN
      User Study/Photo/Snipaste_2021-07-17_18-46-56.png
  61. BIN
      User Study/Photo/Snipaste_2021-07-17_18-50-44.png
  62. BIN
      User Study/Photo/Snipaste_2021-07-17_18-51-21.png
  63. BIN
      User Study/Photo/Snipaste_2021-07-17_18-51-33.png
  64. BIN
      User Study/Photo/Snipaste_2021-07-17_18-51-57.png
  65. BIN
      User Study/Photo/Snipaste_2021-07-17_18-53-11.png
  66. BIN
      User Study/Photo/Snipaste_2021-07-17_18-53-26.png
  67. BIN
      User Study/Photo/handle1.png
  68. BIN
      User Study/Photo/handle2.png
  69. BIN
      User Study/Photo/lab1.png
  70. BIN
      User Study/Photo/lab3.png
  71. BIN
      User Study/Photo/lab4.png
  72. BIN
      User Study/Photo/remote1.png
  73. BIN
      User Study/Photo/remote2.png
  74. BIN
      User Study/Photo/remote3.png
  75. BIN
      User Study/Photo/remote4.png
  76. BIN
      User Study/Photo/remote5.png
  77. BIN
      User Study/Photo/robot1.png
  78. BIN
      User Study/Photo/robot2.png
  79. BIN
      User Study/Photo/robot3.png
  80. BIN
      User Study/Photo/robot4.png
  81. BIN
      User Study/Photo/testCollider.png
  82. BIN
      User Study/Photo/testCollider2.png
  83. BIN
      User Study/Photo/testVictim1.png
  84. BIN
      User Study/Photo/testVictim2.png
  85. BIN
      User Study/Photo/testVictim3.png
  86. BIN
      User Study/Photo/testVictim4.png
  87. BIN
      User Study/Photo/ui1.png
  88. BIN
      User Study/Photo/ui2.png
  89. BIN
      User Study/Photo/ui3.png
  90. 0 2
      User Study/Procedure.md

+ 3 - 0
Concepts for operating ground based rescue robots using virtual reality/Thesis_Jingyi.tex

@@ -67,6 +67,9 @@
 \usepackage{graphicx} 
 \usepackage{float} 
 \usepackage{subfigure} 
+\usepackage[justification=centering]{caption} 
+\usepackage{wrapfig}
+\usepackage{picinpar}
 
 %Formatting for examples in this document. Generally not necessary!
 \let\file\texttt

+ 76 - 4
Concepts for operating ground based rescue robots using virtual reality/chapters/implementation.tex

@@ -14,7 +14,21 @@ The proposed system runs on a computer with the Windows 10 operating system. Thi
 \gls{unity} was chosen as the platform to develop the system. \gls{unity} is a widely used game engine with \gls{steamvr} \footnote{https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647}, which allows developers to focus on the \gls{vr} environment and interactive behaviors in programming, rather than specific controller buttons and headset positioning, making \gls{vr} development much simpler. Another reason why \gls{unity} was chosen as a development platform was the potential for collaboration with \gls{ros}, a frequently used operating system for robot simulation and manipulation, which is flexible, low-coupling, distributed, open source, and has a powerful and rich third-party feature set. In terms of collaboration between \gls{unity} and \gls{ros}, Siemens provides open-source software libraries and tools in C\# for communicating with ROS from .NET applications \footnote{https://github.com/siemens/ros-sharp}. Combining \gls{ros} and \gls{unity} to develop a collaborative \gls{hri} platform proved to be feasible \cite{Whitney:2018wk}. Since the focus of this work is on \gls{hri}, collaboration and synchronization of \gls{ros} will not be explored in detail here.
 
 \section{Robot}
-The proposed system needs to simulate the process that a robot uses a \gls{lidar} remote sensor to detect the real environment and synchronize it to \gls{unity}. Thus, a sphere collision body was set up on the robot. The robot will transform the Layers of the objects in the scene into visible Layers by collision detection and a trigger event (onTriggerEnter function). The robot's driving performance, such as the number of collisions, average speed, total distance, will be recorded in each test. The detailed recorded information can be seen in Chapter \ref{result}. The movement of the robot depends on the value of the signal that is updated in each mode. In addition, the robot's Gameobject has the \gls{nav} \footnote{https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html} component, which supports the robot's navigation to the specified destination with automatic obstacle avoidance in the test scene. The virtual robot has three cameras. One of the cameras is a simulation of a surveillance camera mounted on the robot, which can see all the items in the scene, although the distant items are not yet detected by LiDAR. Two of these cameras are set up in such a way that they can only see the area detected by \gls{lidar}. Each camera captures what it sees and modifies the bound image in real time. The four operation modes described later all use the camera viewport as a monitoring screen by rendering the camera viewport on UI canvas.
+The proposed system needs to simulate the process in which a robot uses a \gls{lidar} remote sensor to detect the real environment and synchronize it to \gls{unity}. Thus, a sphere trigger collider was set up on the robot, as seen in Figure \ref{fig:robot}. The robot switches the Layers of the objects in the scene to visible Layers through collision detection and a trigger event (the OnTriggerEnter function). The robot's driving performance, such as the number of collisions, average speed, and total distance, is recorded in each test; the detailed recorded information can be seen in Chapter \ref{result}. The movement of the robot depends on the value of the control signal that is updated in each mode. In addition, the robot's GameObject has the \gls{nav} \footnote{https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html} component, which supports the robot's navigation to a specified destination with automatic obstacle avoidance in the test scene. The virtual robot has three cameras. One of them simulates a surveillance camera mounted on the robot, which can see all the items in the scene, even those not yet detected by \gls{lidar}. The other two cameras are set up so that they can only see the area already detected by \gls{lidar}. Each camera captures what it sees and updates its bound image in real time. The four operation modes described later all use these camera viewports as monitoring screens by rendering them on a UI canvas.
+
+
+\begin{figure}[htbp]
+    \centering
+    \subfigure[LiDAR Collider]{
+        \includegraphics[height=5cm]{graphics/robot2.png}
+    }
+    \subfigure[Surveillance camera]{ 
+        \includegraphics[height=5cm]{graphics/robot4.png}
+    }
+    \caption{Robot}
+    \label{fig:robot} 
+\end{figure}
+
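The layer-switching behaviour described above could be sketched in Unity C# roughly as follows. This is only an illustrative sketch under assumptions, not the project's actual script; the class name and layer names ("LidarReveal", "Hidden", "Visible") are hypothetical.

using UnityEngine;

// Illustrative sketch: a sphere trigger collider on the robot "reveals" scene
// objects by moving them from a hidden layer to a visible layer, so that cameras
// whose culling masks contain only the visible layer start rendering them.
public class LidarReveal : MonoBehaviour
{
    private int hiddenLayer;
    private int visibleLayer;

    private void Awake()
    {
        // Layer names are assumptions; the actual project may use different ones.
        hiddenLayer = LayerMask.NameToLayer("Hidden");
        visibleLayer = LayerMask.NameToLayer("Visible");
    }

    private void OnTriggerEnter(Collider other)
    {
        // Reveal an object the first time the simulated LiDAR range touches it.
        if (other.gameObject.layer == hiddenLayer)
        {
            other.gameObject.layer = visibleLayer;
        }
    }
}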
 
 
 \section{Interaction techniques}
@@ -41,22 +55,68 @@ In this mode, the user controls the robot's movement directly through the motion
 
 \begin{figure}[htbp]
     \centering
-	\includegraphics[height=10cm]{graphics/htc.png}
-	\caption{HTC handle illustration.}
-	\label{fig:htc}
+    \begin{minipage}[t]{0.48\textwidth}
+        \centering
+        \includegraphics[height=7cm]{graphics/handle1.png}
+        \caption{Handle Mode}
+        \label{fig:handle}
+    \end{minipage}
+    \begin{minipage}[t]{0.48\textwidth}
+        \centering
+        \includegraphics[height=7cm]{graphics/htc.png}
+        \caption{HTC handle illustration}
+        \label{fig:htc}
+    \end{minipage}
 \end{figure}
 
+
 \subsection{Lab Mode}
 This mode was designed with reference to the systems proposed by \cite{Perez:2019ub}\cite{Matsas:2017aa}. Their frameworks are used to train operators to work with the robot, avoiding risks and saving learning costs. In addition, they also mention that being in a simulated factory or laboratory can improve immersion. Therefore, in this mode a virtual laboratory environment is constructed, in which simulated buttons, controllers, and monitoring equipment are placed. The laboratory consists of two parts. The first part is the monitoring equipment: the monitoring screen is enlarged and placed at the front of the lab as a huge display. The second part is the operating console in the center of the laboratory, which can be moved by the user as desired, since users have different heights and may wish to operate the robot in a standing or sitting position. The user can use the buttons on the table to lock the robot or let it drive forward automatically. In the middle of the console are two joysticks that control the robot's forward motion and rotation respectively. The virtual joystick movement and button effects use the open-source GitHub project VRtwix\footnote{https://github.com/rav3dev/vrtwix}. With the sliding stick on the left, the user can adjust the speed of the robot's forward movement and rotation.
 
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Overview]{
+        \includegraphics[height=5cm]{graphics/lab3.png}
+    }
+    \subfigure[Operating console]{ 
+        \includegraphics[height=5cm]{graphics/lab4.png}
+    }
+    \caption{Lab Mode}
+    \label{fig:lab} 
+\end{figure}
+
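A minimal sketch of how the console joysticks could be mapped to the robot's forward motion and rotation is given below. The field names and the way joystick values are read are assumptions (the thesis uses the VRtwix sticks for this), so this is not the actual implementation.

using UnityEngine;

// Illustrative sketch: normalized joystick values in [-1, 1] drive the robot's
// forward movement and rotation; the sliding stick adjusts the maximum speed.
public class ConsoleDrive : MonoBehaviour
{
    public Transform robot;
    public float maxSpeed = 2f;       // forward speed limit, set via the sliding stick
    public float maxTurnSpeed = 60f;  // degrees per second

    // Assumed to be updated each frame from the virtual joystick components.
    public float forwardInput;        // -1 .. 1
    public float rotationInput;       // -1 .. 1

    private void Update()
    {
        robot.Translate(Vector3.forward * forwardInput * maxSpeed * Time.deltaTime);
        robot.Rotate(Vector3.up, rotationInput * maxTurnSpeed * Time.deltaTime);
    }
}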
 \subsection{Remote Mode}
 In this mode, the user can set the driving target point directly or control the robot by picking up the remote control placed on the toolbar. The target point is set by the ray emitted by the right motion controller. This process is similar to setting a teleportation point. After the target point is set, a square representing the destination is shown in the scene, and the robot will automatically travel to the set destination. The entire driving process uses the \gls{nav} component and is therefore capable of automatic obstacle avoidance.
 A movable toolbar with a remote control and a monitoring device can be opened by clicking the menu button. The remote control is a safety precaution in case the automatic navigation fails to reach the target point properly. The user can adjust the direction of the robot's travel using the remote control. The pickup and auto-release parts use the ItemPackage component available in \gls{steamvr}.
 
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Overview]{
+        \includegraphics[height=5cm]{graphics/remote2.png}
+    }
+    \subfigure[Set the destination]{ 
+        \includegraphics[height=5cm]{graphics/remote3.png}
+    }
+    \caption{Remote Mode}
+    \label{fig:remote} 
+\end{figure}
+
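The destination-setting step could look roughly like the following sketch: a ray from the right controller is cast into the scene, the destination marker is placed at the hit point, and the NavMeshAgent drives the robot there. Class and field names and the raycast distance are illustrative assumptions, not the thesis's actual code.

using UnityEngine;
using UnityEngine.AI;

// Illustrative sketch of destination setting in Remote Mode.
public class RemoteDestination : MonoBehaviour
{
    public Transform rightController;    // pose of the right motion controller
    public NavMeshAgent robotAgent;      // NavMeshAgent on the robot
    public GameObject destinationMarker; // the square shown at the target point

    public void TrySetDestination()
    {
        RaycastHit hit;
        if (Physics.Raycast(rightController.position, rightController.forward, out hit, 100f))
        {
            destinationMarker.transform.position = hit.point;
            destinationMarker.SetActive(true);
            robotAgent.SetDestination(hit.point); // automatic obstacle avoidance via the NavMesh
        }
    }
}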
 
 \subsection{UI Mode}
 The virtual menu is another interaction method often used in \gls{vr}, which is why this mode is proposed. In this mode, the user must interact with the virtual menu using the ray emitted by the right motion controller. The virtual menu provides buttons for the direction of movement, a speed controller, and buttons to open and close the monitoring screen. In addition, a follow function is included in the menu, allowing the robot to follow the user's position in the virtual world. This is intended to let the user concentrate on observing the rendered \gls{vr} environment. Also, having a real robot follow the user's location in the virtual world is a novel, unique \gls{hri} approach in \gls{vr}. The robot's automatic navigation uses the \gls{nav}.
 
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Overview]{
+        \includegraphics[height=5cm]{graphics/ui2.png}
+    }
+    \subfigure[Follow Function]{ 
+        \includegraphics[height=5cm]{graphics/ui3.png}
+    }
+    \caption{UI Mode}
+    \label{fig:ui} 
+\end{figure}
+
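The follow function could be sketched as follows: while following is enabled from the menu, the robot's NavMeshAgent is continuously given the user's position as its destination. Class and field names are assumptions for illustration only.

using UnityEngine;
using UnityEngine.AI;

// Illustrative sketch of the "follow" function in UI Mode.
public class FollowUser : MonoBehaviour
{
    public NavMeshAgent robotAgent;
    public Transform userHead;   // e.g. the VR headset / camera rig transform
    public bool followEnabled;   // toggled by the menu button

    private void Update()
    {
        if (followEnabled)
        {
            robotAgent.SetDestination(userHead.position);
        }
    }
}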
 
 \section{Test Scene}
 In order to simulate the use of rescue robots in disaster scenarios, the test scenes were built to mimic a post-disaster urban environment as closely as possible. The POLYGON Apocalypse \footnote{https://assetstore.unity.com/packages/3d/environments/urban/polygon-apocalypse-low-poly-3d-art-by-synty-154193}, available on the \gls{unity} Asset Store, is a low poly asset pack with a large number of models of buildings, streets, vehicles, etc. This resource pack was used as a base. Additional collision bodies of appropriate size were manually added to each building and obstacle after the resource pack was imported, which was needed to track the robot's collisions in subsequent tests.
@@ -64,3 +124,15 @@ In order to simulate the use of rescue robots in disaster scenarios, the test sc
 Considering that four operation modes need to be tested, four scenes with similar complexity and composition but different road conditions and building placement were constructed. The similar complexity of the scenes ensures that the difficulty of the four tests is essentially identical. The different scene setups ensure that scene knowledge gained by the user in one test does not carry over to the next test scene and thus affect the accuracy of the test data. 
 
 The entire scene is initially invisible, and the visibility of each object in the test scene is gradually updated as the robot drives along. Ten interactable victim characters were placed in each test scene, in plausible locations such as next to cars, beside houses, and other reasonable places.
+
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Obstacle]{
+        \includegraphics[height=5cm]{graphics/testCollider2.png}
+    }
+    \subfigure[Victims]{
+        \includegraphics[height=5cm]{graphics/testVictim4.png}
+    }
+    \caption{Test Scene}
+    \label{fig:testscene} 
+\end{figure}

+ 111 - 1
Concepts for operating ground based rescue robots using virtual reality/chapters/result.tex

@@ -1,4 +1,114 @@
 \chapter{Results and discussion}
 \label{result}
 
-\gls{vr}
+
+
+
+\section{Participants}
+
+A total of 8 volunteers participated in the user study (3 females and 5 males between 22 and 32 years, mean age xxx years). Four participants had previous experience with \gls{vr}, but had only used it a few times. .......
+
+\section{Quantitative Results}
+
+Part of the data for the quantitative analysis comes from the robot's performance and testing results, which were automatically recorded by the proposed system during the tests. The other part of the data comes from the questionnaires that the participants filled out after the test.
+
+
+
+\subsection{Robot Performance}
+
+[introduce what was recorded]
+\begin{figure}[htbp]
+    \centering
+    \includegraphics[width=\textwidth]{graphics/Robot Performance.png}
+    \caption{Robot Performance} 
+    \label{fig:performance}
+\end{figure}
+[analysis]
+
+
+\newpage
+\subsection{Rescue situation}
+[introduce what was recorded]
+\begin{wrapfigure}{r}{0.4\textwidth}
+\flushright
+  \includegraphics[height=7cm]{graphics/Rescue situation.png}\\
+  \caption{Rescue situation}
+  \label{fig:rescue}
+  \vspace{-30pt}    % corresponds to height 3
+\end{wrapfigure}
+[analysis]
+% \begin{wrapfigure}{r}{0cm}
+%   \vspace{-15pt}    % corresponds to height 1
+%   \includegraphics[width=0.5\textwidth]{graphics/Rescue situation.png}\\
+%   \vspace{-15pt}    % corresponds to height 2
+%   \label{fig:rescue}
+%   \vspace{-15pt}    % corresponds to height 3
+% \end{wrapfigure}
+
+
+
+
+\subsection{TLX Score}
+[explain tlx]
+
+\begin{figure}[htbp]
+    \centering
+    \subfigure{
+        \includegraphics[width=\textwidth]{graphics/summary.jpg}
+    }
+    \subfigure{ 
+        \includegraphics[width=\textwidth]{graphics/total.png}
+    }
+    \caption{TLX Score. [explain]}
+    \label{fig:tlx} 
+\end{figure}
+
+[analysis]
+
+
+\subsection{Likert Questionnaire Results}
+A questionnaire was used to collect the participants' feedback:
+    \begin{enumerate}
+    \item I found it easy to move the robot in desired position.
+    \item I found it easy to concentrate on controlling the robot.
+    \item I found it easy to perceive the details of the environment.
+    \end{enumerate}
+    
+\begin{figure}[htbp]
+    \centering
+    \subfigure{
+        \includegraphics[height=7cm]{graphics/I found it easy to move robot in desired position.jpg}
+    }
+    \subfigure{ 
+        \includegraphics[height=7cm]{graphics/I found it easy to concentrate on controlling the robot.jpg}
+    }
+    \subfigure{ 
+        \includegraphics[height=7cm]{graphics/I found it easy to perceive the details of the environment.jpg}
+    }
+    \caption{Likert Questionnaire Results (1: strongly disagree, 5: strongly agree)}
+    \label{fig:liker} 
+\end{figure}
+
+[analysis]
+
+
+
+\section{Qualitative Results}
+This section discusses the feedback from the participants. Overall, every participant gave positive comments about operating the robot on a \gls{vr} platform. They thought the proposed system was exciting and allowed them to perceive more details of the post-disaster environment than traditional video-based operation. The feedback obtained for each mode is listed next.
+
+70\% of participants ranked Lab Mode as the least preferred mode. Some participants were very unaccustomed to using \gls{vr} handles to grasp objects, which made it difficult for them to operate the virtual joysticks smoothly. Those with \gls{vr} experience, even without any hints or training, subconsciously understood what each button and joystick represented and were able to operate the robot directly. Nevertheless, regarding the actual rescue experience in the test, both groups of participants reported that operating the robot was more complex and difficult than in the other modes. Participants attributed this to obstacles in the environment. One of the participants said: "\textit{There is no physical access to the joystick. So it is slightly tough for me to control the robot.}" In some cases, when the robot was stuck in a corner, it took them considerable effort to get it out of this situation. Also, since Lab Mode uses a simulated screen, it is not as good as the other three modes for observing the details of the scene. Participants felt that the simulated screen was blurry, and the frequent switching between multiple screens made them very tired. 
+
+%Handle
+Handle Mode uses the motion controllers directly to move the robot, and the user can open and close the two monitoring screens with a button. The evaluation of this operation mode depends in large part on the design of the motion controllers. More than half of the users thought that the \gls{htc} motion controllers made steering the robot less flexible. Participants were often unable to touch the correct position on the touchpad and were very likely to trigger it by mistake. At the end of the experiment, these participants were additionally invited to operate the robot again using a \gls{vr} controller with joysticks, and said that joysticks made it easier for them to control the direction. Some participants said that they did not like the two monitoring screens provided by this mode: the additional surveillance screens subconsciously distracted them and prevented them from concentrating on the rescue mission. Others, however, thought that the monitors were particularly helpful. As it was very difficult to control the robot while teleporting themselves, they first relied on the monitoring screen to drive the robot to a place, and then teleported themselves to the robot's location. The experiment also found that participants tended to forget that the two monitoring screens could be closed; they usually tried to drag the screens to places where they did not block their view and dragged them back when they wanted to use them.
+
+Remote Mode and UI Mode, which use the automatic obstacle-avoidance navigation, were the most well-received. Participants felt that in both modes they did not need to worry about controlling the robot's steering and forward speed; the computer was responsible for everything, allowing them to focus on exploring the virtual world.
+
+For UI Mode, one of the participants remarked: "\textit{I can just let the robot follow me. I don't need to think about how to operate the robot. This way I can concentrate on the rescue.}" In the experiment, it was observed that none of the participants used the direction buttons or the monitoring screens in the virtual menu. At the beginning of the test, they all turned on the follow function directly and set the robot's driving speed to the maximum. After that, the robot acted more like a movable \gls{lidar} sensor. As a result, these participants could completely disregard the location of the robot and simply explore the \gls{vr} world on their own. One participant teleported so fast that when he reached a location and had been waiting for a while, the robot was still on its way. In fact, the problem of not being able to find the robot occurs in Handle Mode as well.
+
+In contrast, Remote Mode solves this problem of the robot not being in view. One participant stated that “\textit{The robot is always in sight, so I don't have to waste extra time looking for the robot. Control of the robot is also very easy}.” Another participant reflected that after setting the robot's destination, he would subconsciously observe the robot's movement, so he always knew where the robot was. Participants also thought it was very easy to operate the robot in this mode. Many participants alternated between using the right- and left-hand rays, first setting the robot's target point with the right-hand ray and then teleporting themselves there with the left-hand ray. The safety measure provided (the remote control) was almost never used in the actual tests. When the robot was unable to navigate automatically to the destination, participants preferred to reset the destination point or move themselves instead.
+
+In addition, participants got lost in each of the operation modes; they would forget whether they had already visited a place.
+
+
+
+\section{Discussion}

+ 0 - 2
User Study/Procedure.md

@@ -10,8 +10,6 @@
 
 ## List
 
-*All information I should have, as well as the data, once the test is completed.*
-
 **Planned: 8 participants**
 
+ [ ]  8 x declaration of consent (handwritten/scanned)