Jia, Jingyi 3 years ago
parent
commit
2cbb42a2db
31 changed files with 96 additions and 77 deletions
  1. BIN
      .DS_Store
  2. BIN
      Concepts for operating ground based rescue robots using virtual reality/.DS_Store
  3. BIN
      Concepts for operating ground based rescue robots using virtual reality/Concepts_for_operating_ground_based_rescue_robots_using_virtual_reality.pdf
  4. +2 -2
      Concepts for operating ground based rescue robots using virtual reality/chapters/abstract.tex
  5. +2 -2
      Concepts for operating ground based rescue robots using virtual reality/chapters/conclusion.tex
  6. +2 -2
      Concepts for operating ground based rescue robots using virtual reality/chapters/discuss.tex
  7. +4 -4
      Concepts for operating ground based rescue robots using virtual reality/chapters/evaluate.tex
  8. +5 -5
      Concepts for operating ground based rescue robots using virtual reality/chapters/glossary.tex
  9. +14 -14
      Concepts for operating ground based rescue robots using virtual reality/chapters/implementation.tex
  10. +14 -13
      Concepts for operating ground based rescue robots using virtual reality/chapters/introduction.tex
  11. +4 -4
      Concepts for operating ground based rescue robots using virtual reality/chapters/related_work.tex
  12. +46 -28
      Concepts for operating ground based rescue robots using virtual reality/chapters/result.tex
  13. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/Robot Performance.png
  14. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/lab1.png
  15. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/tlx5.jpg
  16. BIN
      Concepts for operating ground based rescue robots using virtual reality/graphics/tlx6.jpg
  17. BIN
      Hector_v2/.DS_Store
  18. BIN
      Hector_v2/Assets/.DS_Store
  19. BIN
      Hector_v2/Assets/Scripts/.DS_Store
  20. BIN
      User Study/.DS_Store
  21. +1 -1
      User Study/TLX/statistic.py
  22. BIN
      User Study/TLX/summary.jpg
  23. BIN
      User Study/TLX/tlx4.jpg
  24. BIN
      User Study/TLX/tlx5.jpg
  25. BIN
      User Study/TLX/tlx6.jpg
  26. BIN
      User Study/TestResult/.DS_Store
  27. BIN
      User Study/TestResult/Rescue situation.png
  28. BIN
      User Study/TestResult/Rescue situation2.png
  29. BIN
      User Study/TestResult/Robot Performance.png
  30. BIN
      User Study/TestResult/Robot Performance2.png
  31. +2 -2
      User Study/TestResult/statistic.py

BIN
.DS_Store


BIN
Concepts for operating ground based rescue robots using virtual reality/.DS_Store


BIN
Concepts for operating ground based rescue robots using virtual reality/Concepts_for_operating_ground_based_rescue_robots_using_virtual_reality.pdf


+ 2 - 2
Concepts for operating ground based rescue robots using virtual reality/chapters/abstract.tex

@@ -1,13 +1,13 @@
 \selectlanguage{english}
 \begin{abstract}
-Rescue robotics are increasingly being used to deal with crisis situations, such as in exploring areas that are too dangerous for humans, and have become a key research area. A number of studies use \gls{vr} as a platform for \gls{hri}, as this can improve the degree of immersion and situation awareness. However, there remains a need to explore an intuitive, easy-to-use interaction pattern, which increases the efficiency of search and rescue and allows the user to explore the environment intentionally. This paper presents a preliminary VR-based \gls{hri} system in terms of ground-based rescue robots, with the aim to find an ideal interaction pattern. The proposed system offers four different operation modes and corresponding test scenes imitating a post-disaster city. This paper includes a user study in which four different operation modes are tested in turn and compared. The conclusion is that the ideal interaction pattern should reduce the complexity of the operation as much as possible. Instead of allowing the user to adjust the robot's direction and speed themselves, it is recommended to set a target point and let the robot navigate to the target point automatically. In addition to this, some of the features that should be provided and the directions that should be investigated in the future are also mentioned in this paper.
+Rescue robots have been increasingly used in crisis situations, such as exploring areas that are too dangerous for humans, and have become a key research area. A number of studies use \gls{vr} to improve the degree of immersion and situation awareness. However, there remains a need to explore an intuitive, easy-to-use interaction pattern that increases the efficiency of search and rescue and allows the user to explore the environment intentionally. This thesis presents a preliminary VR-based \gls{hri} system for ground based rescue robots, with the aim of finding an ideal interaction pattern. The proposed system offers four different operation modes and corresponding test scenes imitating a post-disaster city. This thesis includes a user study in which the four operation modes are tested in turn. The user study reveals that the ideal interaction pattern should reduce the complexity of the operation as much as possible. Instead of letting the user adjust the robot's direction and speed themselves, it is recommended to set a target point and let the robot navigate to it automatically.
 \end{abstract}
 
 
 
 \selectlanguage{ngerman}
 \begin{abstract}
-Rettungsroboter werden zunehmend zur Bewältigung von Krisensituationen eingesetzt z.B. dort, wo es für den Menschen zu gefährlich ist. Eine Reihe von Studien nutzt \gls{vr} als Plattform für die Mensch-Roboter-Interaktion, da dies die Immersion erhöhen und das Situationsbewusstsein verbessern kann. Es besteht jedoch nach wie vor ein Bedarf an einem intuitiven, einfach zu bedienenden Interaktionsmethoden zu entwickeln, um das Personal bei der Steuerung des Roboters zu entlasten und die Effizienz und Effektivität von Such- und Rettungsarbeiten zu erhöhen. In diesem Aufsatz wird ein vorläufiges \gls{vr}-basiertes Mensch-Roboter-Interaktionssystem in Bezug auf bodenbasierte Rettungsroboter vorgestellt. Ziel ist, ein möglichst intuitive Steuerungsmethode mit geringer mentaler Ermüdung zu finden. Das vorgeschlagene System bietet vier verschiedene Betriebsmodi und entsprechende Testszenen. Diese Arbeit beinhaltet eine Nutzerstudie (die vier Betriebsmodi werden nacheinander getestet und verglichen). Das Fazit ist, dass die ideale Interaktionsmethode die Komplexität der Steuerung so weit wie möglich zu reduzieren. Darüber hinaus werden in diesem Aufsatz die Richtungen der zukünftigen Untersuchungen erwähnt.
+Rettungsroboter werden zunehmend zur Bewältigung von Krisensituationen eingesetzt, z.B. dort, wo es für den Menschen zu gefährlich ist. Eine Reihe von Studien nutzt \gls{vr} als Plattform für die Mensch-Roboter-Interaktion, da dies die Immersion erhöhen und das Situationsbewusstsein verbessern kann. Es besteht jedoch nach wie vor Bedarf an einer einfach zu bedienenden Interaktionsmethode, um das Personal bei der Steuerung des Roboters zu entlasten und die Effizienz und Effektivität von Such- und Rettungsarbeiten zu erhöhen. In dieser Abschlussarbeit wird ein vorläufiges \gls{vr}-basiertes Mensch-Roboter-Interaktionssystem für bodenbasierte Rettungsroboter vorgestellt. Ziel ist es, eine möglichst intuitive Steuerungsmethode mit geringer mentaler Ermüdung zu finden. Das vorgeschlagene System bietet vier verschiedene Betriebsmodi und entsprechende Testszenen. Diese Arbeit beinhaltet eine Nutzerstudie, in der die vier Betriebsmodi nacheinander getestet und verglichen werden. Das Fazit ist, dass die ideale Interaktionsmethode die Komplexität der Steuerung so weit wie möglich reduzieren sollte.
 \end{abstract}
 
 \selectlanguage{english}

+ 2 - 2
Concepts for operating ground based rescue robots using virtual reality/chapters/conclusion.tex

@@ -1,6 +1,6 @@
 \chapter{Conclusion}
 \label{conclusion}
 
-A preliminary VR-based \gls{hri} system has been presented in terms of ground-based rescue robots. This work aims to find an ideal interaction method or provide a general direction for future development. For this purpose, the proposed system offers four different operation modes and corresponding test scenes imitating a post-disaster city. This paper shows an overview of the simulated robot, interaction techniques and the construction of the test environment. Eight participants were invited to conduct a user study. Based on the obtained results, it can be concluded that an ideal \gls{vr}-based robotics operation method should eliminate as much complexity as possible. An intelligent obstacle avoidance algorithm is recommended instead of the user operating the robot himself to steer and move forward. Additional functions, such as monitoring screens, need to be optimized so that they do not complicate the whole interaction process. The system also requires maps to show the user which areas have been detected and where the robot is located.
+This thesis presents a preliminary VR-based \gls{hri} system for ground based rescue robots. This work aims to find an ideal interaction method or provide a general direction for future development. For this purpose, the proposed system offers four different operation modes and corresponding test scenes imitating a post-disaster city. This thesis gives an overview of the simulated robot, the interaction techniques and the construction of the test environment. Eight participants were invited to conduct a user study. Based on the obtained results, it can be concluded that an ideal \gls{vr}-based robot operation method should eliminate as much complexity as possible. An intelligent obstacle avoidance algorithm is recommended instead of having the user steer and drive the robot themselves. Additional functions, such as monitoring screens, need to be optimized so that they do not complicate the whole interaction process. The system also requires maps to show the user which areas have been detected and where the robot is located.
 
-Future work should focus on the intelligent obstacle avoidance algorithm when using a real robot. The next stage is to develop a live telepresence and teleoperation system with the real robot. Considering that the system proposed in this paper only simulates the disaster rescue process, the conclusions obtained may not be entirely correct. Additional testing and user surveys should be carried out in the future after building a collaborative \gls{vr}-based system with real robots.
+Future work should focus on the intelligent obstacle avoidance algorithm when using a real robot. The next stage is to develop a live telepresence and teleoperation system with the real robot. Considering that the system proposed in this thesis only simulates the disaster rescue process, the conclusions obtained may be limited. Additional testing and user surveys should be carried out in the future after building a collaborative \gls{vr}-based system with real robots.

+ 2 - 2
Concepts for operating ground based rescue robots using virtual reality/chapters/discuss.tex

@@ -2,13 +2,13 @@
 \label{discuss}
 In general, an ideal \gls{vr}-based robotics operation method should eliminate as much complexity as possible for the user.
 
-For the Lab Mode, as the least favorite model of the participants and the one that they find very complicated and difficult to operate, it can be concluded that unless the \gls{vr} operating system is developed for training operators to learn to operate the robot in a real environment, a lab-like mode of operation is not desirable. Suppose one wants to develop an interaction approach like Handle Mode, where the robot is operated directly using the controller. In that case, it should be taken into account whether the user needs to move his position frequently. As mentioned before, if the user needs to control the exact movement of the robot themselves and at the same time change their position, the user may have a higher workload, more difficulty observing the details of the scene and concentrating on controlling the robot. The choice of \gls{vr} handle is also important. \gls{vr} Motion controllers with joysticks are recommended for better directional control.
+For the Lab Mode, the least favorite mode among the participants, it can be concluded that unless the \gls{vr} operating system is developed for training operators to learn to operate the robot in a real environment, a lab-like operation mode is not desirable. Suppose one wants to develop an interaction approach like Handle Mode, where the robot is operated directly using the controller. In that case, it should be taken into account whether the user needs to change their position frequently. As mentioned before, if the user needs to control the exact movement of the robot themselves and at the same time change their position, the user may have a higher workload and more difficulty in observing the details of the scene and concentrating on controlling the robot. The choice of \gls{vr} controller is also important. Motion controllers with thumbsticks are recommended for better directional control.
 
 
 Interaction approaches like Remote Mode and UI Mode should be the future direction to focus on. Both of these operation modes significantly simplify the process of controlling the robot by using intelligent obstacle avoidance algorithms to allow the robot to automatically navigate to the desired destination. The proposed system currently uses the \gls{nav} component and simulates a post-disaster scene instead of reconstructing it by scanning the real site through \gls{lidar}. Therefore, there remains a need for an intelligent obstacle avoidance algorithm when using a real robot. Considering that the rescue efficiency and the possibility of the robot being damaged rely strongly on the algorithm, this algorithm should be accurate enough so that the user can entirely depend on the computer to control the robot.
 
 
-In some tests, the monitoring screens were not used. The additional features, which are provided by Remote and UI modes that allow the user to control the robot themselves were similarly, rated as redundant by participants. All the items mentioned above might somewhat complicate the interaction process. When too many features are available simultaneously, it seems difficult for the user to decide which feature to use in certain situations. Sometimes users may even forget functions that are already provided, such as monitoring screens can be switched off if they block the view.
+In some tests, the monitoring screens were not used. The additional features provided by the Remote and UI modes that allow the user to control the robot themselves were likewise rated as redundant by participants. All the items mentioned above might somewhat complicate the interaction process. When too many features are available simultaneously, it seems difficult for the user to decide which feature to use in certain situations. Sometimes users may even forget functions that are already provided, such as the option to switch off the monitoring screens if they block the view.
 It should be noted that there were only eight participants in this user study and that all of them were unfamiliar with VR. It is thus uncertain whether different results would be obtained with a larger number of participants. Even so, the presence of the monitoring screen is still necessary, considering that some participants responded that it could provide valuable information. Further work is needed to optimize the monitoring screens so that they do not obscure the view and so that adjusting them does not complicate the whole interaction.
 
 Apart from that, extra maps should be provided. It was mentioned in the results part that participants often got lost in the scenes. They did not know if a location had been visited before and repeatedly searched the same location. This may decrease search efficiency and even, to a certain extent, discourage users from exploring unknown areas that have not yet been scanned by the \gls{lidar}. The map should hence indicate the areas that have been scanned by the \gls{lidar} and the user's own current location. From the provided map, users can see the overall outline of the detected area and the trajectory they have taken, giving them a clear overview of the scene. In the user study, it was found that some participants often forgot the location of the robot. Therefore, the relative coordinates of the user and the robot should also be provided on the map. How the map is presented is also worth considering. As noted before, the whole interaction pattern should be as simple as possible.

+ 4 - 4
Concepts for operating ground based rescue robots using virtual reality/chapters/evaluate.tex

@@ -1,12 +1,12 @@
 \chapter{Evaluation of User Experience}
 \label{evaluate}
 
-This chapter describes the design and detailed process of the user evaluation. The purpose of this user study is to measure the impact of four different modes of operation on rescue efficiency, robot driving performance, and psychological and physiological stress and fatigue, etc. For this purpose, participants are asked to find victims in a test scene using different operation modes and answer questionnaires after the test corresponding to each mode of operation.
+This chapter describes the design and detailed process of the user evaluation. The purpose of the user study is to measure the impact of the four different operation modes on rescue efficiency, robot driving performance, and psychological and physiological stress and fatigue. For this purpose, participants are asked to find victims in a test scene using the different operation modes and to answer questionnaires after the test corresponding to each operation mode.
 
 
 \section{Study Design}
 
-The evaluation for each mode of operation consists of two main parts. The first part is the data recorded during the process of the participant driving the robot in the \gls{vr} environment to find the victims. The recorded data includes information about the robot's collision and the speed of driving etc. The rescue of the victims was also considered as part of the evaluation. \gls{tlx} was used to measure the participant's subjective workload assessments. Additionally, participants were asked specific questions for each mode and were asked to select their favorite and least favorite operation mode. In order to reduce the influence of order effects on the test results, the Balanced Latin Square was used when arranging the test order for the four operation modes.
+The evaluation for each operation mode consists of two main parts. The first part is the data recorded while the participant drives the robot in the \gls{vr} environment to find the victims. The recorded data includes information such as the robot's collisions and driving speed. The rescue of the victims was also recorded. The second part is the questionnaires that participants filled out after each test. \gls{tlx} was used to measure the participants' subjective workload. Additionally, participants were asked specific questions for each mode and were asked to select their favorite and least favorite operation mode. In order to reduce the influence of order effects on the test results, a Balanced Latin Square was used when arranging the test order for the four operation modes.
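
As a brief illustration of the Raw TLX scoring (assuming no subscales were excluded in this study): the score is the unweighted mean of the six subscale ratings, so ratings of 40, 55, 60, 30, 45 and 50 give (40 + 55 + 60 + 30 + 45 + 50) / 6 ≈ 46.7.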
 
 
 
@@ -17,7 +17,7 @@ Before the beginning of the actual testing process, participants were informed o
 
 
 \subsection{Entering the world of VR}
-After the essential introduction part, participants would directly put on the \gls{vr} headset and enter the \gls{vr} environment to complete the rest of the tutorial. Considering that participants might not have experience with \gls{vr} and that it would take time to learn how to operate the four different modes, the proposed system additionally sets up a practice pattern and places some models of victims in the practice scene. After entering the \gls{vr} world, participants first needed to familiarize themselves with the opening and closing menu, as well as using the motion controllers to try to teleport themselves, or raise themselves into mid-air. Finally, participants were asked to interact with the victim model through virtual hands. After this series of tutorials, participants were already generally familiar with the use of \gls{vr} and how to move around in the \gls{vr} world.
+After the essential introduction part, participants would put on the \gls{vr} headset and enter the \gls{vr} environment to complete the rest of the tutorial. Considering that participants might not have experience with \gls{vr} and that it would take time to learn how to operate the four different modes, the proposed system additionally provides a practice mode and places some models of victims in the practice scene. After entering the \gls{vr} world, participants first needed to familiarize themselves with opening and closing the menu, as well as using the motion controllers to teleport themselves or raise themselves into mid-air. Finally, participants were asked to interact with the victim model through virtual hands. After this series of tutorials, participants were generally familiar with the use of \gls{vr} and how to move around in the \gls{vr} world.
 
 
 
@@ -26,7 +26,7 @@ Given the different manipulation approaches for each mode and possible confusion
 
 The sequence of modes to be tested is predetermined. The order effect is an important factor affecting the test results. If the order of the operation modes were the same for each participant, the psychological and physical exhaustion caused by the last operation mode would inevitably be greater. In order to minimize the influence of the order effect on the results, a Balanced Latin Square of size four was used to arrange the test order of the four operation modes.
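
For illustration, one standard balanced Latin square of size four (modes labeled A to D) orders the sessions as follows; each mode appears exactly once in every position, and each mode directly precedes every other mode exactly once:

    A B D C
    B C A D
    C D B A
    D A C B

With eight participants, each row can be assigned to two participants.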
 
-Participants automatically entered the practice scene corresponding to the relevant operation mode in the predefined order. After attempting to rescue 1-2 victim models and the participant indicated that he or she was familiar enough with this operation mode, the participant would enter the test scene. In the test scene, participants had to save as many victims as possible in a given time limit. Participants were required to move the robot around the test scene to explore the post-disaster city and rescue victims. During this process, if the robot crashes with buildings or obstacles, besides the collision information being recorded as test data, participants would also receive sound and vibration feedback. The test will automatically end when time runs out or when all the victims on the scene have been rescued. Participants were required to complete the evaluation questionnaire and \gls{tlx} form at the end of each test. This process was repeated in each mode of operation. 
+Participants automatically entered the practice scene corresponding to the relevant operation mode in the predefined order. After attempting to rescue 1-2 victims and when participants indicated that they were familiar enough with this operation mode, they would enter the test scene. In the test scene, participants had to save as many victims as possible within a given time limit. Participants were required to move the robot around the test scene to explore the post-disaster city and rescue victims. During this process, if the robot crashed into buildings or obstacles, the collision information was recorded as test data and participants also received sound and vibration feedback. The test would automatically end when time ran out or when all the victims in the scene had been rescued. Participants were required to complete the evaluation questionnaire and \gls{tlx} form at the end of each test. This process was repeated for each operation mode.
 
 After all the tests were completed, participants were asked to compare the four operation modes and select the one they liked the most and the one they liked the least. In addition, participants could give their reasons for the choice and express their opinions as much as they wanted, such as suggestions for improvement or problems found during the operation.
 

+ 5 - 5
Concepts for operating ground based rescue robots using virtual reality/chapters/glossary.tex

@@ -37,7 +37,7 @@
 \newdualentry{mr} % label
 {MR}            % abbreviation
 {Mixed Reality}  % long form
-{\glsresetall \gls{mr} enhances the realism of the user experience by introducing realistic scene information into the virtual environment, bridging the gap between the virtual world, the real world and the user with interactive feedback information.
+{\glsresetall \gls{mr} enhances the realism of the user experience by introducing realistic scene information into the virtual environment, bridging the gap between the virtual world, the real world, and the user with interactive feedback information.
 }% description
 
 
@@ -51,13 +51,13 @@
 \newdualentry{ros} % label
 {ROS}            % abbreviation
 {Robot Operating System}  % long form
-{\glsresetall A set of software libraries and tools for robotics. \gls{ros} provides various services for operating system applications (e.g.  hardware abstraction, underlying device control, common function implementation, inter-process messaging, package management, etc.), as well as tools and functions for acquiring, compiling, and running code across platforms.ROS mainly uses loosely coupled peer-to-peer process network communication and currently mainly supports Linux systems.
+{\glsresetall A set of software libraries and tools for robotics. \gls{ros} provides various services for operating system applications (e.g.  hardware abstraction, underlying device control, common function implementation, inter-process messaging, package management, etc.), as well as tools and functions for acquiring, compiling, and running code across platforms. ROS mainly uses loosely coupled peer-to-peer process network communication and currently mainly supports Linux systems.
 }% description
 
 \newdualentry{tlx} % label
 {TLX}            % abbreviation
 {The Official NASA Task Load Index}  % long form
-{\glsresetall The NASA Task Load Questionnaire (NASA-TLX, NASA Task Load Index) is a subjective workload assessment tool whose primary purpose is to provide a subjective workload assessment of operators of various user interface systems. Using a multidimensional rating process, the total workload is divided into six subjective subscales. The user should choose the weighted score for each measurement. The simplified version, also called Raw TLX, has the same weighted score for all subscales. This simplified version is also used in this paper.
+{\glsresetall The NASA Task Load Index (NASA-TLX) is a subjective workload assessment tool whose primary purpose is to provide a subjective workload assessment for operators of various user interface systems. Using a multidimensional rating process, the total workload is divided into six subjective subscales, whose weights are chosen by the user. The simplified version, also called Raw TLX, weights all subscales equally, and some of the subscales may be excluded. This simplified version is also used in this thesis.
 }% description
 
 
@@ -96,14 +96,14 @@
 
 \newglossaryentry{nav}
 {name={NavMeshAgent},
-	description={\glsresetall Navigation mesh agent provided by \gls{unity}. This component can be attached to game objects to intelligently navigate the destination and avoid obstacles. Terrain and objects within the scene must be set to static and marked with a moveable range before being used.}
+	description={\glsresetall Navigation mesh agent provided by \gls{unity}. This component can be attached to game objects to intelligently navigate to a destination and avoid obstacles. Terrain and objects within the scene must be set to static and marked with a moveable range before being used.}
 }
 
 
 
 \newglossaryentry{htc}
 {name={HTC VIVE},
-	description={\glsresetall A virtual reality headset developed by HTC and Valve. It has a resolution of 1080×1200 per eye, resulting in a total resolution of 2160×1200 pixels, a refresh rate of 90 Hz, and a field of view of 110 degrees. It includes two motion controllers and uses two Lighthouses to track the headset position and the motion controllers.}
+	description={\glsresetall A virtual reality headset developed by HTC and Valve. It has a resolution of 1080×1200 per eye, resulting in a total resolution of 2160×1200 pixels, a refresh rate of 90 Hz, and a field of view of 110 degrees. It includes two motion controllers and uses two Lighthouses to help track the headset position and the motion controllers.}
 }
 
 

+ 14 - 14
Concepts for operating ground based rescue robots using virtual reality/chapters/implementation.tex

@@ -6,16 +6,16 @@ In this chapter, the tools and techniques used in building this \gls{vr}-based \
 
 
 \section{Overview}
-The main goal of this work is to design and implement a \gls{vr}-based \gls{hri} system with different methods of operating the robot in order to find out which method of operation is more suitable to control the rescue robot. Further, it is to provide some fundamental insights for future development directions and to provide a general direction for finding an intuitive, easy-to-use and efficient interaction approach for \gls{hri}. Therefore, the proposed system was developed using \gls{unity}, including four operation modes and corresponding test scenes for simulating post-disaster scenarios. In each operation mode, the user has a different method to control the robot. In addition, in order to better simulate the process by which the robot scans its surroundings and the computer side cumulatively gets a reconstructed 3D virtual scene, the test environment was implemented in such a way that the scene seen by the user depends on the robot's movement and the trajectory it travels through.
+One of the aims of this thesis is to design and implement a \gls{vr}-based \gls{hri} system with different methods of operating the robot in order to find out which operation method is most suitable for controlling the rescue robot. Further, this thesis aims to provide some fundamental insights for future development and a general direction for finding an intuitive, easy-to-use and efficient interaction approach for \gls{hri}. Therefore, the proposed system was developed using \gls{unity}, including four operation modes and corresponding test scenes for simulating post-disaster scenarios. In each operation mode, the user has a different method to control the robot. In addition, the system should simulate the process by which the robot scans its surroundings and the computer side cumulatively builds a reconstructed 3D virtual scene. The test environment was thus implemented in such a way that the scene seen by the user depends on the robot's movement and the trajectory it travels.
 
 \section{System Architecture}
 % The proposed system runs on a computer with the Windows 10 operating system. This computer has been equipped with an Intel Core i7-8700K CPU, 32 GB RAM as well as a NVIDIA GTX 1080 GPU with 8 GB VRAM. \gls{htc} is used as a \gls{vr} device. It has a resolution of 1080 × 1200 per eye, resulting in a total resolution of 2160 × 1200 pixels, a refresh rate of 90 Hz, and a field of view of 110 degrees. It includes two motion controllers and uses two Lighthouses to track the headset's position and the motion controllers.
-The proposed system uses \gls{htc} as \gls{vr} device. It has a resolution of 1080 × 1200 per eye, resulting in a total resolution of 2160 × 1200 pixels, a refresh rate of 90 Hz, and a field of view of 110 degrees. It includes two motion controllers and uses two Lighthouses to track the headset's position and the motion controllers.
+The proposed system uses \gls{htc} as the \gls{vr} device. It has a resolution of 1080 × 1200 per eye, resulting in a total resolution of 2160 × 1200 pixels, a refresh rate of 90 Hz, and a field of view of 110 degrees. It includes two motion controllers and uses two Lighthouses to help track the headset's position and the motion controllers.
 
-\gls{unity} was chosen as the platform to develop the system. \gls{unity} is a widely used game engine with \gls{steamvr} \footnote{https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647}, which allows developers to focus on the \gls{vr} environment and interactive behaviors in programming, rather than specific controller buttons and headset positioning, making \gls{vr} development much simpler. Another reason why \gls{unity} was chosen as a development platform was the potential for collaboration with \gls{ros}, a frequently used operating system for robot simulation and manipulation, which is flexible, low-coupling, distributed, open source, and has a powerful and rich third-party feature set. In terms of collaboration between \gls{unity} and \gls{ros}, Siemens provides open-source software libraries and tools in C\# for communicating with ROS from .NET applications \footnote{https://github.com/siemens/ros-sharp}. Combining \gls{ros} and \gls{unity} to develop a collaborative \gls{hri} platform proved to be feasible \cite{Whitney:2018wk}. Since the focus of this work is on \gls{hri}, collaboration and synchronization of \gls{ros} will not be explored in detail here.
+\gls{unity} was chosen as the platform to develop the system. \gls{unity} is a widely used game engine with \gls{steamvr} \footnote{https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647}, which allows developers to focus on the \gls{vr} environment and interactive behaviors in programming, rather than specific controller buttons and headset positioning, making \gls{vr} development much simpler. Another reason why \gls{unity} was chosen as a development platform was its potential for collaboration with \gls{ros}, a frequently used operating system for robot simulation and manipulation, which is flexible, loosely coupled, distributed, open source, and has a powerful and rich third-party feature set. In terms of collaboration between \gls{unity} and \gls{ros}, Siemens provides open-source software libraries and tools in C\# for communicating with ROS from .NET applications \footnote{https://github.com/siemens/ros-sharp}. Combining \gls{ros} and \gls{unity} to develop a collaborative \gls{hri} platform has proved to be feasible \cite{Whitney:2018wk}. Since the focus of this work is on \gls{hri}, collaboration and synchronization with \gls{ros} will not be explored in detail here.
 
 \section{Robot}
-The proposed system needs to simulate the process that a robot uses a \gls{lidar} remote sensor to detect the real environment and synchronize it to \gls{unity}. Thus, a sphere collision body was set up on the robot as seen in \ref{fig:robot}. The robot will transform the Layers of the objects in the scene into visible Layers by collision detection and a trigger event (onTriggerEnter function). The robot's driving performance, such as the number of collisions, average speed, total distance, will be recorded in each test. The detailed recorded information can be seen in Chapter \ref{result}. The movement of the robot depends on the value of the signal that is updated in each mode. In addition, the robot's Gameobject has the \gls{nav} \footnote{https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html} component, which supports the robot's navigation to the specified destination with automatic obstacle avoidance in the test scene. The virtual robot has three cameras. One of the cameras is a simulation of a surveillance camera mounted on the robot, which can see all the items in the scene, although the distant items are not yet detected by LiDAR. Two of these cameras are set up in such a way that they can only see the area detected by \gls{lidar}. Each camera captures what it sees and modifies the bound image in real time. The four operation modes described later all use the camera viewport as a monitoring screen by rendering the camera viewport on UI canvas.
+The proposed system needs to simulate the process in which a robot uses a \gls{lidar} remote sensor to detect the real environment and synchronize it to \gls{unity}. Thus, a sphere collision body was set up on the robot, as seen in Figure \ref{fig:robot}. The collider is an invisible \gls{unity} component that can interact with other colliders via collision detection and a trigger event (the OnTriggerEnter function). In this system, the sphere collision body defines the range of the robot's \gls{lidar} detection. It interacts with invisible objects in the scene, such as buildings, during the robot's movement and switches the Layers of those objects to visible Layers. The robot's driving performance, such as the number of collisions, average speed and total distance, is recorded in each test. The detailed recorded information can be seen in Chapter \ref{result}. The movement of the robot depends on the value of the signal that is updated in each mode. In addition, the robot's GameObject has the \gls{nav} \footnote{https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html} component, which supports the robot's navigation to a specified destination with automatic obstacle avoidance in the test scene. The virtual robot has three cameras. One of the cameras simulates a surveillance camera mounted on the robot, which can see all the items in the scene, even distant items not yet detected by \gls{lidar}. The other two cameras are set up in such a way that they can only see the area detected by \gls{lidar}. Each camera captures what it sees and modifies the bound image in real time. The four operation modes described later all use the camera viewport as a monitoring screen by rendering it on a UI canvas.
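
To make the revealing mechanism concrete, here is a minimal C\# sketch of how such a detection sphere could work in \gls{unity} (the class name, layer names and setup are illustrative assumptions, not the project's actual code; Unity requires a Rigidbody on the robot for trigger events to fire):

    using UnityEngine;

    // Attached to a sphere trigger collider parented to the robot. Scene objects
    // start on a hidden layer and are switched to a visible layer once the
    // simulated LiDAR range (the sphere) reaches them.
    public class LidarRangeSimulator : MonoBehaviour
    {
        int hiddenLayer, visibleLayer;

        void Awake()
        {
            hiddenLayer  = LayerMask.NameToLayer("Hidden");  // assumed layer names
            visibleLayer = LayerMask.NameToLayer("Visible");
        }

        void OnTriggerEnter(Collider other)
        {
            if (other.gameObject.layer == hiddenLayer)
                other.gameObject.layer = visibleLayer;       // reveal the object
        }
    }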
 
 
 \begin{figure}[htbp]
@@ -52,19 +52,19 @@ In order to improve the reusability of the code and facilitate the management of
 \end{figure}
 
 \subsection{Handle Mode}
-In this mode, the user controls the robot's movement directly through the motion controller in the right hand. The touchpad of the motion controller determines the direction of rotation of the robot. The user can control the robot's driving speed by pulling the Trigger button. Figure \ref{fig:htc} shows the \gls{htc} motion controller. The robot rotation direction will read the value of the touchpad X-axis. The range of values is $[-1,1]$. Forward speed reads the Trigger button passed in as a variable of type SteamVR\_Action\_Single, and the range of the variable is $[0,1]$. With the right-hand menu button, the surveillance screen around the robot can be turned on or off. The monitor window can be adjusted to a suitable position by dragging and rotating it. In the literature dealing with \gls{vr} and \gls{hri}, many researchers have used a similar operational approach. Therefore, as a widely used, and in a sense default operation approach, this mode was designed and became one of the proposed operation modes.
+In this mode, the user controls the robot's movement directly through the motion controller in the right hand. The touchpad of the motion controller determines the direction of rotation of the robot, and the user controls the robot's driving speed by pulling the Trigger button. Figure \ref{fig:htc} shows the \gls{htc} motion controller. The robot's rotation direction is read from the value of the touchpad's X-axis, whose range is $[-1,1]$. The forward speed is read from the Trigger button as a variable of type SteamVR\_Action\_Single, whose range is $[0,1]$. With the right-hand menu button, the surveillance screen can be turned on or off. The monitor window can be adjusted to a suitable position by dragging and rotating it. In the literature dealing with \gls{vr} and \gls{hri}, many researchers have used a similar operational approach. Therefore, as a widely used and, in a sense, default approach, this mode was designed as one of the operation modes.
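
A minimal sketch of this input mapping with the \gls{steamvr} input system is given below (the class name, action fields and speed constants are assumptions for illustration; the actions must be bound in the SteamVR Input window):

    using UnityEngine;
    using Valve.VR;

    // Reads the right-hand touchpad X-axis for rotation and the Trigger for
    // forward speed, as described above.
    public class HandleModeInput : MonoBehaviour
    {
        public SteamVR_Action_Vector2 touchpad;  // touchpad position, X in [-1, 1]
        public SteamVR_Action_Single trigger;    // trigger pull in [0, 1]
        public float maxSpeed = 2f;              // assumed m/s
        public float turnRate = 60f;             // assumed deg/s

        void Update()
        {
            float turn  = touchpad.GetAxis(SteamVR_Input_Sources.RightHand).x;
            float speed = trigger.GetAxis(SteamVR_Input_Sources.RightHand);
            transform.Rotate(0f, turn * turnRate * Time.deltaTime, 0f);
            transform.Translate(Vector3.forward * speed * maxSpeed * Time.deltaTime);
        }
    }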
 
 \begin{figure}[htbp]
     \centering
     \begin{minipage}[t]{0.48\textwidth}
         \centering
-        \includegraphics[height=7cm]{graphics/handle1.png}
+        \includegraphics[height=5cm]{graphics/handle1.png}
         \caption{Handle Mode}
         \label{fig:handle}
     \end{minipage}
     \begin{minipage}[t]{0.48\textwidth}
         \centering
-        \includegraphics[height=7cm]{graphics/htc.png}
+        \includegraphics[height=5cm]{graphics/htc.png}
         \caption{HTC handle illustration}
         \label{fig:htc}
     \end{minipage}
@@ -72,15 +72,15 @@ In this mode, the user controls the robot's movement directly through the motion
 
 
 \subsection{Lab Mode}
-This pattern was designed with reference to the system proposed by \cite{Perez:2019ub}\cite{Matsas:2017aa}. Their frameworks are used to train operators to work with the robot, avoiding risks and saving learning costs. In addition, they also mentioned that being in a simulated factory or laboratory can improve immersion. Therefore, in this mode, a virtual laboratory environment is constructed, in which simulated buttons, controllers, and monitoring equipment are placed. The laboratory consists of two parts. The first part is the monitoring equipment: the monitoring screen is enlarged and placed at the front of the lab as a huge display. The second part is the operating console in the center of the laboratory, which can be moved by the user as desired. This is due to the fact that users have different heights and may wish to operate the robot in a standing or sitting position. The user can use the buttons on the table to lock the robot or let it walk forward automatically. In the middle of the console are two operating joysticks that determine the robot's forward motion and rotation respectively. The part that involves virtual joystick movement and button effects uses an open-source GitHub project VRtwix\footnote{https://github.com/rav3dev/vrtwix}. With the sliding stick on the left, the user can edit the speed of the robot's forward movement and rotation.
+This mode was designed with reference to the systems proposed in \cite{Matsas:2017aa}\cite{Perez:2019ub}. Their frameworks are used to train operators to work with the robot, avoiding risks and saving learning costs. In addition, they also mention that being in a simulated factory or laboratory can improve immersion. Therefore, in this mode, a virtual laboratory environment is constructed, in which simulated buttons, controllers, and monitoring equipment are placed. The laboratory consists of two parts. The first part is the monitoring equipment: the monitoring screen is enlarged and placed at the front of the lab as a huge display. The second part is the operating console in the center of the laboratory, which can be moved by the user as desired, since users have different heights and may wish to operate the robot in a standing or sitting position. The user can use the buttons on the table to lock the robot or let it move forward automatically. In the middle of the console are two operating joysticks that determine the robot's forward motion and rotation, respectively. The virtual joystick movement and button effects use the open-source GitHub project VRtwix\footnote{https://github.com/rav3dev/vrtwix}. With the sliding stick on the left, the user can adjust the speed of the robot's forward movement and rotation.
 
 \begin{figure}[htbp]
     \centering
     \subfigure[Overview]{
-        \includegraphics[height=5cm]{graphics/lab3.png}
+        \includegraphics[height=5cm]{graphics/lab1.png}
     }
     \subfigure[Operating console]{ 
-        \includegraphics[height=5cm]{graphics/lab4.png}
+        \includegraphics[height=5cm]{graphics/lab3.png}
     }
     \caption{Lab Mode}
     \label{fig:lab} 
@@ -88,7 +88,7 @@ This pattern was designed with reference to the system proposed by \cite{Perez:2
 
 \subsection{Remote Mode}
 In this mode, the user can set the driving target point directly or control the robot by picking up the remote control placed on the toolbar. The target point is set by the ray emitted by the right motion controller. This process is similar to setting a teleportation point. After the target point is set, a square representing the destination is shown in the scene, and the robot will automatically travel to the set destination. The entire driving process uses the \gls{nav} component and is therefore capable of automatic obstacle avoidance.
-A movable toolbar with remote control and a monitoring device can be opened by clicking on the menu button. The remote control is a safety precaution if the automatic navigation fails to navigate the target point properly. The user can adjust the direction of the robot's travel by using the remote control. The pickup and auto-release parts use the ItemPackage component available in the \gls{steamvr}.
+A movable toolbar with a remote control and a monitoring device can be opened by clicking the menu button. The remote control is a backup in case the automatic navigation fails to reach the target point properly. The user can adjust the direction of the robot's travel by using the remote control. The pickup and auto-release parts use the ItemPackage component available in \gls{steamvr}.
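
A simplified sketch of how the target point could be set on the navigation mesh follows (class and field names are assumptions; the actual project code may differ):

    using UnityEngine;
    using UnityEngine.AI;

    // Casts a ray from the right controller, snaps the hit point onto the
    // NavMesh and lets the NavMeshAgent drive the robot there with automatic
    // obstacle avoidance.
    public class RemoteModeNavigator : MonoBehaviour
    {
        public NavMeshAgent agent;         // the robot's agent
        public Transform rightController;  // assumed reference to the controller

        public void SetTargetPoint()
        {
            if (Physics.Raycast(rightController.position, rightController.forward, out RaycastHit hit)
                && NavMesh.SamplePosition(hit.point, out NavMeshHit navHit, 1f, NavMesh.AllAreas))
            {
                agent.SetDestination(navHit.position);  // robot navigates automatically
            }
        }
    }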
 
 \begin{figure}[htbp]
     \centering
@@ -104,7 +104,7 @@ A movable toolbar with remote control and a monitoring device can be opened by c
 
 
 \subsection{UI Mode}
-The virtual menu is also an interaction method that is often used in \gls{vr}, so this mode is proposed. In this mode, the user must interact with the virtual menu using the ray emitted by the right motion controller. The virtual menu is set up with buttons for the direction of movement, a speed controller, and buttons to open and close the monitor screen. In addition to this, an additional follow function is added to the menu, allowing the robot to follow the user's position in the virtual world. This is intended to let the user concentrate on observing the rendered \gls{vr} environment. Also, having a real robot following the user's location in the virtual world is a novel, unique \gls{hri} approach in \gls{vr}. The robot's automatic navigation uses the \gls{nav}.
+Virtual menus are also an interaction method that is often used in \gls{vr}, which motivated this mode. In this mode, the user interacts with the virtual menu using the ray emitted by the right motion controller. The virtual menu provides buttons for the direction of movement, a speed controller, and buttons to open and close the monitoring screen. In addition, a follow function is added to the menu, allowing the robot to follow the user's position in the virtual world. This is intended to let the user concentrate on observing the rendered \gls{vr} environment. Also, having a real robot follow the user's location in the virtual world is a novel, unique \gls{hri} approach in \gls{vr}. The robot's automatic navigation uses the \gls{nav}.
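
The follow function could be realized with the same \gls{nav} component, for example as in this hypothetical sketch (field names and the distance threshold are assumptions):

    using UnityEngine;
    using UnityEngine.AI;

    // While following is enabled from the virtual menu, the robot repeatedly
    // navigates towards the user's current position in the virtual world.
    public class FollowUser : MonoBehaviour
    {
        public NavMeshAgent agent;       // the robot's agent
        public Transform userCamera;     // assumed reference to the VR headset
        public bool followEnabled;       // toggled by the menu button
        public float keepDistance = 2f;  // assumed minimum distance in meters

        void Update()
        {
            if (followEnabled &&
                Vector3.Distance(agent.transform.position, userCamera.position) > keepDistance)
            {
                agent.SetDestination(userCamera.position);
            }
        }
    }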
 
 \begin{figure}[htbp]
     \centering
@@ -122,9 +122,9 @@ The virtual menu is also an interaction method that is often used in \gls{vr}, s
 \section{Test Scene}
 In order to simulate the use of rescue robots in disaster scenarios, the test scenes were built to mimic a post-disaster urban environment as closely as possible. The POLYGON Apocalypse \footnote{https://assetstore.unity.com/packages/3d/environments/urban/polygon-apocalypse-low-poly-3d-art-by-synty-154193}, available on the \gls{unity} Asset Store, is a low poly asset pack with a large number of models of buildings, streets, vehicles, etc. This resource pack was used as a base. After the resource pack was imported, collision bodies of the appropriate size were manually added to each building and obstacle, which was needed to track the robot's collisions in subsequent tests.
 
-Considering that four operation modes need to be tested, four scenes with similar complexity and composition but different road conditions and placement of buildings were constructed. The similarity in complexity of the scenes ensures that the difficulty of the four tests is basically identical. The different scene setups ensure that the scene information learned by the user after one test will not make him understand the next test scene and thus affect the accuracy of the test data. 
+Considering that four operation modes need to be tested, four scenes with similar complexity and composition but different road layouts and placement of buildings were constructed. The similar complexity of the scenes ensures that the difficulty of the four tests is basically identical. The different scene setups ensure that the scene information learned by the user in one test does not help them anticipate the next test scene and thus affect the accuracy of the test data. 
 
-The entire scene is initially invisible, and the visibility of each object in the test scene is gradually updated as the robot drives along. Ten interactable sufferer characters were placed in each test scene. The placement place can be next to the car, the house side and some other reasonable places.
+The entire scene is initially invisible, and the visibility of each object in the test scene is gradually updated as the robot drives along. Ten interactable victim characters were placed in each test scene, for example next to cars, beside houses and in other plausible locations.
 
 \begin{figure}[htbp]
     \centering

+ 14 - 13
Concepts for operating ground based rescue robots using virtual reality/chapters/introduction.tex

@@ -1,31 +1,32 @@
 \chapter{Introduction}
 
-In recent years, natural disasters such as earthquakes, tsunamis and potentially nuclear explosives have seriously threatened the safety of human life and property. While the number of various disasters has increased, their severity, diversity and complexity have also gradually increased. The 72h after a disaster is the golden rescue time, but the unstructured environment of the disaster site makes it difficult for rescuers to work quickly, efficiently and safely.
+In recent years, natural disasters such as earthquakes, tsunamis and potential nuclear incidents have seriously threatened the safety of human life and property. The number of disasters, as well as their severity and complexity, has gradually increased. The first 72 hours after a disaster are the golden rescue window, but the unstructured environment of disaster sites makes it difficult for rescuers to work quickly, efficiently and safely.
 
-Rescue robots have the advantages of high mobility and handling breaking capacity. They can work continuously to improve the efficiency of search and rescue. Also, those robots can achieve the detection of the graph, sound, gas and temperature within the ruins by carrying a variety of sensors.
-When rescue robots can assist or replace the rescuers, the injuries caused by the secondary collapse could be avoided, and risks faced by rescuers might be lower. Thus, rescue robots have become an important development direction.
+Rescue robots have the advantage of high mobility. They can work continuously to improve the efficiency of search and rescue. In addition, by carrying a variety of sensors, those robots can map the terrain and detect sound, gas and temperature within the ruins.
+Rescue robots can assist or replace rescuers, thus avoiding injuries caused by secondary collapses and reducing the risks rescuers face. 
 
-In fact, rescue robots have been put to use in a number of disaster scenarios. 
-\gls{crasar} used rescue robots for Urban Search and Rescue task during the World Trade Center collapse in 2001 \cite{Casper:2003tk} and has employed rescue robots at multiple disaster sites in the years since to assist in finding survivors, inspecting buildings and scouting the site environment etc \cite{Murphy:2012th}. Anchor Diver III was utilized as underwater support to search for bodies drowned at sea after the 2011 Tohoku Earthquake and Tsunami \cite{Huang:2011wq}.
 
-Considering the training time and space constraints for rescuers \cite{Murphy:2004wl}, and the goal of efficiency and fluency collaboration \cite{10.1145/1228716.1228718}, the appropriate \gls{hri} approach deserves to be investigated. Some of the existing \gls{hri} methods are Android software \cite{Sarkar:2017tt} \cite{Faisal:2019uu}, gesture recognition\cite{Sousa:2017tn} \cite{10.1145/2157689.2157818} \cite{Nagi:2014vu}, facial voice recognition \cite{Pourmehr:2013ta}, adopting eye movements \cite{Ma:2015wu}, \gls{ar} \cite{SOARES20151656} and \gls{vr}, etc.
+In fact, rescue robots have already been used in a number of disaster scenarios. 
+\gls{crasar} used rescue robots for urban search and rescue tasks during the World Trade Center collapse in 2001 \cite{Casper:2003tk} and has employed rescue robots at multiple disaster sites since then to assist in finding survivors, inspecting buildings and scouting the site environment \cite{Murphy:2012th}. Anchor Diver III was utilized as underwater support to search for bodies drowned at sea after the 2011 Tohoku Earthquake and Tsunami \cite{Huang:2011wq}.
+
+Considering the training time and space constraints for rescuers \cite{Murphy:2004wl}, and the goal of efficient and fluent collaboration \cite{10.1145/1228716.1228718}, the appropriate \gls{hri} approach deserves to be investigated. Some of the existing \gls{hri} solutions include Android software \cite{Sarkar:2017tt} \cite{Faisal:2019uu}, gesture recognition \cite{Sousa:2017tn} \cite{10.1145/2157689.2157818} \cite{Nagi:2014vu}, face and voice recognition \cite{Pourmehr:2013ta}, eye movements \cite{Ma:2015wu}, \gls{ar} \cite{SOARES20151656} and \gls{vr}.
 
 % VR and robot
-Among them, \gls{vr} has gained much attention due to its immersion and the interaction method that can be changed virtually. \gls{vr} is no longer a new word. With the development of technology in recent years, \gls{vr} devices are gradually becoming more accessible to users. With the improvement of hardware devices, the new generation of \gls{vr} headsets has higher resolution and a wider field of view. While \gls{vr} are often considered entertainment devices, \gls{vr} brings more than that. It plays an important role in many fields such as entertainment, training, education and medical care.
+Among them, \gls{vr} has gained much attention due to its immersion and its virtually adaptable interaction methods. \gls{vr} is no longer a new interaction technique. With the development of technology in recent years, \gls{vr} devices have gradually become more accessible to users. With improved hardware, the new generation of \gls{vr} headsets has higher resolution and a wider field of view. While \gls{vr} headsets are often considered entertainment devices, \gls{vr} offers more than that. It plays an important role in many fields such as training, education and medical care.
 
-The use of \gls{vr} in \gls{hrc} also has the potential. In terms of reliability, \gls{vr} is reliable as a novel alternative to \gls{hri}. The interaction tasks that users can accomplish with \gls{vr} do not differ significantly from those using real operating systems\cite{Villani:2018ub}. In terms of user experience and operational efficiency, \gls{vr} headsets can provide users with stereo viewing cues, which makes collaborative \gls{hri} tasks in certain situations more efficient and performance better \cite{Liu:2017tw}. A novel \gls{vr}-based practical system for immersive robot teleoperation and scene exploration can improve the degree of immersion and situation awareness for the precise navigation of the robot as well as the interactive measurement of objects within the scene. In contrast, this level of immersion and interaction cannot be reached with video-only systems \cite{Stotko:2019ud}.
+The use of \gls{vr} in \gls{hrc} also has potential. In terms of reliability, \gls{vr} is a reliable, novel alternative for \gls{hri}: the interaction tasks that users can accomplish with \gls{vr} do not differ significantly from those using devices in real environments \cite{Villani:2018ub}. In terms of user experience and operational efficiency, \gls{vr} headsets can provide users with stereo viewing cues, which can make collaborative \gls{hri} tasks in certain situations more efficient and improve performance \cite{Liu:2017tw}. A novel \gls{vr}-based practical system for immersive robot teleoperation and scene exploration can improve the degree of immersion and situation awareness, thus increasing the accuracy of the robot's navigation and the awareness of objects in the scene. In contrast, this level of immersion and interaction cannot be reached with video-only systems \cite{Stotko:2019ud}.
 
-However, there remains a need to explore \gls{hri} patterns and improve the level of Human-Robot Integration \cite{Wang:2017uy}. Intuitive and easy-to-use interactive patterns can enable the user to explore the environment as intentionally as possible and improve the efficiency of search and rescue. The appropriate interaction method should cause less mental and physical exhaustion, which also extends the length of an operation, making it less necessary for the user to frequently exit the \gls{vr} environment for rest.
+However, there remains a need to explore \gls{hri} patterns and improve the level of Human-Robot Integration \cite{Wang:2017uy}. Intuitive and easy-to-use interaction patterns enable the user to explore the environment intentionally and improve the efficiency of search and rescue. An appropriate interaction method should also cause less mental and physical exhaustion, reducing the need for the user to frequently exit the \gls{vr} environment for rest.
 
 % What I have done (overview)
-For this purpose, this paper presents a preliminary \gls{vr}-based system that simulates the cooperation between ground based rescue robots and humans with four different operation modes and corresponding test scenes, which imitate a post-disaster city. The test scene simulates a robot collaborating with Unity to construct a virtual 3D scene. The robot has a simulated \gls{lidar} remote sensor, which makes the display of the scene dependent on the robot's movement. In order to find an interactive approach that is as intuitive and low mental fatigue as possible, a user study was executed after the development was completed.
+For this purpose, this thesis presents a preliminary \gls{vr}-based system that simulates the cooperation between ground-based rescue robots and humans, offering four different operation modes and corresponding test scenes that imitate a post-disaster city. The virtual 3D scene is constructed in \gls{unity}, and the proposed system simulates the process by which the robot uses \gls{lidar} to sense its surroundings, so that the visible scene depends on the robot's exploration. In order to find an interaction approach that is as intuitive and easy to use as possible, a user study was performed after the development was completed.
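The Unity-side implementation of this simulated \gls{lidar} does not appear in this diff. Purely as an illustration of the idea (a minimal sketch under assumed 2D geometry with circular obstacles, not the thesis code), a simulated \gls{lidar} can be modeled as a fan of rays intersected with scene obstacles:

```python
import numpy as np

def simulate_lidar(pose, obstacles, n_rays=180, max_range=30.0):
    """Cast a fan of rays from `pose` and return per-ray hit distances.

    pose: (x, y) robot position; obstacles: list of (cx, cy, r) circles.
    Illustrative stand-in for the Unity-side sensing, not the thesis code.
    """
    x, y = pose
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    distances = np.full(n_rays, max_range)
    for i, a in enumerate(angles):
        dx, dy = np.cos(a), np.sin(a)          # unit ray direction
        for cx, cy, r in obstacles:
            # Ray-circle intersection: solve t^2 + b*t + c = 0 for t >= 0.
            fx, fy = x - cx, y - cy
            b = 2.0 * (fx * dx + fy * dy)
            c = fx * fx + fy * fy - r * r
            disc = b * b - 4.0 * c
            if disc < 0.0:
                continue                        # ray misses this obstacle
            t = (-b - np.sqrt(disc)) / 2.0      # nearest intersection
            if 0.0 <= t < distances[i]:
                distances[i] = t
    return angles, distances

# Example: one obstacle 10 m in front of a robot at the origin.
angles, dists = simulate_lidar((0.0, 0.0), [(10.0, 0.0, 1.0)])
```

Rays whose hit distance falls below `max_range` correspond to geometry that would become visible to the operator as the robot moves.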
 
 
-% Paper Architecture
+% Thesis architecture
 In Chapter \ref{related}, related work involving the integration of \gls{vr} and \gls{hri} is presented.
 Chapter \ref{implementation} provides details of the proposed system, including the techniques used for the different interaction modes and the setup for test scenes.
-Chapter \ref{evaluate} explains the design and procedure of user study.
+Chapter \ref{evaluate} explains the design and procedure of the user study.
 Chapters \ref{result} and \ref{discuss} present the results of the user study, analyze the advantages and disadvantages of the different operation modes, and outline directions for future work.
-Finally, in Chapter \ref{conclusion}, the article is concluded.
+Finally, in Chapter \ref{conclusion}, the thesis is concluded.
 
 

+ 4 - 4
Concepts for operating ground based rescue robots using virtual reality/chapters/related_work.tex

@@ -5,11 +5,11 @@ In this chapter, some research on the integration of \gls{vr} and \gls{hri} will
 
 The topic of \gls{vr} and \gls{hri} is an open research topic with many kinds of focus perspectives.
 
-\gls{hri} platforms combined with virtual worlds have several application scenarios. It can be used, for example, to train operators. Elias Matsas et al. \cite{Matsas:2017aa} provided a \gls{vr}-based training system using hand recognition. Kinect cameras are used to capture the user's positions and motions, and virtual user models are constructed in the \gls{vr} environment based on the collected data. Users will operate robots and virtual objects in the \gls{vr} environment, and in this way, learn how to operate the real robot. The framework proposed by Luis Pérez et al. \cite{Perez:2019ub} is applied to train operators to learn to control the robot. Since the environment does not need to change in real time, but rather needs to recreate the factory scene realistically, a highly accurate 3D environment was constructed in advance using Blender after being captured with a 3D scanner.
+\gls{hri} platforms combined with virtual worlds have many applications. They can be used, for example, to train machine operators in factories. Elias Matsas et al. \cite{Matsas:2017aa} provided a \gls{vr}-based training system using hand recognition. Kinect cameras capture the user's positions and motions, and virtual user models are constructed in the \gls{vr} environment from the collected data. Users operate robots and virtual objects in the \gls{vr} environment and, in this way, learn how to operate the real robot. The framework proposed by Luis Pérez et al. \cite{Perez:2019ub} is applied to train operators to control the robot. Since the environment does not need to change in real time, but rather needs to recreate the factory scene realistically, a highly accurate 3D environment was constructed in advance using Blender in combination with a 3D scanner.
 
-Building 3D scenes in virtual worlds based on information collected by robots is also a research highlight. Wang, et al. \cite{Wang:2017uy} were concerned with the visualization of the rescue robot and its surroundings in a virtual environment. The proposed \gls{hri} system uses incremental 3D-NDT map to render the robot's surroundings in real time. The user can view the robot's surroundings in a first-person view through the \gls{htc} and send control commands through arrow keys on the motion controllers. A novel \gls{vr}-based practical system is presented in \cite{Stotko:2019ud} consisting of distributed systems to reconstruct the 3D scene. The data collected by the robot is first transmitted to the client responsible for reconstructing the scene. After the client has constructed the 3D scene, the set of actively reconstructed visible voxel blocks is sent to the server responsible for communication, which has a robot-based live telepresence and teleoperation system. This server will then broadcast the data back to the client used by the operator, thus enabling an immersive visualization of the robot within the scene.
+Building 3D scenes in virtual worlds based on information collected by robots is also a research highlight. Wang et al. \cite{Wang:2017uy} were concerned with the visualization of the rescue robot and its surroundings in a virtual environment. Their \gls{hri} system uses an incremental 3D-NDT map to render the robot's surroundings in real time. The user views the robot's surroundings in a first-person view through the \gls{htc} and sends control commands through arrow keys on the motion controllers. A novel, practical \gls{vr}-based system is presented in \cite{Stotko:2019ud}, consisting of distributed components that reconstruct the 3D scene. The data collected by the robot is first transmitted to the client responsible for reconstructing the scene. After the client has constructed the 3D scene, the set of actively reconstructed visible voxel blocks is sent to the server responsible for communication, which runs a robot-based live telepresence and teleoperation system. This server then broadcasts the data to the client used by the operator, thus enabling an immersive visualization of the robot within the scene.
 
-Others are more concerned about the manipulation of the robotic arm mounted on the robot. Moniri et al. \cite{Moniri:2016ud} provided a \gls{vr}-based operating model for the robotic arm. The user wearing a headset can see a simulated 3D scene at the robot's end and send pickup commands to the remote robot by clicking on the target object with the mouse. The system proposed by Ostanin et al. \cite{Ostanin:2020uo} is also worth mentioning. Although their proposed system for operating a robotic arm is based on \gls{mr}, the article is highly relevant to this paper, considering the correlation of \gls{mr} and \gls{vr} and the proposed system detailing the combination of \gls{ros} and robotics. In their system, the \gls{ros} Kinect was used as middleware and was responsible for communicating with the robot and the \gls{unity} side. The user can control the movement of the robot arm by selecting predefined options in the menu. In addition, the orbit and target points of the robot arm can be set by clicking on a hologram with a series of control points.
+Others are more concerned with the manipulation of the robotic arm mounted on the robot. Moniri et al. \cite{Moniri:2016ud} provided a \gls{vr}-based operating model for the robotic arm. The user wearing a headset sees a simulated 3D scene at the robot's end and sends pickup commands to the remote robot by clicking on the target object with the mouse. The system proposed by Ostanin et al. \cite{Ostanin:2020uo} is also worth mentioning. Although their system for operating a robotic arm is based on \gls{mr}, the article is highly relevant to this thesis, considering the correlation of \gls{mr} and \gls{vr} and the detailed description of combining \gls{ros} with robotics. In their system, \gls{ros} Kinetic is used as middleware and is responsible for the communication between the robot and the \gls{unity} side. The user can control the movement of the robot arm by selecting predefined options in the menu. In addition, the orbit and target points of the robot arm can be set by clicking on a hologram with a series of control points.
 
 %Summary
-To summarize, a large number of authors have studied methods and tools for \gls{vr}-based \gls{hri} and teleoperation. However, very few studies focus on the different interactive approaches for \gls{hri}.
+To summarize, previous work has studied methods and tools for \gls{vr}-based \gls{hri} and teleoperation. However, only a few studies focus on comparing different interaction approaches for \gls{hri}.

+ 46 - 28
Concepts for operating ground based rescue robots using virtual reality/chapters/result.tex

@@ -11,50 +11,67 @@ In this chapter, the results of the user study, obtained by the method described
 
 \section{Participants}
 
-A total of 8 volunteers participated in the user study (3 females and 5 males between 22 and 32 years, mean age 25.75 years). Five of them were computer science students at the university. Four participants had previous experience with VR,  but had played it only a few times.
+A total of 8 volunteers participated in the user study (3 females and 5 males, aged between 22 and 32 years, mean age 25.75 years). Five of them were computer science students at the university. Four participants had no previous experience with \gls{vr}, and the other four had only limited experience.
 
 \section{Quantitative Results}
 Part of the data for the quantitative analysis comes from the robot's performance and testing results, which were automatically recorded by the proposed system during the tests. The other part of the data comes from the questionnaires that the participants filled out after the test.
 
-
-
 \subsection{Robot Performance}
 The overall robot performance is reported in Figure \ref{fig:performance}.
+
+The number of collisions between the robot and objects reflects the probability of the robot being damaged. Lab Mode yielded the worst results, with an average of 26.75 collisions and a standard error of 18.07. Handle Mode had the second-worst result, with an average of 21.5 collisions and a standard error of 11.45. Remote Mode and UI Mode performed similarly: both produced few collisions and a low standard error ($M_{Remote} = 2.25$, $SD_{Remote}= 1.79$, and $M_{UI} = 3.875$, $SD_{UI} = 1.96$). 
+
+In Lab Mode, the robot traveled the longest distance and drove for the longest time. During the five-minute test period, the robot drove for an average of 243 seconds and covered a total of 753 meters. The average speed did not differ significantly between the four modes, but the standard error of the samples in Handle and Lab modes was large. In both modes, some participants drove the robot very slowly and cautiously, while others tended to drive at maximum speed on the road and braked as soon as they noticed a distressed person. In Remote and UI modes, the robot's driving route was mainly controlled by the computer, so the average speed was essentially the same and the deviations were very small.
 \begin{figure}[htbp]
     \centering
-    \includegraphics[width=\textwidth]{graphics/Robot Performance2.png}
+    \includegraphics[width=\textwidth]{graphics/Robot Performance.png}
     \caption{Robot Performance. (All error bars indicate the standard error.)} 
     \label{fig:performance}
 \end{figure}
-The number of collisions between the robot and objects reflects the probability of the robot being destroyed. Lab Mode got the worst results with an average collision times of 26.75 with a standard error of 18.07. Handle Mode has the second-worst result with an average collision times of 21.5 with a standard error of 11.45. Remote Mode and UI Mode perform similarly, they both have a few collision times and a low standard error ($M_{Remote} = 2.25$, $SD_{Remote}= 1.79$, and $M_{UI} = 3.875$, $SD_{UI} = 1.96$). 
-
-In Lab Mode, the robot travels the most distance and travels the most time. During the five-minute test period, the robot drove for an average of 243 seconds and covered a total of 753 meters. The average speed of the four modes did not differ significantly, but the standard error of the samples in Handle and Lab modes was significant.  In both modes, it was found that some participants drove the robot very slowly and cautiously, while some participants tended to drive at maximum speed on the road and braked as soon as they noticed a distressed person. In Remote and UI modes, the robot's driving route was mainly controlled by the computer, so the average speed was basically the same and the deviation values were very small.
-
 
 
 
 \subsection{Rescue situation}
 
-\begin{wrapfigure}{r}{0.4\textwidth}
-\flushright
-  \vspace{-80pt}    % 对应高度1
-  \includegraphics[height=7cm]{graphics/Rescue situation2.png}\\
-  \caption{Rescue situation. (All error bars indicate the standard error.)}
-  \label{fig:rescue}
-  \vspace{-80pt}    % 对应高度3
-\end{wrapfigure}
-The results of rescuing victims are shown in Figure \ref{fig:rescue}. In general, the average number of rescued victims was the highest in Remote and UI modes. In both modes, there were participants who rescued all the victims within the time limit, and even completed the rescue task from half a minute to one minute earlier. In the Lab Mode, the remaining visible victims are the most. This means that participants are more likely to overlook details in the scene, or even pass by the victim without noticing him. This could be attributed to the poor display in this scene, or it could be due to the complexity of the operation which makes the participant not have time to take into account every detail in the scene.
+
+\begin{figure}[htbp]
+    \centering
+    \includegraphics[height=7cm]{graphics/Rescue situation2.png}
+    \caption{Rescue situation. (All error bars indicate the standard error.)} 
+    \label{fig:rescue}
+\end{figure}
+The results of rescuing victims are shown in Figure \ref{fig:rescue}. In general, the average number of rescued victims was highest in Remote and UI modes. In both modes, some participants rescued all the victims within the time limit, and even completed the rescue task half a minute to one minute early. In Lab Mode, the number of victims that remained visible but unrescued was the highest. This means that participants were more likely to overlook details in the scene, or even pass by a victim without noticing them. This could be attributed to the poor display quality of the simulated screen, or to the complexity of the operation, which left participants no time to take in every detail of the scene.
 
 
 \subsection{TLX Score}
 \begin{figure}[htbp]
     \centering
-    \includegraphics[width=\textwidth]{graphics/tlx3.jpg}
+    \includegraphics[width=\textwidth]{graphics/tlx6.jpg}
     \caption{The average score of \gls{tlx}. (All error bars indicate the standard error.)}
     \label{fig:tlx} 
 \end{figure}
-The NASA Task Load Index (NASA-TLX) consists of six subjective subscales. In order to simplify the complexity of the assessment, the weighted scores for each subscale are the same when calculating the total score. Overall, the smaller the number, the less workload the operation mode brings to the participant. Figure \ref{fig:tlx} contains the six subjective subscales mentioned above as well as the total score. The graph shows the mean and standard error of each scale. The standard error is large for each scale because participants could only evaluate the workload of each mode of operation relatively, and they had different standard values in mind. As can be seen, the total workloads obtained in Remote and UI modes are the smallest. Similar total workloads can be found in Lab Mode and Handle Mode, and their values are worse. In both modes, they feel 
-more negative emotions, such as irritability, stress, etc. Participants said they needed to recall what buttons to press to achieve a certain effect, or needed to consider how to turn the robot to get it running to the desired position. This resulted in more mental activity for Handle Mode and Lab Mode. The time pressure in these two modes is also the highest. Participants demanded the most physical activity and effort in Lab Mode. It was observed that they frequently looked up and down to view different screens. In addition, some participants maintained their arms in a flat position while operating the virtual joystick, which also caused some fatigue on the arms. Participants' perceptions of the performance situation were similar across the four modes of operation. Overall, the Remote and UI modes received better scores on the NASA Task Load Index.
+The \gls{tlx} results reveal the perceived cognitive workload. The lower the score, the less workload the operation mode imposes on the participant. Figure \ref{fig:tlx} contains the mean and standard error of each scale. The standard error is large for each scale because participants could only rate the workload of each operation mode relative to the others, and they had different reference values in mind. 
+
+In this subsection, the total score is analyzed first. Three of the subjective subscales are additionally examined in detail.
+
+As can be seen, Lab Mode and Handle Mode received similar scores on each scale, and their values were significantly worse than those of the other two modes. In the majority of cases, the average load in these two modes was almost twice as high as in Remote and UI modes ($total_{Handle} = 57.29$, $total_{Lab} = 61.67$, $total_{Remote} = 30.24$, $total_{UI} = 28.23$).
+
+The difference in scores was largest for mental demand ($mental_{Handle} = 73.125$, $mental_{Lab} = 61.25$, $mental_{Remote} = 21.43$, $mental_{UI} = 15.63$). Handle Mode required the most mental activity. Participants said they needed to recall which buttons to press to achieve a certain effect, or to consider how to turn the robot to get it moving towards the desired position. They also had to pay attention to the distance between themselves and the robot so that it would not leave their sight.
+
+Physical demand was highest in Lab Mode. It was observed that participants frequently looked up and down to view the different screens. In addition, some participants held their arms outstretched while operating the virtual joystick, which also caused some fatigue in the arms. 
+
+In Lab Mode and Handle Mode, participants felt more negative emotions such as irritability and stress ($frustration_{Handle} = 47.5$, $frustration_{Lab} = 54.38$, $frustration_{Remote} = 20$, $frustration_{UI} = 21.88$).
+
+Overall, the Remote and UI modes received better scores on \gls{tlx}.
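For reference, the earlier revision of this paragraph notes that the subscales are weighted equally when computing the total, i.e. a raw-TLX-style score. A minimal sketch of that computation follows; the subscale names are assumed to match the column names used in the analysis script:

```python
import numpy as np

# Subscale names assumed to match the analysis script's column names.
SUBSCALES = ["mental-demand", "physical-demand", "temporal-demand",
             "performance", "effort", "frustration"]

def raw_tlx_total(ratings):
    """Equal-weight (raw TLX) total: the mean of the six subscale ratings."""
    return np.mean([ratings[s] for s in SUBSCALES])

# Hypothetical ratings for one trial, on the usual 0-100 scale.
example = {"mental-demand": 73, "physical-demand": 55, "temporal-demand": 60,
           "performance": 40, "effort": 65, "frustration": 48}
print(raw_tlx_total(example))  # -> 56.83...
```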
+
 
 % \begin{figure}[htbp]
 %     \centering
@@ -71,7 +88,7 @@ more negative emotions, such as irritability, stress, etc. Participants said the
 
 
 \subsection{Likert Questionnaire Results}
-After each test was completed, participants were also asked to fill out a questionnaire. The results can be seen in Figure \ref{fig:liker}. The questionnaire uses a 5-point Likert scale. 1 means that the participant considers that his or her situation does not fit the question description at all. 5 indicates that the situation fits perfectly.
+After each test was completed, participants were also asked to fill out a questionnaire. The results can be seen in Figure \ref{fig:liker}. The questionnaire uses a 5-point Likert scale: a rating of 1 means that the statement did not fit the participant's experience at all, while a rating of 5 indicates that it fit perfectly.
 \begin{figure}[htbp]
     \centering
     \subfigure{
@@ -94,26 +111,27 @@ The Remote Mode received a highly uniform rating. All eight participants gave it
 
 
 \subsubsection{"I found it easy to concentrate on controlling the robot."}
-In Handle Mode, participants seemed to have more difficulty concentrating on controlling the robot. This could be partly attributed to the fact that the participants had to adjust their distance from the robot when it was about to leave their field of view. On the other hand, participants have to control the direction and movement of the cart, which is not as easy in the Handle Mode as in the Remote and UI modes. Therefore, although participants need to adjust their position while controlling the robot in all three modes, the other two modes score better than Handle Mode. Some participants thought that they could concentrate well in Lab Mode because they did not need to turn their heads and adjust their positions frequently as in the other 3 modes. However, some participants held the opposite opinion. They stated that they needed to recall how the lab operated, and that multiple screens would also cause distractions.
+
+In Handle Mode, participants seemed to have more difficulty concentrating on controlling the robot. This can partly be attributed to the fact that participants had to adjust their distance from the robot whenever it was about to leave their field of view. On the other hand, participants had to control the direction and movement of the robot themselves, which was not as easy in Handle Mode as in the Remote and UI modes. Therefore, although participants needed to adjust their position while controlling the robot in all three of these modes, the other two modes scored better than Handle Mode. Some participants thought that they could concentrate well in Lab Mode because they did not need to turn their heads and adjust their positions as frequently as in the other three modes. However, some participants held the opposite opinion: they stated that they needed to recall how the lab was operated, and that the multiple screens also caused distractions.
 
 
 \subsubsection{"I found it easy to perceive the details of the environment."}
-The Lab Mode has the worst rating. This looks partly attributable to the use of a screen to show the rescue scene, rather than through immersion. Another reason is the poor display quality of the screen, which some participants felt was very blurred, making it impossible for them to observe the details of the scene. The results of Handle Mode differed greatly. Five participants gave a rating higher than 4. However, three participants gave a score of 2 or 3. The reasons were similar to what was mentioned before, the difficulty of operation and the need to alternate between controlling the robot and their own position made it difficult for them to focus on the scene.
+Lab Mode had the worst rating. This can partially be explained by the use of a screen to show the scene, rather than immersion. Another reason was the poor display quality of the screen, which some participants felt was very blurred, making it impossible for them to observe the details of the scene. The results for Handle Mode differed greatly: five participants gave a rating higher than 4, whereas three participants gave a score of 2 or 3. The reasons were similar to those mentioned for the previous question: the difficulty of operation and the need to alternate between controlling the robot and their own position made it difficult for them to focus on the scene.
 
 
 \section{Qualitative Results}
 This section discusses the feedback from participants. Overall, every participant gave positive comments about operating the robot on a \gls{vr} platform. They thought the proposed system was exciting and did allow them to perceive more details of the post-disaster environment than traditional video-based manipulation. The feedback obtained for each mode is listed next.
 
-70\% of participants ranked Lab Mode as the least preferred mode. Some experimenters were very unaccustomed to using \gls{vr} handles to grasp objects, which makes it difficult for them to operate the robot with virtual joysticks smoothly. For those who have \gls{vr} experience, even without any hints and learning, they subconsciously understood what each button and joystick represented and were able to operate the robot directly. Nevertheless, for the actual rescue experience in the test focus, both kinds of participants responded that the robot's operation was more complex and difficult than the other modes. Participants attributed the reasons to obstacles in the environment. One of the participants said:"\textit{There is no physical access to the joystick. So it is slightly tough for me to control the robot.}" In some cases, when the robot was stuck in a corner, it took them much effort to get the robot out of this situation. Also, since the Lab Mode uses a simulated screen, the Lab Mode is not as good as the other three in terms of observing the details of the scene. Participants felt that the simulated screen was blurred, and the frequent switching between multiple screens made them very tired. 
+70\% of participants ranked Lab Mode as the least preferred mode. Some participants were very unaccustomed to using \gls{vr} handles to grasp objects, which made it difficult for them to operate the robot smoothly with the virtual joysticks. Those who had \gls{vr} experience, even without any hints or training, subconsciously understood what each button and joystick represented and were able to operate the robot directly. Nevertheless, regarding the actual rescue experience that the test focused on, both kinds of participants responded that the robot's operation was more complex and difficult than in the other modes. Participants attributed this to obstacles in the environment. One of the participants said: "\textit{There is no physical access to the joystick. So it is slightly tough for me to control the robot.}" In some cases, when the robot was stuck in a corner, it took them much effort to get the robot out of this situation. Also, since Lab Mode uses a simulated screen, it is not as good as the other three modes for observing the details of the scene. Participants felt that the simulated screen was blurred, and the frequent switching between multiple screens made them very tired. 
 
 %Handle
-Handle Mode directly using motion controllers for moving robot, and the user can open and close the two monitoring screen through the button. The evaluation of this operation mode depends in large part on the construction of the motion controllers. More than half of the users thought that the \gls{htc} motion controllers made them less flexible when operating the robot's steering. Participants were often unable to accurately touch the correct position of the touchpad when using it, and it was very likely to be touched by mistake. At the end of the experiment, these participants were additionally invited to re-operate the robot using the \gls{vr} controller with joysticks, and said that using joysticks was easier for them to control the direction. Some participants said that they did not like the two monitoring screens provided by this mode. The additional surveillance screens made them subconsciously distracted to observe them, preventing them from concentrating on the rescue mission. Others, however, thought that the monitor was particularly helpful. As it was very difficult to control the robot while teleporting themselves, they first relied on the monitor screen to drive the robot to a place, and then teleported themselves to the location of the robot. The experiment also found that participants tended to forget that the two monitor screens could be closed, and they usually tried to drag the screens to places where they did not affect their view and dragged them back when they wanted to use them.
+Handle Mode directly uses the motion controllers for moving the robot. The evaluation of this operation mode depends in large part on the design of the motion controllers. The proposed system uses the touchpad on the \gls{htc} motion controllers to control the direction of the robot. More than half of the participants thought that the touchpad made them less flexible when steering the robot: they were often unable to accurately touch the correct position of the touchpad and very likely to touch it by mistake. At the end of the experiment, these participants were additionally invited to operate the robot again using a \gls{vr} controller with thumbsticks, and they said that thumbsticks made it easier for them to control the direction. Some participants said that they did not like the two monitoring screens provided by this mode: the additional screens subconsciously distracted them, preventing them from concentrating on the rescue mission. Others, however, thought that the monitor was particularly helpful. As it was very difficult to control the robot while teleporting themselves, they first relied on the monitor screen to drive the robot to a place, and then teleported themselves to the robot's location. It was also found that participants tended to forget that the two monitor screens could be closed; they usually dragged the screens to places where they did not block their view and dragged them back when they wanted to use them.
 
-Remote Mode and UI Mode that use AI intelligent obstacle avoidance walking algorithm were most well-received. Participants felt that in both modes they did not need to worry about how to control the robot's steering and forward speed, but that the computer was responsible for everything, allowing them to focus on virtual world exploration.
+Remote Mode and UI Mode, which use an intelligent obstacle-avoiding driving algorithm, were the most well-received. Participants felt that in both modes they did not need to worry about how to control the robot's steering and forward speed; the computer was responsible for everything, allowing them to focus on exploring the virtual world.
 
-For the UI Mode, one of the participants remarked: "\textit{I can just let the robot follow me. I don't need to think about how to operate the robot. This way I can concentrate on the rescue.} " In the experiment, All participants can easily learn how to operate the UI menu. This may be explained by the fact that the menu interface was very familiar to them. It was observed that all participants did not use the direction buttons and monitoring screens in the virtual menu. At the beginning of the test, they all turned on the follow me function directly and adjusted the robot's driving speed to the maximum. After that, the robot was more like a moveable \gls{lidar} sensor. This therefore leads to the fact that these participants could completely disregard the location of the robot and just explore the \gls{vr} world on their own. One participant in the experiment teleported so fast that when he reached a location and had been waiting for a while, the robot was still on its way. In fact, the problem of not being able to find the robot happens in Handle Mode as well.
+For the UI Mode, one of the participants remarked: "\textit{I can just let the robot follow me. I don't need to think about how to operate the robot. This way I can concentrate on the rescue.}" In the user study, all participants easily learned how to operate the UI menu. This may be explained by the fact that the menu interface was very familiar to them. It was observed that none of the participants used the direction buttons and monitoring screens in the virtual menu. At the beginning of the test, they all turned on the follow function directly and adjusted the robot's driving speed to the maximum. After that, the robot acted more like a moveable \gls{lidar} sensor. As a consequence, these participants could completely disregard the location of the robot and simply explore the \gls{vr} world on their own. One participant teleported so fast that when he reached a location and had been waiting for a while, the robot was still on its way. In fact, the problem of not being able to find the robot occurred in Handle Mode as well.
 
-In contrast, Remote Mode solves this problem of the robot not being in view. One participant stated that “\textit{The robot is always in sight, so I don't have to waste extra time looking for the robot. Control of the robot is also very easy}.” Another participant reflected that after setting the destination of the trolley operation, he would subconsciously observe the movement of the robots, thus making him always know where the robot was. They also thought it was very easy in this mode to operate the robot. Many participants alternated between using the right- and left-hand rays, first setting the robot's moving target point with the right-hand ray, and then teleporting themselves there with the left-hand ray. The security measures set up (remote controller) were almost not used in the actual test. When it came to the robot's inability to navigate automatically to the destination, the participants preferred to move the robot by resetting the destination point.
+In contrast, Remote Mode solved this problem of the robot not being in view. One participant stated: "\textit{The robot is always in sight, so I don't have to waste extra time looking for the robot. Control of the robot is also very easy.}" Another participant reflected that after setting the robot's destination, he would subconsciously observe the robot's movement, so that he always knew where it was. Participants also found it very easy to operate the robot in this mode. Many of them alternated between the right- and left-hand rays, first setting the robot's target point with the right-hand ray, and then teleporting themselves there with the left-hand ray. The backup measure (the remote controller) was almost never used in the actual test. When the robot was unable to navigate automatically to the destination, participants preferred to move it by resetting the target point.
 
 In addition to this, participants got lost in each of the operation modes: they could forget whether they had already visited a place. Similar behavior was observed in all participants. They passed by the same place over and over again, and sometimes simply stayed within the confines of the known scene and did not explore further.
 

binary
Concepts for operating ground based rescue robots using virtual reality/graphics/Robot Performance.png


binary
Concepts for operating ground based rescue robots using virtual reality/graphics/lab1.png


binary
Concepts for operating ground based rescue robots using virtual reality/graphics/tlx5.jpg


binary
Concepts for operating ground based rescue robots using virtual reality/graphics/tlx6.jpg


binary
Hector_v2/.DS_Store


binary
Hector_v2/Assets/.DS_Store


binary
Hector_v2/Assets/Scripts/.DS_Store


binary
User Study/.DS_Store


+ 1 - 1
User Study/TLX/statistic.py

@@ -86,7 +86,7 @@ def draw(scale):
 
 def drawTogether():
     scales = ["mental-demand","physical-demand","temporal-demand","performance", "effort","frustration","total"]
-    scales = ["mental-demand","physical-demand","temporal-demand", "effort","frustration","total"]
+    scales = ["total","mental-demand","physical-demand","frustration"]
     plt.figure(figsize=(15,7))
     x = np.arange(len(scales))
     total_width, n = 0.8, 4
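The rest of drawTogether lies outside this hunk; presumably `total_width, n = 0.8, 4` splits a group of width 0.8 into four bars, one per operation mode. A self-contained sketch of that grouped-bar layout, using only the subscale means quoted in the thesis text above (error bars, colors and exact styling are omitted or assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

# Subscale means per operation mode, as reported in the thesis text.
scales = ["total", "mental-demand", "frustration"]
means = {
    "Handle": [57.29, 73.125, 47.5],
    "Lab":    [61.67, 61.25,  54.38],
    "Remote": [30.24, 21.43,  20.0],
    "UI":     [28.23, 15.63,  21.88],
}

x = np.arange(len(scales), dtype=float)
total_width, n = 0.8, len(means)   # one group of width 0.8 per scale
width = total_width / n            # width of each individual bar
x -= (total_width - width) / 2.0   # center each group on its tick

plt.figure(figsize=(15, 7))
for i, (mode, values) in enumerate(means.items()):
    plt.bar(x + i * width, values, width=width, label=mode)
plt.xticks(np.arange(len(scales)), scales)
plt.ylabel("TLX score")
plt.legend()
plt.show()
```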

binary
User Study/TLX/summary.jpg


binary
User Study/TLX/tlx4.jpg


binary
User Study/TLX/tlx5.jpg


binary
User Study/TLX/tlx6.jpg


binary
User Study/TestResult/.DS_Store


binary
User Study/TestResult/Rescue situation.png


binary
User Study/TestResult/Rescue situation2.png


binary
User Study/TestResult/Robot Performance.png


binary
User Study/TestResult/Robot Performance2.png


+ 2 - 2
User Study/TestResult/statistic.py

@@ -60,7 +60,7 @@ def writeSDCSV(filename):
         for condition in conditions:
             col = df_merged.groupby('condition').get_group(condition)
             col = col[scale]
-            temp.appewithnd(get_standard_deviation(col))
+            temp.append(get_standard_deviation(col))
         dict[scale] = temp
     df = pd.DataFrame(dict) 
     df.to_csv(filename)
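The helper `get_standard_deviation` is defined elsewhere in statistic.py and does not appear in this hunk. Since the figure captions state that all error bars indicate the standard error, a plausible reconstruction (an assumption, not the repository's actual code) divides the sample standard deviation by the square root of the sample size:

```python
import numpy as np

def get_standard_deviation(col):
    """Standard error of the mean for one condition's samples.

    Hypothetical reconstruction: sample standard deviation (ddof=1)
    divided by sqrt(n), matching the captions' "error bars indicate
    the standard error"; the repository's actual helper may differ.
    """
    values = np.asarray(col, dtype=float)
    return values.std(ddof=1) / np.sqrt(len(values))
```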
@@ -105,7 +105,7 @@ def drawRobotPerformance():
     plt.bar(conditions, file["Total driving time"], width=0.35, color=colors,alpha=a,yerr=std_err,error_kw=error_params)
 
     plt.subplot(224)
-    plt.title("Adverage speed",fontsize=15)
+    plt.title("Average speed",fontsize=15)
     std_err = sd["Adverage speed"]
     plt.ylabel('Speed(m/s)')
     plt.bar(conditions, file["Adverage speed"], width=0.35, color=colors,alpha=a,yerr=std_err,error_kw=error_params)