
backup thesis

Jia, Jingyi 3 years ago
Commit
e01e5c814e

BIN
.DS_Store


BIN
Concepts for operating ground based rescue robots using virtual reality/.DS_Store


BIN
Concepts_for_operating_ground_based_rescue_robots_using_virtual_reality.pdf → Concepts for operating ground based rescue robots using virtual reality/Concepts_for_operating_ground_based_rescue_robots_using_virtual_reality.pdf


+ 2 - 2
Concepts for operating ground based rescue robots using virtual reality/chapters/abstract.tex

@@ -1,13 +1,13 @@
 \selectlanguage{english}
 \begin{abstract}
-Rescue robotics are increasingly being used to deal with crisis situations, mainly in exploring areas that are too dangerous for humans, and have become a key research area. A number of studies use \gls{vr} as a platform for \gls{hri}, as this can improve the degree of immersion and situation awareness. However, there remains a need for an intuitive, easy-to-use interaction pattern, which increases the efficiency of search and rescue and allows the user to explore the environment intentionally. This paper presents a preliminary VR-based \gls{hri} system in terms of ground-based rescue robots, with the aim to find an ideal interaction pattern. The proposed system offers four different operation modes and corresponding test scenes imitating a post-disaster city. This paper includes a user study in which four different operation modes are tested in turn and compared. The conclusion is that the ideal interaction pattern should reduce the complexity of the operation as much as possible. Instead of allowing the user to adjust the robot's direction and speed themselves, it is recommended to set a target point and let the robot navigate to the target point automatically. In addition to this, some of the features that should be provided and the directions that should be investigated in the future are also mentioned in this paper.
+Rescue robots are increasingly being used to deal with crisis situations, for example to explore areas that are too dangerous for humans, and have become a key research area. A number of studies use \gls{vr} as a platform for \gls{hri}, as this can improve the degree of immersion and situation awareness. However, there remains a need to explore an intuitive, easy-to-use interaction pattern that increases the efficiency of search and rescue and allows the user to explore the environment intentionally. This paper presents a preliminary \gls{vr}-based \gls{hri} system for ground-based rescue robots, with the aim of finding an ideal interaction pattern. The proposed system offers four different operation modes and corresponding test scenes imitating a post-disaster city. This paper includes a user study in which the four operation modes are tested in turn and compared. The conclusion is that the ideal interaction pattern should reduce the complexity of the operation as much as possible. Instead of letting the user adjust the robot's direction and speed themselves, it is recommended to set a target point and let the robot navigate to it automatically. In addition, this paper outlines features that should be provided and directions for future research.
 \end{abstract}
 
 
 
 \selectlanguage{ngerman}
 \begin{abstract}
-Rettungsrobotik findet immer häufiger Anwendung bei der Bewältigung von Krisensituationen, meist bei der Exploration von Gebieten, welche zu gefährlich für Menschen sind. Eine Reihe von Studien nutzt \gls{vr} als Plattform für die Mensch-Roboter-Interaktion, da dies den Grad der Immersion und das Situationsbewusstsein verbessern kann. Es besteht jedoch nach wie vor ein Bedarf an einem intuitiven, einfach zu bedienenden Interaktionsmethoden zu entwickeln, um das Personal bei der Steuerung des Roboters zu entlasten und die Effizienz von Such- und Rettungsarbeiten zu erhöhen. In diesem Aufsatz wird ein vorläufiges \gls{vr}-basiertes Mensch-Roboter-Interaktionssystem in Bezug auf bodenbasierte Rettungsroboter vorgestellt, mit dem Ziel, ein möglichst intuitive Steuerungsmethode mit geringer mentaler Ermüdung zu finden. Das vorgeschlagene System bietet vier verschiedene Betriebsmodi und entsprechende Testszenen, die eine Katastrophenstadt nachbilden. Diese Arbeit beinhaltet eine Nutzerstudie, in der vier verschiedene Betriebsmodi nacheinander getestet und verglichen werden. Die Schlussfolgerung ist, dass das ideale Interaktionsmethoden die Komplexität der Steuerung so weit wie möglich reduzieren sollte. Anstatt dem Benutzer die Aufgabe zu geben, Richtung und Geschwindigkeit des Roboters selbst einzustellen, wird empfohlen, einen Zielpunkt zu setzen und den Roboter automatisch zum Zielpunkt navigieren zu lassen. Darüber hinaus werden in diesem Aufsatz auch einige der Funktionen, die bereitgestellt werden sollten, sowie die Richtungen, die in Zukunft untersucht werden sollten, erwähnt.
+Rettungsroboter werden zunehmend zur Bewältigung von Krisensituationen eingesetzt, z.B. dort, wo es für den Menschen zu gefährlich ist. Eine Reihe von Studien nutzt \gls{vr} als Plattform für die Mensch-Roboter-Interaktion, da dies die Immersion erhöhen und das Situationsbewusstsein verbessern kann. Es besteht jedoch nach wie vor der Bedarf, eine intuitive, einfach zu bedienende Interaktionsmethode zu entwickeln, um das Personal bei der Steuerung des Roboters zu entlasten und die Effizienz und Effektivität von Such- und Rettungsarbeiten zu erhöhen. In diesem Aufsatz wird ein vorläufiges \gls{vr}-basiertes Mensch-Roboter-Interaktionssystem für bodenbasierte Rettungsroboter vorgestellt. Ziel ist es, eine möglichst intuitive Steuerungsmethode mit geringer mentaler Ermüdung zu finden. Das vorgeschlagene System bietet vier verschiedene Betriebsmodi und entsprechende Testszenen. Diese Arbeit beinhaltet eine Nutzerstudie, in der die vier Betriebsmodi nacheinander getestet und verglichen werden. Das Fazit ist, dass die ideale Interaktionsmethode die Komplexität der Steuerung so weit wie möglich reduzieren sollte. Darüber hinaus werden in diesem Aufsatz Richtungen für zukünftige Untersuchungen aufgezeigt.
 \end{abstract}
 
 \selectlanguage{english}

+ 6 - 5
Concepts for operating ground based rescue robots using virtual reality/chapters/result.tex

@@ -19,14 +19,14 @@ Part of the data for the quantitative analysis comes from the robot's performanc
 
 
 \subsection{Robot Performance}
-The overall robot performance are reported in Figure \ref{fig:performance}.
+The overall robot performance is reported in Figure \ref{fig:performance}.
 \begin{figure}[htbp]
     \centering
     \includegraphics[width=\textwidth]{graphics/Robot Performance2.png}
     \caption{Robot Performance. (All error bars indicate the standard error.)} 
     \label{fig:performance}
 \end{figure}
-The number of collisions between the robot and objects reflects the probability of the robot being destroyed. Lab Mode got the worst results with an average collision times of 26.75 with a standard error of 18.07. Handle Mode has the second worst result with an average collision times of 21.5 with a standard error of 11.45. Remote Mode and UI Mode perform similarly, they both have a few collision times and a low standard error ($M_{Remote} = 2.25$, $SD_{Remote}= 1.79$, and $M_{UI} = 3.875$, $SD_{UI} = 1.96$). 
+The number of collisions between the robot and objects reflects the probability of the robot being destroyed. Lab Mode got the worst result, with an average of 26.75 collisions and a standard error of 18.07. Handle Mode has the second-worst result, with an average of 21.5 collisions and a standard error of 11.45. Remote Mode and UI Mode perform similarly: both show few collisions and a low standard error ($M_{Remote} = 2.25$, $SD_{Remote}= 1.79$, and $M_{UI} = 3.875$, $SD_{UI} = 1.96$).
 
 In Lab Mode, the robot travels the farthest and for the longest time. During the five-minute test period, the robot drove for an average of 243 seconds and covered a total of 753 meters. The average speed did not differ significantly across the four modes, but the standard error of the samples in Handle and Lab modes was large. In both modes, some participants drove the robot very slowly and cautiously, while others tended to drive at maximum speed on the road and braked as soon as they noticed a distressed person. In Remote and UI modes, the robot's driving route was mainly controlled by the computer, so the average speed was essentially the same and the deviations were very small.
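
The means and standard errors reported in these hunks are straightforward to reproduce; a minimal NumPy sketch, assuming a hypothetical array of per-participant collision counts (the actual data lives under User Study/):

    import numpy as np

    # Hypothetical per-participant collision counts for one mode.
    collisions = np.array([3, 1, 2, 5, 2, 1, 4, 0])

    mean = collisions.mean()
    # Standard error of the mean: sample standard deviation / sqrt(n).
    sem = collisions.std(ddof=1) / np.sqrt(len(collisions))
    print(f"M = {mean:.2f}, SE = {sem:.2f}")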
 
@@ -41,7 +41,7 @@ In Lab Mode, the robot travels the farthest and for the longest time. Duri
   \includegraphics[height=7cm]{graphics/Rescue situation2.png}\\
   \caption{Rescue situation. (All error bars indicate the standard error.)}
   \label{fig:rescue}
-  \vspace{-70pt}    % corresponds to height 3
+  \vspace{-80pt}    % corresponds to height 3
 \end{wrapfigure}
 The results of rescuing victims are shown in Figure \ref{fig:rescue}. In general, the average number of rescued victims was the highest in Remote and UI modes. In both modes, there were participants who rescued all the victims within the time limit, some even completing the rescue task half a minute to one minute early. Lab Mode left the most victims still visible but unrescued. This means that participants were more likely to overlook details in the scene, or even to pass by a victim without noticing them. This could be attributed to the poor display in this mode, or to the complexity of the operation, which left participants no time to attend to every detail in the scene.
 
@@ -53,7 +53,8 @@ The results of rescuing victims are shown in Figure \ref{fig:rescue}. In general
     \caption{The average score of \gls{tlx}. (All error bars indicate the standard error.)}
     \label{fig:tlx} 
 \end{figure}
-The NASA Task Load Index (NASA-TLX) consists of six subjective subscales. In order to simplify the complexity of the assessment, the weighted scores for each subscale are the same when calculating the total score. Overall, the smaller the number, the less workload the operation mode brings to the participant. Figure \ref{fig:tlx} contains the six subjective subscales mentioned above as well as the total score. The graph shows the mean and standard error of each scale. The standard error is large for each scale because participants could only evaluate the workload of each mode of operation relatively, and they had different standard values in mind. As can be seen, the workloads obtained in Remote and UI modes are the smallest. Similar values can be found in Lab Mode and Handle Mode, and their values are worse. Participants said they needed to recall what buttons to press to achieve a certain effect, or needed to consider how to turn the robot to get it running to the desired position. This resulted in more mental activity for Handle Mode and Lab Mode. The time pressure in these two modes is also the highest. Participants demanded the most physical activity in Lab Mode. It was observed that they frequently looked up and down to view different screens. In addition, some participants maintained their arms in a flat position while operating the virtual joystick, which also caused some fatigue on the arms. Overall, the Remote and UI modes received better scores on the NASA Task Load Index.
+The NASA Task Load Index (NASA-TLX) consists of six subjective subscales. To simplify the assessment, all subscales are weighted equally when calculating the total score. Overall, the lower the score, the less workload the operation mode imposes on the participant. Figure \ref{fig:tlx} contains the six subjective subscales mentioned above as well as the total score. The graph shows the mean and standard error of each scale. The standard error is large for each scale because participants could only rate the workload of each operation mode relative to the others, and each had a different baseline in mind. As can be seen, the total workloads obtained in Remote and UI modes are the smallest. Lab Mode and Handle Mode show similar, and worse, total workloads. In both modes, participants reported more negative emotions, such as irritability and stress. Participants said they needed to recall which buttons to press to achieve a certain effect, or to consider how to turn the robot to get it to the desired position. This resulted in more mental activity in Handle Mode and Lab Mode. The time pressure in these two modes is also the highest. Lab Mode demanded the most physical activity and effort: it was observed that participants frequently looked up and down to view different screens, and some held their arms outstretched while operating the virtual joystick, which also caused arm fatigue. Participants' ratings of their own performance were similar across the four operation modes. Overall, the Remote and UI modes received better scores on the NASA Task Load Index.
 
 % \begin{figure}[htbp]
 %     \centering
@@ -110,7 +111,7 @@ Handle Mode directly using motion controllers for moving robot, and the user can
 
 Remote Mode and UI Mode, which use an intelligent obstacle-avoidance navigation algorithm, were the most well-received. Participants felt that in both modes they did not need to worry about how to control the robot's steering and forward speed; the computer was responsible for everything, allowing them to focus on exploring the virtual world.
 
-For the UI Mode, one of the participants remarked: "\textit{I can just let the robot follow me. I don't need to think about how to operate the robot. This way I can concentrate on the rescue.} " In the experiment, it was observed that all participants did not use the direction buttons and monitoring screens in the virtual menu. At the beginning of the test, they all turned on the follow me function directly and adjusted the robot's driving speed to the maximum. After that, the robot was more like a moveable \gls{lidar} sensor. This therefore leads to the fact that these participants could completely disregard the location of the robot and just explore the \gls{vr} world on their own. One participant in the experiment teleported so fast that when he reached a location and had been waiting for a while, the robot was still on its way. In fact, the problem of not being able to find the robot happens in Handle Mode as well.
+For the UI Mode, one of the participants remarked: "\textit{I can just let the robot follow me. I don't need to think about how to operate the robot. This way I can concentrate on the rescue.}" In the experiment, all participants learned to operate the UI menu easily. This may be explained by the fact that the menu interface was very familiar to them. It was observed that none of the participants used the direction buttons or monitoring screens in the virtual menu. At the beginning of the test, they all turned on the follow-me function directly and adjusted the robot's driving speed to the maximum. After that, the robot was more like a movable \gls{lidar} sensor. As a result, these participants could completely disregard the location of the robot and simply explore the \gls{vr} world on their own. One participant teleported so quickly that after he had reached a location and waited for a while, the robot was still on its way. In fact, the problem of not being able to find the robot occurs in Handle Mode as well.
 
 In contrast, Remote Mode solves this problem of the robot not being in view. One participant stated that “\textit{The robot is always in sight, so I don't have to waste extra time looking for the robot. Control of the robot is also very easy}.” Another participant reflected that after setting the robot's destination, he would subconsciously observe the robot's movement, so he always knew where it was. Participants also found it very easy to operate the robot in this mode. Many alternated between using the right- and left-hand rays, first setting the robot's moving target point with the right-hand ray, and then teleporting themselves there with the left-hand ray. The backup control provided (the remote controller) was almost never used in the actual test. When the robot was unable to navigate automatically to the destination, participants preferred to move it by resetting the destination point.
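
The TLX hunk above notes that all six subscales are weighted equally in the total, i.e. the total is an unweighted (raw) TLX score. A minimal sketch of that computation, using hypothetical ratings:

    import numpy as np

    # Hypothetical subscale ratings for one participant and one mode.
    ratings = {
        "mental-demand": 55, "physical-demand": 30, "temporal-demand": 60,
        "performance": 25, "effort": 50, "frustration": 40,
    }
    # Equal weights: the total is simply the mean of the six subscales.
    total = np.mean(list(ratings.values()))
    print(f"raw TLX total = {total:.1f}")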
 

BIN
Concepts for operating ground based rescue robots using virtual reality/graphics/ui2.png


BIN
Concepts for operating ground based rescue robots using virtual reality/graphics/ui3.png


BIN
User Study/.DS_Store


+ 2 - 1
User Study/TLX/statistic.py

@@ -86,6 +86,7 @@ def draw(scale):
 
 def drawTogether():
     scales = ["mental-demand","physical-demand","temporal-demand","performance", "effort","frustration","total"]
+    scales = ["mental-demand","physical-demand","temporal-demand", "effort","frustration","total"]  # overrides the list above to drop the "performance" scale
     plt.figure(figsize=(15,7))
     x = np.arange(len(scales))
     total_width, n = 0.8, 4
@@ -105,7 +106,7 @@ def drawTogether():
     # plt.title("TLX Average",fontsize=15)
     plt.xticks(x+width/2,scales)
     #plt.show()
-    
+    plt.ylabel('score')
     plt.savefig("summary.jpg",dpi=300)
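
For reference, a self-contained sketch of the grouped bar chart that drawTogether produces; the mode names and scores here are placeholders, and the bar layout follows the total_width, n = 0.8, 4 scheme from the hunk above:

    import numpy as np
    import matplotlib.pyplot as plt

    scales = ["mental-demand", "physical-demand", "temporal-demand",
              "effort", "frustration", "total"]
    modes = ["Handle", "Lab", "Remote", "UI"]  # placeholder mode names
    rng = np.random.default_rng(0)
    scores = rng.uniform(20, 70, size=(len(modes), len(scales)))  # placeholder data

    plt.figure(figsize=(15, 7))
    x = np.arange(len(scales))
    total_width, n = 0.8, 4
    width = total_width / n
    for i, mode in enumerate(modes):
        # Offset each mode's bars so the four bars sit side by side per scale.
        plt.bar(x + i * width, scores[i], width=width, label=mode)
    plt.xticks(x + total_width / 2 - width / 2, scales)  # center labels under each group
    plt.ylabel('score')
    plt.legend()
    plt.savefig("summary.jpg", dpi=300)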
 
 

BIN
User Study/TLX/summary.jpg