
\chapter{Results and discussion}
\label{result}
\section{Participants}
A total of 8 volunteers participated in the user study (3 females and 5 males, aged between 22 and 32 years; mean age xxx years). Four participants had previous experience with \gls{vr}, but had used it only a few times. .......
\section{Quantitative Results}
Part of the data for the quantitative analysis comes from the robot's performance and testing results, which were automatically recorded by the proposed system during the tests. The other part of the data comes from the questionnaires that the participants filled out after the test.
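As a purely hypothetical illustration of the kind of automatic per-trial logging described above (the actual metrics and file format recorded by the system are not specified here; the field names are assumptions), a minimal CSV logger could look like:

```python
import csv

def log_trial(path, participant_id, mode, completion_time_s, collisions, distance_m):
    """Append one trial's automatically recorded metrics to a CSV file.

    All field names here (completion time, collision count, distance
    travelled) are illustrative assumptions, not the study's actual schema.
    """
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            participant_id,
            mode,
            round(completion_time_s, 2),
            collisions,
            round(distance_m, 2),
        ])
```

Appending one row per trial keeps the log robust to crashes mid-session, since every completed trial is already on disk.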
\subsection{Robot Performance}
[introduce what was recorded]
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{graphics/Robot Performance.png}
\caption{Robot Performance}
\label{fig:performance}
\end{figure}
[analysis]
\newpage
\subsection{Rescue situation}
[introduce what was recorded]
\begin{wrapfigure}{r}{0.4\textwidth}
\flushright
\includegraphics[height=7cm]{graphics/Rescue situation.png}\\
\caption{Rescue situation}
\label{fig:rescue}
\vspace{-30pt} % corresponding height 3
\end{wrapfigure}
[analysis]
\subsection{TLX Score}
[explain tlx]
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[width=\textwidth]{graphics/summary.jpg}
}
\subfigure{
\includegraphics[width=\textwidth]{graphics/total.png}
}
\caption{TLX Score. [explain]}
\label{fig:tlx}
\end{figure}
[analysis]
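For reference, the NASA-TLX overall workload is commonly computed either as the unweighted mean of the six subscale ratings (Raw TLX) or as a weighted mean using the tallies from the 15 pairwise comparisons. A minimal sketch of both variants (not the exact analysis script used in this study):

```python
# The six NASA-TLX subscales; ratings are on the 0-100 workload scale.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Raw TLX: unweighted mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings, weights):
    """Weighted TLX: weights are each subscale's tally from the
    15 pairwise comparisons, so they must sum to 15."""
    assert sum(weights[s] for s in SUBSCALES) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15
```

Raw TLX is often preferred in small studies because it drops the pairwise-comparison step without greatly changing the ranking of conditions.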
\subsection{Likert Questionnaire Results}
A Likert questionnaire was used to collect the participants' feedback on the following statements:
\begin{enumerate}
\item I found it easy to move the robot into the desired position.
\item I found it easy to concentrate on controlling the robot.
\item I found it easy to perceive the details of the environment.
\end{enumerate}
\begin{figure}[htbp]
\centering
\subfigure{
\includegraphics[height=7cm]{graphics/I found it easy to move robot in desired position.jpg}
}
\subfigure{
\includegraphics[height=7cm]{graphics/I found it easy to concentrate on controlling the robot.jpg}
}
\subfigure{
\includegraphics[height=7cm]{graphics/I found it easy to perceive the details of the environment.jpg}
}
\caption{Likert Questionnaire Results (1: strongly disagree, 5: strongly agree)}
\label{fig:liker}
\end{figure}
[analysis]
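Since Likert responses are ordinal, a common summary statistic per statement is the median together with the mode, rather than the mean. A minimal sketch, assuming responses coded 1--5 (this is illustrative, not the actual analysis code used here):

```python
from collections import Counter
from statistics import median

def summarize_likert(responses):
    """Summarize one statement's Likert responses (integers 1-5,
    1 = strongly disagree, 5 = strongly agree).

    Returns (median, mode); ordinal data, so the mean is avoided.
    """
    mode = Counter(responses).most_common(1)[0][0]
    return median(responses), mode
```

With only 8 participants, reporting the full response distribution (as in the figure above) alongside these summaries is usually more informative than the summaries alone.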
\section{Qualitative Results}
This section discusses the feedback from the participants. Overall, every participant commented positively on operating the robot through a \gls{vr} platform. They found the proposed system exciting and felt that it allowed them to perceive more details of the post-disaster environment than traditional video-based teleoperation. The feedback obtained for each mode is presented next.
70\% of the participants ranked Lab Mode as their least preferred mode. Some participants were unaccustomed to grasping objects with the \gls{vr} controllers, which made it difficult for them to operate the robot smoothly with the virtual joysticks. Those with \gls{vr} experience, even without any hints or training, subconsciously understood what each button and joystick represented and were able to operate the robot directly. Nevertheless, in the actual rescue task, both groups of participants reported that operating the robot was more complex and difficult than in the other modes. Participants attributed this to the obstacles in the environment. One participant said: "\textit{There is no physical access to the joystick. So it is slightly tough for me to control the robot.}" In some cases, when the robot was stuck in a corner, it took them considerable effort to free it. In addition, since Lab Mode uses a simulated screen, it is not as good as the other three modes for observing the details of the scene: participants felt that the simulated screen was blurry, and the frequent switching between multiple screens made them very tired.
% Handle
Handle Mode uses the motion controllers directly to move the robot, and the user can open and close the two monitoring screens with a button. The evaluation of this operation mode depends in large part on the design of the motion controllers. More than half of the users felt that the \gls{htc} motion controllers made steering the robot less flexible: participants were often unable to touch the correct position of the touchpad, and accidental touches were very likely. At the end of the experiment, these participants were additionally invited to operate the robot again using a \gls{vr} controller with joysticks, and they said that the joysticks made it easier for them to control the direction. Some participants said that they did not like the two monitoring screens provided by this mode; the additional screens subconsciously distracted them and prevented them from concentrating on the rescue mission. Others, however, found the monitoring screens particularly helpful: since it was very difficult to control the robot while teleporting themselves, they first relied on the monitoring screen to drive the robot to a place and then teleported themselves to the robot's location. The experiment also showed that participants tended to forget that the two monitoring screens could be closed; they usually dragged the screens to places where they did not block their view and dragged them back when they wanted to use them.
Remote Mode and UI Mode, which use the intelligent obstacle-avoidance navigation algorithm, were the most well received. Participants felt that in both modes they did not need to worry about controlling the robot's steering and forward speed; the computer took care of everything, allowing them to focus on exploring the virtual world.
For UI Mode, one participant remarked: "\textit{I can just let the robot follow me. I don't need to think about how to operate the robot. This way I can concentrate on the rescue.}" During the experiment, none of the participants used the direction buttons or monitoring screens in the virtual menu. At the beginning of the test, they all turned on the follow-me function directly and set the robot's driving speed to the maximum. From then on, the robot behaved more like a movable \gls{lidar} sensor. As a result, these participants could completely disregard the robot's location and simply explore the \gls{vr} world on their own. One participant teleported so fast that when he reached a location and had been waiting there for a while, the robot was still on its way. In fact, the problem of not being able to find the robot also occurred in Handle Mode.
In contrast, Remote Mode solves this problem of the robot not being in view. One participant stated: "\textit{The robot is always in sight, so I don't have to waste extra time looking for the robot. Control of the robot is also very easy.}" Another participant reflected that after setting the robot's destination, he would subconsciously watch the robot's movement and therefore always knew where it was. Participants also found it very easy to operate the robot in this mode. Many alternated between the right- and left-hand rays, first setting the robot's target point with the right-hand ray and then teleporting themselves there with the left-hand ray. The safety measure provided (the remote controller) was rarely used in the actual test: when the robot failed to navigate automatically to the destination, participants preferred to reset the destination point or move themselves instead.
In addition, participants got lost in every operation mode: they would forget whether they had already visited a place.
\section{Discussion}