@@ -14,7 +14,21 @@ The proposed system runs on a computer with the Windows 10 operating system. Thi
\gls{unity} was chosen as the platform to develop the system. \gls{unity} is a widely used game engine that supports the \gls{steamvr} plugin\footnote{https://assetstore.unity.com/packages/tools/integration/steamvr-plugin-32647}, which lets developers program the \gls{vr} environment and its interactive behaviors rather than handle specific controller buttons and headset positioning, making \gls{vr} development considerably simpler. Another reason for choosing \gls{unity} is its potential for integration with \gls{ros}, a widely used robotics middleware for robot simulation and control that is flexible, loosely coupled, distributed, open source, and backed by a rich set of third-party packages. For this integration, Siemens provides open-source libraries and tools in C\# for communicating with \gls{ros} from .NET applications\footnote{https://github.com/siemens/ros-sharp}. Combining \gls{ros} and \gls{unity} to build a collaborative \gls{hri} platform has been shown to be feasible \cite{Whitney:2018wk}. Since the focus of this work is on \gls{hri}, the integration and synchronization with \gls{ros} are not explored in detail here.
\section{Robot}
-The proposed system needs to simulate the process that a robot uses a \gls{lidar} remote sensor to detect the real environment and synchronize it to \gls{unity}. Thus, a sphere collision body was set up on the robot. The robot will transform the Layers of the objects in the scene into visible Layers by collision detection and a trigger event (onTriggerEnter function). The robot's driving performance, such as the number of collisions, average speed, total distance, will be recorded in each test. The detailed recorded information can be seen in Chapter \ref{result}. The movement of the robot depends on the value of the signal that is updated in each mode. In addition, the robot's Gameobject has the \gls{nav} \footnote{https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html} component, which supports the robot's navigation to the specified destination with automatic obstacle avoidance in the test scene. The virtual robot has three cameras. One of the cameras is a simulation of a surveillance camera mounted on the robot, which can see all the items in the scene, although the distant items are not yet detected by LiDAR. Two of these cameras are set up in such a way that they can only see the area detected by \gls{lidar}. Each camera captures what it sees and modifies the bound image in real time. The four operation modes described later all use the camera viewport as a monitoring screen by rendering the camera viewport on UI canvas.
+The proposed system needs to simulate a robot that detects the real environment with a \gls{lidar} remote sensor and synchronizes the result to \gls{unity}. For this purpose, a sphere-shaped trigger collider was set up on the robot, as shown in Figure \ref{fig:robot}. Through collision detection and a trigger event (the OnTriggerEnter function), the robot switches the Layers of detected objects in the scene to a visible Layer. The robot's driving performance, such as the number of collisions, average speed, and total distance, is recorded in each test; the recorded data are described in detail in Chapter \ref{result}. The movement of the robot depends on the control signal that is updated by each mode. In addition, the robot's GameObject carries the \gls{nav}\footnote{https://docs.unity3d.com/ScriptReference/AI.NavMeshAgent.html} component, which allows the robot to navigate to a specified destination in the test scene with automatic obstacle avoidance. The virtual robot has three cameras. One camera simulates a surveillance camera mounted on the robot and can see all objects in the scene, including distant objects not yet detected by \gls{lidar}. The other two cameras are configured so that they only see the area already detected by \gls{lidar}. Each camera renders what it sees to a bound image that is updated in real time, and the four operation modes described later all use these camera views as monitoring screens by rendering them onto a UI canvas.
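+To make the detection mechanism concrete, the following minimal C\# sketch shows how such a trigger could switch detected objects to a visible Layer. The script name and the layer name \texttt{Visible} are illustrative assumptions rather than the exact implementation; only the use of OnTriggerEnter is taken from the description above.
+\begin{verbatim}
+using UnityEngine;
+
+// Hypothetical sketch: attached to the robot together with its sphere
+// collider (marked as a trigger).
+public class LidarReveal : MonoBehaviour
+{
+    // Assumed name of the Layer rendered by the LiDAR-restricted cameras.
+    [SerializeField] private string visibleLayer = "Visible";
+
+    private void OnTriggerEnter(Collider other)
+    {
+        // Objects entering the trigger are moved to the visible Layer,
+        // so cameras whose culling mask includes it start rendering them.
+        other.gameObject.layer = LayerMask.NameToLayer(visibleLayer);
+    }
+}
+\end{verbatim}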
+
+
+\begin{figure}[htbp]
+    \centering
+    \subfigure[LiDAR Collider]{
+        \includegraphics[height=5cm]{graphics/robot2.png}
+    }
+    \subfigure[Surveillance camera]{
+        \includegraphics[height=5cm]{graphics/robot4.png}
+    }
+    \caption{Robot}
+    \label{fig:robot}
+\end{figure}
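+The monitoring views can be produced with standard \gls{unity} components. A hedged sketch of one possible setup is given below; the script, its field names, and the RenderTexture/RawImage binding are assumptions about how the camera view could be shown on the UI canvas.
+\begin{verbatim}
+using UnityEngine;
+using UnityEngine.UI;
+
+// Hypothetical sketch: binds one robot camera to a UI canvas element.
+public class MonitorScreen : MonoBehaviour
+{
+    [SerializeField] private Camera monitorCamera;    // one of the robot cameras
+    [SerializeField] private RawImage screen;         // RawImage on the UI canvas
+    [SerializeField] private RenderTexture target;    // texture updated in real time
+    [SerializeField] private bool lidarLimited;       // true for the two LiDAR cameras
+
+    private void Start()
+    {
+        // The LiDAR-limited cameras only render the revealed Layer (assumed name).
+        if (lidarLimited)
+        {
+            monitorCamera.cullingMask = LayerMask.GetMask("Visible");
+        }
+
+        // The camera renders into the texture, which the canvas displays.
+        monitorCamera.targetTexture = target;
+        screen.texture = target;
+    }
+}
+\end{verbatim}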
+
\section{Interaction techniques}
@@ -41,22 +55,68 @@ In this mode, the user controls the robot's movement directly through the motion
\begin{figure}[htbp]
    \centering
-    \includegraphics[height=10cm]{graphics/htc.png}
-    \caption{HTC handle illustration.}
-    \label{fig:htc}
+    \begin{minipage}[t]{0.48\textwidth}
+        \centering
+        \includegraphics[height=7cm]{graphics/handle1.png}
+        \caption{Handle Mode}
+        \label{fig:handle}
+    \end{minipage}
+    \begin{minipage}[t]{0.48\textwidth}
+        \centering
+        \includegraphics[height=7cm]{graphics/htc.png}
+        \caption{HTC handle illustration}
+        \label{fig:htc}
+    \end{minipage}
\end{figure}
+
\subsection{Lab Mode}
This mode was designed with reference to the systems proposed by \cite{Perez:2019ub}\cite{Matsas:2017aa}. Their frameworks are used to train operators to work with a robot, avoiding risks and reducing learning costs, and they also report that being placed in a simulated factory or laboratory can improve immersion. Therefore, in this mode a virtual laboratory environment is constructed (Figure \ref{fig:lab}), equipped with simulated buttons, controllers, and monitoring equipment. The laboratory consists of two parts. The first is the monitoring equipment: the monitoring screen is enlarged and placed at the front of the lab as a large display. The second is the operating console in the center of the laboratory, which the user can reposition as desired, since users differ in height and may wish to operate the robot standing or sitting. The buttons on the console let the user lock the robot or make it drive forward automatically. In the middle of the console are two joysticks that control the robot's forward motion and rotation, respectively. The virtual joystick movement and button behavior are based on the open-source GitHub project VRtwix\footnote{https://github.com/rav3dev/vrtwix}. With the sliding stick on the left, the user can adjust the speed of the robot's forward movement and rotation.
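+As an illustration of how the console input could drive the robot, the sketch below maps two joystick axes to forward motion and rotation. The script, its fields, and the way the axis values are obtained are assumptions; in the actual system these values come from the VRtwix joysticks, and the speeds are set by the sliding stick.
+\begin{verbatim}
+using UnityEngine;
+
+// Hypothetical sketch: converts two joystick values into robot motion.
+public class ConsoleDrive : MonoBehaviour
+{
+    [SerializeField] private Transform robot;
+    [SerializeField] private float moveSpeed = 2f;   // m/s, adjustable via sliding stick
+    [SerializeField] private float turnSpeed = 60f;  // deg/s, adjustable via sliding stick
+
+    // Values in [-1, 1] supplied by the two virtual joysticks (placeholders).
+    public float forwardAxis;
+    public float rotateAxis;
+
+    private void Update()
+    {
+        // Forward joystick drives translation, the other joystick drives rotation.
+        robot.Translate(Vector3.forward * forwardAxis * moveSpeed * Time.deltaTime);
+        robot.Rotate(0f, rotateAxis * turnSpeed * Time.deltaTime, 0f);
+    }
+}
+\end{verbatim}
+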
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Overview]{
+        \includegraphics[height=5cm]{graphics/lab3.png}
+    }
+    \subfigure[Operating console]{
+        \includegraphics[height=5cm]{graphics/lab4.png}
+    }
+    \caption{Lab Mode}
+    \label{fig:lab}
+\end{figure}
+
\subsection{Remote Mode}
In this mode, the user can set the driving target point directly or control the robot by picking up the remote control placed on the toolbar. The target point is set by the ray emitted by the right motion controller. This process is similar to setting a teleportation point. After the target point is set, a square representing the destination is shown in the scene, and the robot will automatically travel to the set destination. The entire driving process uses the \gls{nav} component and is therefore capable of automatic obstacle avoidance.
A movable toolbar holding the remote control and a monitoring device can be opened by clicking the menu button (Figure \ref{fig:remote}). The remote control serves as a safety fallback in case the automatic navigation fails to reach the target point properly: with it, the user can adjust the robot's direction of travel directly. Picking up and automatically releasing the remote control uses the ItemPackage component provided by \gls{steamvr}.
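+A hedged sketch of the target-point mechanism is given below: a ray from the right controller selects a point in the scene, a marker is placed there, and the \gls{nav} component drives the robot to it. The script, field names, and marker object are illustrative assumptions; only the ray-based selection and the navigation component are taken from the description above.
+\begin{verbatim}
+using UnityEngine;
+using UnityEngine.AI;
+
+// Hypothetical sketch: sets the robot's navigation target from a controller ray.
+public class RemoteTargetSetter : MonoBehaviour
+{
+    [SerializeField] private Transform rightController;    // ray origin
+    [SerializeField] private NavMeshAgent robotAgent;       // NavMeshAgent on the robot
+    [SerializeField] private Transform destinationMarker;   // square shown at the target
+
+    // Called when the user confirms the target point (e.g. a trigger press).
+    public void SetTarget()
+    {
+        if (Physics.Raycast(rightController.position, rightController.forward,
+                            out RaycastHit hit))
+        {
+            destinationMarker.position = hit.point;
+            robotAgent.SetDestination(hit.point);  // NavMesh handles obstacle avoidance
+        }
+    }
+}
+\end{verbatim}
+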
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Overview]{
+        \includegraphics[height=5cm]{graphics/remote2.png}
+    }
+    \subfigure[Set the destination]{
+        \includegraphics[height=5cm]{graphics/remote3.png}
+    }
+    \caption{Remote Mode}
+    \label{fig:remote}
+\end{figure}
+
\subsection{UI Mode}
Virtual menus are a frequently used interaction method in \gls{vr}, which motivates this mode. Here, the user interacts with a virtual menu using the ray emitted by the right motion controller (Figure \ref{fig:ui}). The menu provides buttons for the direction of movement, a speed controller, and buttons to open and close the monitoring screen. In addition, a follow function is included, which makes the robot follow the user's position in the virtual world. This is intended to let the user concentrate on observing the rendered \gls{vr} environment; moreover, having a real robot follow the user's location in the virtual world is a novel \gls{hri} approach in \gls{vr}. The robot's automatic navigation again uses the \gls{nav} component.
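+The follow function can be expressed with the same navigation component. The sketch below periodically re-targets the robot to the user's headset position once the robot has fallen behind by a chosen distance; the script, its field names, and the distance threshold are illustrative assumptions.
+\begin{verbatim}
+using UnityEngine;
+using UnityEngine.AI;
+
+// Hypothetical sketch of the follow function.
+public class FollowUser : MonoBehaviour
+{
+    [SerializeField] private Transform userHead;        // VR headset position
+    [SerializeField] private NavMeshAgent robotAgent;    // NavMeshAgent on the robot
+    [SerializeField] private float followDistance = 3f;  // assumed threshold
+
+    public bool followEnabled;  // toggled by the menu button
+
+    private void Update()
+    {
+        if (!followEnabled) return;
+
+        // Re-target the robot only when it is too far from the user.
+        if (Vector3.Distance(robotAgent.transform.position, userHead.position)
+            > followDistance)
+        {
+            robotAgent.SetDestination(userHead.position);
+        }
+    }
+}
+\end{verbatim}
+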
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Overview]{
+        \includegraphics[height=5cm]{graphics/ui2.png}
+    }
+    \subfigure[Follow Function]{
+        \includegraphics[height=5cm]{graphics/ui3.png}
+    }
+    \caption{UI Mode}
+    \label{fig:ui}
+\end{figure}
+
\section{Test Scene}
To simulate the use of rescue robots in disaster scenarios, the test scenes were built to mimic a post-disaster urban environment as closely as possible. The POLYGON Apocalypse pack\footnote{https://assetstore.unity.com/packages/3d/environments/urban/polygon-apocalypse-low-poly-3d-art-by-synty-154193}, available on the \gls{unity} Asset Store, is a low-poly asset pack with a large number of models of buildings, streets, vehicles, and similar props, and it was used as the base. After the pack was imported, collision bodies of appropriate size were manually added to each building and obstacle, which is needed to track the robot's collisions during the subsequent tests.
@@ -64,3 +124,15 @@ In order to simulate the use of rescue robots in disaster scenarios, the test sc
Since four operation modes need to be tested, four scenes of similar complexity and composition, but with different road conditions and building placement, were constructed. The similar complexity ensures that the four tests are of essentially equal difficulty, while the different layouts ensure that scene knowledge gained by the user in one test does not carry over to the next scene and thus bias the test data.
The entire scene is initially invisible, and the visibility of each object is gradually revealed as the robot drives along. Ten interactable victim characters were placed in each test scene at plausible locations, for example next to cars or beside houses (Figure \ref{fig:testscene}).
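+The initial invisibility can be achieved by moving every revealable object to a hidden Layer at scene start, the counterpart of the reveal step sketched in the Robot section. The script and the layer name below are illustrative assumptions, not the exact implementation.
+\begin{verbatim}
+using UnityEngine;
+
+// Hypothetical sketch: hides all revealable scene objects before the test starts.
+public class SceneHider : MonoBehaviour
+{
+    [SerializeField] private string hiddenLayer = "Hidden";  // not in any culling mask
+
+    private void Awake()
+    {
+        int layer = LayerMask.NameToLayer(hiddenLayer);
+
+        // Every object under this container starts on the hidden Layer and is
+        // later switched to the visible Layer by the robot's trigger collider.
+        foreach (Transform child in GetComponentsInChildren<Transform>(true))
+        {
+            child.gameObject.layer = layer;
+        }
+    }
+}
+\end{verbatim}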
+
+\begin{figure}[htbp]
+    \centering
+    \subfigure[Obstacle]{
+        \includegraphics[height=5cm]{graphics/testCollider2.png}
+    }
+    \subfigure[Victims]{
+        \includegraphics[height=5cm]{graphics/testVictim4.png}
+    }
+    \caption{Test Scene}
+    \label{fig:testscene}
+\end{figure}