\chapter{Related Work}
\label{related}
In this chapter, research on the integration of \gls{vr} and \gls{hri} is discussed, and the relevant literature and its contributions are briefly presented. The combination of \gls{vr} and \gls{hri} is an open research topic approached from many different perspectives.
\gls{hri} platforms combined with virtual worlds have many applications; they can be used, for example, to train machine operators in factories. Matsas et al. \cite{Matsas:2017aa} provided a \gls{vr}-based training system using hand recognition. Kinect cameras capture the user's positions and motions, and virtual user models are constructed in the \gls{vr} environment from the collected data. Users operate robots and virtual objects in the \gls{vr} environment and in this way learn how to operate the real robot. The framework proposed by Pérez et al. \cite{Perez:2019ub} is likewise applied to train operators to control a robot. Since the environment does not need to change in real time, but rather must recreate the factory scene realistically, a highly accurate 3D environment was constructed in advance using Blender in combination with a 3D scanner.
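As a simplified illustration of the kind of data processing behind such systems, the following sketch transforms joint positions tracked in a depth-camera frame into the \gls{vr} world frame using a fixed homogeneous transform. The transform values and joint names are assumptions made for illustration and are not taken from the cited systems.
\begin{verbatim}
# Sketch: map joint positions tracked in a depth-camera frame into
# the VR world frame via a fixed homogeneous transform. All values
# below are illustrative assumptions, not parameters of the cited work.
import numpy as np

# Assumed camera-to-world transform: 90-degree rotation about Z,
# plus a translation of the camera relative to the world origin.
T_world_cam = np.array([
    [0.0, -1.0, 0.0, 1.5],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.8],
    [0.0,  0.0, 0.0, 1.0],
])

def to_world(p_cam):
    """Transform a 3D point from camera to world coordinates."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)
    return (T_world_cam @ p)[:3]

# Example tracked joints (metres, camera frame), e.g. a skeleton
# delivered by a depth camera such as the Kinect.
joints_cam = {"head": [0.0, 1.6, 2.0], "right_hand": [0.3, 1.1, 1.8]}
joints_world = {name: to_world(p) for name, p in joints_cam.items()}
print(joints_world)
\end{verbatim}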
Building 3D scenes in virtual worlds from information collected by robots is another research highlight. Wang et al. \cite{Wang:2017uy} were concerned with the visualization of a rescue robot and its surroundings in a virtual environment. Their \gls{hri} system uses an incremental 3D-NDT map to render the robot's surroundings in real time. The user can view the robot's surroundings in a first-person view through the \gls{htc} and send control commands through the arrow keys on the motion controllers. Stotko et al. \cite{Stotko:2019ud} present a novel \gls{vr}-based live telepresence and teleoperation system that reconstructs the 3D scene in a distributed fashion. The data collected by the robot is first transmitted to a client responsible for reconstructing the scene. After this client has constructed the 3D scene, the set of actively reconstructed visible voxel blocks is sent to the server responsible for communication, which broadcasts the data to the operator's client, thus enabling an immersive visualization of the robot within the scene.
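The broadcast pattern described above can be sketched as a small relay server that accepts voxel-block updates from the reconstruction client and forwards them to all connected operator clients. The port numbers and message framing below are assumptions made for illustration; they do not reflect the actual protocol of \cite{Stotko:2019ud}.
\begin{verbatim}
# Sketch of the relay pattern: one reconstruction client pushes data,
# the server broadcasts it to all operator clients. Ports and framing
# are illustrative assumptions, not the protocol of the cited system.
import socket
import threading

RECON_PORT = 9000     # reconstruction client pushes voxel blocks here
OPERATOR_PORT = 9001  # operator clients connect here for updates

operators = []
lock = threading.Lock()

def accept_operators(server_sock):
    """Register each operator client for future broadcasts."""
    while True:
        conn, _ = server_sock.accept()
        with lock:
            operators.append(conn)

def relay(recon_conn):
    """Forward each chunk of reconstructed data to all operators."""
    while True:
        chunk = recon_conn.recv(65536)
        if not chunk:
            break
        with lock:
            for op in operators:
                op.sendall(chunk)

if __name__ == "__main__":
    op_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    op_srv.bind(("", OPERATOR_PORT))
    op_srv.listen()
    threading.Thread(target=accept_operators,
                     args=(op_srv,), daemon=True).start()

    recon_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    recon_srv.bind(("", RECON_PORT))
    recon_srv.listen()
    conn, _ = recon_srv.accept()  # single reconstruction client
    relay(conn)
\end{verbatim}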
Other work is more concerned with the manipulation of a robotic arm mounted on the robot. Moniri et al. \cite{Moniri:2016ud} provided a \gls{vr}-based operating model for a robotic arm. A user wearing a headset sees a simulated 3D scene of the robot's workspace and can send pick-up commands to the remote robot by clicking on the target object with the mouse. The system proposed by Ostanin et al. \cite{Ostanin:2020uo} is also worth mentioning. Although their system for operating a robotic arm is based on \gls{mr}, the article is highly relevant to this thesis, considering the close relation of \gls{mr} and \gls{vr} and the fact that the proposed system details the combination of \gls{ros} and robotics. In their system, \gls{ros} Kinetic serves as middleware and is responsible for the communication between the robot and the \gls{unity} side. The user can control the movement of the robot arm by selecting predefined options in a menu. In addition, the trajectory and target points of the robot arm can be set by clicking on a hologram with a series of control points.
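To make the middleware role of \gls{ros} concrete, the following minimal sketch publishes a target pose for a robot arm, standing in for a command selected from a menu or a hologram control point. The node, topic, and frame names are illustrative assumptions and are not taken from the cited system.
\begin{verbatim}
#!/usr/bin/env python
# Minimal rospy sketch: publish a target pose for a robot arm.
# Node, topic, and frame names are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def main():
    rospy.init_node("target_pose_publisher")
    pub = rospy.Publisher("/arm/target_pose", PoseStamped,
                          queue_size=1)
    rospy.sleep(0.5)  # let the publisher register with the master

    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "base_link"
    msg.pose.position.x = 0.4   # target position in metres
    msg.pose.position.y = 0.0
    msg.pose.position.z = 0.3
    msg.pose.orientation.w = 1.0  # identity orientation

    pub.publish(msg)

if __name__ == "__main__":
    main()
\end{verbatim}
A subscriber on the robot side, or a bridge between \gls{ros} and \gls{unity}, would consume such messages and translate them into motion commands.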
%Summary
To summarize, previous work has studied methods and tools for \gls{vr}-based \gls{hri} and teleoperation. However, only a few studies focus on the different interaction approaches for \gls{hri}.