\chapter{Discussion} \label{discuss}

In general, an ideal \gls{vr}-based robot operation method should eliminate as much complexity as possible for the user. For Lab Mode, the least favored mode among the participants, it can be concluded that a lab-like operation mode is not desirable unless the \gls{vr} system is developed to train operators to control the robot in a real environment.

For an interaction approach like Handle Mode, where the robot is operated directly with the controller, it should be taken into account whether the user needs to change position frequently. As mentioned before, if users must control the exact movement of the robot while simultaneously changing their own position, they may experience a higher workload and have more difficulty observing the details of the scene and concentrating on controlling the robot. The choice of \gls{vr} controller is also important: motion controllers with thumbsticks are recommended for better directional control.

Interaction approaches like Remote Mode and UI Mode should be the future direction to focus on. Both modes significantly simplify the process of controlling the robot by using intelligent obstacle avoidance algorithms that let the robot navigate automatically to the desired destination. The proposed system currently uses the \gls{nav} component and simulates a post-disaster scene instead of reconstructing it by scanning the real site with the \gls{lidar}. Therefore, an intelligent obstacle avoidance algorithm is still needed when a real robot is used. Since rescue efficiency and the risk of damaging the robot depend strongly on this algorithm, it should be accurate enough that the user can rely entirely on the computer to control the robot.
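To make the destination-based control of Remote Mode and UI Mode concrete, the following minimal sketch shows how such a command could be issued, assuming a Unity implementation built on the \gls{nav} component; the class and field names (\texttt{RobotDestinationController}, \texttt{robotAgent}) are hypothetical and stand in for the actual implementation.

\begin{verbatim}
using UnityEngine;
using UnityEngine.AI;

// Hypothetical sketch: sends the robot to a point the user
// selects in VR, e.g. via a controller raycast.
public class RobotDestinationController : MonoBehaviour
{
    [SerializeField] private NavMeshAgent robotAgent;

    public void SetRobotDestination(Vector3 targetPoint)
    {
        // Snap the selected point onto the navigation mesh so the
        // agent never receives an unreachable destination.
        if (NavMesh.SamplePosition(targetPoint, out NavMeshHit hit,
                                   2.0f, NavMesh.AllAreas))
        {
            robotAgent.SetDestination(hit.position);
        }
    }
}
\end{verbatim}

With this pattern, path finding and obstacle avoidance are delegated entirely to the navigation system, which matches the design goal of these two modes: the user specifies only the destination, not the movement itself.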
In some tests, the monitoring screens were not used at all. The additional features provided by Remote Mode and UI Mode that allow users to control the robot themselves were also rated as redundant by participants. All the features mentioned above may complicate the interaction process to some extent: when too many features are available at once, it seems difficult for the user to decide which one to use in a given situation. Users sometimes even forgot functions that were already provided, such as the option to switch off the monitoring screens when they block the view. It should be noted that this user study included only eight participants, all of whom were unfamiliar with \gls{vr}. It is therefore uncertain whether different results would be obtained with a larger sample. Even so, the monitoring screens remain necessary, considering that some participants reported that they could provide valuable information. Further work is needed to optimize the monitoring screens so that they do not obscure the view and so that adjusting them does not complicate the interaction as a whole.

Apart from that, a map should be provided. As mentioned in the results, participants often got lost in the scenes: they did not know whether a location had been visited before and repeatedly searched the same places. This can decrease search efficiency and, to a certain extent, even discourage users from exploring unknown areas that have not yet been scanned by the \gls{lidar}. The map should therefore indicate the areas that have been scanned by the \gls{lidar} as well as the user's current location. From such a map, the user can see the overall outline of the detected area and the trajectory they have taken, giving them a clear overview of the scene. The user study also showed that some participants often forgot the location of the robot, so the relative positions of the user and the robot should be shown on the map as well. How the map is presented is likewise worth considering; as noted before, the whole interaction pattern should be kept as simple as possible.
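The sketch below outlines one way such a map could be realized, again assuming a Unity implementation; the class, field, and marker names (\texttt{MiniMap}, \texttt{userMarker}, \texttt{robotMarker}, \texttt{worldSize}) are hypothetical, and the scanned region is approximated here by grid cells the robot has visited.

\begin{verbatim}
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: places user and robot markers on a
// top-down minimap and records which grid cells the robot
// (and hence the LiDAR) has already covered.
public class MiniMap : MonoBehaviour
{
    [SerializeField] private RectTransform mapRect;   // minimap UI panel
    [SerializeField] private RectTransform userMarker;
    [SerializeField] private RectTransform robotMarker;
    [SerializeField] private Transform user;
    [SerializeField] private Transform robot;
    [SerializeField] private float worldSize = 100f;  // assumed scene side length (m)

    private readonly HashSet<Vector2Int> scannedCells =
        new HashSet<Vector2Int>();

    void Update()
    {
        userMarker.anchoredPosition  = WorldToMap(user.position);
        robotMarker.anchoredPosition = WorldToMap(robot.position);
        // Mark the 1 m x 1 m cell around the robot as scanned.
        scannedCells.Add(new Vector2Int(Mathf.FloorToInt(robot.position.x),
                                        Mathf.FloorToInt(robot.position.z)));
    }

    // Map world (x, z) coordinates into minimap panel coordinates,
    // assuming the world origin sits at the corner of the scene.
    private Vector2 WorldToMap(Vector3 p)
    {
        return new Vector2(p.x / worldSize * mapRect.sizeDelta.x,
                           p.z / worldSize * mapRect.sizeDelta.y);
    }
}
\end{verbatim}

Restricting the map to a few unambiguous elements, markers for the user and the robot plus a shaded scanned region, follows the principle stated above of keeping the interaction as simple as possible.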