Information Retrieval in Virtual Reality
VR applications often include textual information, or labels, to augment the virtual objects in 3D immersive Virtual Environments (VEs).  These annotations are attached to specific objects and provide relevant information to the user.  For example, a VE representing a store could embed pricing information in the merchandise: the user selects an object to reveal information about that object, such as its price, country of origin, or other attributes.  The problem is that annotations can quickly clutter the view through overlap and occlusion: labels can occlude each other as well as the objects in the VE.
 
In this research we describe a dynamic label management system driven by the user's eye movements.  We propose a new mechanism that reveals annotations based on where the user is looking and delivers content according to gaze location.  We compare this new method to existing methods to show the utility of the new technique.
The methods tested were:
Direct (D) (the new method) uses a clamped squared relationship between the label anchor point and the tracked on-screen eye position.
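One plausible reading of the Direct method is an opacity that falls off quadratically with gaze distance from the label anchor, clamped to the [0, 1] range. The function below is a minimal sketch under that assumption; the name, the pixel-space radius parameter, and the exact falloff shape are illustrative, not taken from the paper.

```python
def direct_opacity(gaze, anchor, radius):
    """Label opacity from a clamped squared falloff of gaze distance.

    Assumed parameterization: fully opaque when the gaze is at the
    anchor, fading quadratically to invisible at `radius` pixels away.
    """
    dx, dy = gaze[0] - anchor[0], gaze[1] - anchor[1]
    d2 = dx * dx + dy * dy              # squared gaze-to-anchor distance
    t = min(d2 / (radius * radius), 1.0)  # clamp the ratio to [0, 1]
    return 1.0 - t                      # 1.0 at the anchor, 0.0 beyond radius
```

Because the falloff is continuous rather than a binary on/off trigger, nearby labels fade in smoothly as the gaze approaches them.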
 
Spatial Hysteresis (S) uses a smaller trigger (entry) radius and a larger exit radius.  The annotation is turned on when the gaze enters the trigger radius and turned off when the gaze position leaves the exit radius.
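The entry/exit behavior above can be sketched as a single state-update step. This is an assumed formulation (function name and signature are illustrative): between the two radii the label simply keeps its previous state, which is what prevents flicker at the boundary.

```python
import math

def spatial_hysteresis(visible, gaze, anchor, r_enter, r_exit):
    """One update step of spatial-hysteresis label visibility (S).

    Returns the new visibility given the previous state, the 2-D gaze
    point, the label anchor, and radii with r_enter < r_exit.
    """
    d = math.hypot(gaze[0] - anchor[0], gaze[1] - anchor[1])
    if not visible and d <= r_enter:
        return True      # gaze entered the small trigger radius
    if visible and d > r_exit:
        return False     # gaze left the larger exit radius
    return visible       # in the dead band between radii: keep state
```

A gaze point drifting just outside the trigger radius no longer toggles the label on and off each frame, since turning off requires crossing the larger exit radius.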
 
Temporal Hysteresis (T) guarantees the label will be visible for a minimum amount of time once triggered. 
 
Spatial and Temporal Hysteresis (ST) combines the two: it uses a smaller trigger radius and a larger exit radius, and guarantees the label will be visible for a minimum amount of time once triggered.
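The combined ST behavior can be sketched as a small stateful class; the class name, parameters, and time representation (seconds as floats) are assumptions for illustration. Setting min_time to 0 recovers pure spatial hysteresis (S), and collapsing the two radii to one value recovers pure temporal hysteresis (T).

```python
import math

class STLabel:
    """Gaze-triggered label with spatial and temporal hysteresis (ST) -
    a sketch, not the paper's implementation."""

    def __init__(self, anchor, r_enter, r_exit, min_time):
        self.anchor = anchor        # 2-D label anchor point
        self.r_enter = r_enter      # smaller trigger (entry) radius
        self.r_exit = r_exit        # larger exit radius
        self.min_time = min_time    # minimum display time in seconds
        self.visible = False
        self.shown_at = None        # timestamp of the last trigger

    def update(self, gaze, now):
        """Advance one frame given the gaze point and current time."""
        d = math.hypot(gaze[0] - self.anchor[0], gaze[1] - self.anchor[1])
        if not self.visible:
            if d <= self.r_enter:           # spatial trigger
                self.visible = True
                self.shown_at = now
        else:
            held = now - self.shown_at < self.min_time  # temporal guarantee
            if d > self.r_exit and not held:
                self.visible = False
        return self.visible
```

In a render loop, `update` would be called once per frame with the latest tracked gaze position and timestamp, and the returned flag used to show or hide the annotation.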