Abstract
We rely on our sight when manipulating objects, and manipulation becomes difficult when objects are occluded. Augmented reality can show such occluded objects and thereby re-enable visual guidance; however, it is unclear how best to present them to support object manipulation. We compare four views of occluded objects and their effect on performance and satisfaction across a set of everyday manipulation tasks of varying complexity. The best-performing views were a see-through view and a displaced 3D view. The former enabled participants to observe the manipulated object through the occluder, while the latter showed a 3D view of the manipulated object offset from its real location. The worst-performing view showed remote imagery from a simulated hand-mounted camera. Our results suggest that aligning virtual objects with their real-world location is less important than an appropriate point of view and view stability.
| Original language | English |
| --- | --- |
| Title of host publication | CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems |
| Number of pages | 12 |
| Publisher | Association for Computing Machinery |
| Publication date | 2019 |
| Article number | 446 |
| ISBN (Electronic) | 9781450359702 |
| DOIs | |
| Publication status | Published - 2019 |
| Event | 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019 - Glasgow, United Kingdom, 4 May 2019 → 9 May 2019 |
Conference
| Conference | 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019 |
| --- | --- |
| Country/Territory | United Kingdom |
| City | Glasgow |
| Period | 04/05/2019 → 09/05/2019 |
| Sponsor | ACM SIGCHI |
Keywords
- Augmented reality
- Finger-camera
- Manipulation task