
Augmented Reality Interfaces for Enabling Fast and Accurate Task Localization

198 Pages·2017·11.87 MB·English

Augmented Reality Interfaces for Enabling Fast and Accurate Task Localization

Mengu Sukan

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY
2017

© 2017 Mengu Sukan
All rights reserved

ABSTRACT

Augmented Reality Interfaces for Enabling Fast and Accurate Task Localization

Mengu Sukan

Changing viewpoints is a common technique to gain additional visual information about the spatial relations among the objects contained within an environment. In many cases, all of the necessary visual information is not available from a single vantage point, due to factors such as occlusion, level of detail, and limited field of view. In certain instances, strategic viewpoints may need to be visited multiple times (e.g., after each step of an iterative process), which makes being able to transition between viewpoints precisely and with minimum effort advantageous for improved task performance (e.g., faster completion time, fewer errors, less dependence on memory).

Many augmented reality (AR) applications are designed to make tasks easier to perform by supplementing a user's first-person view with virtual instructions. For those tasks that benefit from being seen from more than a single viewpoint, AR users typically have to physically relocalize (i.e., move a see-through display, and typically themselves, since those displays are often head-worn or hand-held) to those additional viewpoints. However, this physical motion may be costly or difficult, due to increased distances or obstacles in the environment.

We have developed a set of interaction techniques that enable fast and accurate task localization in AR. Our first technique, SnapAR, allows users to take snapshots of augmented scenes that can be virtually revisited at later times. The system stores still images of scenes along with camera poses, so that augmentations remain dynamic and interactive. Our prototype implementation features a set of interaction techniques specifically designed to enable quick viewpoint switching. A formal evaluation of the capability to manipulate virtual objects within snapshot mode showed significant savings in time spent and gains in accuracy when compared to physically traveling between viewpoints.
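To make the snapshot idea concrete, the following is a minimal sketch (in Python, against hypothetical camera and renderer interfaces; it is not the dissertation's actual implementation): a snapshot keeps the still image together with the camera pose at capture time, so virtual content can be re-rendered, and even manipulated, from that stored viewpoint later.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Snapshot:
        image: np.ndarray       # still image of the physical scene at capture time
        view: np.ndarray        # 4x4 world-to-camera transform (the stored pose)
        projection: np.ndarray  # 4x4 projection matrix (the camera intrinsics)

    def take_snapshot(camera, frame):
        # Store the pose alongside the image rather than compositing the
        # augmentations into the pixels, so they remain editable later.
        return Snapshot(frame, camera.view_matrix(), camera.projection_matrix())

    def render_snapshot(snap, virtual_objects, renderer):
        renderer.draw_background(snap.image)  # frozen real-world backdrop
        for obj in virtual_objects:           # live, current object states
            renderer.draw(obj, snap.view, snap.projection)

The point this sketch illustrates is that only the background is frozen; because virtual objects are re-rendered from the stored pose on every frame, augmentations stay dynamic and interactive when a snapshot is revisited.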
For cases when a user has to physically travel to a strategic viewpoint (e.g., to perform maintenance and repair on a large physical piece of equipment), we present ParaFrustum, a geometric construct that represents this set of strategic viewpoints and viewing directions, and establishes constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DOF pose when it is not required by the task. We describe two visualization techniques, ParaFrustum-InSitu and ParaFrustum-HUD, that guide a user to assume one of the poses defined by a ParaFrustum. A formal user study corroborated that speed improvements increase with larger tolerances and revealed interesting differences in participant trajectories based on the visualization technique.

When the object to be operated on is smaller and can be handheld, instead of being large and stationary, it can be manually rotated, rather than requiring the user to move to a strategic viewpoint. Examples of such situations include tasks in which one object must be oriented relative to a second prior to assembly, and tasks in which objects must be held in specific ways to inspect them. Researchers have investigated guidance mechanisms for some 6DOF tasks, using wide field-of-view (FOV), stereoscopic virtual and augmented reality head-worn displays (HWDs). However, there has been relatively little work directed toward smaller-FOV, lightweight, monoscopic HWDs, such as Google Glass, which may remain more comfortable and less intrusive than stereoscopic HWDs in the near future. In our Orientation Assistance work, we have designed and implemented a novel visualization approach and three additional visualizations representing different paradigms for guiding unconstrained manual 3DOF rotation, targeting these monoscopic HWDs. This work includes our exploration of these paradigms and the results of a user study evaluating the relative performance of the visualizations and showing the advantages of our new approach.
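Any such rotation-guidance visualization must, in some form, convey how far the handheld object still is from its target orientation. As one illustration of that underlying computation (a sketch under assumed tracker and drawing interfaces, not the system described in the dissertation), the remaining rotation can be reduced to a single axis and angle that a cue can display and update each frame:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def remaining_rotation(current: Rotation, target: Rotation):
        # The rotation still to be performed (in the world frame), i.e.,
        # the `error` such that applying it after `current` yields `target`.
        error = target * current.inv()
        rotvec = error.as_rotvec()             # unit axis scaled by angle (radians)
        angle = float(np.linalg.norm(rotvec))
        axis = rotvec / angle if angle > 1e-9 else np.array([0.0, 0.0, 1.0])
        return axis, angle

    # Per frame, a guidance cue would re-query the tracker and redraw, e.g.:
    #   axis, angle = remaining_rotation(tracker.orientation(), target_orientation)
    #   if angle < np.radians(5.0):            # illustrative acceptance tolerance
    #       report_success()
    #   else:
    #       draw_rotation_cue(axis, angle)     # hypothetical HWD drawing call

The axis-angle form is convenient because a single rotation about the displayed axis always suffices to reach the target; guidance paradigms differ in whether they present that one rotation directly or decompose it (e.g., into Euler angles).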
In summary, we investigated ways of enabling an AR user to obtain visual information from multiple viewpoints, both physically and virtually. In the virtual case, we showed how one can change viewpoints precisely and with less effort. In the physical case, we explored how we can interactively guide users to obtain strategic viewpoints, either by moving their heads or by re-orienting handheld objects. In both cases, we showed that our techniques help users accomplish certain types of tasks more quickly and with fewer errors, compared to when they have to change viewpoints following alternative, previously suggested methods.

Table of Contents

List of Figures
List of Tables
Acknowledgements

1 Introduction
  1.1 Research Questions and Dissertation Goals
  1.2 Contributions
  1.3 Structure of Dissertation

2 Related Work
  2.1 Switching Among Multiple Viewpoints
  2.2 Presenting Multiple Viewpoints Simultaneously
  2.3 Saving and Selecting Viewpoints
  2.4 Augmenting Static Images
  2.5 Guidance for Physically Transitioning to a Viewpoint
  2.6 Task Assistance Using Augmented Reality

3 SnapAR
  3.1 Introduction
  3.2 Related Work
  3.3 Interaction
    3.3.1 Creating and Storing Snapshots
    3.3.2 Selecting and Viewing Snapshots
    3.3.3 Heads-Up Display
    3.3.4 Manipulating Virtual Objects
  3.4 User Study
    3.4.1 Pilot Study
    3.4.2 Hypotheses
    3.4.3 Methods
  3.5 Results
    3.5.1 Completion Time
    3.5.2 Accuracy
    3.5.3 Questionnaire
    3.5.4 Usage Pattern Analysis
    3.5.5 Generalization of Findings
  3.6 Discussion

4 ParaFrustum
  4.1 Introduction
  4.2 Related Work
    4.2.1 Calling Attention to a 3D Target
    4.2.2 Specifying Position and Orientation Relative to a 3D Target
    4.2.3 Specifying a Constrained Set of Positions and Orientations in 3D
  4.3 Definition and Rules
  4.4 ParaFrustum-InSitu
    4.4.1 Implementation
  4.5 ParaFrustum-HUD
  4.6 Comparison
  4.7 User Study
    4.7.1 Pilot Study
    4.7.2 Hypotheses
    4.7.3 Methods
  4.8 Results
    4.8.1 Completion Time
    4.8.2 Motion Analysis
    4.8.3 Accuracy
    4.8.4 Questionnaire
  4.9 Discussion

5 Orientation Assistance
  5.1 Introduction
  5.2 Related Work
  5.3 Visualizations
    5.3.1 Common Components
    5.3.2 Single Axis Visualization
    5.3.3 Euler Visualization
    5.3.4 Animate Visualization
    5.3.5 Handles Visualization
  5.4 User Study
    5.4.1 Control Condition
    5.4.2 Pilot Studies
    5.4.3 Hypotheses
    5.4.4 Methods
  5.5 Results
    5.5.1 Task Duration
    5.5.2 User Feedback
    5.5.3 Discussion

6 Conclusions and Future Work
  6.1 Contributions
  6.2 Lessons Learned
  6.3 Future Work
    6.3.1 SnapAR
    6.3.2 ParaFrustum
    6.3.3 Orientation Assistance
  6.4 Final Thoughts

Bibliography

Appendix
  SnapAR Questionnaire
  ParaFrustum Questionnaire
  Orientation Assistance Questionnaire
