An onboarding UX framework that provides guidance to users for different types of mobile AR apps.
The framework is built around instructional UI that is shown with an instructional goal in mind. A common use is prompting the user to move their device around, with the goal of finding a plane; once the goal is reached, the UI fades out. There is also a secondary instructional UI, and an API that lets developers add any number of additional UI/goal pairs, which are queued and processed one at a time.
A common two-step UI/goal sequence is to first instruct the user to find a plane. Once a plane is found, instruct the user to tap in order to place an object. Once an object is placed, fade out the UI.
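The queued UI/goal pattern described above can be sketched roughly as follows. This is an illustrative sketch only; the type and member names here (UXHandle, UXOrderedQueue, and so on) are hypothetical, not the package's actual API:

```csharp
// Hypothetical sketch of the queued instruction pattern: each entry pairs
// an instructional UI with the goal that dismisses it.
using System;
using System.Collections.Generic;

public enum InstructionUI { CrossPlatformFindAPlane, TapToPlace, None }
public enum InstructionGoal { FoundAPlane, PlacedAnObject, None }

public class UXHandle
{
    public InstructionUI ui;
    public InstructionGoal goal;
    public UXHandle(InstructionUI ui, InstructionGoal goal)
    {
        this.ui = ui;
        this.goal = goal;
    }
}

public class UXOrderedQueue
{
    readonly Queue<UXHandle> m_Queue = new Queue<UXHandle>();

    public void AddToQueue(UXHandle handle) => m_Queue.Enqueue(handle);

    // Called each frame: show the current UI until its goal is reached,
    // then fade it out and advance to the next queued instruction.
    public void Tick(Func<InstructionGoal, bool> goalReached,
                     Action<InstructionUI> show, Action fadeOut)
    {
        if (m_Queue.Count == 0) return;
        var current = m_Queue.Peek();
        show(current.ui);
        if (goalReached(current.goal))
        {
            fadeOut();
            m_Queue.Dequeue();
        }
    }
}
```

The two-step example then amounts to enqueueing a "find a plane" handle followed by a "tap to place" handle.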
The instructional UI consists of the following animations / videos:
- Cross Platform Find a Plane
- Find a Face
- Find a Body
- Find an Image
- Find an Object
- ARKit Coaching Overlay
- Tap to Place
- None
All of the instructional UI (except the ARKit coaching overlay) is an included .webm video encoded with the VP8 codec in order to support transparency.
The UI fades out when one of the following goals is reached:
- Found a Plane
- Found Multiple Planes
- Found a Face
- Found a Body
- Found an Image
- Found an Object
- Placed an Object
- None
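As an illustration of how a plane-related goal might be detected, the sketch below watches AR Foundation's plane manager. The `ARPlaneManager.planesChanged` event and `trackables.count` are real AR Foundation API; the goal-handling wiring around them is hypothetical:

```csharp
// Sketch: detecting the "Found a Plane" goal with AR Foundation.
// The fade-out hook is illustrative, not the package's actual API.
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class PlaneGoalWatcher : MonoBehaviour
{
    [SerializeField] ARPlaneManager m_PlaneManager;
    bool m_GoalReached;

    void OnEnable()  => m_PlaneManager.planesChanged += OnPlanesChanged;
    void OnDisable() => m_PlaneManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // One tracked plane satisfies "Found a Plane"; requiring a higher
        // count would correspond to "Found Multiple Planes".
        if (!m_GoalReached && m_PlaneManager.trackables.count > 0)
        {
            m_GoalReached = true;
            // Fade out the instructional UI here.
        }
    }
}
```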
2.0 Updates
- Added tracking reasons visualization
- Added localization support for English, French, German, Italian, Spanish, Portuguese, Russian, Simplified Chinese, Korean, Japanese, and Hindi
- Full documentation on the GitHub repo
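A tracking reasons visualization can be driven from AR Foundation's session state. `ARSession.stateChanged`, `ARSession.notTrackingReason`, and the `NotTrackingReason` enum are real AR Foundation API; the messages and wiring below are an illustrative sketch, not necessarily what this package displays:

```csharp
// Sketch: surfacing why tracking is degraded, in the spirit of the
// tracking reasons visualization added in 2.0.
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class TrackingReasonLabel : MonoBehaviour
{
    void OnEnable()  => ARSession.stateChanged += OnStateChanged;
    void OnDisable() => ARSession.stateChanged -= OnStateChanged;

    void OnStateChanged(ARSessionStateChangedEventArgs args)
    {
        if (args.state == ARSessionState.SessionTracking) return;

        switch (ARSession.notTrackingReason)
        {
            case NotTrackingReason.InsufficientLight:
                Debug.Log("It's too dark. Try moving to a well-lit area.");
                break;
            case NotTrackingReason.ExcessiveMotion:
                Debug.Log("Slow down. The device is moving too fast.");
                break;
            case NotTrackingReason.InsufficientFeatures:
                Debug.Log("Aim the camera at a more detailed surface.");
                break;
        }
    }
}
```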
Please log issues, submit feature requests, and follow development on the GitHub repo.