Master the fundamentals of design: Augmented Reality (Part 1)

Alishba Imran
6 min read · Nov 7, 2018


We all know that person who does things just to do them, whether in school or at a company. With the hype around AR/VR over the past few years, it's safe to say the technology is on everyone's radar. But the first step in designing an effective AR application is not the design itself; it is identifying and understanding the problems that need solving. Often these problems have constraints that can be overcome by thinking spatially, and the way to find out whether AR is the right medium is to start by identifying the users and their needs.

A few things we can consider when trying to find out if AR is the right medium include:

  • Does the problem require immersing the user in real time?
  • Do users need assistance in space with something physically engaging?
  • Are there physical constraints that currently prevent them from being successful?

If most of these situations apply to what you are trying to do, then there’s a good chance that Augmented Reality can add value to the solution.

Types of Content

The type of content being used can play a critical role when defining a user's experience.

Here are a few of the more popular content types used within AR:

  • Static: Content that is still and involves no major user interaction.
  • Animated: Content that moves on a timeline or follows a sequence.
  • 3D: Content with width, height and depth; showing dimension.
  • Dynamic: Adaptive content that can change with interaction or over time.
  • Procedural: Content that is generated automatically or algorithmically.

It is essential to understand each type of content so that the designer can properly articulate what they are trying to build. For example, consider a design where tapping a vase reveals a price tag: the vase is a dynamic 3D object that exposes a static tag. If the experience then involves tapping the tag to make a purchase, the tag itself becomes dynamic.
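To make this vocabulary concrete, here is a minimal sketch in Swift. The `ContentTrait` and `ARAsset` types are hypothetical, invented for illustration rather than taken from any AR framework:

```swift
// Hypothetical types for describing AR content -- illustration only.
enum ContentTrait {
    case still        // static: no major user interaction
    case animated     // moves on a timeline or sequence
    case threeD       // has width, height, and depth
    case dynamic      // changes with interaction or over time
    case procedural   // generated algorithmically
}

struct ARAsset {
    var name: String
    var traits: Set<ContentTrait>
}

// The vase example: a dynamic 3D object exposing a tag that starts out
// static and becomes dynamic once tapping it triggers a purchase.
let vase = ARAsset(name: "vase", traits: [.dynamic, .threeD])
var priceTag = ARAsset(name: "priceTag", traits: [.still])
priceTag.traits = [.dynamic] // tapping the tag now starts a purchase flow
```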

Understanding Interactions

When mapping out behaviors and relationships (user interactions) in AR, it helps to be specific about where and how the content is treated: its location, its type, and its state.

The more precisely you can describe the experience, the better.

STATIC & FIXED ON GLASS

As the phone moves, a static (still) graphic overlay stays fixed to the top left of the glass (the screen) at all times. This is useful for permanent elements that need to remain within the user's reach, such as a menu or a return prompt.
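As a rough sketch of how a screen-fixed element might be built on iOS (assuming an ARKit/SceneKit app; the view controller below is a hypothetical example), the overlay is just an ordinary UIKit view layered on top of the camera feed:

```swift
import UIKit
import ARKit

// Sketch: a menu button pinned to the top left of the glass (screen).
// Because it lives in UIKit, not in the 3D scene, it never moves with
// the camera.
class FixedOverlayViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        let menuButton = UIButton(type: .system)
        menuButton.setTitle("Menu", for: .normal)
        menuButton.frame = CGRect(x: 16, y: 44, width: 80, height: 32)
        view.addSubview(menuButton) // layered over the live camera feed
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())
    }
}
```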

STATIC & LOCKED IN SPACE

These elements are locked in space to something specific but remain visible in the user's view as they move around. This can be useful for things like labels and materials in the space.
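Here is a minimal sketch of a world-locked label, again assuming ARKit/SceneKit (the helper function is hypothetical). The node is added to the 3D scene rather than the screen, so it stays put as the user moves:

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: a text label locked to a fixed point in the room, one meter
// in front of the session's origin.
func addSpatialLabel(to sceneView: ARSCNView, text: String) {
    let geometry = SCNText(string: text, extrusionDepth: 0.5)
    geometry.font = UIFont.systemFont(ofSize: 8)

    let node = SCNNode(geometry: geometry)
    node.scale = SCNVector3(0.01, 0.01, 0.01)   // SCNText units are large
    node.position = SCNVector3(0, 0, -1)        // 1 m in front of the origin
    sceneView.scene.rootNode.addChildNode(node) // world-locked, not screen-locked
}
```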

DYNAMIC & FLEXIBLE ON GLASS

In this case, the static overlay becomes a dynamic content type. This convention lets users position, drag, and move certain assets into custom or specific areas, which is helpful for target-based or drag-and-drop elements.
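One way this might look with UIKit gestures (the `DraggableOverlay` class is a hypothetical name): a pan gesture moves the element wherever the user drags it:

```swift
import UIKit

// Sketch: a screen-space element the user can reposition by dragging,
// e.g. a drag-and-drop target.
class DraggableOverlay: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        addGestureRecognizer(pan)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        // Move by the gesture's incremental translation, then reset it so
        // the next callback reports a fresh delta.
        let delta = gesture.translation(in: superview)
        center = CGPoint(x: center.x + delta.x, y: center.y + delta.y)
        gesture.setTranslation(.zero, in: superview)
    }
}
```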

DYNAMIC 3D & FLEXIBLE IN SPACE

The 3D object stays anchored in a specific location, but the experience is engaging because users can move around and see the model from all different perspectives, understanding its components. This is most commonly used for educational purposes.
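A sketch of placing a model where the user taps, assuming ARKit's raycasting API (iOS 13+) and a SceneKit view; once placed, the user can walk around the model and inspect it from every side:

```swift
import ARKit
import SceneKit

// Sketch: drop a 3D model onto the surface the user tapped.
func placeModel(in sceneView: ARSCNView, at screenPoint: CGPoint, model: SCNNode) {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .estimatedPlane,
                                             alignment: .horizontal),
          let hit = sceneView.session.raycast(query).first else { return }

    model.simdTransform = hit.worldTransform // anchor to the real-world surface
    sceneView.scene.rootNode.addChildNode(model)
}
```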

DYNAMIC 3D & PROPORTIONATE IN SPACE

This can be helpful when allowing a user to see an object at true scale in a specific spot or environment, so they can observe things like lighting and measurement considerations. It is often used in commerce platforms.
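A sketch of one such lighting consideration, assuming ARKit's built-in light estimation: reading the frame's estimate so a placed product renders at a brightness and color temperature consistent with the real room. SceneKit units are meters, so a model authored at real-world size also appears proportionate:

```swift
import ARKit
import SceneKit

// Sketch: match a product's light to the estimated light in the room.
func matchSceneLighting(in sceneView: ARSCNView, productLight: SCNLight) {
    guard let estimate = sceneView.session.currentFrame?.lightEstimate else { return }
    productLight.intensity = estimate.ambientIntensity          // ~1000 is neutral
    productLight.temperature = estimate.ambientColorTemperature // in kelvin
}
```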

Testing

Exploring and understanding the experience beforehand makes it easy to scope the type of content that will need to be produced. Several things need to be considered before testing; the main ones are design, position, text, indicators, and blend modes.

Design

Designers naturally want to create elements that work in many different environments, which is why they usually design elements that are agnostic to the user's surroundings and work across all hues and levels of contrast. This matters because AR is essentially layering data over a live camera feed, and we can't control what that feed displays.

In this case, the shadow ensures that the triangle remains visible on whatever background it is placed over.
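In UIKit terms, that kind of legibility shadow is a few lines on the element's layer; a minimal sketch (the helper name is hypothetical):

```swift
import UIKit

// Sketch: a soft drop shadow keeps a flat graphic legible over a camera
// feed whose colors and contrast we cannot control.
func addLegibilityShadow(to overlay: UIView) {
    overlay.layer.shadowColor = UIColor.black.cgColor
    overlay.layer.shadowOpacity = 0.6
    overlay.layer.shadowRadius = 4
    overlay.layer.shadowOffset = CGSize(width: 0, height: 2)
}
```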

Position

A few common observations about position are:

  • Fixed elements are usually situated at the top and/or bottom of the screen. This keeps them easily accessible when holding a mobile device and allows the user to focus on the center of the camera and the composition.
  • Additional prompts and secondary elements, such as extra options, stay close to the bottom.

Text

  • Text is usually used as a caption or label, set in a simple sans-serif font for easy reading.
  • Text usually has an opaque or semi-opaque container to improve legibility (see the sketch after this list).
  • Text without a container is treated with a soft shadow and/or a subtle stroke to make it easier to read.
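A minimal UIKit sketch of these text guidelines (the factory function is hypothetical): a sans-serif caption inside a semi-opaque container:

```swift
import UIKit

// Sketch: a caption label in a system sans-serif font with a semi-opaque
// container behind it for legibility.
func makeCaptionLabel(text: String) -> UILabel {
    let label = UILabel()
    label.text = text
    label.font = UIFont.systemFont(ofSize: 15)                     // sans-serif
    label.textColor = .white
    label.backgroundColor = UIColor.black.withAlphaComponent(0.55) // semi-opaque
    label.textAlignment = .center
    label.layer.cornerRadius = 6
    label.layer.masksToBounds = true
    return label
}
```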

Indicators

  • Indicators range from being super minimal to being complex and animated.
  • Indicators are dynamic and adjust accordingly, and they are easy to work with since they disappear once an action has taken place.

Blend Modes

  • The camera is active and user-controlled.
  • The interface is dynamic and constantly adapts to user needs over time.
  • There is often heavy use of icons and graphical elements to keep the user alert and focused on the environment.
  • Designers can consider adding blend modes to their graphic elements, since this lets the user still see parts of the background without completely obstructing the view; a sketch follows below.
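A minimal SceneKit sketch of the blend-mode idea (the helper is hypothetical): a screen blend plus partial transparency keeps a graphic readable while the camera background still shows through:

```swift
import SceneKit

// Sketch: a screen blend plus partial transparency keeps a graphic
// readable without fully hiding the camera feed behind it.
func applyScreenBlend(to node: SCNNode) {
    node.geometry?.firstMaterial?.blendMode = .screen
    node.geometry?.firstMaterial?.transparency = 0.85 // let the background through
}
```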

AR User Interfaces

AR can manifest itself in several different interfaces. Listed here are some common ones:

  • Graphical User Interface (GUI): Interacting with data through graphic and visual indicators.

e.g., pressing an X to exit out of something.

  • Tangible User Interface (TUI): Data influenced by interacting with the physical world.

e.g., tracking your daily steps with a Fitbit.

  • Voice User Interface (VUI): Interacting with data through voice or speech.

e.g., asking Siri to add an item to your daily schedule.

  • Heads-Up Display (HUD): Interacting with data layered over a fixed transparent display.

e.g., lane guides on a car's backup camera.

This is part 1 of a 2-part series; stay tuned this week for the next part.

If you enjoyed reading this article, please press the 👏 button, and follow me to stay updated on my future articles. Also, feel free to share this article with others!

