Master the fundamentals of design: Augmented Reality (Part 1)

We all know that person who does things just for the sake of doing them, whether in school or at a company. With the hype around AR/VR over the past few years, it's understandable that the technology is on everyone's radar. However, the first step in designing an effective AR application is not the design itself but identifying and understanding the problems that need solving. Often, these problems have constraints that can be overcome by thinking spatially. The first step in determining whether AR is actually the right medium is identifying the users and their needs.

A few things we can consider when trying to find out if AR is the right medium include:

  • Does the problem require immersing the user in their environment in real time?

If most of these situations apply to what you are trying to do, then there’s a good chance that Augmented Reality can add value to the solution.

Type of Content

The type of content being used can play a critical role in defining a user's experience.

Here are a few main examples of some of the more popular content types used within AR:

  • Static: Content that is still and involves no major user interaction.

It is essential to understand the formats of each type of content so that the designer can properly articulate what they are trying to do. For example, consider a design that requires a vase to reveal a price tag when clicked: the vase is a dynamic 3D object that exposes a static tag. If the experience then involves clicking on the tag to make a purchase, the tag itself becomes dynamic.
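The vase-and-tag example can be made concrete by writing the content down as data. This is a minimal sketch of one way to do that; all type and field names here are hypothetical, not from any real AR framework, and exist only to make the static/dynamic distinction explicit.

```typescript
// One way to describe a piece of AR content when scoping an experience.
type ContentState = "static" | "dynamic";

interface ARContent {
  name: string;
  state: ContentState;
  interactions: string[]; // empty for purely static content
}

// The vase from the text: a dynamic 3D object that reveals a tag on tap.
const vase: ARContent = {
  name: "vase",
  state: "dynamic",
  interactions: ["tap-to-reveal-price"],
};

// The price tag starts out static...
const tag: ARContent = { name: "price-tag", state: "static", interactions: [] };

// ...but once the experience lets the user tap it to purchase,
// it becomes dynamic content.
function addInteraction(content: ARContent, interaction: string): ARContent {
  return {
    ...content,
    state: "dynamic",
    interactions: [...content.interactions, interaction],
  };
}

const purchasableTag = addInteraction(tag, "tap-to-purchase");
```

Writing the experience out this way forces the static/dynamic question to be answered for every asset before production starts.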

Understanding Interactions

When mapping out specific behaviors and relationships (user interactions) in AR, it is helpful to be specific about where and how to treat the content. Some things to mention would be the location, type of content and the state of that content.

The more precisely you can describe the experience, the better.
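One precise way to capture the three attributes mentioned above — location, type of content, and state — is as a small spec per interaction. This sketch is illustrative only; the field names and anchor categories are assumptions drawn from the conventions discussed in this article, not any real AR framework.

```typescript
// Where the content lives relative to the user and the world.
type Anchor = "screen-fixed" | "world-anchored" | "user-placed";

// A precise, written-down description of a single AR interaction.
interface InteractionSpec {
  anchor: Anchor;                                  // location
  contentType: "2d-graphic" | "3d-object" | "text"; // type of content
  state: "static" | "dynamic";                      // state of that content
  description: string;
}

// e.g. a menu pinned to the top-left of the screen at all times:
const menu: InteractionSpec = {
  anchor: "screen-fixed",
  contentType: "2d-graphic",
  state: "static",
  description: "menu overlay fixed to the top-left of the screen",
};
```

A list of specs like this doubles as a content checklist for whoever produces the assets.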


We can see that as the phone moves, a static (still) graphic overlay stays fixed to the top left of the glass (screen) at all times. This is useful for permanent elements that need to be within the user's reach at all times, such as a menu or a return prompt.


These elements are anchored in space to something specific but remain visible within the user's view. This can be useful for things like labels and materials in the space.


In this case, static content becomes dynamic. This convention allows users to position, drag, and move certain assets into custom or specific areas, which is helpful for target-based or drag-and-drop elements.


The 3D object stays static in a specific location but is very engaging, as users can move around and see the 3D model from different perspectives to understand its components. This is most commonly used for educational purposes.


This can be helpful when allowing a user to see an object in a specific spot or environment, so they can observe things like lighting and measurement considerations. It is often used in commerce platforms.


Exploring and understanding the experience beforehand makes it easy to scope the type of content that will need to be produced. There are various things that need to be considered before testing it out. The main ones include: design, position, text, indicators, and blend modes.


Designers want to create elements that work across many different environments, which is why they usually create elements that are agnostic to the user's surroundings and remain legible across all hues and levels of contrast. This matters because AR is essentially layering data over a live camera feed, and we can't control what that live feed displays.

In this case, the shadow ensures that the triangle remains visible against whatever background it is placed on.


A few common observations about position are:

  • Fixed elements are usually situated at the top and/or bottom of the screen. This makes them easily accessible when holding a mobile device and allows the user to focus on the center of the camera and composition.


  • Text is usually used as a caption or label, set in a simple, easy-to-read typeface such as a sans-serif.


  • Indicators range from being super minimal to being complex and animated.

Blend Modes

  • An active, user-controlled camera.

AR User Interfaces

AR can manifest itself in several different interfaces. Listed here are some common ones:

  • Graphical User Interface (GUI): Interacting with data through graphic and visual indicators.

e.g., pressing an X to exit out of something.

  • Tangible User Interface (TUI): Data influenced by interacting with the physical world.

e.g., Tracking your daily steps with a Fitbit.

  • Voice User Interface (VUI): Interacting with data through voice or speech.

e.g., Asking Siri to set up an item on your daily schedule

  • Heads Up Display (HUD): Interacting with data layered over a fixed transparent display.
    e.g., Lines (guides) on a backup camera in a car.

Please be aware that this is part 1 of a 2 part series. Stay tuned this week for the next part.

If you enjoyed reading this article, please press the 👏 button and follow me to stay updated on my future articles. Also, feel free to share this article with others!



Alishba Imran

Machine learning and hardware developer working on accelerating problems in robotics and renewable energy!