DURATION
24 Weeks
September 2017 - April 2018
WHAT
Create a speculative design system using the technology behind augmented reality to help the visually impaired navigate everyday life.
ROLE
The entire project, from concepts to design, coding, and animations, was created by Michael Calcada.

Thesis
Create a speculative design system using the technology behind augmented reality to help the visually impaired navigate everyday life.
Challenge
This project is very special to me, as I have low vision in one eye as a result of a severe astigmatism I have had since birth. This means that if I close my good eye, I can only see shades of colour and light.

In 2012, I went on a trip with my family to San Francisco, California. My dad had a friend who worked at Google, and we got a tour of the Google headquarters. As a tech enthusiast passionate about new and emerging technologies, I found this a dream come true. I got to walk around the campus of what I imagined to be a dream job and see new and upcoming products. My dad's friend was wearing Google Glass while taking us on the tour, and as someone who loves new technology, I eagerly asked if I could try it on. He agreed, and with the biggest smile on my face I put on the device and asked him how to turn it on.

The look of confusion on his face made my heart drop as I realized I could not use Google Glass: the lens projects only into the right eye, the eye with my visual impairment.

Looking back, it is ironic that I wanted my design program thesis to revolve around augmented reality when my only experience with it was cut short by my impairment. This is what ultimately influenced my undergraduate thesis topic.
Project Goals
The main goal for this project was to create an auditory design system so that visually impaired users would be able to utilize a speculative augmented reality headset. The project had a timeline of 12 weeks of research and 12 weeks of creating the final products. I wanted to learn as much as I could about the technology behind augmented reality and its intersection with accessibility. The final project goals are listed below.


The Problem
As technology gets faster, cheaper, and smaller at exponential rates, the technology to create an all-in-one wearable AR solution is approaching. I believe designers such as myself, a recent graduate of design school, will have the responsibility to design for mass-market wearable AR devices. As a result, we have a moral obligation to design for new technologies with accessible, human-centred designs at the forefront of the design process.

This project is my first attempt at working out how the technology behind augmented reality can provide most of its benefits and uses to visually impaired users through auditory assistance.
Research
What is Augmented Reality?
Augmented reality is a “technology that allows users to view and interact in real time with virtual images seamlessly superimposed over the real world”(5). The most popular AR device, Google Glass, uses a small liquid-crystal-on-silicon display projected onto a prism, displaying a digital overlay atop the real world.


What differentiates designing for AR headsets of the past, such as Google Glass, from newer solutions such as Microsoft's HoloLens or Magic Leap is the ability to accurately create “a 3D model of its environment while also tracking the camera pose”(6) using a method called Simultaneous Localization and Mapping (SLAM).
AR utilizing SLAM for object recognition allows for a better understanding of a physical space by “providing a human-scale understanding of space and motion”, creating the foundation for new experiences to be built and designed on(7).
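To make the idea of tracking a pose while building a map concrete, here is a minimal toy sketch in Python. It assumes a flat 2D world and perfect motion estimates; real SLAM systems fuse camera images and sensor data probabilistically, and every name and number below is invented for illustration.

import math

# Toy SLAM loop: the device tracks its own pose while placing
# observed landmarks into a shared world map.
pose = {"x": 0.0, "y": 0.0, "heading": 0.0}  # camera pose in the world frame
world_map = {}                                # landmark id -> world position

def update_pose(pose, forward, turn):
    """Dead-reckon the new camera pose from estimated motion."""
    pose["heading"] += turn
    pose["x"] += forward * math.cos(pose["heading"])
    pose["y"] += forward * math.sin(pose["heading"])

def observe_landmark(pose, landmark_id, range_m, bearing):
    """Convert a camera-relative observation into a world-frame map entry."""
    angle = pose["heading"] + bearing
    world_map[landmark_id] = (
        pose["x"] + range_m * math.cos(angle),
        pose["y"] + range_m * math.sin(angle),
    )

# One step: the wearer walks 1 m and the headset spots a doorway 2 m ahead.
update_pose(pose, forward=1.0, turn=0.0)
observe_landmark(pose, "doorway", range_m=2.0, bearing=0.0)
print(world_map)  # {'doorway': (3.0, 0.0)}

The key property, even in this toy form, is that the pose and the map are built simultaneously: each new observation is only as accurate as the current pose estimate, which is why SLAM is hard in practice.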


Who is currently using AR?
Many of the world's biggest tech companies, such as Apple, Google, Microsoft, and many others, have seen the potential AR has and have begun creating AR SDKs and platforms for developers to create and design AR content. Some of the most popular platforms for upcoming AR development are Apple's ARKit, Google's ARCore, Facebook's Camera Effects, and Microsoft's HoloLens SDK, bringing AR capabilities to the masses(12). There are also other companies working on wearable AR devices, as seen below.


What is Apple doing with AR?
With the race for AR market domination beginning, Apple acquired AR startup Metaio(13), a German augmented reality software maker that helped create ARKit(14). ARKit is “the largest AR platform in the world”; with ARKit and iOS 11, the company is investing in both software and hardware for domination of the future AR market(15). With a DxOMark score of 97, the latest iPhone X boasts powerful cameras, showing that Apple's years of innovations and upgrades to mobile phone cameras are setting the foundation for smartphone AR(16). The iPhone X's depth-sensing front-facing camera will be beneficial in the future for recognizing and identifying physical objects(17).


What is Microsoft doing with AR?
Both the iPhone X and Microsoft Kinect systems use “a depth-sensing camera to see the world in three-dimensions”. Microsoft has been developing and designing for AR since the Xbox Kinect, and Alex Kipman, the primary inventor on more than 100 patents, has led that project as well as HoloLens. Kipman states in an interview with National Public Radio that AR is a “monumental shift where we move the entire computer industry from this old world, where we have to understand technology, into this new world, where technology disappears and it starts more fundamentally understanding us.”


This is critical for wearable AR of the future, as it should be so unobtrusive that it feels like it's not even there. The HoloLens is “the first fully self-contained, holographic computer, enabling you to interact with high-definition programs in your world.” Microsoft has been buying many AR patents, and its “large number of pending applications indicates that this dominant position will only get stronger.”

What is Google doing with AR?
Google has been investing in AR hardware and software after its unsuccessful launch of Google Glass. Glass does “not use cutting-edge technologies, but rather combines standard technologies in a cutting-edge manner”(22). This is because Glass projects a 2D screen in front of the user's eyes, without spatial awareness of the physical world. Glass is still being developed, however, under a new name: Glass Enterprise Edition. The company hasn't stopped there, as Google now has “212 issued and 438 pending US patents directed towards augmented reality”(23), proving its commitment to the AR space(24). Google's “$500-plus million investment in Magic Leap, which recently announced an additional funding round of $1 billion”, shows Google also wants control in the AR market(25).

Google is also making advancements with smartphones by combining the Google Assistant AI with Google Lens. Lens is a visual search engine capable of looking at something and using object recognition to gather more information about it. This use of computer vision and machine learning will be fundamental to the development of AR, and Google's investment in this platform is a step in the right direction.


How AR is advancing
AR is evolving every day, and its increasing popularity through smartphone applications is evident in Snapchat, Pokémon GO, and many other apps. Smartphones have become faster and more powerful over the years, offering phone calls, text messages, internet access over Wi-Fi and cellular, web browsing, video calls, fast processing speeds, GPS, advanced cameras, and many more features that are now considered essential. Ultimately, it is what developers and designers use this technology for that will reveal the opportunities and potential AR has.

Smartphone Apps
Pokémon GO is a great example that showcases the potential of augmented reality on smartphones and the impact this technology has when brought to the masses in an enjoyable way. Pokémon GO is an app where people all around the world come together to play a video game that utilizes augmented reality to catch and train Pokémon. Pokémon GO became the most popular mobile game ever: it had 65 million users within the first week, and within 90 days of launch it had generated $600 million in revenue(29).


There are many other smartphone apps utilizing AR in unique and creative ways that help raise awareness of the technology. Snapchat offers lenses and filters that appear to fit the shape of the user's face using ARKit, exposing many of Snapchat's daily users to the technology. More examples of this technology range from measuring something in the real world with only an AR smartphone app to placing a virtual couch in your living room with IKEA Place. In today's digital age of utilizing the newest and most innovative technology in fun and engaging ways, businesses will begin to adopt augmented reality, as it has proved useful and effective at engaging customers through digital content.

Human-Centered Artificial Intelligence
Mark Riedl, an associate professor in Georgia Tech's School of Interactive Computing, states that human-centred artificial intelligence “is the recognition that the way AI systems solve problems — especially using machine learning”(30). As AI focuses on helping the user with their personal content, such as calendars and events, photos, videos, friends, messages, and reservations, we use personal assistants and their AI to solve human-centred problems as they constantly learn from us(31). If an action takes place that results in something functional or helpful, then it is a good example of compelling content with a human-centred design. An example of this type of human-centred AI is Word Lens, an app whose maker Google acquired, which allows the user to “point their smartphone at printed text in a foreign language and translate it to a language of your choice”(32), providing intuitive solutions through AI and machine learning. This technology is extremely useful when combined with a camera and proves valuable for people with visual impairments. I have created an example of ways AI and AR can work together to solve everyday tasks and problems.
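As a rough illustration of the Word Lens idea, the sketch below chains an off-the-shelf OCR library (pytesseract, a wrapper around the Tesseract engine) into a stand-in translation step. The translate function and the image file name are placeholders, not a real service; a production system would call an ML translation model here.

from PIL import Image
import pytesseract  # wraps the Tesseract OCR engine

def translate(text, target="en"):
    """Placeholder for a translation service (hypothetical)."""
    return f"[{target}] {text}"

def point_and_translate(image_path, source_lang="deu", target_lang="en"):
    """Word Lens-style pipeline: OCR the printed text in a camera
    frame, then hand the recognized text to a translator."""
    text = pytesseract.image_to_string(Image.open(image_path), lang=source_lang)
    return translate(text.strip(), target=target_lang)

# Example: a photo of a German street sign ("sign.jpg" is hypothetical).
print(point_and_translate("sign.jpg"))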


The intersection of AI & AR
Genevieve Bell, an anthropologist and Senior Fellow at Intel who focuses on the intersection of culture and technology, also discusses AR and AI. Bell believes that artificial intelligence will be key to AR-powered devices, as “they are going to be more intuitive about who we are, they're going to have a memory of us, and as a result not be so much of an interaction, but a relationship…where they might anticipate what we are doing, where they might deliberately do things on our behalves”(36).

This would benefit users with visual impairments: they would not have to remember things like where they placed something, because an AI with contextual memory alleviates the stress of having limited vision. Having AI that utilizes machine learning to mold itself to an individual's personality will be critical to creating a wearable, all-in-one AR device that can help the user before they even know they need help.

Target Demographic
An auditory design system would be used primarily by people with low or no vision, although there is no one specific demographic the device and system would be limited to. It would work by contextually understanding the user's environment through the device's cameras and sensors and relaying the information to the user through an auditory personal assistant (over the device's speakers). Over time, with machine learning, the device would come to understand what the user wants to accomplish, anticipating the user's needs and presenting only relevant information, limiting auditory bombardment.
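A minimal sketch of that relay loop might look like the following, where detect_objects is a hypothetical stand-in for the device's vision model and pyttsx3 provides offline text-to-speech. The relevance threshold is the piece that limits auditory bombardment: low-relevance detections are simply never spoken.

import pyttsx3  # offline text-to-speech

def detect_objects(frame):
    """Stand-in for the device's object-recognition model; a real
    headset would run a vision model over the camera frame."""
    return [
        {"label": "door", "distance_m": 2.1, "relevance": 0.9},
        {"label": "poster", "distance_m": 3.0, "relevance": 0.2},
    ]

def relay(frame, min_relevance=0.5):
    """Speak only the detections the assistant judges relevant."""
    engine = pyttsx3.init()
    for obj in detect_objects(frame):
        if obj["relevance"] >= min_relevance:
            engine.say(f"{obj['label']}, {obj['distance_m']:.0f} metres ahead")
    engine.runAndWait()

relay(frame=None)  # a real frame would come from the headset camera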
User Research
During my process of creating ARSight, I spoke with people with different levels of visual impairment to create an experience suitable for all. As I have a strong astigmatism in one eye, I was also able to test whether the low-vision website worked by closing my good eye and navigating with sounds and shades of colour. Getting feedback from people with a variety of vision impairments allowed me to create the final product.



Research Conclusions
What I ultimately learned from my research is that the potential for wearable AR is limitless, but the technology is not yet available for an all-in-one, mass-market wearable device. This is because such a device would require four key components, tied together in the sketch after this list:

- Object Recognition

- Simultaneous Location and Mapping

- Machine Learning

- An Advanced Personal Assistant
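To show how these four components might compose, here is a purely structural sketch in Python. Every class and return value is hypothetical, standing in for systems that do not yet exist in wearable form; the point is only the flow of information between them.

class ObjectRecognition:
    def identify(self, frame):
        return ["door", "stairs"]      # stand-in for a vision model

class SLAM:
    def locate(self, frame):
        return {"room": "kitchen"}     # stand-in for pose + map

class MachineLearning:
    def rank(self, objects, history):
        return objects[:1]             # keep only what the user cares about

class PersonalAssistant:
    def speak(self, objects, place):
        for name in objects:
            print(f"{name} ahead, in the {place['room']}")

def tick(frame, history):
    """One update of the speculative ARSight loop."""
    objects = ObjectRecognition().identify(frame)
    place = SLAM().locate(frame)
    relevant = MachineLearning().rank(objects, history)
    PersonalAssistant().speak(relevant, place)

tick(frame=None, history=[])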

Augmented reality pioneer Ronald Azuma stated in his 1997 paper A Survey of Augmented Reality that augmented reality “allows the user to see the real world with virtual objects superimposed or composited with the real world. Therefore, AR supplements reality, rather than completely replacing it” (Azuma 1997).

As Azuma states, augmented reality should supplement reality, and through my research I found this technology can be used to assist visually impaired users in everyday life, as opposed to adding visual digital overlays. This is because augmented reality can be anything that augments an aspect of one's life, such as an auditory system that relays visual information to the user.

To see my bibliography for this project, click here.

Creating ARSight

After researching the technology, I wanted to focus on the intersection of accessibility and augmented reality. As designers, we have a moral obligation to design for everyone, and with new and emerging technologies such as augmented reality, accessibility needs must be met.

I began creating ARSight by brainstorming different instances in which having a camera relay visual information audibly could help visually impaired users. The first area I researched was communication, as this would be a critical component of ARSight. I created a website analyzing the history of the telephone to better understand the technology, available to view below.


This research led me to make ARSight exclusively auditory, requiring a personal assistant to help communicate the user's world to them. Next, I had to understand what information the average user with good vision would utilize wearable AR for, and I began to translate how these experiences could work audibly. I created a visual design system so I could work backwards, recreating those visual experiences through sounds.
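One way to picture this working-backwards step is as a translation table from visual UI patterns to auditory equivalents. The pairings below are illustrative sketches, not the final ARSight sound set.

# Hypothetical mapping from visual UI patterns to auditory equivalents.
VISUAL_TO_AUDIO = {
    "notification badge":  "short double chime",
    "progress bar":        "tone rising in pitch with completion",
    "confirmation dialog": "spoken question, answered by voice",
    "directional arrow":   "spatialized tone from the target direction",
}

def audify(visual_pattern):
    """Fall back to a plain spoken description for unmapped patterns."""
    return VISUAL_TO_AUDIO.get(visual_pattern, "spoken description")

print(audify("progress bar"))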



I began creating instances of interactions that would benefit the user by helping them navigate their physical world safely.


This augmented hearing would provide contextual information about whatever the user needs to know about their physical or digital worlds. This can range from knowing when to stop pouring a drink to everyday tasks such as setting alarms and reminders that are aware of the user's location.
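The pouring example can be sketched as a simple feedback loop: a hypothetical vision estimate of how full the glass is drives faster and faster cues until a stop threshold is reached. The fill_level function below is a stand-in that simply pretends the glass fills in steps.

import time

def fill_level(frame):
    """Stand-in for a vision estimate of glass fullness (0.0 to 1.0)."""
    fill_level.value = min(1.0, fill_level.value + 0.25)
    return fill_level.value
fill_level.value = 0.0

def pour_assist(stop_at=0.9):
    """Cue faster as the glass fills; announce 'stop' at the threshold."""
    while True:
        level = fill_level(frame=None)
        if level >= stop_at:
            print("stop pouring")
            break
        print("beep")
        time.sleep(1.0 - level)  # shorter gaps as the glass fills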



Final Deliverables
The final deliverables for this project were a website that changes depending on your type of vision, as well as a video that demonstrates the use cases for ARSight.


Thanks for your time, and check out the final deliverables below.




