👨‍🦯Aiding runners who are blind – A UX journey.

“The users do not know about the product until it is shown to them”

Timeline: January 2020 – Present

Our team at the University of Maryland, Baltimore County is exploring ways to support runners who are blind. By looking at technologies that detect obstacles while running and enable free pathing, we aim to create a prototype informed by interviews with blind runners, our design workshops, and the literature we collected.

My role: Design and develop a prototype that can be worn by blind runners and aid them in obstacle avoidance on their course.

Process

We at UMBC are following a five-step process: the Exploration phase, the Define phase, and the Ideation phase, followed by the Prototyping phase, which is handled by myself and another team member. The last phase is the Evaluation phase, where we send our prototype out to users and evaluate it against specific parameters.

Exploring… 🔍

We wanted thorough information about the background of users who use similar products: what they look for, the strengths and weaknesses, and so on. We also looked at the different products on the market today to analyze where they stand in terms of affordability and the types of functionality offered to users. In addition, we gathered information from articles and papers published by researchers on various prototypes.

FOCUS – User identification

Sighted people need limited help when it comes to identifying obstacles. People who are blind, however, are often wary of bumping into someone, and this is especially true for runners who are blind. Hence, our main focus is to develop a multi-functional prototype for them, so our user base was clear right from the beginning.

Existing Product Analysis

Once we had a clear idea of who the users are, we conducted a thorough analysis of the existing products available on the market and drew insights from it. These comparisons gave us a better idea of what the products offer and what they do not. Platforms like Reddit provided valuable information on where the products fall short and where they really struck a chord with the target audience.

Image of Sunu Band
SUNU BAND
Image of BuzzClip
BuzzClip

All the data collected was put into an Excel sheet for analysis. This proved to be an important step before we started the initial round of design discussions.

Design Discussions (iterated)

Although some participants expressed anxiety about the effectiveness of a running belt, given the good products already on the market, we explored a myriad of ideas and possible opportunities in our design discussion sessions.

The detection distance, however, is subject to the sensors and their sensing capabilities. The team also believed that multiple distance thresholds are necessary, as preferences differ from runner to runner.

The workshop kindled ideas from all quarters on the various ways runners who are blind can be supported. Ideas like tugging from behind as an obstacle approaches, thermal socks, and discreet haptics came to the fore. From the design workshops, we learned that we needed more user data and proceeded to interview users.


Defining… 🗎

After collecting substantial information on what was required, who the users are, and the products available on the market, we interviewed participants to gain qualitative insight into the devices they use and the prospect of a new prototype eventually coming their way.

What do we know from our participants? – Qualitative interviews

It is all in the details. DIVERSITY IN THOUGHT PROCESS

Image of two people having a discussion

It depends how well you know, the trail. I’ve gotten to the point where they just. Give me like a really short warning. I’ll be OK because I’m kind of prepping for it anyways. If that makes sense.

DeafBlind runners cannot hear bells as a location indicator. Again, the SUNU band already does this, and it was tested in a crowded, race-type environment.

What my guide and I have done, we go. So depending on what park we’re going to, we walk the trail first and we walk it together. And if I like someone walking, like stepping in, a pothole won’t be a supplemental when I’m running. So if I, like, find something under my feet or something like, I’ll be like, try to remember that right here, this exhibit, or we try to walk it together and just discuss things before we ever run it that way. We both have kind of a mental picture in our heads about like where it is when we’re going to turn and stuff like that.

Our design idea stems from the fact that the belt would have a greater reach in detection.

Research questions

Before creating and evaluating the prototype, we came up with research questions to point us in the right direction before users eventually don it. Some of the things we are looking to test on the prototype are:

  1. Concept validation – We are looking to see whether the solution that is under consideration works best for the situations given.
  2. Functionality check – We are looking to check whether the functionalities work as intended.

We generate research questions because we are trying to answer questions about the feasibility and accessibility of the prototype at this moment in time. Our users would be asked to try out the prototype by carrying out different scenarios. The key outcome would be a design decision that would influence how the current prototype is shaped in the future.

Some of the research questions are:

  • We would like to check if the users can look for the information that they need after using our prototype.
  • We would like to check if the current iteration of the prototype can be safely used given the current global scenario.
  • Checking if the prototype can clearly distinguish obstacles in its path.
  • We would like to check if the prototype can be tested on any given terrain with optimal efficiency.
  • We would like to check how the current prototype fares concerning the products available in the market today.
  • Checking if the current iteration of the prototype can safely reach the users without complications.
  • We would like to check if the users would be able to connect and run the setup successfully should there be a reassembly.

Scenario Development

We brainstormed and came up with a couple of scenarios to guide the type of functionality the prototype would offer. The scenarios were based on imaginary characters in the confined setting of a run. Here is one such scenario.

Omar is a 67-year-old policy consultant who identifies as having a visual impairment. He loves to run to keep fit, frequenting different trails in local parks. He looks up these trails online to find out more about the varying terrains. While running, Omar likes to be aware of his surroundings: the buildings he is passing, information about localities, and so on. He relies heavily on his mobile device to receive the information he needs while running. The phone sits in a pouch attached to the rear of the belt.
He prefers to be aware of inclines and turns in advance, to anticipate what is coming ahead with minimal distraction. At these testing times, he wants to make optimal use of his technology to maintain a minimum of 6 feet while either being seated on the benches or using the restrooms or standing in line to grab something to munch on, post his run. His main concern while running is to know the distinct line and the gap between the track and side grass. He also would like to know his speed while running and prefers to receive information about changes in the terrain through his phone to adjust his pace accordingly.
He prefers haptic feedback to understand the position of the obstacles and avoids audio feedback since he finds it distracting while running. He knows after a period of time, that he needs to rest and stretch, and would benefit from identifying the position of benches and water refilling stations around him. Omar knows that he cannot burden himself by using multiple wearables at the same time and hence he prefers to keep it at a minimum.


Ideation… 💡

Once we had all the parameters and the data synthesized, we proceeded to bring all the data points to life. To generate ideas with respect to user movement, we performed the activity of Rolestorming. To narrow down all the data collected and generate themes, we decided to do an affinity mapping exercise. We went back to our sensor literature and decided on the sensors to be used. We put together all the ideas for the physical prototype in this phase of the project.

Rolestorming

We performed Rolestorming after a rigorous brainstorming session to generate empathy and gain a better understanding of how users may encounter a situation. We discussed a myriad of ideas, and some members of the team enacted them through roleplay. We were careful to avoid any script built around infeasible ideas. While generating thoughts based on the scenarios, we managed to come up with umpteen ideas, both wacky and doable.

Affinity mapping

We collected a lot of information about runners who are blind, the different products on the market, and so on. However, we wanted to streamline and prioritize the information for a more user-focused approach when we eventually built the prototype. The ideas and information obtained from the literature, the brainstorming session, and the rolestorming activity were streamlined into an affinity map.

Affinity diagram streamlining ideas

Once we prioritized the ideas and bundled our thoughts through the affinity diagramming exercise, we set out to find out different sensors that detect obstacles, and aid people who are blind.

Sensor selection and design matrix

We narrowed down to a set of ideas using sensors to aid runners with obstacle detection, based on what our participants told us and on our literature. We studied the literature on existing prototypes in great detail and decided to use three sensors in tandem to achieve the best results.

Ultrasonic sensors
IR sensor
LIDAR

Additionally, the team made a 2×2 Design Space Matrix, with Low cost vs High cost on the Y-axis, and a low range of detection vs high range of detection on the X-axis. We put in different works from the literature that employed sensors to detect obstacles. This gave us a heads up on how different models are structured given two parameters.

This was done mainly to understand how the projects are positioned from a user's purchase point of view.


Design decision

After all the data points were covered, combined with what our participants have told us concerning different wearable devices, we concluded that a RUNNING BELT incorporating multiple sensors would be a good fit as a non-intrusive device. This would help in obstacle detection in different directions given the cardinal positioning of the sensors.


Prototyping 🎮

The team was ready to make the actual physical prototype for the runners who are blind. We brought in all the ideas to make the running belt that will be evaluated later by users.

Our prototype

The initial setup included a single ultrasonic sensor connected to an Arduino Uno while we prepared to order running belts.

Single Sensor

After that, we decided to integrate more than one sensor into the setup, since the belt can accommodate multiple sensors at numerous positions.

Multiple sensors
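As a rough illustration of what the Arduino reads from each ultrasonic sensor, here is a minimal Python sketch of the standard echo-time-to-distance conversion. The belt positions and echo times below are hypothetical examples, not readings from our prototype.

```python
# Hypothetical sketch: converting ultrasonic round-trip echo times into
# distances. Positions and echo times are illustrative only.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air at ~20 °C

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to a distance (cm)."""
    # Divide by 2 because the pulse travels to the obstacle and back.
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

# One hypothetical reading per belt position.
echo_times_us = {"front": 1750.0, "front_left": 2900.0, "front_right": 580.0}

distances = {pos: round(echo_to_distance_cm(t), 1) for pos, t in echo_times_us.items()}
print(distances)  # e.g. {'front': 30.0, 'front_left': 49.7, 'front_right': 9.9}
```

On the actual belt, the Arduino performs this same arithmetic on the echo time reported by each sensor before deciding whether to vibrate.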

We are integrating the sensors onto the belt to mirror what we have been working towards: a running belt with sensors, worn by runners who are blind, to detect obstacles. Velcro is used for ease of use.

We are currently working towards a more intuitive design for the running belt with the consideration of a headband and a Just in Time output. The actual prototype is still a work in progress and a video will be uploaded soon. Here is a snippet of the sensors working in tandem.

Our prototype (Before affixing it to the belt)


Evaluation 🧑🏼‍🤝‍🧑🏼

Once the participants have been recruited, we plan to brief them on a course that has been developed for them in an open environment. An open environment was chosen because of the difficulty of running indoors or finding large indoor spaces. Before describing the course, we would explain the running belt and what it does; this would take ten minutes. For clarity, we have a voice recording of us explaining the purpose and usage of the belt as a user guide. The interactions are simple: the device communicates through vibrations, which occur at five different places on the belt to give the user a sense of direction.
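A minimal sketch of how five vibration zones could map to directions: each zone vibrates when its sensor sees an obstacle within range. The zone names and the detection threshold are assumptions for illustration, not the prototype's actual parameters.

```python
# Hedged sketch: five vibration zones across the belt, one sensor each.
# Zone names and the 150 cm threshold are assumptions, not measured values.

ZONES = ["far_left", "left", "center", "right", "far_right"]
ALERT_THRESHOLD_CM = 150  # hypothetical detection range

def zones_to_vibrate(distances_cm):
    """Return the zones whose sensor sees an obstacle within range.

    distances_cm: list of 5 readings, one per zone (None = no echo).
    """
    return [zone for zone, d in zip(ZONES, distances_cm)
            if d is not None and d < ALERT_THRESHOLD_CM]

# An obstacle ahead and slightly to the right triggers two adjacent zones.
print(zones_to_vibrate([None, 320.0, 90.0, 140.0, None]))
```

A wide obstacle would trigger several adjacent zones at once, which is how the belt can signal that an obstruction is not a small, unidirectional one.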

Description for the interaction –

“Omar is a 67-year-old policy consultant who identifies as having a visual impairment. He loves to run to keep fit, frequenting different trails in local parks. He looks up these trails online to find out more about the varying terrains. First, Omar is presented with information about the running belt and its functionalities. He is also told about the course he will be running, for familiarization. After that, he runs with the belt. When he comes across an obstacle on the way, like a low-lying tree branch, the sensor detects the branch and presents vibrational cues in the appropriate direction on the belt. As Omar takes a few more steps closer to the same obstacle, the vibration patterns intensify in the direction of the obstacle. Omar then reaches his hand out to feel for the branch and lowers his head to avoid it, or moves his body to the left or right to continue on his path, and the sensors reset. He then continues his run. If he encounters an array of obstacles in front of him, the sensors work in tandem, presenting vibrational cues at once across the area of the obstacle, thereby alerting Omar that the obstacle is not a small, unidirectional one and that he has to be extra careful. He would slow down, stop, and put his hand out in front to check for the obstacle. Moving his hand, he would notice the free side where there is no obstacle and move accordingly. The sensors would reset once he is away from the obstacle.”

The interactions with other obstacles along the way would be like the ones presented above.
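The intensifying cue in the walkthrough can be sketched as a simple distance-to-intensity ramp: no vibration beyond the sensor's range, full strength at close quarters, and a linear ramp in between. The range bounds and the PWM-style 0–255 scale below are assumptions, not measured values from the prototype.

```python
# Hedged sketch: vibration intensity rising as the runner nears an obstacle.
# MAX/MIN range and the 0-255 intensity scale are illustrative assumptions.

MAX_RANGE_CM = 200   # beyond this, no vibration at all
MIN_RANGE_CM = 30    # at or below this, full intensity

def vibration_intensity(distance_cm: float) -> int:
    """Map a distance to a 0-255 intensity; closer obstacle -> stronger cue."""
    if distance_cm >= MAX_RANGE_CM:
        return 0
    if distance_cm <= MIN_RANGE_CM:
        return 255
    # Linear ramp between the two bounds.
    frac = (MAX_RANGE_CM - distance_cm) / (MAX_RANGE_CM - MIN_RANGE_CM)
    return round(255 * frac)

# The cue strengthens step by step as the obstacle draws nearer.
for d in (250, 200, 115, 30):
    print(d, vibration_intensity(d))
```

A nonlinear ramp (e.g. quadratic) could be swapped in if testing shows runners need a sharper warning at close range.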

With the setup firmly in place, the evaluators would mark out an area on the track and place “obstacles”. The plan for this course is to have levels of difficulty. The first level has minimal to no obstacles; the second adds the difficulty of narrowing paths and more obstacles; and the third level would be the most difficult of the lot. Participants would be told about the track in advance to get them acclimatized, and that evaluators would be next to them along the track so that they would not be left alone at any point. The participants would also be told what they, and more importantly the prototype, would be evaluated on. Although we think it would be hard, a think-aloud protocol would be put into effect while the participant is running, to capture their thoughts about the prototype on the fly. The evaluators are looking at areas where the prototype can be evaluated.

Some of the evaluation areas include:

  • How much time the user takes to complete the course.
  • The time the user takes to familiarize themselves with the course, i.e. the time taken to know where each obstacle is.
  • The number of obstacles successfully detected by the running belt. This is done using a fly-on-the-wall technique, where the participant is observed from a distance for any qualitative comments about detection.