“The users do not know about the product until it is shown to them.”

Timeline: January 2020 – Present
Image of the prototype


So I would like you to imagine a scenario where you are jogging a long route. You are working hard to maintain your pace, focusing on your breathing, and listening to loud, upbeat music to motivate yourself to keep going. With all that is going on, it can be easy to miss other runners or cyclists passing behind you, or dips and hazards on the path that can impede your run. These challenges can prove even greater if you are blind. At UMBC, we have been working on the TRIDENT system, a running accessory to support environmental awareness while on the go. TRIDENT stands for Tri-Positional Detection and Navigational Technology. The key features of the system include enhanced awareness of surroundings, detection of obstacles above and below the waist, modes for long- and short-range detection, and a choice of output. These features were determined from interviews conducted with blind joggers and runners.

My role: Design and develop a prototype that can be worn by blind runners and aid them in obstacle avoidance on their course.

The Team: Dr Ravi Kuber, Tristan King, Mei Ann Vader, Anirudh Nagraj, Apoorva Bendigeri

We wanted to get the most out of the prototype, and we knew it would be foolish to jump straight into prototyping without doing the groundwork of research and a thorough literature summary. We brainstormed the process and its subprocesses, set timelines for each, and proceeded with the project.

We at UMBC are following a five-step process: the Exploration phase, the Define phase, and the Ideation phase, followed by the Prototyping phase, which is handled by myself and another team member. The last phase is the Evaluation phase, where we send the prototype out to users and evaluate it against specific parameters.


Exploring… 🔍

We wanted thorough background information about the users of similar products: what they look for, the strengths and weaknesses they perceive, and so on. We also looked at the different products on the market today to analyze where they stand in terms of affordability and the types of functionality offered. In addition, we gathered information from articles and papers published by researchers on various prototypes.

FOCUS – User identification

Sighted people need limited help when it comes to identifying obstacles. Blind people, however, often find themselves wary of bumping into someone, and this is especially true for runners who are blind. Hence, our main focus is to develop a multi-functional prototype for them, and our user base was apparent right from the beginning.

Existing Product Analysis

Once we had a clear idea of who the users are, we conducted a thorough analysis of the existing products available on the market and drew insights from the comparison. This gave us a better idea of what the products offer and what they do not. Platforms like Reddit provided valuable information on where the products are lacking and where they really struck a chord with the target audience.

Image of Sunu Band
Image of BuzzClip

All the data collected was put into an Excel sheet for analysis. This proved to be an important step before we started the initial round of design discussions.

Design Discussions (iterated)

Although some of the participants expressed anxiety about the effectiveness of a running belt, given the good products already on the market, we explored a myriad of ideas and possible opportunities in our design discussion sessions.

However, the detection distance is subject to how the sensors work and their sensing capabilities. The team also believed that multiple detection distances are necessary, as preferences differ from runner to runner.

The workshop kindled ideas from all quarters with respect to blind runners and drew out the various ways in which they can be supported. Ideas like a tug from behind as an obstacle approaches the user, thermal socks, and discreet haptics came into the fray. From the design workshops, we learned that we needed more user data, and so we proceeded to interview users.

Defining… 🗎

After collecting substantial information on what was required, who the users are, and the products available in the market, we interviewed participants to gain qualitative information on the devices they use and the prospect of a new prototype eventually coming their way.

What do we know from our participants? – Qualitative interviews

It is all in the details. DIVERSITY IN THOUGHT PROCESS

Image of two people having a discussion

It depends on how well you know the trail. I’ve gotten to the point where they just. Give me a really short warning. I’ll be OK because I’m prepping for it anyways. If that makes sense.

The DeafBlind cannot hear the bells as a location indicator. Again the SUNU band already does this and it was tested in a crowded race type environment.

What my guide and I have done, we go. So depending on what park we’re going to, we walk the trail first and we walk it together. And if I like someone walking, like stepping in, a pothole won’t be a supplemental when I’m running. So if I, like, find something under my feet or something like, I’ll be like, try to remember that right here, this exhibit, or we try to walk it together and just discuss things before we ever run it that way. We both have kind of a mental picture in our heads about like where it is when we’re going to turn and stuff like that.

Research questions

Before the creation and evaluation of the prototype, we came up with research questions to point us in the right direction before users eventually don the belt. Some of the things we are looking to test on the prototype are:

  1. Concept validation – We are looking to see whether the solution that is under consideration works best for the situations given.
  2. Functionality check – We are looking to check whether the functionalities work as intended.

We want to generate research questions because we are trying to find answers related to the feasibility and access of the prototype at this moment in time. Our users would be asked to try out the prototype by carrying out different scenarios. The key outcome would be a design decision that would influence how the current prototype will be shaped overall in the future.

Some of the research questions are:

  • We want to check whether users can find the information they need while using our prototype.
  • We want to check whether the prototype’s current iteration can be used safely, given the current global scenario.
  • We want to check whether the prototype can clearly distinguish obstacles in its path.
  • We want to check whether the prototype can be tested on any given terrain with optimal efficiency.
  • We want to check how the current prototype fares against the products available on the market today.
  • We want to check whether users would be able to connect and run the setup successfully should it need reassembly.

Scenario Development

We brainstormed and came up with a couple of scenarios to inform the functionalities the prototype would offer. The scenarios were based on imaginary characters in the confined setting of a run. Click on the button below to read one such scenario.

Omar is a 67-year-old policy consultant who identifies as having a visual impairment. He loves to run to keep fit, frequenting different trails in local parks. He looks up these trails online to find out more about their varying terrains. While running, Omar likes to be aware of his surroundings: the buildings he is passing, information about localities, and so on. He relies heavily on a mobile device to receive the information he needs while running. The phone is placed in a pouch attached to the rear of the belt.
He prefers to be aware of inclines and turns in advance, to anticipate what is coming ahead with minimal distraction. At these testing times, he wants to make optimal use of his technology to maintain a minimum of 6 feet while either being seated on the benches or using the restrooms or standing in line to grab something to munch on, post his run. His main concern while running is to know the distinct line and the gap between the track and side grass. He also wants to know his speed while running and prefers to receive information about changes in the terrain through his phone to adjust his pace accordingly.
He prefers haptic feedback to understand the obstacles’ position and avoids audio feedback since he finds it distracting while running. After a period of time, he knows that he needs to rest and stretch and would benefit from identifying the position of benches and water refilling stations around him. Omar knows that he cannot burden himself by using multiple wearables simultaneously, and hence he prefers to keep it at a minimum.


Once we had all the parameters and the data synthesized, we proceeded to bring all the data points to life. To generate ideas concerning user movement, we performed the activity of Rolestorming. To narrow down all the data collected and generate themes, we decided to do an affinity mapping exercise. We went back to our sensor literature and decided on the sensors to be used. We put together all the ideas for the physical prototype in this phase of the project.


We performed the rolestorming activity after a rigorous brainstorming session, to generate empathy and gain a better understanding of how users may encounter a situation. We discussed a myriad of ideas, and some members of the team acted out scenes through roleplay. We were careful to avoid any script that involved ideas that were not feasible. While generating thoughts based on the scenarios, we managed to come up with umpteen ideas, both wacky and doable.

Affinity mapping

We collected a lot of information regarding runners who are blind, the different products on the market, and more. However, we wanted to streamline and prioritize that information for a more user-focused approach when building the prototype. All the ideas and information obtained from the literature, the brainstorming session, and the rolestorming activity were distilled into an affinity map.

Affinity diagram streamlining ideas

Once we had prioritized the ideas and bundled our thoughts through the affinity diagramming exercise, we set out to identify the different sensors that detect obstacles and aid people who are blind.

Sensor selection, LITERATURE REVIEW and design matrix

We narrowed down a set of ideas with sensors to aid runners with obstacle detection, based on what our participants told us and on our literature review. We studied the literature on existing prototypes in great detail and decided to use three sensors in tandem to achieve the best results.

Ultrasonic sensors
IR sensor

Additionally, the team made a 2×2 Design Space Matrix, with low cost vs. high cost on the Y-axis and low range of detection vs. high range of detection on the X-axis. We placed into it different works from the literature that employed sensors to detect obstacles. This gave us a clear view of how different models are structured along these two parameters.

This was done mainly to understand how the projects are positioned from a user’s purchasing point of view.
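As a rough illustration, the matrix boils down to classifying each surveyed prototype by its cost and detection range. The sketch below models that classification in Python; the threshold values and product entries are hypothetical placeholders, not the data from our actual sheet:

```python
# Sketch of the 2x2 design space matrix: cost on one axis, detection
# range on the other. Thresholds and entries are illustrative only.

COST_THRESHOLD_USD = 150   # hypothetical cutoff for "high cost"
RANGE_THRESHOLD_M = 3.0    # hypothetical cutoff for "high range"

def quadrant(cost_usd: float, range_m: float) -> str:
    """Place a product into one of the four matrix quadrants."""
    cost = "high cost" if cost_usd > COST_THRESHOLD_USD else "low cost"
    rng = "high range" if range_m > RANGE_THRESHOLD_M else "low range"
    return f"{cost} / {rng}"

surveyed = {
    "wearable A": (249, 4.5),  # hypothetical wrist-worn device
    "clip-on B": (99, 2.0),    # hypothetical clip-on device
}

for name, (cost, rng) in surveyed.items():
    print(f"{name}: {quadrant(cost, rng)}")
```

Plotting the surveyed works this way made the sparsely populated quadrants (the candidate design space) easy to spot.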

We searched for existing literature showing prototypes that employed sensors at different angles, and we wanted to check what we could utilize to bring about the best results. All the papers were collected in an Excel sheet and analyzed before making the design decision.

Now we were ready to make the design decision.


After all the data points were covered, combined with what our participants had told us concerning different wearable devices, we concluded that a RUNNING BELT incorporating multiple sensors would be a good fit as a non-intrusive device. This would help with obstacle detection in different directions, given the cardinal positioning of the sensors.

Prototyping 🎮

The team was ready to make the actual physical prototype for the runners who are blind. We brought in all the ideas to make the running belt that will be evaluated later by users.

Our prototype

The initial setup included a single ultrasonic sensor connected to an Arduino Uno while we prepared to order running belts.
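The distance math behind an ultrasonic sensor of this kind is simple: the sensor reports the round-trip time of an ultrasonic pulse, and distance follows from the speed of sound. Here is a minimal Python simulation of that conversion (the speed-of-sound constant assumes air at roughly room temperature; this is a sketch of the principle, not our firmware):

```python
# Minimal simulation of the distance math behind an ultrasonic sensor.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound in air at ~20 °C

def echo_to_distance_cm(echo_duration_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to distance (cm).

    The ultrasonic pulse travels to the obstacle and back, so the
    one-way distance is half the distance sound covers in that time.
    """
    return (echo_duration_us * SPEED_OF_SOUND_CM_PER_US) / 2

# An echo of about 5830 microseconds corresponds to an obstacle
# roughly 1 m away.
print(echo_to_distance_cm(5830))
```

On the Arduino itself, the echo duration comes from timing the sensor's echo pin; the arithmetic is the same.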

After that, we decided to integrate more than a single sensor into the setup, since the belt can accommodate multiple sensors in numerous positions.

We are integrating the sensors onto the belt to mirror what we have been working towards: a running belt with sensors, worn by runners who are blind, to detect obstacles. Velcro is used for ease of attachment. We are currently working towards a more intuitive design for the running belt, considering a headband and just-in-time output. The actual prototype is still a work in progress, and a video will be uploaded soon. Here is a snippet of the sensors working in tandem.

Prototype Video
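The "sensors working in tandem" logic can be sketched as a simple polling loop: each sensor covers one position on the belt, and on each cycle we report the closest reading that falls inside an alert threshold. The positions and the threshold below are our illustrative assumptions, not the actual firmware values:

```python
# Sketch of per-cycle fusion of multiple belt sensors: report the
# nearest obstacle within an alert threshold. Values are illustrative.

ALERT_THRESHOLD_CM = 150  # hypothetical: ignore anything farther away

def nearest_obstacle(readings_cm: dict):
    """readings_cm maps a belt position ('front', 'left', ...) to a
    distance in cm; returns (position, distance) of the closest
    obstacle within the alert threshold, or None."""
    within = {pos: d for pos, d in readings_cm.items()
              if d <= ALERT_THRESHOLD_CM}
    if not within:
        return None
    pos = min(within, key=within.get)
    return pos, within[pos]

# One simulated polling cycle over three sensors on the belt:
print(nearest_obstacle({"front": 120, "left": 300, "right": 90}))  # ('right', 90)
```

Reporting only the nearest in-range obstacle keeps the feedback channel from overwhelming the runner when several sensors fire at once.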

Evaluation 🧑🏼‍🤝‍🧑🏼

Once the participants have been recruited, we plan to brief them on a course developed for them in an open environment. An open environment was chosen because of the difficulty of running indoors and of finding large enough indoor spaces. Before we describe the course, we will explain the running belt and what it does; this would take ten minutes. For clarity, we have a voice recording explaining the purpose and usage of the belt, which serves as a user guide. The interactions are simple: the device communicates through vibrations, which occur at four different places on the belt to give the user a sense of direction.
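The four-point vibration scheme can be sketched as a lookup from detected direction to motor, with vibration strength scaled by proximity. The motor indices, direction names, and range constant below are assumptions for illustration, not the final design:

```python
# Sketch of the four-point vibration output: one motor per direction,
# with intensity scaled by how close the obstacle is. All constants
# here are illustrative assumptions.

MOTOR_FOR_DIRECTION = {"front": 0, "right": 1, "back": 2, "left": 3}
MAX_RANGE_CM = 200  # assumed distance beyond which no vibration fires

def vibration_command(direction: str, distance_cm: float):
    """Return (motor_index, intensity in 0..1); closer obstacles
    drive the corresponding motor harder."""
    motor = MOTOR_FOR_DIRECTION[direction]
    intensity = max(0.0, min(1.0, 1.0 - distance_cm / MAX_RANGE_CM))
    return motor, intensity

# An obstacle 50 cm to the right drives motor 1 at 75% intensity:
print(vibration_command("right", 50))  # (1, 0.75)
```

Scaling intensity by proximity lets the runner judge not just where an obstacle is but roughly how urgent it is, without any audio channel.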

With the setup firmly in place, the evaluators would mark out an area on the track and place “obstacles.” The plan is for the course to have levels of difficulty: the first level has minimal to no obstacles; the second adds the difficulty of narrowing paths and more obstacles; and the third level would be the most difficult of the lot. Participants would be told about the track in advance to get them acclimatized. We would tell the participants that the evaluators will be next to them along the track, so that they are never left alone at any point. The participants would also be told what they, and more importantly the prototype, would be evaluated on. Although we think it would be hard, a think-aloud protocol would be put into effect while the participant is running, to capture their thoughts about the prototype on the fly. The evaluators identified several areas in which the prototype can be evaluated.

Some of the evaluation areas include:

  • How much time the user takes to complete the course.
  • The time taken by the user to familiarize themselves with the course, i.e., the time taken to learn where each obstacle is.
  • The number of obstacles successfully detected by the running belt. This will be done using a fly-on-the-wall technique where the participant is observed from a distance for any qualitative comment about the detection.
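The last of these measures reduces to simple arithmetic once the observer's tallies are in. A small sketch, with made-up numbers for illustration:

```python
def detection_rate(detected: int, total_obstacles: int) -> float:
    """Fraction of placed obstacles that the running belt flagged."""
    if total_obstacles == 0:
        raise ValueError("the course must contain at least one obstacle")
    return detected / total_obstacles

# e.g. the belt flags 7 of 10 placed obstacles on a course:
print(f"{detection_rate(7, 10):.0%}")  # 70%
```

Tracking this rate per difficulty level would show whether narrowing paths and denser obstacles degrade the belt's detection.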

(More coming soon)