MadAbility, the University of Wisconsin-Madison Accessibility Lab, is working to develop technologies that allow blind and low-vision individuals equal access to information and to the world.
In an era of visual information, from TikTok and YouTube videos to virtual reality headsets, MadAbility aims to create high-tech solutions for blind and visually impaired people.
Principal Investigator Yuhang Zhao told The Daily Cardinal that modern society is built on — and many of our technologies are centered around — visual information. Exhibit A: smartphones.
“When you use a smartphone, you have a very fancy touch display and vivid colors that you can interact with,” Zhao said. “The whole [of] virtual reality is based on visual experience. But all of these emerging technologies leave blind and low vision people out.”
One technology MadAbility is developing is A11yBits, a “tangible toolkit” allowing blind and low-vision people to assemble their own personalized devices to support their unique needs.
Each kit includes a set of four sensing modules and four feedback modules that can be mixed and matched like LEGO pieces. The sensing modules detect environmental information and user commands: a motion module detects movement, a voice module recognizes speech, a timer module tracks time and a temperature module detects the current temperature. The feedback modules send auditory, visual and vibration alerts based on the input.
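As an illustration only, the mix-and-match idea behind the toolkit can be sketched in a few lines of Python. The module names and pairings below are hypothetical, not A11yBits' actual design: the point is simply that any sensing module can drive any feedback module.

```python
# Illustrative sketch only: the class names and pairings here are
# hypothetical, loosely modeled on the article's description of A11yBits.

class SensingModule:
    """Detects an event and reports it as a simple message."""
    def __init__(self, name):
        self.name = name

    def sense(self, event):
        return f"{self.name} detected: {event}"


class FeedbackModule:
    """Turns a sensed message into an alert in one modality."""
    def __init__(self, modality):
        self.modality = modality

    def alert(self, message):
        return f"[{self.modality}] {message}"


def assemble(sensor, feedback):
    """Mix and match: any sensor can be paired with any feedback module."""
    def device(event):
        return feedback.alert(sensor.sense(event))
    return device


# A user might pair the motion sensor with vibration feedback...
doorbell = assemble(SensingModule("motion"), FeedbackModule("vibration"))
print(doorbell("movement at the door"))

# ...or pair the same sensor with an audio alert instead.
chime = assemble(SensingModule("motion"), FeedbackModule("audio"))
print(chime("movement at the door"))
```

Because the sensing and feedback sides share only a simple message interface, any of the four sensors can be combined with any of the four feedback modules, which is what makes the LEGO-style personalization possible.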
Zhao said customizable access technology like A11yBits is important because an individual's visual abilities, living conditions and prior experiences can differ in a multitude of ways.
Because A11yBits’ digital components may be challenging for blind or low vision individuals without a technical background to use, MadAbility also developed an AI agent to help users understand the modules’ functionalities and create effective solutions with them.
Zhao said she doesn’t think access technology should merely assist people with disabilities. Rather, she sees the individual and the technology as collaborators.
“A lot of our technologies follow that principle: what are people's current abilities and what are their preferences? We can [then] leverage those to build our technologies,” she said.
Recipe walkthroughs, cooking safety and AI in the kitchen
MadAbility also developed an AI system called AROMA that enables blind individuals to better follow video recipes in the kitchen. The user wears their phone in front of their chest to capture the cooking process while an AI agent describes information from a chosen video recipe, responds to the user's input and issues alerts or corrective suggestions if the user makes an error.
For example, if a person using AROMA to make a pepperoni pizza accidentally added pepperoni before cheese, the system would recognize the mistake and generate an alert, Zhao said.
CookAR, AROMA’s “sister” system for those with low vision, is an augmented reality (AR) system that helps low vision individuals cook more safely and efficiently. Wearing AR glasses, users see “grabbable areas,” such as the handle of a knife, highlighted in green and “hazardous areas,” such as the blade, highlighted in red.
“The fact is, low vision people still have vision, and they want to use their vision,” Zhao said.
Zhao said many existing technologies treat blind and low vision people as “the same,” providing only audio and haptic feedback even though low vision people retain partial sight. CookAR and AROMA aim to meet the needs of both groups.
Zhao said she wants to continue exploring how AI and people with disabilities, especially those who are blind or have low vision, can collaborate to complete tasks “more smoothly, efficiently, safely and confidently.”
A11yBits, AROMA and CookAR were developed in collaboration with a professor from the University of Texas-Dallas, a team from Notre Dame and a student from the University of Washington, respectively.
The MadAbility Lab also investigates how its technologies can be applied to broader audiences in areas like mental health and gender identity and expression. The lab plans to host a workshop in early May for people in the blind and low vision community to try out its technologies and provide feedback.