A New Dog Learns Old Tricks
Donghyun Kim develops a robot guide dog to serve the visually impaired
Professor Donghyun Kim tied on his blindfold and pawed the air in search of his colleague’s shoulder. Over the next few hours, Kim and several fellow researchers took turns guiding each other—and stumbling—along miles of pedestrian pathways in downtown Amherst. Though it may seem like the premise of a comedy sketch, the team’s blindfolded outing marked the birth of a project that could prove to be a game changer for the visually impaired community.
In 2021, Kim joined the UMass Amherst faculty with an impressive background in quadruped robotics at MIT. But after years spent developing dog-sized robots capable of technical feats, he now wanted an application that would make a difference in the lives of those his technology served. He found the right project, and team, at the Manning College of Information and Computer Sciences (CICS). Together, Kim and his collaborators are developing a robot guide dog that could give biological quadrupeds a run for their money.
Four Legs Good
On a crisp fall morning in 2018, a group of MIT students watched as a four-legged robot the size of a duffel bag jogged in circles around the university quad, got back on its feet after getting kicked over, and executed a flawless backflip. This last trick is the robot’s claim to fame, making it the first quadruped robot to land such a maneuver. The “mini-cheetah” robot employs 12 electric motors with a tremendous range of motion and a torque output of 17 newton-meters. As a postdoctoral researcher at MIT, Donghyun Kim was part of the team that developed the mini-cheetah, and he remains proud of the pint-size powerhouse he had a hand in creating. “There’s only a 0.5-second window in which the robot needs to perceive where it is in the air and adjust itself,” explains Kim. “And it has to do this procedure many times in order to successfully land.”
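To make “many times” concrete, here is a toy sketch of that perceive-and-adjust loop. The 0.5-second window comes from Kim’s description; everything else (the 500 Hz control rate, the PD gains, the simplified pitch dynamics) is an illustrative assumption, not a published mini-cheetah specification.

```python
# Toy sketch of the mid-air "perceive and adjust" loop Kim describes.
# Assumption: a 500 Hz control loop, so the 0.5-second landing window
# allows roughly 250 perceive-and-correct iterations.

CONTROL_RATE_HZ = 500  # assumed loop rate, not a mini-cheetah spec
WINDOW_S = 0.5         # the 0.5-second window Kim cites
DT = 1.0 / CONTROL_RATE_HZ

def run_landing_window(pitch_error_rad: float, pitch_rate: float = 0.0) -> float:
    """Drive a toy pitch error toward zero with a PD correction."""
    kp, kd = 8.0, 1.2  # illustrative gains
    for _ in range(int(WINDOW_S * CONTROL_RATE_HZ)):
        # "Perceive": on a real robot this state comes from IMU-based estimation.
        correction = -kp * pitch_error_rad - kd * pitch_rate
        # "Adjust": apply the correction as a toy angular acceleration.
        pitch_rate += correction * DT
        pitch_error_rad += pitch_rate * DT
    return pitch_error_rad

residual = run_landing_window(pitch_error_rad=0.3)  # start ~17 degrees off
print(f"{int(WINDOW_S * CONTROL_RATE_HZ)} iterations, residual error: {residual:.3f} rad")
```

Even this crude loop shows the shape of the problem: hundreds of tiny corrections, each based on a fresh estimate of the robot’s orientation, squeezed into half a second.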
On the heels of the mini-cheetah’s success, Kim joined the UMass faculty in 2021 as an assistant professor and director of the Dynamic and Autonomous Robotic Systems (DARoS) lab. Although his work on the mini-cheetah had proved his mettle as a robotics engineer, a technical challenge alone would not be enough for his next project; he needed to find a humanitarian objective he could believe in. As one colleague put it, Kim had a huge hammer and was in search of a nail. Although military applications often fuel innovation in the field of robotics, Kim wasn’t inclined to pursue that option. “Especially with the war in Ukraine going on, I’d feel really bad if I saw any quadrupeds walking around,” he admits. He soon realized that a robotic dog could be designed to do for people what some highly trained biological dogs do—serve as eyes and ears for those living with visual impairment. With that epiphany, the guide dog robot project was born.
Kim’s first task was to find a baseline robot model he could work with. Since the development of the mini-cheetah, several companies have brought affordable robotic dogs to market. Rather than reinvent the proverbial wheel, Kim purchased one such robot from Unitree Robotics, a Chinese firm offering high-performance robotic dogs comparable to the mini-cheetah. At 22 pounds and two feet tall, the robot, Kim felt, had the weight and dimensions to guide a person without posing a safety risk if it became unstable. He could also modify this off-the-shelf robot to suit his project’s needs. For starters, he would need to add cameras and sensors to process the robot’s surroundings and inform its movements. But that was a no-brainer; what other capabilities would a robot guide dog need to win the trust of a visually impaired person? To answer that question, Kim would need to resist the urge to problem-solve what he didn’t fully understand—and simply listen.
An Ounce of Prevention
Before world-renowned architect Frank Gehry breaks ground on any project, he first asks his clients, “What are you hoping this building will do?” Donghyun Kim approached his guide dog project in much the same way by asking, “What functionality would a visually impaired person need from a robot guide dog?” His first attempt at answering that question led him to the blindfolded session with his colleagues—but it quickly became clear that more (and better) tests were needed.
Kim and his doctoral assistant, Hochul Hwang, surveyed those living with blindness to better understand their day-to-day activities. They then followed up their survey with more than 20 in-depth interviews—a key factor in attracting fellow researchers to the project, including CICS professor Sunghoon Ivan Lee. As an expert in adaptive technologies and human-centered design, Lee brought an eye for user experience to the project. From the outset, he was impressed by his colleague’s willingness to listen first and design later. “Often, engineers will say, ‘Look, we know this is what people with impairments need, so let’s make it.’ But when we actually talk to stakeholders, their needs can be quite different,” says Lee. The team’s qualitative study became a project unto itself and will soon be published for the benefit of other researchers working on adaptive technologies.
Among those Kim interviewed was Gail Gunn, who worked for years as a computer programmer at UMass. Gunn lost her sight in her late 20s and, in the decades that followed, enjoyed the support of four guide dogs, including her most recent German Shepherd, Brawny. Kim and his colleagues spent hours exploring Gunn’s relationship with Brawny, learning how the guide dog gives her autonomy in life, including guiding her to and from work. With the researchers in tow, Gunn and Brawny modeled their daily walking route across campus to her office, weaving around foot traffic and halting for passing skateboards. Whether navigating a busy parking lot or a cluttered hallway, Brawny’s capabilities would be difficult to replicate. “I can tell my dog to go right and he’ll go right. I can tell him to find the outside and he’ll take me right to the door,” she explains. Kim saw that his task was not dissimilar to the one facing Tesla’s autonomous cars.
"It wouldn't be difficult to integrate Google Maps with this project, but that's not what the handler wants. They want control over the navigation, at least until they develop a sense of trust."
—Donghyun Kim
Over the course of their observational sessions, the team identified several working assumptions that proved to be off target. For starters, Kim and his team assumed that a handler would want a soft leash for comfort and flexibility. However, handlers preferred a rigid leash, which gave a stronger haptic connection to the robot and allowed them to better predict its movements.
Additionally, although the Unitree robot possessed cutting-edge maneuverability, it lacked a capability critical in the real world—it could not climb stairs. It also struggled to keep up with Gunn and Brawny’s fast walking pace. “Its legs are too short,” admits Kim. “But we can fix that.” The checklist for augmentations grew: quieter motors, greater battery power, and more sensors. Kim prioritized the growing list in a spreadsheet, but remained confident that he had a solution for each technical challenge. However, to match a biological guide dog’s functionality, his robot would need to be able to navigate autonomously. For that task, Kim would need backup.
Show Me the Way
In the project’s early days, Kim imagined a guide dog that, when provided a destination, could map and execute the route on its own. However, through qualitative research, the project team learned that guide dog handlers wanted a greater level of control. “It wouldn’t be difficult to integrate Google Maps with this project, but that’s not what the handler wants,” says Kim. “They want control over the navigation, at least until they develop a sense of trust.”
Lee explains that an autonomous navigation system that determines a route and simply guides a visually impaired person from point to point could prove to be disorienting and frightening. “Even with [biological] guide dogs, the human has full control of the decision making. They tell the dog where to go, and it’s responsible for short-term navigation—avoiding potholes or an emergency they couldn’t predict,” says Lee. A successful robot guide dog would also need the ability to make localized, short-term decisions. This realization was a game changer for Kim’s project. Rather than taking total control of navigation, the robot would need to take cues from its handler, intervening only when an unexpected obstacle required a detour or response.
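That division of labor can be sketched as a simple arbitration rule: follow the handler’s cue by default, and override it only when a hazard gets too close. Everything below (the names, the one-meter threshold, the detour logic) is a hypothetical illustration of the idea, not code from Kim’s lab.

```python
from dataclasses import dataclass

@dataclass
class Command:
    heading_deg: float   # desired walking direction
    speed_mps: float     # desired walking speed

SAFE_DISTANCE_M = 1.0    # assumed clearance before the robot intervenes

def arbitrate(handler_cmd: Command, obstacle_distance_m: float,
              detour_heading_deg: float) -> Command:
    """Follow the handler's cue unless an obstacle forces a local detour."""
    if obstacle_distance_m >= SAFE_DISTANCE_M:
        # No hazard: the handler keeps full control, as with a real guide dog.
        return handler_cmd
    # Hazard: slow down and steer around the obstacle, the robotic
    # equivalent of a dog sidestepping a pothole.
    return Command(heading_deg=detour_heading_deg,
                   speed_mps=min(handler_cmd.speed_mps, 0.5))

# Example: the handler cues "straight ahead" at walking pace,
# but a box sits 0.4 meters away.
print(arbitrate(Command(0.0, 1.2), obstacle_distance_m=0.4, detour_heading_deg=20.0))
```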
Of course, even short-term navigation posed a new set of problems in a real-world setting. For that challenge, Kim turned to Joydeep Biswas, a former assistant professor at CICS who now leads the Autonomous Mobile Robotics Laboratory at the University of Texas at Austin. Biswas specializes in creating artificial intelligence (AI) algorithms that allow robots to navigate in unstructured environments like a crowded hallway or busy sidewalk. His first step in training the robot’s algorithm was to allow a human subject to operate it with a joystick, modeling how to navigate through various environments. Through machine learning, the robot would then analyze the human’s choices. “We call this representational learning,” says Biswas. “[The system] compares the route it would choose to the route the human took and says, ‘There must be some reason the human made that decision.’” From there, the algorithm creates a visual representation of the landscape to distinguish physical elements that influenced the human’s choices. Using this method, the robot learns to distinguish grass from a sidewalk, as well as obstacles that need to be avoided.
Above: Still-frames demonstrating the robot guide dog’s obstacle avoidance.
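In miniature, that compare-and-adjust idea looks something like the sketch below: whenever a naive planner’s route looks as cheap as the human’s, the terrain costs are nudged until the human’s choice wins out. The terrain labels, routes, and update rule are simplified assumptions for illustration, not Biswas’s actual method.

```python
import numpy as np

terrains = ["sidewalk", "grass", "gravel"]
costs = np.ones(len(terrains))   # start by treating all terrain as equal

def route_cost(route: list[int]) -> float:
    """Total cost of a route, expressed as a list of terrain indices."""
    return float(sum(costs[t] for t in route))

# Demonstrations: the human stayed on sidewalk (0) where an
# equal-cost planner would have cut across grass (1).
demos = [
    {"human": [0, 0, 0, 0], "planner": [0, 1, 1, 0]},
    {"human": [0, 0, 0],    "planner": [1, 1, 0]},
]

lr = 0.1
for _ in range(50):
    for demo in demos:
        # "There must be some reason the human made that decision":
        # if the planner's route looks at least as cheap, penalize the
        # terrain the human avoided and reward the terrain the human used.
        if route_cost(demo["planner"]) <= route_cost(demo["human"]):
            for t in demo["planner"]:
                costs[t] += lr
            for t in demo["human"]:
                costs[t] -= lr
            costs = np.clip(costs, 0.1, None)

for name, c in zip(terrains, costs):
    print(f"{name}: {c:.2f}")   # grass ends up costlier than sidewalk
```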
Guide Dog 2.0
Last March, Kim and his team debuted their work at Cybathlon, an international competition for adaptive technologies. The competition required a handler with blindness to cross a room with the help of a robot, weaving around obstacles like boxes and chairs. The robot performed well while navigating around obstructions, but quickly revealed its limitations when it dodged a water bottle and hit a wall—literally and figuratively. “It can’t back up yet,” Gunn notes.
Despite their lackluster results, Kim saw the competition as invaluable in teasing out the next set of issues his team needs to resolve. “Our handlers gave us very honest feedback as we got closer to the competition, telling us what worked and what didn’t,” he says. “In some ways, the handlers worked even harder to prepare than we did, which told us how much adaptability we could expect from a person using the technology.”
By any measure, the project team has a long way to go in making their robot guide dog competitive with its biological counterpart. But with a project timeline of 10 to 12 years, Kim remains hopeful for an end product that will make a difference to the visually impaired community. He envisions the robot progressing through four levels of development. First, it needs to be able to identify a sidewalk and stop for obstacles. Next, it must develop a more sophisticated understanding of its environment, learning the difference between pavement and a sidewalk. (The project team is now working to master this level.) The third level marks the point at which the robot surpasses the capabilities of a biological guide dog, identifying room numbers, memorizing the names of train stations, and becoming a truly autonomous navigation system. At the fourth and final level, Kim expects his robot to be capable of the bidirectional communication Biswas imagines, and he projects that it will be able to perform complex tasks like retrieving dropped items. “At that point, the robot will have gone beyond just being a navigational system,” says Kim. “It’s more of a life assistant.”
"Often engineers will say, 'Look, we know this is what people with impairments need, so let's make it.' But when we actually talk to stakeholders their needs can be quite different."
—Sunghoon Ivan Lee
There’s little doubt that a robot guide dog would be an asset to the visually impaired community; the time and expense of training a guide dog make them hard to come by for many. According to Kim, the average guide dog costs more than $50,000 in training, food, and medical care—not to mention the heartbreak of its limited life span. A robot could bring significant cost savings, and when one reached the end of its usable life, a replacement could simply be programmed to pick up where its predecessor left off.
However, a cost-benefit analysis may not ultimately decide the success of Kim’s innovation; that may be determined by how willing people with visual impairments are to allow a robot to handle tasks their dog previously managed. Kim is keenly aware of this challenge, but does not see his robot as an either/or choice. “We’re not trying to replace a dog, but less than 2 percent of people with blindness use a guide dog because of the supply issue,” says Kim. “If we could meet that need, it would be worth it. And I expect many people would still want a dog as a companion.”
As for Gunn, would she consider giving up a living guide dog for a robotic one? “I don’t know,” she muses, cradling her chin in one hand. “If I’ve learned anything, it’s to never say never.”
This story originally appeared in the Winter 2024 issue of Significant Bits Magazine.