Could This Virtual 'Guide Dog' Help the Visually Impaired?

Kristopher Sturgis

February 10, 2016

The new technology, developed at MIT, was designed to process 3-D camera data and could help visually impaired people navigate the world through a novel braille interface.


MIT Virtual Guide Dog

The system includes a braille interface that communicates with the navigation system and conveys information about obstacles in the user's surroundings. (Image courtesy of MIT)

Those suffering from impaired vision may soon navigate the world in an entirely new way, using a new navigation system powered by an advanced low-power chip developed at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). The chip consumes only about one-thousandth of the power of a conventional computer processor executing the same algorithms.

"Working on this project, we have met many visually impaired people and most of them were truly excited about this project," says Dongsuk Jeon, a postdoc at MIT's Microsystems Research Laboratories and first author on the paper describing the work. "This kind of navigation system can help them walk around without any external help or any other easily noticeable, bulky device--which is a huge step for them."

The chip works in tandem with a special 3-D time-of-flight camera, developed by Texas Instruments, that is worn around the neck. The user also carries a small mechanical braille interface that conveys the distance to the nearest obstacle in the direction the user is moving.
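The article doesn't detail how that distance is extracted, but conceptually it reduces to scanning the depth frame for the closest valid point in the user's heading. Here is a minimal sketch in Python with NumPy; the frame dimensions, the zero-means-invalid convention, and the central region of interest are all assumptions for illustration:

```python
import numpy as np

def nearest_obstacle_mm(depth_frame: np.ndarray, roi_width: int = 80) -> float:
    """Return the distance (mm) to the closest valid point in a central
    region of interest, a stand-in for 'the direction the user is moving'.

    depth_frame -- 2-D array of per-pixel distances from a time-of-flight
    camera; 0 marks pixels with no valid return (an assumed convention).
    """
    h, w = depth_frame.shape
    center = w // 2
    roi = depth_frame[:, center - roi_width // 2 : center + roi_width // 2]
    valid = roi[roi > 0]          # drop invalid (zero) readings
    return float(valid.min()) if valid.size else float("inf")

# Example: a synthetic 240x320 frame with an obstacle 1.2 m ahead
frame = np.full((240, 320), 4000, dtype=np.uint16)
frame[100:140, 150:170] = 1200
print(nearest_obstacle_mm(frame))  # -> 1200.0
```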

Jeon said that the biggest challenge for him and his group was running very complex algorithms at low power. This remained a sticking point until the group devised an efficient hardware architecture and optimized the existing algorithms to fit it. The result was a chip that consumes only 8 mW while processing 3-D imaging data at a swift 30 frames per second.
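Those two figures imply a per-frame energy budget that is easy to check, along with the article's rough "one thousandth" comparison to a conventional processor:

```python
# Back-of-envelope energy budget implied by the article's figures.
chip_power_w = 8e-3        # 8 mW reported for the MIT chip
frame_rate = 30            # frames per second

energy_per_frame_j = chip_power_w / frame_rate
print(f"{energy_per_frame_j * 1e3:.3f} mJ per frame")   # ~0.267 mJ

# The article says a conventional processor running the same algorithms
# uses roughly 1000x the power, so:
conventional_power_w = chip_power_w * 1000               # ~8 W
print(f"conventional processor: ~{conventional_power_w:.0f} W")
```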

"The power consumption of the time-of-flight camera was one of the biggest challenges," Jeon said. "Since we are assuming a battery-powered system, high power consumption can significantly reduce lifetime between recharging. We implemented algorithms to reduce power consumption dynamically to resolve the issue."

One of the most significant algorithm modifications was changing how the chip fetches data from main memory: recently used data is kept close at hand, so the chip doesn't have to return to main memory for every fetch. The team also designed the chip to detect when the user isn't moving and signal the 3-D camera to lower its frame rate, saving additional power while it operates.
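The article gives no implementation detail, but both ideas, keeping recently used data in a small local buffer rather than re-fetching it from main memory, and throttling the camera when the user is stationary, can be sketched in Python. Every name, threshold, and frame rate below is a hypothetical placeholder, not the actual MIT design:

```python
from collections import OrderedDict

class DepthPipeline:
    """Toy model of the two power-saving ideas described above.
    Names, thresholds, and frame rates are illustrative assumptions."""

    FULL_FPS, IDLE_FPS = 30, 5          # assumed frame rates
    MOTION_THRESHOLD = 0.02             # assumed fraction of changed pixels

    def __init__(self, cache_lines: int = 64):
        # Small on-chip buffer modeled as an LRU cache: reusing a line
        # avoids a power-hungry trip back to main memory.
        self.cache = OrderedDict()
        self.cache_lines = cache_lines
        self.fps = self.FULL_FPS

    def fetch(self, address: int, main_memory: dict) -> bytes:
        if address in self.cache:            # hit: no main-memory access
            self.cache.move_to_end(address)
            return self.cache[address]
        data = main_memory[address]          # miss: costly fetch
        self.cache[address] = data
        if len(self.cache) > self.cache_lines:
            self.cache.popitem(last=False)   # evict least-recently used
        return data

    def update_frame_rate(self, changed_fraction: float) -> None:
        # If successive frames barely differ, the user is likely still:
        # tell the camera to drop its frame rate and save power.
        self.fps = (self.IDLE_FPS if changed_fraction < self.MOTION_THRESHOLD
                    else self.FULL_FPS)
```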

The final piece of the puzzle was the braille interface itself, which communicates with the navigation system and conveys information about obstacles in the user's surroundings.
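How distance is actually encoded on the braille cell isn't specified. One plausible scheme, purely an assumption here, is to quantize the obstacle distance into a few bands and raise a different pin pattern for each:

```python
# Hypothetical mapping from obstacle distance to a 6-pin braille cell,
# expressed as a tuple of raised pins (numbered 1-6). The bands and
# patterns are illustrative; the article does not describe the encoding.
DISTANCE_BANDS_MM = [
    (500,  (1, 2, 4, 5)),   # closer than 0.5 m: dense pattern, urgent
    (1000, (1, 4)),         # closer than 1 m
    (2000, (1,)),           # closer than 2 m
]

def encode_distance(distance_mm: float) -> tuple:
    for limit, pins in DISTANCE_BANDS_MM:
        if distance_mm < limit:
            return pins
    return ()                # nothing raised: path is clear

print(encode_distance(800))   # -> (1, 4)
```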

The concept, in the end, works quite simply, and although the current prototype is already far less conspicuous than its predecessors, Jeon and his colleagues believe the device can be miniaturized even further.

"We already have a practical system," Jeon says. "Now we are working to miniaturize the system further based on the same technology. If there is enough financial support, this may be available in the market within a few years."

Kristopher Sturgis is a contributor to Qmed and MPMN.


