Disability inspired his vision

Thursday, August 5, 2010

Three years ago, Arizona State University student David Hayden decided to add a math degree to the computer science degree he was already working on. But the frenetic pace of note taking in senior-level coursework proved frustrating: Hayden, born with a condition in which his optic nerves never fully developed, is legally blind and has trouble keeping up.

Hayden, working this summer in the Artificial Intelligence Group under a JPL Education Office-sponsored internship called Motivating Undergraduates in Science and Technology, had used assistive technologies in the classroom with limited success to that point. But he couldn’t wait for the state of the art to catch up to his needs, so he decided to do something about it himself.

The result, a device he built called the Note-Taker, has not only helped him in class but has earned him and his team members a major prize in the recent Microsoft-sponsored Imagine Cup, an international technology innovation competition held this year in Warsaw, Poland, that drew about 300,000 student entrants from more than 100 countries.

Competition was stiff in the Hayden team’s category, touch and tablet accessibility, which drew 50 teams from around the world. Teams were called upon to use touch and tablet technologies to improve access to education, a natural fit for Hayden’s team.

“It was as if the challenge was created just for our project,” Hayden said. “The reason the tablet PC was so important to our device is that handwritten notes are critical for STEM classes; what happens when you run into figures, diagrams or math notation?”

After an initial down-select to 10 teams and then to two in the Warsaw final, Hayden and Arizona State teammate Andrew Kelley took home first place. The six team members garnered a prize of $8,000 and tablet computers.

“The basic problem the Note-Taker solves is that, unlike existing assistive technologies, it’s portable, requires no lecture adaptation or building infrastructure, and there’s no delay when transitioning between taking your notes and viewing the board,” Hayden said. “Both are on the same screen. This allows low-vision students to keep up with note taking compared with their sighted peers.”

The Note-Taker sits flat on a desk. On one half of the screen is a digital note pad, where users enter handwritten notes; on the other half is live, streaming video from a camera that points at a target such as a chalkboard. In the video window the user can “drag” the live picture and the motors on the camera will pan and tilt to readjust its position.
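
To make that interaction concrete, here is a minimal Python sketch of how a drag gesture on the video window might be translated into pan/tilt motor commands. The article does not describe the Note-Taker’s internals, so the field-of-view values, sign conventions and the PanTiltCamera interface below are illustrative assumptions, not the team’s actual code.

```python
# Illustrative sketch only: gesture-to-motor mapping and the camera
# interface are hypothetical, not the Note-Taker's real implementation.

from dataclasses import dataclass

@dataclass
class PanTiltCamera:
    """Stand-in for a motorized camera; a real device would expose a driver API."""
    pan_deg: float = 0.0   # current pan angle (positive = right)
    tilt_deg: float = 0.0  # current tilt angle (positive = up)

    def move_by(self, dpan: float, dtilt: float) -> None:
        # A real implementation would send motor commands over USB or serial.
        self.pan_deg += dpan
        self.tilt_deg += dtilt

def drag_to_motion(dx_px: float, dy_px: float,
                   frame_w: int, frame_h: int,
                   hfov_deg: float = 60.0, vfov_deg: float = 40.0):
    """Convert a drag (in pixels) on the live video into pan/tilt deltas.

    Dragging the picture right should pan the camera left, so the scene
    appears to follow the stylus; hence the negated pan term.
    """
    dpan = -dx_px / frame_w * hfov_deg
    dtilt = dy_px / frame_h * vfov_deg  # screen y grows downward
    return dpan, dtilt

camera = PanTiltCamera()
# User drags the live image 160 px left and 60 px up on a 640x480 stream,
# so the camera pans right and tilts down to show that part of the board:
dpan, dtilt = drag_to_motion(-160, -60, 640, 480)
camera.move_by(dpan, dtilt)
print(f"pan {camera.pan_deg:+.1f} deg, tilt {camera.tilt_deg:+.1f} deg")
```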

The device’s first-generation prototype was an amalgamation of commercial, off-the-shelf technology but showed enough promise that Hayden received some seed funding from the Center for Cognitive Ubiquitous Computing, where he had been volunteering. The team is now in the second year of a two-year grant from the National Science Foundation.

Hayden said a second-generation prototype is now being lab-tested. A third generation, which he said is much closer to a marketable product, will be completed this fall. “We’re going to distribute the third-generation prototype to a dozen low-vision or legally blind students for an extended user study,” he said. “Based on the results, we’ll design a fourth-generation device that would be ready for manufacturing.”

This is Hayden’s third summer working in JPL’s Artificial Intelligence Group, where he studies how to autonomously sort remotely sensed imagery according to specific science objectives.

“In many cases, spacecraft can collect far more data than could be sent back for human observation,” he said. “So, how do we run programs onboard that can pre-select data according to autonomous measures of their scientific value?”
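
As a rough illustration of the kind of onboard triage Hayden describes, and not JPL’s actual pipeline, the Python sketch below scores each hypothetical data product with a stand-in measure of scientific value and greedily fills a fixed downlink budget; a real mission would use trained classifiers or domain-specific detectors for the scoring step.

```python
# Hedged illustration: products, sizes, and the cloud-fraction score
# are all made up to show the general idea of budgeted pre-selection.

def select_for_downlink(products, score_fn, budget_bytes):
    """Greedily pick the highest-value products that fit the downlink budget."""
    ranked = sorted(products, key=lambda p: score_fn(p) / p["size_bytes"],
                    reverse=True)  # rank by value density: score per byte
    chosen, used = [], 0
    for p in ranked:
        if used + p["size_bytes"] <= budget_bytes:
            chosen.append(p)
            used += p["size_bytes"]
    return chosen

# Toy example: prefer cloud-free scenes under a 6 MB downlink budget.
products = [
    {"id": "img_001", "size_bytes": 4_000_000, "cloud_fraction": 0.9},
    {"id": "img_002", "size_bytes": 4_000_000, "cloud_fraction": 0.1},
    {"id": "img_003", "size_bytes": 2_000_000, "cloud_fraction": 0.3},
]
score = lambda p: 1.0 - p["cloud_fraction"]  # stand-in "scientific value"
print([p["id"] for p in select_for_downlink(products, score, 6_000_000)])
# -> ['img_003', 'img_002']
```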

Does Hayden see this type of work in his future? “Absolutely,” he said. “Machine learning is the fundamental domain I’m interested in. There’s a little bit of that type of work in Note-Taker—it’s using computer vision; the work I’m doing with Steve Chien and David Thompson is as much computer vision as machine learning.”

Hayden, who plans to begin a Ph.D. program in computer science in fall 2011, is interested in developing devices that will help many more people than those with low vision. Ultimately, he sees his research bringing computers on or in the body to assist human perception, cognition and mobility; i.e., wearable computers or prosthetics. “I’m particularly interested in the applications of machine learning and computer vision to those types of technologies,” he said.

For example, the Note-Taker allows users to text-search and select their handwritten and typed notes. Selected notes can then cue audio or video that was being recorded at the time the notes were taken. “Once we get that into a nice user interface, and slim down the camera peripheral, the Note-Taker will become more attractive for fully sighted students.”
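
Here is a minimal sketch of that cueing idea, with hypothetical data structures; the Note-Taker’s actual data model is not described in the article. Each note entry records when it was written relative to the lecture recording, so a text search can return seek offsets into the audio or video.

```python
# Hypothetical sketch: NoteEntry and search_notes are illustrative,
# not the Note-Taker's real API.

from dataclasses import dataclass

@dataclass
class NoteEntry:
    text: str          # recognized handwriting or typed note text
    t_seconds: float   # time into the lecture recording when it was written

def search_notes(notes, query):
    """Return (text, seek time) for every note containing the query."""
    q = query.lower()
    return [(n.text, n.t_seconds) for n in notes if q in n.text.lower()]

notes = [
    NoteEntry("eigenvalues of symmetric matrices are real", 312.0),
    NoteEntry("spectral theorem sketch", 480.5),
]
for text, t in search_notes(notes, "eigen"):
    print(f"jump recording to {t:.0f}s: {text}")
    # a real app would seek the media player here, e.g. player.seek(t)
```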

“It’s nice if you can design technology to help a small portion of the population, but it’s even better when you can generalize it so that it can help everyone. The ideal of that would be: Can normal human vision ever be enhanced beyond its current state? It’s not clear that it’s possible, but it’s something I’m interested in considering.”
 
Story by Mark Whalen. Republished with permission from The Universe, JPL’s monthly internal news digest.