Cognitive scientists in the United Kingdom and the Netherlands have published initial research results about the “vOICe,” a sensory substitution device (SSD) that could someday compensate for vision loss, in the June 2013 issue of Frontiers in Psychology (Cognitive Science section). The vOICe is a visual-to-auditory SSD that encodes images taken by a camera worn by the user into “soundscapes,” enabling users to extract information about their surroundings. Frontiers is an open-access, peer-reviewed academic publisher and research network.
About the Research
The study, entitled How well do you see what you hear? The acuity of visual-to-auditory sensory substitution, was authored by Alastair Haigh, David J. Brown, Peter Meijer, and Michael J. Proulx, who represent the following institutions: Queen Mary University of London; Metamodal BV, Eindhoven, Netherlands; and the University of Bath, UK.
[Please note: To date, the research team has utilized the experimental device only with fully sighted, blindfolded subjects. The success of the device with blind subjects and in a variety of sound-producing environments has yet to be determined.]
Here is more information about the study from the article introduction:
Do we see with the eyes or with the brain? Is vision a discrete form of perception, distinct from others such as audition and touch? Is it possible for those who have lost their eyesight or have been born without vision to experience visual sensation or perception? Questions such as these have occupied the minds of philosophers and scientists for centuries and now lie at the heart of modern cognitive neuroscience.
Sensory substitution devices (SSDs) aim to compensate for the loss of a sensory modality, typically vision, by converting information from the lost modality into stimuli in a remaining modality. Here we utilized sensory substitution to examine how the very first stages of learning to “see with sound” occur, and the quality of the information transfer from vision to audition as assessed with a test of acuity.
More about the vOICe’s Potential for Visually Impaired Persons
The study participants were 26 adult volunteers with no prior experience of the vOICe (4 male, 22 female; mean age 22.6 years; age range 19–32 years). All reported normal vision, with some subjects wearing corrective lenses. The following information about the vOICe testing procedures and research is aggregated from summary reports in The A to Z of Sensors, the University of Bath News, and the NL Times:
The vOICe device captures an image with a camera and converts this information into a pattern of sounds (“soundscapes”) that is delivered directly to the participant via headphones.
The conversion from visual to auditory signal depends on a left-to-right scan of the image. As the camera scans the scene, each column of the image is turned into sound: a pixel’s vertical position sets the pitch (frequency) of its tone, its brightness sets the tone’s loudness, and its horizontal position determines when the tone occurs within the scan. The user experiences each completed scan as a “snapshot” of that visual scene.
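Though the paper should be consulted for the precise encoding, this pitch-for-height, loudness-for-brightness mapping can be sketched in a few lines. The frequency range, scan time, and sample rate below are illustrative assumptions, not the vOICe’s actual parameters:

```python
import numpy as np

def encode_soundscape(image, scan_time=1.0, sample_rate=8000,
                      f_min=500.0, f_max=5000.0):
    """Convert a grayscale image (rows x cols, values 0-1) into a
    mono audio signal by scanning columns left to right.

    Each image row is assigned a sinusoid (top rows = high pitch),
    and each pixel's brightness weights the loudness of its row's
    tone during that column's time slice.
    """
    rows, cols = image.shape
    freqs = np.linspace(f_max, f_min, rows)          # pitch per row
    samples_per_col = int(scan_time * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    tones = np.sin(2 * np.pi * freqs[:, None] * t)   # (rows, samples)

    # Mix each column: brightness-weighted sum of the row tones.
    signal = np.concatenate([image[:, c] @ tones for c in range(cols)])

    # Normalize to keep the output within [-1, 1].
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# Example: a bright diagonal from bottom-left to top-right.
img = np.eye(64)[::-1]
audio = encode_soundscape(img)
```

Scanning a bright diagonal line this way produces a rising pitch sweep over the one-second scan, the kind of audible regularity that users gradually learn to associate with visual shapes.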
Subjects were asked to perform a standard eye chart test, the Snellen Tumbling E test, in which participants view the letter E rotated in four different directions and presented in various sizes. Normal best-corrected visual acuity is 20/20: the first number is the testing distance in feet, and the second is the distance at which a person with normal vision could identify the same letter.
Blindfolded, sighted test subjects were able to achieve the equivalent of 20/400 vision: still severely impaired sight, but better than any existing sensory substitution technique, said lead researcher Michael Proulx.
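Because Snellen notation is a simple ratio, the 20/400 figure can be made concrete with a little arithmetic. This back-of-envelope sketch uses the standard 5-arcminute optotype convention, which is general Snellen practice rather than a detail reported in this study:

```python
def snellen_decimal(test_ft, reference_ft):
    """Decimal acuity: testing distance divided by the distance at
    which a normally sighted person reads the same optotype."""
    return test_ft / reference_ft

print(snellen_decimal(20, 20))    # 1.00 -> normal 20/20 vision
print(snellen_decimal(20, 400))   # 0.05 -> the soundscape result

# A 20/400 letter must subtend the same 5-arcminute visual angle at
# 400 feet that a 20/20 letter subtends at 20 feet, so it must be
# 400 / 20 = 20 times taller on the chart.
print(400 / 20)                   # 20.0
```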
More Ongoing Eye and Brain Research
Two additional research projects are shedding new light on the way the brain’s visual processing center functions in people who are blind.
Congenital Blindness and the Visual Cortex
The first report, entitled Language processing in the occipital cortex of congenitally blind adults, examines how brain regions that are thought to have evolved for vision can take on language processing as a result of early experience.
According to the authors, studies in the past have shown that people who have been blind since birth also use the visual cortex during verbal tasks such as reading braille, and have good verbal long-term memory. It was unclear, however, if the visual cortex processed complex language, such as sentences, in the same way as in the classic language regions in the brain.
Braille Reading and the Brain’s Visual Word Form Area
The second study, entitled A Ventral Visual Stream Reading Center Independent of Visual Experience, examines the brain’s visual word form area (VWFA), a section of the brain that develops expertise for visual reading:
The visual word form area (VWFA) … is activated across writing systems and scripts and encodes strings of letters irrespective of case, font, or location in the visual field. In the blind, comparable reading expertise can be achieved using braille. This study investigated which area plays the role of the VWFA in the blind.
In an interview with the New York Times, Amir Amedi, Ph.D., a neuroscientist from the Hebrew University of Jerusalem and one of the study authors, summarized the implications of their research:
“It doesn’t matter if people are reading with their eyes or by their hands,” said Dr. Amedi. “They are processing words. What we suggest is that what this area is doing is building the shape of the words, even though we call it the visual word form area.”
He and his colleagues belong to a small community of neuroscientists who are trying to demonstrate that the brain’s regions are multisensory. Although the theory has not become mainstream, it has been gaining acceptance in the past decade.
“We hope that this paper will be another break in convincing people,” Dr. Amedi said. “But one or two or 10 papers is not enough to change the textbook. It might take another decade, so we can prove that we haven’t missed something.”