Cued Speech for the Deaf

Jun 12, 2013 | David Titmus

Most profoundly deaf individuals benefit in some way from lip-reading. Though the skill is sometimes viewed as a “can” or “can’t” ability that one either possesses or lacks, nearly everyone, whether Deaf, hard of hearing, or hearing, can tell the difference between understanding a boss speaking face-to-face and following the same conversation in the dark. Like all human perception, speech comprehension draws on a range of senses, not hearing alone.

Though many advocates of Deaf rights incorrectly state that only 30% of English words can be discerned through lip-reading (the study in question says that 30% of English phonemes cannot be visually distinguished, and does not address contextualized speech), the skill of lip-reading is just that: a capability learned through practice. Lip-reading, also called “speech-reading,” is an important link to the hearing world for Deaf and hard-of-hearing individuals, but one that can be difficult and frustrating. Many consonants look alike on the lips, such as p and b, s and z, and f and v, and while context provides clues to a speaker’s meaning, it offers only limited help.

To improve comprehension for individuals who rely on lip-reading, Dr. R. Orin Cornett of Gallaudet University, the prominent American university for Deaf and hard-of-hearing students, invented cued speech in 1966. Cued speech is a system of eight handshapes used at four positions around the face; the cues distinguish phonemes that look alike on the lips. The system is meant to supplement a speaker’s natural mouth movements with the critical missing information, improving comprehension for the listener.

Though to some it looks like sign language, it is not: cued speech has a much smaller vocabulary of signs and relies more on mouth shapes than on hand gestures. It is especially effective for Deaf or hard-of-hearing individuals who go on to receive cochlear implants, because it gives them a grasp of the different phonemes before they are able to hear them clearly. Though the cued speech method originated as an educational tool for deaf, hard-of-hearing, and autistic students (the NCSA also recommends it as a learning tool for early readers who can hear), it has evolved into a communication method in its own right and is the primary mode of communication among some cuers.

Read more about cued speech on the National Cued Speech Association website, or see a video of what cued speech looks like.

by Carlin Twedt