Intonation is an integral part of communication for all speakers. But can sign languages have intonation? A new study at the University of Haifa shows that signers use their faces to create intonational ‘melodies’ just as speakers use their voices, and that the melodies of the face can differ from one sign language to another.
Just as individual spoken languages have their own unique intonational “sound”, so do sign languages, and, as with spoken languages, the intonation of one community’s language differs from that of another, according to a new study at the University of Haifa. “Our discovery that sign languages also have unique intonation patterns once again demonstrates that sign languages share many central properties with spoken languages. It turns out that intonation is an essential component of any human language, including languages without sound,” explained Prof. Wendy Sandler, who led the study.
We can all recognize French, Italian, or Chinese without understanding them at all, due to the unique intonation patterns — the rise and fall of the voice — in each language. Even babies can differentiate between the familiar melodies and rhythms of the language spoken in their own environment and foreign languages, long before they know any words. In our own language, intonation helps us to identify different kinds of sentences and parts of sentences, such as questions, conditionals, imperatives, sentence topics, etc. We can do this because each comes with its own characteristic intonational pattern. In the silent languages used by deaf people, these ‘melodies’ exist as well, transmitted not by the vocal cords, but by a systematic set of facial expressions and head positions.
Prof. Sandler, the Director of the Sign Language Research Lab at the University of Haifa, has been studying the similarities between spoken and sign languages for many years. The purpose of the present study, performed with doctoral students Svetlana Dachkovsky of the University of Haifa and Christina Healy of Gallaudet University, was to determine whether the facial intonation of sign language is the same for all sign languages or whether, like spoken languages, intonation takes on different characteristic patterns in each language.
For this purpose, deaf signers of Israeli Sign Language and of American Sign Language – sign languages that are unrelated to each other both historically and culturally – were asked to sign a list of sentences that included “yes/no” questions, conditional sentences, imperatives, relative clauses, and sentence topics.
The researchers found that a set of fixed facial expressions and head movements typically accompany different kinds of sentences in each language. Some of these are the same in the two languages, but some are noticeably and systematically different.
For example, “yes/no” questions in both languages are accompanied by raised eyebrows, widened eyes, and a forward head position. But the topic of the sentence, the part of the sentence that identifies what the sentence is about (often the same as the subject of the sentence), is marked quite differently in each language. American signers raise their eyebrows and keep their heads tilted back throughout the topic, while Israeli signers typically squint their eyes and move their heads forward and downward as they sign the topic. “This finding is parallel to spoken languages. In most languages, the intonation in yes/no question sentences is very similar — the voice is raised at the end of a question – but in other types of sentences the intonation differs from one language to another,” Prof. Sandler said.
According to Prof. Sandler, the current study provides additional evidence that certain properties are universally shared across languages regardless of the physical channel through which they are conveyed. “The ability to communicate through language is unique to human beings, and the existence of fully functional, complex languages in a different physical modality makes sign languages a natural laboratory for investigating the nature of human language and cognition in our species,” concluded Prof. Sandler.
(YWN – Israel Desk, Jerusalem)
2 Responses
This likely explains why a foreigner, even one very familiar with the language spoken (e.g. a French native who understands English), will not understand a joke told in his second language (English).
It also explains why seeing the speaker helps the listener understand.
It also explains why letters, texts, emails are more easily misunderstood.
It might explain why even someone who understands, say, Hebrew will not be able to follow or maintain a conversation with Israelis. It is not only the speed but also the body language.