The study of language traditionally focused on spoken languages, leading to a pervasive, albeit often unconscious, assumption that language is inherently vocal-auditory. However, the emergence of sign language linguistics has revolutionized this understanding, demonstrating conclusively that language is a cognitive capacity not tied to a specific modality. Sign languages, such as American Sign Language (ASL), British Sign Language (BSL), French Sign Language (LSF), and many others, are not mere pantomime or simple gesture systems, nor are they visual representations of spoken languages. Instead, they are fully fledged, complex natural languages with their own intricate grammatical structures, rich vocabularies, and dynamic expressive capabilities, developed organically within Deaf communities.
These languages operate in the visual-manual modality, utilizing the hands, arms, body, and face to convey meaning. This distinct modality profoundly shapes their linguistic features, leading to unique organizational principles that parallel and diverge from those found in spoken languages. Investigating the linguistic features of signs, particularly within the context of sign languages, provides invaluable insights into the universal properties of human language and the diverse ways in which these properties can be realized. This exploration encompasses their sub-lexical components, their morphological complexities, their syntactic structures, and their semantic and pragmatic nuances, all of which are orchestrated by the visual-manual channel.
The Visual-Manual Modality and its Foundations
The fundamental difference between spoken and signed languages lies in their modality. Spoken languages are linear, sequential, and auditory-vocal, relying on sound waves produced by the vocal tract and perceived by the ear. Sign languages, in contrast, are spatial, simultaneous, and visual-manual, utilizing movements and configurations of the hands, arms, and body, perceived by the eye. This distinction is not superficial; it deeply influences the very architecture of their grammar, enabling simultaneous expression of information that would be sequential in spoken language. The signing space—the three-dimensional area in front of the signer’s body—is a crucial linguistic resource, used not merely for general communication but as a grammatical tool for conveying a multitude of linguistic relationships.
Phonology: The Building Blocks of Signs
Just as spoken languages break down words into smaller, meaningless sound units called phonemes, sign languages decompose signs into sub-lexical components that are analogous to phonemes, often referred to as “cheremes” or “primes.” These primes are not meaningful in isolation but combine systematically to form meaningful signs. The simultaneous nature of sign language phonology means that several parameters are articulated concurrently, unlike the sequential nature of phonemes in spoken words. The primary phonological parameters in most sign languages include Handshape, Location, Movement, Orientation, and crucially, Non-manual Features.
Handshape (HS): This parameter refers to the specific configuration of the hand(s). Sign languages utilize a finite set of distinct handshapes, which are typically iconic in origin but become arbitrary and contrastive within the system. Examples include open palm (often ‘B’ or ‘5’), fist (‘A’ or ‘S’), index finger extended (‘G’ or ‘1’), cupped hand (‘C’), or a handshape resembling a specific letter of the manual alphabet. A change in handshape, while keeping other parameters constant, can create a minimal pair, distinguishing two entirely different signs (e.g., the handshape difference between “CARRY” and “WALK” in some sign languages). The precise configuration, including the position of fingers, thumb, and wrist, is phonologically significant.
Location (LOC): This refers to the specific place in the signing space where the sign is produced. The signing space extends from the top of the head to the waist, and from shoulder to shoulder, including locations on the body itself. Common locations include the forehead, temple, nose, chin, chest, upper arm, or a neutral space in front of the body. A classic ASL example is the triplet "SUMMER" (forehead), "UGLY" (nose), and "DRY" (chin), which share handshape and movement and differ primarily in location. The selection of location can be precise, differentiating signs with otherwise identical handshapes and movements.
Movement (MOV): This parameter describes the trajectory, direction, and manner of the hand(s) during the production of a sign. Movements can be straight, circular, arcing, zigzag, repetitive, sustained, or momentary. The path, speed, and rhythm of a movement are all phonologically contrastive. For example, a single, sharp movement might denote an instantaneous action, while a repeated, sustained movement might indicate continuity or habituality. The distinction between “SIT” (a downward movement) and “CHAIR” (a repeated downward movement, indicating the noun) often rests on this parameter.
Orientation (ORI): This refers to the direction the palm or fingers are facing relative to the signer’s body or the viewer. Palms can be oriented up, down, forward, backward, or to the side. Fingers can point up, down, or forward. A change in orientation can distinguish signs, such as "CHILDREN" versus "THING" in ASL, which share handshape, location, and movement but differ in palm orientation (down versus up). This subtle parameter ensures precise articulation and differentiation of signs.
Non-manual Features (NMF): These are perhaps the most distinctive and pervasive phonological parameters in sign languages, playing roles that extend far beyond simple facial expressions. NMFs include facial expressions (e.g., eyebrow raises, furrowed brows, mouth movements, puffed cheeks, lowered chin), head tilts, body posture shifts, and eye gaze. While some non-manuals convey emotion, many are integral phonological components of signs, distinguishing lexical items. For example, a particular mouth movement might be a phonological part of a specific sign (e.g., the "th" mouth posture marking carelessness or "cha" marking unusually large size in ASL), not just an emotional overlay. They are crucial for distinguishing lexical items and also contribute significantly to morphological and syntactic marking.
The simultaneous nature of these parameters means that a single sign is a holistic gestural configuration rather than a sequence of discrete units. This “duality of patterning,” where meaningless units combine to form meaningful ones, is a universal feature of human language, present in both modalities.
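The parameter bundle described above can be sketched as a simple data structure. In this Python toy, a sign is a simultaneous bundle of parameters, and two signs form a minimal pair when they differ in exactly one parameter. The entries and parameter labels are illustrative simplifications (loosely modeled on the SIT/CHAIR pair mentioned earlier), not a real transcription system:

```python
from dataclasses import dataclass, asdict

# Toy model: a sign as a simultaneous bundle of phonological parameters.
# Parameter values are simplified labels, not a real notation system.
@dataclass(frozen=True)
class Sign:
    handshape: str
    location: str
    movement: str
    orientation: str
    nonmanual: str = "neutral"

def differing_parameters(a: Sign, b: Sign) -> list[str]:
    """Return the names of the parameters on which two signs differ."""
    da, db = asdict(a), asdict(b)
    return [k for k in da if da[k] != db[k]]

def is_minimal_pair(a: Sign, b: Sign) -> bool:
    """Two signs form a minimal pair if exactly one parameter differs."""
    return len(differing_parameters(a, b)) == 1

# Hypothetical entries for the noun-verb pair discussed in the text:
sit   = Sign(handshape="H", location="neutral", movement="single-down", orientation="palm-down")
chair = Sign(handshape="H", location="neutral", movement="repeated-down", orientation="palm-down")

print(differing_parameters(sit, chair))  # ['movement']
print(is_minimal_pair(sit, chair))       # True
```

The check treats all five parameters uniformly, mirroring the claim that any one of them can carry a lexical contrast on its own.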
Morphology: Building Meaningful Units
Sign languages exhibit rich and complex morphological systems, demonstrating how new meanings are created by combining or modifying existing signs. This includes both derivational processes (forming new words from existing ones) and inflectional processes (modifying words to indicate grammatical categories like tense, aspect, and agreement).
Derivational Morphology: This process creates new lexical items. A classic example is the noun-verb pair distinction, often achieved through movement modification. For instance, in many sign languages, a single, sustained movement might form a verb (e.g., “TO SIT”), while a repeated, diminutive movement using the same handshape and location creates the corresponding noun (e.g., “CHAIR”). Similarly, agentive nouns can be derived from verbs, often by adding a specific movement or location shift, such as moving the sign for “TEACH” to represent “TEACHER” by adding a downward movement indicating a person.
Inflectional Morphology: This is where the visual-manual modality truly shines in its unique morphological strategies, leveraging space and movement extensively.
- Verb Agreement: Unlike spoken languages, which often use prefixes or suffixes for agreement, sign languages utilize spatial modification of verbs. Verbs can be “directional” or “agreeing,” meaning they change their form to indicate the subject and object of the action by moving from the location representing the subject to the location representing the object. For example, the sign for “GIVE” might start at the signer’s location (representing ‘I’) and move towards a point in space assigned to another person (representing ‘you’ or ‘him/her’). This incorporates subject-object agreement directly into the verb’s movement, making explicit who is performing the action and on whom. This spatial indexing is a highly efficient way to encode grammatical relations.
- Aspect: Sign languages convey aspect (how an action unfolds over time—e.g., continuous, habitual, durative, iterative) through modifications of the sign’s movement. For example, a sign might be repeated rapidly to indicate habitual action (“always do”), signed with a slow, sustained movement to show duration (“do for a long time”), or signed with a trembling or tense movement to indicate intensity or effort. This allows for nuanced temporal information to be embedded directly within the verb.
- Number: Pluralization in sign languages is often achieved through reduplication, spatial distribution, or the use of specific numerical classifiers. For instance, a sign might be repeated multiple times across the signing space to indicate multiple referents, or a classifier for “people” might be swept across a line to indicate “many people.”
- Temporal Markers: Tense is often marked by specific lexical signs (e.g., “PAST,” “FUTURE,” “YESTERDAY”) placed at the beginning or end of a sentence. However, a “timeline” concept is also prevalent, where past events are referred to by signing behind the signer, present events are signed in the immediate front, and future events are signed in front and away from the signer.
Compounding: Sign languages also form compound signs by combining two or more existing signs to create a new concept, often with a meaning that is not simply the sum of its parts. A well-known ASL example is “THINK” + “MARRY” → “BELIEVE.” The phonological parameters of the original signs often undergo assimilation or reduction in the compound sign.
Classifiers: These are a particularly salient and productive morphological system in many sign languages, acting as a bridge between morphology and syntax. Classifiers are handshapes that represent a class of nouns (e.g., a “vehicle” handshape, a “person” handshape, a “flat surface” handshape). These handshapes are then used in combination with movement and location to describe the noun’s movement, location, or state in space. For example, a “vehicle” classifier handshape could be moved in an arc to describe a car driving around a bend, or positioned beside another classifier to show where a car is parked relative to a second vehicle. Classifiers are highly iconic and allow for a very compact and visually descriptive way of conveying information about objects and their actions.
Syntax: Sentence Structure in Space
The syntactic organization of sign languages leverages the visual-manual modality to structure sentences in sophisticated ways, often enabling simultaneous encoding of information that is sequential in spoken languages.
Word Order: While many sign languages exhibit a flexible word order, some, like ASL, often follow a Subject-Verb-Object (SVO) pattern, similar to English. However, this can be less rigid due to the extensive use of spatial verb agreement and non-manual markers. Topic-Comment structures are also very common, where the topic is established first, often marked by a raised eyebrow and head tilt, followed by the comment.
Use of Space (Grammatical Space): The signing space is a dynamic grammatical arena.
- Referential Indexing: Nouns or pronouns are often established in specific locations in the signing space. Once a referent is assigned a spatial locus, subsequent references to that entity can be made simply by pointing to or directing a sign towards that established locus. This is akin to using pronouns but with a powerful spatial dimension that makes coreference explicit and unambiguous.
- Verb Agreement: As discussed under morphology, verbs move between established loci in the signing space to indicate subject and object, thereby encoding agreement spatially rather than through affixes.
- Narrative Structure: Signers can use different areas of the signing space to represent different characters, scenes, or viewpoints in a narrative. This “spatial mapping” allows for complex narratives to unfold visually, making it easy to track multiple participants and their interactions.
- Role Shifting (Constructed Action/Dialogue): A particularly powerful syntactic device is “role shifting,” where the signer physically shifts their body, head, and eye gaze to adopt the role of a character in a narrative, directly performing their actions or voicing their dialogue (through direct quotation in sign). This integrates direct speech/action into the narrative flow seamlessly and visually.
Non-manual Markers (Syntactic Function): Beyond their phonological roles, non-manual features are indispensable for marking grammatical structures at the sentence level.
- Questions: Yes/No questions are typically marked by raised eyebrows, a slight head tilt, and a sustained eye gaze, which must co-occur with the signed lexical items. Wh-questions (who, what, where, when, why) are marked by furrowed brows, a forward head tilt, and often a specific mouth posture.
- Negation: Negation can be indicated by a head shake that co-occurs with the negative sign or the entire negative phrase. Other negative non-manuals include squinted eyes or a downward turn of the mouth.
- Conditionals: Conditional clauses are often marked by raised eyebrows and a head tilt during the “if” clause, followed by a neutral or different non-manual for the “then” clause.
- Topic Marking: As mentioned earlier, raised eyebrows and a head tilt often mark the topic of a sentence.
- Adverbial Phrases: Certain non-manuals can function as adverbs, modifying the signed action to indicate intensity, duration, or manner (e.g., puffed cheeks for “very large” or a tight lip for “carefully”).
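The way a non-manual marker spreads over a span of manual signs is conventionally shown in linguistic glossing as a labeled line above the glosses marking the marker’s scope. This Python toy renders such a gloss; the example sentence, span, and marker label are illustrative:

```python
def gloss(signs: list[str], marker: str, start: int, end: int) -> str:
    """Render sign glosses with a non-manual marker line spreading
    over signs[start:end], in the style of linguistic glossing."""
    words = " ".join(s.upper() for s in signs)
    # Character offsets of each gloss within the joined line.
    offsets, pos = [], 0
    for s in signs:
        offsets.append((pos, pos + len(s)))
        pos += len(s) + 1
    span_start = offsets[start][0]
    span_end = offsets[end - 1][1]
    # Marker line: underscores over the span, label at its right edge.
    top = " " * span_start + "_" * (span_end - span_start) + marker
    return top + "\n" + words

# A negating headshake spreading over "NOT BUY HOUSE":
print(gloss(["John", "not", "buy", "house"], "neg", 1, 4))
```

The rendered underline makes the co-occurrence requirement concrete: the headshake is not a separate word but a suprasegmental marker timed to a span of manual signs.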
Clause Structure and Recursion: Sign languages exhibit recursive properties, meaning phrases and clauses can be embedded within other phrases and clauses, allowing for the expression of infinitely complex ideas from a finite set of elements. For example, relative clauses are marked through specific non-manuals (e.g., narrowed eyes or backward head tilt) over the entire relative clause.
Semantics and Pragmatics: Meaning and Use
The study of meaning and language use in sign languages reveals principles common to all human languages, alongside modality-specific manifestations.
Iconicity and Arbitrariness: While many signs appear to be iconic (i.e., they visually resemble what they represent, like the sign for “HOUSE” or “DRINK”), sign languages are not purely iconic. Over time, signs become conventionalized and often lose their direct iconic resemblance, becoming arbitrary. The relationship between signifier and signified in mature sign languages is largely arbitrary, just as it is in spoken languages. However, sign languages often retain a higher degree of iconicity in their lexicon and especially in their morphological and classifier systems, leveraging the visual channel to directly represent properties or actions. This balance between iconicity and arbitrariness is a fascinating aspect of their design.
Polysemy and Homonymy: Like spoken words, signs can have multiple meanings (polysemy) or signs that look or are produced identically can have different meanings (homonymy), disambiguated by context or non-manuals.
Figurative Language: Sign languages are rich in metaphors, similes, idioms, and other forms of figurative language. For example, a common metaphor in ASL uses concepts of “inside/outside” to refer to “private/public” thoughts or feelings. Idioms, like “TRAIN-GO-SORRY” (missed opportunity), demonstrate the conventionalized and non-compositional nature of meaning.
Pragmatics and Sociolinguistics: Sign languages also show rich pragmatic variation based on context, audience, and social factors. Register differences exist (e.g., formal vs. informal signing). Code-switching between a sign language and a spoken language (or a signed version of a spoken language) is common in bilingual Deaf communities. Eye gaze, body orientation, and proximity are crucial pragmatic cues, indicating turn-taking, attention, and social relationships in conversations.
Universals and Differences in Sign Languages
Despite their modality difference, sign languages share many universal linguistic properties with spoken languages, providing compelling evidence for the cognitive basis of language. Both exhibit duality of patterning, recursion, displacement (the ability to talk about things not present in space or time), and productivity (the ability to create infinite new utterances). They are acquired by children in similar developmental stages, show critical periods for acquisition, and are processed in similar brain regions.
However, the differences between various sign languages (e.g., ASL, BSL, LSF) are as profound as the differences between various spoken languages (e.g., English, French, Mandarin). They are mutually unintelligible and have evolved independently in different Deaf communities, reflecting their distinct cultural and historical trajectories. Each sign language boasts its unique set of handshapes, locations, movements, and non-manuals, and combines them according to its specific phonological, morphological, and syntactic rules.
In conclusion, the linguistic features of signs, particularly as embodied in natural sign languages, represent a remarkable testament to the plasticity and ingenuity of the human language faculty. Far from being simplistic gestural systems, sign languages are complex, fully grammatical systems that harness the visual-manual modality to encode meaning with a richness and sophistication comparable to any spoken language. Their unique phonological parameters (handshape, location, movement, orientation, non-manuals) allow for the simultaneous articulation of sub-lexical units. Their morphology creatively leverages space and movement for inflectional purposes, such as agreement, aspect, and number, while employing derivational processes like noun-verb pairs and compounding. Syntactically, the use of grammatical space for referential indexing, verb agreement, and narrative structuring, alongside the pervasive role of non-manual markers for various sentence types and pragmatic functions, reveals a highly efficient and visually intuitive grammatical architecture.
The interplay between iconicity and arbitrariness in their lexicon, coupled with their capacity for figurative language, further underscores their semantic depth. The existence and systematic study of these languages have profoundly broadened our understanding of what constitutes “language,” challenging the long-held speech-centered bias in linguistics. They demonstrate that the human brain’s capacity for language is abstract, capable of manifesting in diverse sensory-motor modalities. Recognizing the linguistic complexity of signs is not only vital for deaf education and communication but also enriches general linguistic theory, providing invaluable data points for identifying universal principles of language and understanding the fascinating ways in which modality can shape linguistic structure.