Corresponding author: Claudia S. Bianchini ( claudia.savina.bianchini@univ-poitiers.fr ) Academic editor: Olga Iriskhanova
© 2021 Léa Chevrefils, Claire Danet, Patrick Doan, Chloé Thomas, Morgane Rébulard, Adrien Contesse, Jean-François Dauphin, Claudia S. Bianchini.
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits copying and distributing the article for non-commercial purposes, provided that the article is not altered or modified and the original author and source are credited.
Citation:
Chevrefils L, Danet C, Doan P, Thomas C, Rébulard M, Contesse A, Dauphin J-F, Bianchini CS (2021) The body between meaning and form: kinesiological analysis and typographical representation of movement in Sign Languages. Languages and Modalities 1: 49-63. https://doi.org/10.3897/lamo.1.68149
Most research on Sign Languages (SLs) and gesture is characterized by a focus on the hands, considered the sole body parts responsible for the creation of meaning. The corporal dimension of signs and gestures is thus blurred by hand dominance. This particularly impacts the linguistic analysis of movement, which is described as unstable, even idiosyncratic. Boutet’s Kinesiological Approach (KinApp) repositions the speaker’s body at the core of meaning emergence: how this approach considers and conceptualizes movement is the subject of this article. First, the reasons that led SLs researchers to neglect the analysis of the signifying form of signs, focusing on the hand, are set out. The following part introduces KinApp which, through a radical change of point of view, reveals the simplicity and stability of movement: understanding the cognitive and motor reasons for this stability is the subject of research whose methodology is described. Setting the body at the center of analysis requires a descriptive model capable of accounting for the signifying form of SLs, thus going beyond existing transcription systems. The last part is devoted to the presentation of Typannot, a new transcription system, aimed not only at a kinesiological description of SLs but also at helping researchers change how they understand and analyze movement.
corpus linguistics, gestures, grapholinguistics, kinesiological approach, motion capture, movement, sign languages, transcription systems
It is often said that the deaf, just like the Italians, “speak with their hands”, as if the meaning of sign languages (SLs), and of co-verbal gestures, emerged exclusively from the hands. The body of the speaker, signer or gesticulator, although physically present, almost disappears behind those hands, which capture the full attention of the listener... and of the researchers! This exclusion of the body particularly impacts the linguistic analysis of movement. As the hands can move from, and to, an infinite number of locations, plotting a multitude of possible trajectories in between, their movement is considered difficult to analyze, if not irrelevant, because of its instability and seemingly idiosyncratic nature.
The work of Dominique Boutet aims at placing the speaker’s body back at the center of attention, not only as the origin of the articular constraints that limit the possibilities of hand movement, but as the heart of the emergence of meaning in signs and gestures. For this, the upper limb – fingers, palm, forearm and upper arm – is taken into account as a single set. Combining phonology with biomechanics, Boutet’s Kinesiological Approach (KinApp) constitutes a descriptive and representational model of the body, as well as an explanatory model of the emergence of meaning in SLs and gestures. The way in which movement, especially in SLs, is understood and conceptualized by KinApp is the core of this article.
The first part presents two reasons that led SL researchers to focus on the signified and on the functions of signs: one is linked to the process of recognizing SLs as an “object” of linguistic study (§I.A), which led to the refutation of their corporeality; the other is due to the oral (but not vocal!) nature of SLs (§I.B), and to the difficulties that corporeality generates in the graphic representation of the signifying form. These two reasons reinforce each other in a vicious circle, thus entrenching the neglect of the body.
The second part focuses on KinApp, which shows the importance of the body as creator of meaning. In this approach, movement is no longer described as the trajectory connecting two locations, but as a gestural unfolding involving the whole upper limb, observed from the intrinsic point of view of its segments: this makes it possible to describe the parameter as a simple and, above all, stable element (§II.A). Continuing within KinApp, Chevrefils’ thesis (forthcoming) investigates the motor and cognitive economy which seems to govern the stabilization process leading to the creation of meaning, via the study of a hybrid corpus mixing video and motion capture (MoCap) (§II.B).
This new way of seeing body and movement cannot be tested without a new descriptive model capable of accounting for the signifying form of SLs, thus going beyond the limits of existing transcription systems. The objective of the third part is to present a new transcription system, called Typannot (§III.A). The purpose of the system is not only to allow a kinesiological description of SLs, but also to help researchers change how they understand and analyze movement (§III.B).
Long considered a very elaborate pantomime, SLs have, since the founding works of
The argument making it possible to demonstrate the linguistic nature of SLs goes through the validation of the criteria which, at the time of Stokoe, were considered as defining a language (
The validation of statements (3) and (5) results from the work of Stokoe (1960; Battison et al., 1965), identifying cheremes (i.e., phonemes) in manual parameters (hand shape, orientation, location and movement) and kinemes (i.e., morphemes) in whole signs (note that, for Stokoe, meaning lies exclusively in the hand). However, the validation of statement (2) is problematic: observing SLs reveals a strong iconic motivation linking referent to signifier, which seems to go against the arbitrariness of linguistic signs (Fig.
Early SLs research therefore strives to demonstrate that iconicity is irrelevant, which is tantamount to showing that the signifying form of signs does not matter.
Shortly after Stokoe’s research, SLs captured the interest of researchers (e.g., Klima and Bellugi [1979]) in Chomsky’s rising generativist paradigm (1968). They intended to test their theories on universal grammar using languages which, by their visual-gestural nature, differ from all vocal languages (VLs). Their aim was to demonstrate the universality of grammar, independently of the way in which language is produced: the corporeality of SLs, after having captured the generativists’ interest, became an element whose linguistic irrelevance had to be demonstrated.
It was therefore within the framework of a body denial that SLs research developed from the ‘60s. In the ‘80s, while generativist research was still in full swing in the USA and in a large part of Europe, the work carried out in France by Cuxac and his team (
One explanation for the lack of interest in the signifying form of signs, unrelated to linguistic theories, lies in the difficulty of graphically representing SLs. Indeed, like most languages in the world, SLs are exclusively oral languages, that is to say they do not have a writing system (although many systems have tried, in vain, to gain acceptance among the Deaf; see
The comparison between the processing phases of VLs and SLs oral corpora highlights the implications that representation problems have on linguistic analyses. Researchers working on corpora of oral VLs record their raw data on audio or audiovisual media; thereafter, they can, through IPA or other adaptations of existing (phonographic) systems, produce a transcription of the meaningful form of the language; if they do not master the language in question or if they want to make the corpus more accessible, they may add a word-by-word and/or a sentence-by-sentence translation; finally, based on the transcription, they produce annotations – i.e., labels, reflecting their theories, assumptions and methodologies – on which their analyses will be based (Table
Level of analysis for the phrase in oral Italian (var. Rome) “the cat chases the dog” (revisited from example 4 of Pizzuto and Pietrandrea [2001]).
1a | source | not available, or inspectable only by asking the researcher | ||||
1b | phonetic transcription | εr | gatto | insegwe | εr | kane |
1b' | orthographic transcription | il | gatto | insegue | il | cane |
1c | annotation | det&m&sg | CAT-m&sg | CHASE-3sg | det&m&sg | DOG-m&sg |
1d | translation | "the cat chases the dog" |
Notwithstanding the researcher’s best intentions of objectivity, all pre-analysis operations apply filters that attach subjectivity to the data examined by the researcher. Even if “transcription and its notation system [...] incorporate the theoretical presuppositions of the transcriptor on the written modes of oral representation” (Mondada, 2020), it is indeed this operation that allows maintaining the link between raw data and annotations.
Like those working on VLs, SLs researchers record data in video format. However, not being able to transcribe the signifying forms of the signs, given the absence of a graphic system, they replace them with “glosses”, sign-by-word translations in the researcher’s reference VL (
Level of analysis for the phrase in Italian Sign Language (var. Rome) “the cat chases the dog” (revisited from examples 1 and 3 of Pizzuto and Pietrandrea [2001]).
2a | source | not available, or inspectable only by asking researcher | ||||||
2b | phonetic transcription | not available by lack of notation systems | ||||||
2b' | orthographic transcription | not available by lack of notation systems | ||||||
2c | multilinear annotation | { | LH | [XDOG] | [ACLs | ] | [A>CRUN] | |
RH | [XCAT] | [BCLs] | [B>CCHASE] | |||||
2c' | linear annotation | [LHDOGX] [LHCLsA] [LHCLsA & RHCATX] [LHCLsA & RHCLsB] [LHRUNA>C & RHCHASEB>C] | ||||||
2d | translation | "the cat chases the dog" |
The lack of a graphical representation system makes it necessary to switch from a “transcription” to a “translation” operation, thus losing the link between raw data and annotations. The researcher then bases his analyses not on the written form of an oral language, but on the written form of another language – which already creates biases – that, in addition, does not even share the production-reception modality of the language under analysis. So, it is not just a question of a missing link, but of a true rift between raw data and the analyses carried out on those data.
Although many SLs researchers persist in using this practice – once again showing the influence of body denial on SLs analysis –, since the 2000s the widespread use of software like ELAN (
There is nevertheless a minority of researchers (among whom we may mention Antinoro Pizzuto, Garcia, Crasborn, Prillwitz, and the GestualScript team) who consider that the absence of this transcription line is a major problem, to be solved through the development of specific instruments for graphically representing SLs. The approaches among these researchers are very different: some try to transcribe SLs using systems invented to write them (or other body practices, such as dance); others reflect on the theoretical characteristics that a SL graphic system should have to satisfy the various writing and transcription functions, without however proposing concrete graphic solutions; finally, others take a practical turn and build a transcription system to meet their research requirements (see
Although – as it has been said – solutions do exist, they are adopted only by a small part of SLs linguists, and not just because of a lack of interest in the question: existing systems have often been developed to meet the specific descriptive requirements of a research project, and their graphic principles raise many critical issues. For example, these systems have problems with the correspondence between the number of characters and the information provided (some do not have enough characters to represent all the finesse of SLs, while others have so many that there are multiple ways to write the same signifying form, hampering the learning process and making it impossible to compare transcriptions); they are difficult either for the researcher to read (preventing a qualitative assessment of the corpus content) or for linguistic analysis software to parse (making quantitative analysis of the corpus impossible); they sometimes seek to be exhaustive without however being easily extendable to unforeseen cases, preventing their extension to other SLs or to human gesture; finally, the use of these systems is extremely time-consuming because of their inherent complexity and/or the lack of computer tools to facilitate their use. For more details on the different systems, see
The combination of body denial and of the difficulty of graphically representing signifying forms imposes many biases on SLs linguistic studies. The consequences are noticeable, especially in the study of movement. The absence of a graphic system to represent it, but also of a descriptive model allowing its formal characteristics to be understood, means that movement – when considered at all – is described as a complex and unstable parameter, difficult to analyze and model.
Bypassing the search for reasons preventing the adoption of these graphic systems by linguists, their comparison brings out another common characteristic: although they were created with the aim of representing SLs shape, these systems offer a limited and frozen vision of signs. Moreover, all these systems are organized around the hand: its shape, location, orientation (all grouped under the revealing name of “manual parameters”) are described through their superficial manifestations, offering the vision of a hand “xeroxed” in the graphic space (
This trend, as already noted in §I, is not specific to graphic systems: it has permeated SLs research since 1960. Even research on phonology, for which the articular component of signs is fundamental, is limited to the hand, reducing movement to a series of successive postures. This is the case, for example, with the “Movement-Hold” descriptive model of Liddell and Johnson (1989;
Departing from these research practices - which entail a hierarchy between parameters, with movement downgraded to a secondary role, and reduce the body to only one of its segments (the hand) - Boutet developed his KinApp (2005; 2008; and later works; see also the article by Morgenstern and Chevrefils, in this issue).
For Boutet, meaning arises from the articular capacities of the body, and it is therefore necessary to reposition the body and its dynamics at the core of SLs description. KinApp offers, in addition to an analysis of the perceptual form of signs (the trace drawn in the signing space by the body, seen from the outside), an articular-skeletal and intrinsic description of the upper limbs creating them (the way in which the body articulates to create this trace, the point of view being “internal” to the signer’s own segments); this second descriptive scheme is the most innovative aspect of KinApp.
Existing phonological descriptions focus on the hand, as the only segment carrying meaning, and restrict the description of movement to this same segment: the sign [NEVER]
The analysis in biomechanical terms requires examining, one by one, the possible variations of DoF of each segment on the upper limb; this analysis must adopt a frame of reference intrinsic to each segment
In Euclidean geometry, the properties of 2D shapes are studied on plane surfaces. Still governing our normal way of conceiving, measuring and quantifying space, this geometry is adopted by most descriptive models of SLs and gestures: with this approach, its extrinsic frame of reference and its focus on the hand, the simplest form a signer can produce is a straight line drawn with the hand. For a signer, however, drawing this line actually requires setting in motion a large number of DoF distributed over several segments (Fig.
New geometries, called “non-Euclidean”, have been developed since the 19th century, among which are spherical geometries, where 2D shapes are drawn on the surface of a sphere. Using this approach and analyzing each segment within a frame of reference intrinsic to it, the simplest form a signer can produce is a curve traced by the segment’s extremity. Drawing this curve only requires setting in motion a single DoF of the segment (Fig.
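This articular simplicity can be made concrete with a short numerical sketch (hypothetical segment lengths and angles, chosen only for illustration): varying a single DoF is a rotation about one axis, so the segment’s extremity stays at a constant distance both from the joint (on a sphere) and from the rotation axis (on a circle), even though the resulting trace looks curved from the outside.

```python
import math

def rotate_z(p, theta):
    """Rotate point p = (x, y, z) about the z axis by theta radians."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

# Hypothetical forearm extremity, with the joint (elbow) at the origin.
tip = (0.25, 0.0, 0.15)

# Sweep one DoF from 0 to 90 degrees in 10-degree steps.
trace = [rotate_z(tip, math.radians(a)) for a in range(0, 91, 10)]

# Every traced point stays on the same sphere (constant distance to the joint)
# and on the same circle (constant distance to the rotation axis).
radii = [math.dist(p, (0.0, 0.0, 0.0)) for p in trace]
axis_dists = [math.hypot(p[0], p[1]) for p in trace]
print(max(radii) - min(radii))            # ~0.0
print(max(axis_dists) - min(axis_dists))  # ~0.0
```

Seen from an extrinsic, Euclidean frame the trace is a “complex” arc; seen intrinsically it is the cheapest possible gesture, one DoF in motion.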
KinApp adopts a spherical geometry, allowing a simple geometrical description of what is simple from the articular (but not the visual) point of view, thus realizing a descriptive economy which helps to “decomplexify” the movement analysis: so, movement becomes “simple” to describe and model.
These three paradigmatic changes enabled Boutet to explore gestural phenomena in a new way, in particular those linked to negation. Although this type of gesture can be carried out in multiple ways, all of them allow the same message to be decoded: this is because the different realizations contain invariants, i.e. stable and recurring parameters (
A question then arises for SLs (and, by extension, for body language as a whole): within the movement dynamics, is it possible to distinguish formal invariants (which, in Boutet’s hypothesis, constitute the core of meaning creation)? The SL analysis offered by existing systems is too coarse to uncover the linguistic traits necessary for this exercise. This situation led Boutet to restructure the classic manual parameters (movement, orientation, location), replacing them with the notions of initial location (LOCini) and movement (MOV)
LOCini and MOV are central for understanding the corporeality that structures SLs, i.e. what is at stake in sign deployment. While their relationship is still a hypothesis, it is important to unveil the links between LOCini and MOV, because understanding them may demonstrate the stabilization of movement, a parameter which then becomes simple to describe and to model. This research is currently being carried out as part of Chevrefils’ thesis (forthcoming).
The hypotheses concerning the relationship between LOCini and MOV, as well as the methodology for deepening and testing them, are presented in the following section (§II.B).
Through KinApp, Boutet proposes to study movement as a stabilized process governed by economic principles: this statement is based on two hypotheses, one biomechanical and the other cognitive.
The biomechanical hypothesis is based on the “movement flow” notion (see Morgenstern and Chevrefils, in this issue). The MOV unfurling through the upper limb does not happen randomly but follows rules of inertia transfer and geometric relationships (
The formal stabilization of signs is also based on a cognitive hypothesis. This part of the “economy” is linked to the fact that, in order to carry out a gesture or a sign, the central nervous system (CNS) must have already set up a precise motor program, i.e. the instructions necessary for its smooth running. Boutet’s hypothesis relates to the nature of these instructions: the observation of signers shows that the realization of a MOV does not require any feedback, readjustment, or special attention, and takes place in one go. The absence of a feedback loop suggests that the motor program readied by the CNS is reduced to its simplest configuration, thus leading to highly economical MOV instructions. Boutet compares this cognitive functioning to a “Mobile” by the sculptor Alexander Calder: “The motor program (
Although the “motor program” notion comes from cognitivist theories – which consider the control of gestures as a centralized task of information processing (
This dynamic theory of motor control is particularly interesting because it emphasizes the relationships among segments or limbs, echoing KinApp’s treatment of the body as a single unit. Moreover, the idea of self-organization is closely linked to that of motor economy, a founding element of Boutet’s thought. Indeed, the two puppets in Fig.
It is therefore relevant to link KinApp to the dynamic theories of motor control to explain the MOV produced in SLs and, more generally, in gestures. In order to find answers to Boutet’s cognitive and motor hypotheses (see beginning of §II.B), they are further specified in the context of Chevrefils’ thesis (forthcoming). These sub-hypotheses, one of which relates to LOCini (Fig.
The green and blue hatched area in Fig.
Testing these hypotheses requires a fine and reliable graphic representation of the SLs signifying form: Typannot, described in §III, meets this requirement. However, the complexity of the task also requires a means of processing kinematic data in a quantitative and objective manner, without the latent subjectivity that permeates all intellectual enterprises. In order to efficiently explore the structuring of LSF and to test KinApp, a hybrid corpus – video and MoCap –
The first processing, on the video corpus, was mainly a data segmentation task, aiming to differentiate one sign from another, but also to distinguish the preparation phase (placement of the articulators leading to LOCini) from the signifying phase (MOV) of each sign (Fig.
As for the MoCap recordings – which provide the relative positions along the 3 orthogonal axes of each body segment – the exported raw data amount to a digital time series: since it is impossible, in that form, to distinguish the relevant information (intra-sign movements) from the irrelevant (extra-sign movements), it was necessary to synchronize the recordings and merge them with the previously established segmentation, in order to recover the areas of interest in the kinematic data (Fig.
The last step (Fig.
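The merging step described above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline: the timestamps, the single angle channel and the phase labels below are invented stand-ins for the real MoCap export and ELAN-style segmentation.

```python
# Hypothetical MoCap export: (time in s, forearm pronosupination angle in deg).
mocap = [
    (0.00, 2.0), (0.04, 3.1), (0.08, 20.5), (0.12, 41.0),
    (0.16, 58.2), (0.20, 60.1), (0.24, 12.0),
]

# Hypothetical segmentation from the video annotation: (start, end, phase).
segmentation = [
    (0.00, 0.06, "preparation"),  # articulators placed, leading to LOCini
    (0.06, 0.22, "MOV"),          # signifying phase of the sign
]

def frames_in(phase):
    """Keep only the MoCap samples whose timestamp falls in the given phase."""
    spans = [(s, e) for s, e, p in segmentation if p == phase]
    return [f for f in mocap if any(s <= f[0] < e for s, e in spans)]

intra_sign = frames_in("MOV")  # kinematic data of the signifying phase only
print(len(intra_sign))         # 4 samples fall inside the MOV span
```

Once synchronized this way, only the intra-sign samples are carried forward to the kinematic analyses; extra-sign movement is discarded.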
The comparisons arising from this “double” corpus will be examined on a selection of signs classified according to their degree of complexity: e.g., for each sign, the motion vector of each segment will be broken down into components, determining what is proper motion and what may belong to motion transferred from another segment; the result will then be compared to the Typannot transcriptions. That each level of phonological description may find its kinematic correspondence is directly related to the scientific framework set up by KinApp: Boutet’s choices were guided by his wish to participate in the deployment of MoCap technologies, still not widespread in SLs labs.
This thesis work thus proposes to bring to light and clarify the motor coupling of LOCini and MOV through a double analysis: a better understanding of the functioning of these two parameters – central to all SLs – can lead to typifying the sign shape from the start of its deployment; in this sense, this applied research descends directly from the approach built by Boutet, contributing to its development and dissemination.
The work carried out by Chevrefils (forthcoming) is not the only one stemming from Boutet’s work; Thomas (forthcoming) is also working on a thesis which, focusing on the analysis of head movements and facial expressions, fits into KinApp.
Still, proving the various hypotheses of KinApp requires a SLs transcription system allowing the description of the upper limb segments, essential for these reflections: the need to create a new form of graphic representation dedicated to SLs, as well as to gestures, which incorporates KinApp principles, is therefore unavoidable.
The KinApp principles were adopted by the GestualScript research group
Typannot development is made up of 3 parts, to be exemplified hereafter:
Prior to any explanation of Typannot, it is fundamental to clarify the difference between “typographic character” and “typographic glyph”: a character is an abstract distinctive graphic unit whose informative content is independent of its form; in order to be recognized by software, characters must be linked to a code assigned by the Unicode Consortium; a glyph is the concrete graphic realization of a character or of a combination of characters (in the case of typographic ligatures) in a defined font. Pressing the SHIFT+A keys on a keyboard sends the command “write the Unicode character 0041” to the word processor, which then displays A, A or A, depending on the chosen font (here: Arial, Impact and Curlz): these 3 different A’s all correspond to the same typographic character (A), but have different shapes because the displayed glyph is different.
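The character/glyph distinction can be observed directly in any Unicode-aware environment. The snippet below is a general Unicode illustration (not Typannot itself): a string stores abstract code points, and the glyph exists only once a font renders them; a ligature such as “ﬃ” shows how one glyph can stand for several underlying characters.

```python
import unicodedata

a = "A"
print(hex(ord(a)))    # 0x41 — the Unicode code point of the character
print(a == "\u0041")  # True — same character, however any font draws it

# The "ffi" ligature (U+FB03) is encoded as one compatibility character,
# but normalization recovers the underlying character sequence:
print(unicodedata.normalize("NFKC", "\ufb03"))  # "ffi"
```

This is exactly why a search for an abstract character can match a visually different glyph: the information lives in the code points, not in their rendering.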
In order to briefly and concretely present how Typannot works, the LOCini component is described below.
LOCini inventory consists of 18 generic characters which, to be displayed on screen, need to be associated with generic glyphs (Fig. ), the segments to be described ( ), the possible DoF ( ) and the rotation angles of the different DoF, expressed as notches ( ).
To describe a particular LOCini, the generics are organized in a rigid syntax formula (Fig.
From just 18 characters, it is possible to encode more than 4 million distinct LOCini, i.e. as many as the possible generic combinations; moreover, while the generics inventory includes characters specific to LOCini, other characters are common to several sign components, e.g. distinguishes left and right sides in LOCini, in HS, in EAct. This transversality among characters greatly limits the number of generics necessary for the description of all the SLs components: this opens the possibility of recognition of Typannot by the Unicode Consortium, the international organization which ensures the compatibility of a graphic system with all software, operating systems and browsers
The transcription carried out with Typannot results in a description that is queryable on several levels: e.g., searching for a whole LOCini, but also for all flexions regardless of the segment concerned and/or the flexion amplitude, or looking for the co-occurrence of a hand flexion and an arm adduction, and so on. In the KinApp context, this makes it possible to search for gestural invariants, stable and recurring values within signs (and co-verbal gestures), with the aim of finding what is common among different realizations of the same sign and of identifying the origin of meaning creation.
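These multi-level queries can be sketched as plain substring searches over transcription strings. The ASCII tokens below (HAND, FLEX, ADD, notch codes N1–N3) are invented stand-ins for the actual generic characters, which are not reproduced here; the point is only that one encoding supports queries at several granularities.

```python
# Hypothetical Typannot-like transcriptions, one string per sign.
corpus = [
    "HAND.FLEX.N2 FOREARM.PRONO.N1",  # sign 1
    "ARM.ADD.N1 HAND.FLEX.N3",        # sign 2
    "FOREARM.PRONO.N2",               # sign 3
]

# Query 1: any flexion, regardless of segment or amplitude.
flexions = [t for t in corpus if "FLEX" in t]

# Query 2: co-occurrence of a hand flexion and an arm adduction in one sign.
both = [t for t in corpus if "HAND.FLEX" in t and "ARM.ADD" in t]

print(len(flexions))  # 2 signs contain a flexion
print(len(both))      # 1 sign combines hand flexion and arm adduction
```

Searching across many realizations of the same sign this way is what allows recurring values, i.e. candidate invariants, to surface.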
The arrangement of generics in the formula guarantees information consistency, completeness and queryability. The typographical approach to transcription problems, via the separation between the information embedded in the character and its glyphic manifestation, as well as the recognition of these characters by the Unicode Consortium, not only ensures that Typannot is compatible with all current software and OS, but that it will remain compatible as new writing technologies are deployed. However, although deciphering these formulas is quite simple, this part of Typannot does not guarantee good readability for the human researcher.
The second part of Typannot consists of the deployment of a family of typographic fonts including on the one hand all generic glyphs, and on the other a series of compound glyphs, which are visual, synthetic representations of all information encoded by generics in a formula (Fig. ). Just as the ligature [œ] is displayed by composing “o” and “e”, the same is true to make the glyph [ ] appear by composing the formula “ ”. This procedure allows a multitude of glyphs to appear from a very limited number of characters; moreover, it ensures the queryability of the information contained in the formula while providing a more readable version of this information. Indeed, just as it is possible to find [œ] by searching for “o”, it is possible to find [ ] by searching for “ ”, because the information is contained in the abstract characters and not in their visible glyphic realization.
Creating the compound glyphs is not just a matter of drawing bodies and hands: it is a real research process in design (to identify glyphic modules that can be combined to obtain a distinct graphic representation for each formula, and to determine the way to combine them) and in linguistics (to identify which combinations to represent
As readable, coherent, exhaustive and searchable as it may be, a good transcription system must also be easy to write: indeed, a poorly scriptable system disproportionately increases the time required for transcription. Therefore, the Typannot approach was to create a computer instrument that facilitates the insertion of generics: its virtual keyboard (TypannotKB) not only allows the selection of generics and their arrangement in the formula without syntax errors, but also allows checking the transcription in real time, by displaying the compound glyph and an avatar showing the transcribed form.
Thus, for LOCini, TypannotKB allows, through a series of sliders, a notch to be assigned to each DoF of each segment; the dynamic display of the glyph visualizes in real time the effects of a DoF variation on a particular segment (Fig.
The immediate feedback on value manipulation is not only a means of verifying the transcription, but also a pedagogic instrument for learning Typannot. Indeed, the system can be picked up right away by manipulating the virtual keyboard: the avatar visualization of the various attempts details the exact informational content of each generic; their display as a compound glyph allows learning to read Typannot; their display as a generic formula makes it possible to memorize the syntax as well as the generic glyphs.
In order to further improve Typannot scriptability, the virtual keyboard will be equipped with a second interface designed to automatically convert MoCap data into Typannot generic characters (and therefore into compound glyphs too). This innovation, still at an early stage, will drastically reduce SLs transcription time, from the current 5 hours per minute transcribed (
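The core of such a MoCap-to-generics conversion is quantization: a continuous joint angle must be mapped to one of the discrete “notches” used by LOCini. The sketch below illustrates the idea only; the 45-degree notch width and the notch indices are assumptions for illustration, not the actual Typannot inventory.

```python
NOTCH_STEP = 45.0  # hypothetical angular width of one notch, in degrees

def angle_to_notch(angle_deg):
    """Map a continuous rotation angle to the nearest discrete notch index."""
    return round(angle_deg / NOTCH_STEP)

# Hypothetical joint angles read from a MoCap export, in degrees.
mocap_angles = [2.0, 44.0, 95.0, 130.0]
print([angle_to_notch(a) for a in mocap_angles])  # [0, 1, 2, 3]
```

A real converter would of course have to handle per-DoF ranges, sensor noise and temporal smoothing, but the discretization step itself stays this simple.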
Typannot aims to provide a description and representation of the body allowing detailed analyses of SLs and gestures regardless of the researcher’s theoretical framework; still, Typannot was not born “out of theory”, but is the result of representation necessities essential to the kinesiological reflection of Boutet, founding member and co-coordinator of the GestualScript team until March 2020. It is therefore within the KinApp framework that Typannot MOV is being developed, the famous “last but not least” piece of this ten-year project, on which the GestualScript team has been working since January 2021.
The current step for GestualScript is to establish the inventory of generic characters, starting from an in-depth analysis of Boutet’s work to extract the different elements he had identified to describe MOV. The balance between Typannot fonts usable by everybody and their insertion in KinApp requires substantial efforts to fit Boutet’s approach, in particular to make its descriptive levels accessible without forcing the transcriber to acquire the specialized skills in the study of SLs, gestures and human physiology which were at the origin of KinApp. It is therefore necessary to find generics that make it possible to describe MOV from a skeletal and articular point of view (by describing the DoF variations of the different segments, by distinguishing proper movements from simple displacements, by identifying gestural flows, etc.) as well as from its visual perception (the destination of pointings, the shapes of the traces left by the movement, etc.), but also to find a way to transmit the informational content of the different generics to the transcriber.
The idea behind this development step is to use TypannotKB as an instrument for simplification: the generics remain complex and linked to KinApp, but the keyboard makes it possible to inspect MOV through simple questions, which then lead to the automatic selection of generics.
Preliminary results indicate that KinApp requires, among other things, to determine both the motor (i.e., the movement creation) and the semiotic contributions (i.e., the meaning creation) of each segment; following
Once the list of generics and the list of questions allowing TypannotKB to guide the transcriber have been completed, it will be necessary to determine how all this information can be combined in a generic formula with rigid syntax. Further work will then start on creating the MOV font (with the challenges of drawing composite glyphs) and on the computer implementation of TypannotKB (with its questions, its composite glyphs, its avatar and its parametric and gestural interface).
The use of Typannot will allow the researcher to achieve a very fine transcription of the movement parameter, but also to enter smoothly into the descriptive model proposed by KinApp through, among other things, the use of technology.
Linguistic signs result from the association of a signified (a meaning) and a signifier (a form); they make it possible to fulfill several language functions. The linguist’s task thus consists, among other things, in relating the meaning, form and function of language units. However, although the functions and meaning of SLs signs have been studied, they were linked to an incomplete idea of movement, concentrated on the hand trajectory. KinApp then offers a further analysis, by searching for the whole form of SLs signs: the exterior observation of each single segment gives way to an intrinsic and multiple analysis of the upper limbs, giving an exhaustive account of the possibilities of sign realization. This renewal of the SLs phonological approach led Boutet to rethink the distinctiveness of parameters – i.e., grouping orientation and location in LOCini – and to posit that MOV, despite all appearances, is a simple and stabilized parameter. Boutet (2018:118; translated from French) concludes that “knowing how movement propagates along the upper limb allows understanding how formal regularities enable meaning emergence”. Distinguishing proper movements from transferred movements makes it possible to pinpoint the origin of the sign: where MOV is born, meaning is born.
KinApp continues to be developed in the thesis work of Chevrefils (forthcoming) - which refines the links between LOCini and MOV in order to fully grasp the motor economy structuring the sign shape - and in that of Thomas (forthcoming) - which proposes to study invariants by taking into account the segments and flows of the articular features of the face - both of which were directed by Boutet. KinApp also continues to advance through the work of the GestualScript team, which aims at exploiting KinApp via Typannot. The relationship between the theoretical approach and its graphic modeling is indeed not one-way: the creation of the transcription system is thought of both as a direct application and as a consolidation of KinApp, completing the analysis of MOV and of the other sign components.
KinApp, thus made accessible to the community of researchers specialized in SLs and gestures, will be able to evolve, grow and mature, in continuity with the work - interrupted too early - of our colleague, director, mentor and, above all, friend, Dominique Boutet, whom we would like to remember with these words (original speech in French), that he told us in February 2020:
“Research is also about launching things that no doubt, if working, will really gain momentum in 10 or 15 years. This is research too, that in fact I’m doing things that will be of help to you, or others even younger. That’s the game! What is very interesting is that we launch ideas! [...] It’s great this kind of handover and the fact that it shall continue… no doubt that at some point you’re going to do the same; yeah, that’s the point!” (Dominique)