My research interests lie in (atypical) language development and the neural foundations of language. My current research focuses on the role of early language experience in language and brain development. I have been studying the longitudinal development, ultimate attainment, and real-time processing of American Sign Language (ASL) syntactic structures by deaf people who had very limited early language exposure. I am also exploring the anatomical differences that result from early language deprivation.
Aside from my daytime research, I also explore the relations between behavior, the mind, and the brain from a cognitive neuroscience perspective: introspecting on and journaling subjective experience in different mental states, practicing mindfulness and meditation, and engaging empathetically with all forms of human endeavor. I write popular-science pieces in Chinese for CogSci on WeChat.
Tracking event encoding in VR
There are many ways to interpret an event, depending on one's perspective. A dog chasing a cat can be viewed as a cat fleeing from a dog, or simply as a dog chasing, with the object being irrelevant. Language and general cognition/attention interact to frame an event, both as a mental image and in verbal expression. How is this ability to encode events shaped by early language experience? How much does initial attention affect the choice of linguistic structures? In this new project, we use Virtual Reality (VR) goggles to capture how late L1 learners of ASL look at transitive and intransitive two-participant events, and investigate how their language production relates to their visual inspection of the events.
Word order, or world knowledge?
This ongoing project uses a sentence-picture verification task and an elicited production task with four types of target sentence conditions, contrasting in event probability and noun animacy, to test how deaf individuals with early language deprivation map between phrasal form (syntactic functions) and meaning (semantic roles) when comprehending and producing simple subject-verb-object sentences, such as 'APPLE BITE BOY' in American Sign Language. The preliminary results suggest that late L1 learners rely on world knowledge rather than word order when comprehending improbable sentences.
We are currently designing a new experiment to test word order use among deaf teenagers with varying ages of ASL onset and durations of ASL exposure. This project is supported by an NSF Doctoral Dissertation Research Improvement award.
Cheng, Q., & Mayberry, R. (2019, Sep). Word order or world knowledge? Effects of early language deprivation on simple sentence comprehension. Oral presentation at the 13th conference of Theoretical Issues in Sign Language Research, Hamburg, Germany. (also presented at CUNY2019)
Early sign language experience shapes visual and language regions
Sign language is a visuospatial language that requires motion processing and, for proficient signers, often involves more peripheral visual processing. Early sensory and language deprivation during a critical period often has a significant impact on brain development. By investigating the brain structures of deaf individuals with varying ages of sign language onset, the present study explicitly examines the effects of age of sign language acquisition (AoA) on brain plasticity in visual and language regions.
Cheng, Q., Klein, E., Chen, J.K., Halgren, E., & Mayberry, R. (2018, Nov). Effects of early sign language use on anatomical structures of visual regions: Surface-based and DTI analyses. Poster presentation at the 48th Annual Meeting of Society for Neuroscience, San Diego, CA. [poster]
Early language experience shapes language pathways
The long-distance connections between language regions are crucial for language processing, but how do these language pathways get established in the first place? Are they biologically pre-programmed, or are they shaped by early language experience? In the current study, we used Diffusion Tensor Imaging (DTI) to examine the connectivity of several language pathways in late L1 learners who experienced severe early language deprivation. We found decreased connectivity in the left arcuate fasciculus, a fiber bundle connecting Broca’s area and Wernicke’s area.
Cheng, Q., Roth, A., Halgren, E., & Mayberry, R. I. (2019). Effects of early language deprivation on brain connectivity: Language pathways in deaf native and late first-language learners of American Sign Language. Frontiers in Human Neuroscience, 13, 320. [open access link][SNL poster]
ASL Word Order Development
When learning a first language (L1) late in life, do learners benefit from their mature cognitive functions, or rather suffer from the delayed language onset? The current study examined word order preferences in spontaneous language samples from four adolescent L1 learners, collected longitudinally from 12 months to 6 years of ASL exposure. Our results suggest that adolescent L1 learners go through stages similar to those of child native learners, although the process appears to be prolonged.
Cheng, Q., & Mayberry, R. I. (2019). Acquiring a first language in adolescence: the case of basic word order in American Sign Language. Journal of Child Language, 46(2), 214-240. [link][PubMed] [ICSCL poster]
Person Marking in Ja’a Kumiai
Auka, muju tumuwa? 'Hello, how are you (singular, seated position)?'
We have been conducting fieldwork on Ja’a Kumiai, a critically endangered and under-documented Yuman language spoken in Baja California, since 2016. The current project provides the first description of the person marking system of Ja’a Kumiai. This system involves obligatory agreement that can be analyzed compositionally and in which contrasts in verbal paradigms may be neutralized by general phonological constraints. We also show that agreement in this language involves a direct/inverse system, which has not previously been described for any other Yuman language.
Caballero, G., & Cheng, Q. (to appear). Person marking in Ja’a Kumiai (Yuman). Amerindia, 42.
Cantonese speakers acquiring L2 Mandarin semantic operators
Cantonese and Mandarin are two closely related languages/dialects. Bilingual speakers of Cantonese and Mandarin mostly reside in southern China (Guangdong province) and in Hong Kong. One intriguing difference between Mandarin and Cantonese is the use of semantic operators. One semantic operator, ‘dou1’ (都), is a cognate found in both languages. In Cantonese it can be either universal/distributive (meaning ‘all/each’) or additive (meaning ‘also’), but in Mandarin it only carries the universal/distributive meaning; the additive meaning is carried by a distinct operator, ‘ye3’ (也). Can Cantonese speakers map the dual semantic functions of Cantonese ‘dou1’ correctly onto Mandarin ‘dou1’ and ‘ye3’? Will age of acquisition affect their performance? Find out more in the manuscript ;)
We also collected data from middle school students with varying years of Mandarin experience to examine the developmental trajectory of distinguishing the two semantic operators. Will write it up one day, I promise!
Cheng, Q. & Tang, G. (2016). On the L2 Ultimate Attainment of Mandarin Additive and Distributive Operators by Cantonese Learners. In Proceedings of the 13th Generative Approaches to Second Language Acquisition Conference (GASLA 2015), ed. David Stringer et al., 31-44. Somerville, MA: Cascadilla Proceedings Project. [lingref]
Using DeepLabCut for markerless 3D kinematic analysis of ASL
This project uses DeepLabCut, a cutting-edge deep learning toolbox for markerless pose estimation, to annotate kinematic features of core body parts during ASL production, using front- and side-view videos from the BU NCSLGR corpus. The goal is to link kinematic information with the linguistic annotations provided by the NCSLGR corpus to facilitate research in sign language kinematics/phonetics; this type of data has been largely missing from sign linguistics research. On the left is a demo of a 3-minute-long front-view video, trained on 20 annotated frames with 23 markers. This is a collaboration with my undergraduate RA Jan Hsiao.
This project is currently on hold due to technical issues, but feel free to request the code/trained videos if you're interested!
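As a rough illustration of the kind of kinematic feature this pipeline targets, the sketch below computes frame-to-frame speed for one tracked marker from DeepLabCut-style (x, y) coordinates. This is not the project's actual code: the marker trajectory is simulated and the frame rate is an assumed value.

```python
import numpy as np

# Hypothetical DLC-style output: per-frame (x, y) pixel coordinates for one
# marker (e.g. the right wrist), simulated here as steady horizontal motion.
fps = 30.0  # assumed camera frame rate
xy = np.array([[100.0 + 2.0 * t, 50.0] for t in range(10)])  # 10 frames

# Frame-to-frame displacement vectors and speed in pixels per second.
disp = np.diff(xy, axis=0)                    # shape (9, 2)
speed = np.linalg.norm(disp, axis=1) * fps    # 2 px/frame * 30 fps = 60 px/s

print(speed)  # constant 60.0 px/s for this synthetic trajectory
```

In the real pipeline, the coordinate array would come from DeepLabCut's per-video output, and speed profiles like this could then be aligned with the corpus's linguistic annotations.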
Grammaticality judgment of Mandarin parasitic gaps
Mandarin is a wh-in-situ language that appears to exhibit the parasitic gap phenomenon, but the status of this phenomenon and its licensing constraints are still under debate, because most of the linguistic evidence comes from introspective judgments and lacks experimental controls. The current project uses a grammaticality judgment task to explicitly test two factors argued to license parasitic gaps in Mandarin, namely topicalization (topicalized vs. in-situ) and filler type (wh-word vs. noun phrase). The preliminary results showed an interaction effect between wh-word and topicalization.
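As a toy illustration of what an interaction effect in a 2x2 judgment design means, the sketch below computes the difference-of-differences from cell means. The ratings are invented for illustration, not our actual data.

```python
import numpy as np

# Hypothetical mean acceptability ratings (1-7 scale) for the 2x2 design:
# rows = topicalization (topicalized, in-situ); cols = filler (wh-word, NP).
ratings = np.array([[5.8, 4.1],
                    [3.2, 3.0]])

# Interaction = how much the topicalization effect differs across filler types
# (a nonzero difference-of-differences signals an interaction).
topic_effect_wh = ratings[0, 0] - ratings[1, 0]   # 2.6
topic_effect_np = ratings[0, 1] - ratings[1, 1]   # 1.1
interaction = topic_effect_wh - topic_effect_np

print(interaction)  # 1.5 with these invented means
```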
This was a course project for LIGN 225, taught by Prof. Grant Goodall in Spring 2015 at UCSD. It was not continued due to lack of time and a shift in research interests. Feel free to request the stimulus list or the preliminary dataset (from 22 native Mandarin speakers).
In a related collaborative project with Yaqian Huang and Shota Momma, we used Mandarin parasitic gaps as a tool to investigate filler-gap mechanisms during sentence processing. We used a self-paced reading task, but unfortunately the reaction time data were too noisy to provide useful information.