In the CLASS lab we work with participants from a broad age range (infancy to adulthood) and from various language backgrounds (monolinguals and multilinguals, as well as those exposed to one versus multiple varieties of their native language). Our methodological toolbox includes eye-tracking, corpus analyses, and infant habituation paradigms (see below). To learn more about our approach to research, we invite you to read some of our recent methodological and theoretical overviews.
In 2023, thanks to new funding from CFI and NSERC-RTI, we will be adding infant ERP and infant head-mounted eye-tracking to our methodological toolbox. Check back soon to learn more!
We are excited to announce the release of BITTSy, a freely available infant and toddler behaviour testing system designed to facilitate cross-lab collaborations! BITTSy was created by Rochelle Newman and her team at the University of Maryland, with help from multiple labs (including our own). We thank NSF for funding the BITTSy initiative.
By tracking where listeners look as the speech signal unfolds over time, we can learn a great deal about how listeners process speech and access lexical representations. This paradigm works well with all ages, from young infants to adults.
This classic infant testing paradigm allows us to test how language experience influences infants’ ability to tell apart various auditory stimuli (e.g., speech sounds, voices, words, or languages).
This paradigm works well with 6- to 12-month-old infants, who can sit up and control their head movements. We have used this paradigm to test infants’ recognition of words in speech, as well as toddlers’ development of grammatical sensitivities.
The CLASS Lab is best known for its work on speech perception, but we also run production studies. Some paradigms we have used include altered auditory feedback, phonetic convergence, and elicited production.
Some questions are best addressed by giving children explicit choices. For example, infants can be presented with puppets who speak in different ways. Infants’ tendency to subsequently grab certain puppets can tell us a lot about their perception of talking styles.
Video Game Tasks
Most of our work focuses on speech and language development in infants and toddlers, but we also work with older children. Once children can operate a mouse or use a touch screen, we can test them using simple video games we design in the lab.
We often collect standardized measures to complement our experimental work (e.g., the MacArthur-Bates Communicative Development Inventory (MCDI), the Structured Photographic Expressive Language Test (SPELT), and the Comprehensive Test of Phonological Processing (CTOPP)).
The artificial languages we use typically consist of a small collection of nonsense words with carefully controlled phonological characteristics. By testing listeners’ ability to learn patterns embedded in these toy languages, we can evaluate hypotheses about how real language acquisition works.
Although we are primarily a developmental lab, we also work with adults. In addition to eye-tracking (mentioned above), some other common paradigms we use with adults include talker recognition line-ups and speech transcription in noise.
Online Data Collection
During the pandemic, we developed a host of online studies powered by platforms such as Gorilla and Qualtrics. These studies have been so successful that we have decided to maintain this line of work indefinitely. For more information about our online studies, click here.