SNL 2019, August 20-22, Helsinki, Finland
Eleventh Annual Meeting of the Society for the Neurobiology of Language

Now Accepting Submissions
Steven L. Small and Kate E. Watkins, Editors-in-Chief
The Society for the Neurobiology of Language and the MIT Press are pleased to announce the launch of Neurobiology of Language. This open access journal will publish interdisciplinary articles addressing the neurobiological basis of speech and language. mitpressjournals.org/nol

Welcome to the 11th Annual Meeting of the Society for the Neurobiology of Language
Helsinki, Finland, August 20-22, 2019

This year we will have three very exciting days to enjoy an amazing program. This includes keynote lectures by Nikolaus Kriegeskorte, Dorothy Bishop and Frank Guenther, talks by our Early Career Award winners, Vitória Piai and Jonathan Brennan, and by our Distinguished Career Award winner, Jeff Binder, three Slide Sessions, and five Poster Sessions, each preceded by our now tried and tested Poster Slam! This year we also have two Symposia: one on dyslexia entitled New Perspectives into Neurobiology of Reading and Dyslexia, and another on neuroimaging methods entitled Windows into Language: Benefits and Challenges of Combining Methods.

In addition, the day before the meeting starts, there will be an Educational Course: Investigating Language with MEG. This will be a good opportunity for students and researchers interested in using MEG to find out more from the Finnish teams who have been instrumental in the development of MEG as a technique. On Wednesday, the Student and Postdoc Career Development Panel will provide attendees with the opportunity to ask early career researchers about their experiences in the academic and non-academic job markets.

The evening before the meeting starts, we will host a public lecture by Teppo Särkämö entitled When Words Fail: Music in aphasia and dementia rehabilitation. The goal of these lectures is to raise awareness in the general public about language-related issues and to promote research in the field of the neurobiology of language. Therefore, this lecture will be delivered in Finnish.

Please join us for the Opening Night Reception in the Banquet Hall of Helsinki City Hall, thanks to the generosity of the City of Helsinki.

I would like to thank the Program Committee for putting together an exciting scientific program: Brenda Rapp, Mairéad MacSweeney, and Sonja Kotz, with special thanks to Marina Bedny as Program Committee Chair and Riitta Salmelin as head of the Local Arrangements Committee. Shauney Wilson, Shawna Lampkin, and their team also deserve our huge thanks for their skill in organizing and running these meetings.

As you know, the possibility of an open access journal was raised at the SNL meeting last year. I'm thrilled to announce that this has now become reality. SNL, in a joint venture (co-ownership) with MIT Press, has just launched a new open access journal titled Neurobiology of Language. Steve Small and Kate Watkins have agreed to take on the responsibility of being the two Editors-in-Chief for the first five years. Under their expert leadership, I'm sure they will make the journal a great success. Finally, I would like to especially thank Matt Davis, the SNL Publication Officer. Matt has worked extremely hard over the last year on making this journal a reality. To do this, he has had to show great patience in carefully balancing the needs of MIT and SNL. We will tell you more about the new journal at the SNL meeting in Helsinki.
For this to succeed, it will need support from you, our SNL members. I hope you're as excited about this venture as we are, and we look forward to telling you more about it. Enjoy the meeting!

Manuel Carreiras
Chair, Society for the Neurobiology of Language

2019 Review Committee

Alyson Abel, Amélie Achim, F.-Xavier Alario, Lucia Amoruso, Nicolás Araneda Hinrichs, Venu Balasubramanian, Juliana V. Baldo, Karen Banai, Michal Ben-Shachar, Sara Berentsen Pillay, Anna Beres, Jonathan Berken, Tali Bitan, Idan Blank, Heather Bortfeld, Jonathan Brennan, Bradley R. Buchsbaum, Adam Buchwald, Sendy Caffarra, Stefano Cappa, Velia Cardin, Joanna Chen Lee, Joao Correia, Brendan Costello, Seana Coulson, Tanya Dash, Matt Davis, Angela De Bruin, Greig de Zubicaray, Isabelle Deschamps, Michele T. Diaz, Sayako Earle, Karen Emmorey, Samuel Evans, Li-Ying Fan, Evelina Fedorenko, Christian Fiebach, Adeen Flinker, Nadine Gaab, Carolina Gattei, Ladan Ghazi-Saidi, Laurie Glezer, Angela Grant, Sara Guediche, Thomas C. Gunter, Ayse Gürel, Uri Hasson, Arturo Hernandez, Argye Hillis, Jessica Hodgson, Andreas Højlund, Maria Ivanova, Xiaoming Jiang, Katerina Danae Kandylaki, Denise Klein, Vanja Kljajevic, Vicky Lai, Nicole Landi, Laurie Lawyer, Chia-lin Lee, Matt Leonard, Frederique Liegeois, James Magnuson, Simona Mancini, Alec Marantz, Clara Martin, William Matchin, Aya Meltzer-Asscher, Emily Myers, Lars Meyer, Nicola Molinaro, Monika Molnar, Nicole Neef, Caroline Niziolek, Müge Özker Sertel, Myung-Kwan Park, Jonathan Peelle, Stephen Politzer-Ahles, Liina Pylkkänen, Ileana Quinones, Jamie Reilly, Stephanie Ries, Carlos Romero-Rivas, Daniela Sammler, Mathias Scharinger, Matthias Schlesewsky, Julie Schneider, Katrien Segaert, Mohamed Seghier, Wai Ting Siok, Tineke Snijders, Christina Sotiropoulou Drosopoulou, Kristof Strijkers, Jo Taylor, Sukru Torun, Katherine Travis, Tae Twomey, Kenny I. Vaden, Jane E. Warren, Nicole Wicha, Roel M. Willems, Stephen M. Wilson, Maximiliano Agustin Wilson, Zoe Woodhead, Say Young Kim, Linmin Zhang, Anna Zumbansen

Contents
Welcome
2019 Review Committee
Directors, Committees and Founders
Area Map
Venue Map
Schedule of Events
Opening Night Reception
SNL Social Hour
Keynote Lecture: Nikolaus Kriegeskorte
Keynote Lecture: Dorothy Bishop
Keynote Lecture: Frank Guenther
Dyslexia Symposium
Satellite Symposium
Invited Symposium
Student and Postdoc Career Development Panel
Distinguished Career Award: Jeffrey Binder
Early Career Award: Jonathan Brennan
Early Career Award: Vitória Piai
Abstract Merit Awards
Travel Awards
Future Meetings
Attendee Resources
Neurobiology of Language Journal
Sponsors and Contributors
Slide Schedule
Slide Sessions
Poster Slam Schedule
Poster Slam Sessions
Poster Schedule
Poster Sessions (Sessions A-E)
Author Index

Directors, Committees and Founders

2019 Board of Directors
Manuel Carreiras, Chair (Basque Center on Cognition, Brain and Language)
Mairéad MacSweeney, Treasurer (University College London)
Brenda Rapp, Secretary (Johns Hopkins University)
Marina Bedny, Program Committee Chair (Johns Hopkins University)
Angela Grant, Student/Postdoc Representative (Missouri Western State University)
Sonja Kotz, Chair-Elect (Maastricht University)
Seana Coulson, Treasurer-Elect (University of California, San Diego)
Jamie Reilly, Secretary-Elect (Temple University)
Emily Myers, Program Committee Chair-Elect (University of Connecticut)
Karen Emmorey, Past Chair (San Diego State University)
James Magnuson, Past Treasurer (University of Connecticut)
Clara D. Martin, Past Secretary (Basque Center on Cognition, Brain and Language)
Michal Ben-Shachar, Past Program Committee Chair (Bar Ilan University)

Incoming 2020 Board Members
Matt Lambon Ralph, Chair-Elect (University of Cambridge)
Tamara Swaab, Treasurer-Elect (University of California, Davis)
Evelina (Ev) Fedorenko, Secretary-Elect (MIT)
Carolyn McGettigan, Program Committee Chair-Elect (University College London)
Esti Blanco-Elorrieta, Student/Postdoc Representative-Elect (New York University and Harvard University)

2019 Program Committee
Manuel Carreiras, Chair (Basque Center on Cognition, Brain and Language)
Sonja Kotz (Maastricht University)
Marina Bedny (Johns Hopkins University)
Mairéad MacSweeney (University College London)
Brenda Rapp (Johns Hopkins University)
Riitta Salmelin (Aalto University)
Steven Small (University of Texas at Dallas)

2019 Nomination Committee
Kate Watkins (University of Oxford)
Joe Devlin (University College London)
Ellen Lau (University of Maryland)

2019 Local Organizer
Riitta Salmelin (Aalto University)

2019 Event Organizers
Shauney Wilson, Executive Director and Event Director
Shawna Lampkin, Event Manager
Jeff Wilson, Technical Manager

SNL Founders
Steven L. Small (University of Texas at Dallas)
Pascale Tremblay (Université Laval)

Area Map
SNL 2019 is being held in Finlandia Hall, Mannerheimintie 13 e, 00100 Helsinki. The Opening Night Reception is being held at Helsinki City Hall, Pohjoisesplanadi 11–13, Helsinki.
[Area map: Crowne Plaza Hotel; SNL 2019, Finlandia Hall; Opening Night Reception, Helsinki City Hall]

Venue Map
All SNL 2019 events are on the 2nd floor of Finlandia Hall. Talks are held in the main auditorium, posters are in Restaurant Hall, and Registration and Exhibits are in the Finlandia Hall Foyer.

[Floor plan of Finlandia Hall, 2nd floor: Childcare, Posters, Registration, Exhibits, Talks]

Schedule of Events

Monday, August 19
8:30 am – 4:30 pm  Educational Course: Investigating Language with MEG, Aalto University Otaniemi campus, Espoo (offsite)
5:30 – 6:45 pm  Public Lecture: Teppo Särkämö, Tiedekulma (Think Corner), Yliopistonkatu 4 (offsite)

Tuesday, August 20
7:00 am – 5:45 pm  Meeting Registration, Finlandia Hall Foyer
8:00 – 9:00 am  Morning Coffee, Finlandia Hall Foyer
8:30 am – 5:00 pm  Exhibits Open, Finlandia Hall Foyer
8:45 – 9:00 am  Opening Remarks: Manuel Carreiras, Finlandia Hall
9:00 – 10:00 am  Keynote Lecture: Nikolaus Kriegeskorte, Finlandia Hall
10:00 – 10:15 am  Poster Slam Session A, Finlandia Hall
10:15 – 10:45 am  Coffee Break, Finlandia Hall Foyer
10:15 am – 12:00 pm  Poster Session A, Restaurant Hall
12:00 – 1:30 pm  Lunch (on your own)
1:30 – 3:00 pm  Slide Session A, Finlandia Hall
3:00 – 3:15 pm  Poster Slam Session B, Finlandia Hall
3:15 – 3:45 pm  Coffee Break, Finlandia Hall Foyer (sponsored by CICERO Learning)
3:15 – 5:00 pm  Poster Session B, Restaurant Hall
5:00 – 5:45 pm  Distinguished Career Award: Jeffrey Binder, Finlandia Hall
7:00 – 8:30 pm  Opening Night Reception, Helsinki City Hall (offsite)

Wednesday, August 21
7:00 am – 7:00 pm  Meeting Registration, Finlandia Hall Foyer
7:30 – 8:30 am  Morning Coffee, Finlandia Hall Foyer
8:00 am – 7:00 pm  Exhibits Open, Finlandia Hall Foyer
8:00 – 9:45 am  Symposium: New Perspectives into Neurobiology of Reading and Dyslexia, Finlandia Hall
9:45 – 10:30 am  Early Career Awards: Jonathan Brennan and Vitória Piai, Finlandia Hall
10:30 – 10:45 am  Poster Slam Session C, Finlandia Hall
10:45 – 11:15 am  Coffee Break, Finlandia Hall Foyer
10:45 am – 12:30 pm  Poster Session C, Restaurant Hall
12:30 – 2:00 pm  Student and Postdoc Career Development Panel, Balcony Level Foyer
12:30 – 2:00 pm  Lunch (on your own)
2:00 – 3:30 pm  Slide Session B, Finlandia Hall
3:30 – 4:00 pm  Coffee Break, Finlandia Hall Foyer (sponsored by Lingsoft)
4:00 – 5:00 pm  Keynote Lecture: Dorothy Bishop, Finlandia Hall
5:00 – 5:15 pm  Poster Slam Session D, Finlandia Hall
5:15 – 7:00 pm  Poster Session D and Social Hour, Restaurant Hall

Thursday, August 22
7:30 am – 6:00 pm  Meeting Registration, Finlandia Hall Foyer
7:30 – 8:30 am  Morning Coffee, Finlandia Hall Foyer
8:00 am – 4:30 pm  Exhibits Open, Finlandia Hall Foyer
8:30 – 9:30 am  Keynote Lecture: Frank Guenther, Finlandia Hall
9:30 – 10:15 am  Business Meeting, Finlandia Hall
10:15 – 10:45 am  Coffee Break, Finlandia Hall Foyer
10:45 am – 12:15 pm  Slide Session C, Finlandia Hall
12:15 – 1:45 pm  Lunch (on your own)
1:45 – 3:30 pm  Invited Symposium: Windows into Language: Benefits and Challenges of Combining Methods, Finlandia Hall
3:30 – 3:45 pm  Poster Slam Session E, Finlandia Hall
3:45 – 4:15 pm  Coffee Break, Finlandia Hall Foyer
3:45 – 5:30 pm  Poster Session E, Restaurant Hall
5:30 – 6:00 pm  Closing Remarks and Outlook to SNL 2020: Manuel Carreiras & Sonja Kotz, Finlandia Hall

Opening Night Reception
Tuesday, August 20, 7:00 – 8:30 pm
Helsinki City Hall, Pohjoisesplanadi 11–13 (offsite)
To attend the Opening Night Reception, you must present your invitation at the door. Guests of registered SNL attendees are welcome to attend the reception, but must also have an invitation. Invitations can be picked up at the Registration Desk.

Many thanks to the City of Helsinki for its generosity in welcoming SNL attendees with an Opening Night Reception held in the exquisite Banquet Hall of Helsinki City Hall. Following the first day's sessions, join your colleagues for an elegant evening of food, drinks and stimulating conversation. The reception will open with a Welcome Address from a City of Helsinki official, after which SNL attendees will enjoy a delicious salad bar buffet, wine and refreshments.

We are excited to announce that entertainment during the reception will be provided by the acclaimed Philomela. Philomela is an innovative and skilled female choir, an ensemble and a polyphonic instrument known for its strong, memorable and spatial musical performances. Philomela commissions new music frequently and works with contemporary composers who take elements from the Finnish folk tradition and creatively combine them with their own musical ideas. Philomela was founded in 1984 by Marjukka Riihimäki, who has conducted it with great passion ever since. Philomela shakes the traditions of choral singing and aims to surprise its audience in every concert. Movement is also an essential part of Philomela's performances: the space is always a key element in presenting a multidimensional musical experience for the audience. Philomela's unique musical expression has succeeded in touching audiences' hearts all over the world, across language and cultural barriers. Philomela won Championship titles at the World Choir Games in 2014 and the European Choir Games in 2013.

Helsinki City Hall, Helsingin Kaupungintalo, is the central administrative building of Helsinki, the capital of Finland. Located in the Kruununhaka district, City Hall features a beautiful white and blue façade in the imperial style and overlooks the bustling Market Square. Designed in 1833 by the famous German architect Carl Ludvig Engel, the building served as a cultural entertainment hotel until it was acquired by the city in 1913, renovated, and christened as the Helsinki City Hall in 1932. The interior was modernized in the 1960s, but the resplendent Banquet Hall has retained its elegant 19th century form.

The walk from Finlandia Hall to Helsinki City Hall is 1.8 km, about a 20-minute walk. You can also take tram 4 from Finlandia Hall to the beautiful Senate Square and then walk one block to the City Hall. Guests needing extra assistance getting to the event should contact the SNL Registration Desk.

[Photos: Helsingin Kaupungintalo (Helsinki City Hall) seen from the harbor; the Banquet Hall of Helsinki City Hall]

SNL Social Hour
Wednesday, August 21, 5:15 – 7:00 pm, Restaurant Hall

The first drink is on us at the SNL Social Hour! Attendees are invited to enjoy a casual Social Hour in Restaurant Hall during the Wednesday evening poster session. View the poster presentations while enjoying complimentary local delicacies. Look for your free drink ticket in the back of your SNL badge. In addition, a cash bar will be available.
Keynote Lecture: Nikolaus Kriegeskorte

Nikolaus Kriegeskorte
Professor of Psychology; Director of Cognitive Imaging at the Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Psychology, Columbia University

Nikolaus Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. He received his PhD in Cognitive Neuroscience from Maastricht University, held postdoctoral positions at the Center for Magnetic Resonance Research at the University of Minnesota and the U.S. National Institute of Mental Health in Bethesda, and was a Programme Leader at the U.K. Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge. Kriegeskorte is a Professor at Columbia University, affiliated with the Departments of Psychology and Neuroscience. He is a Principal Investigator and Director of Cognitive Imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. Kriegeskorte is a co-founder of the conference "Cognitive Computational Neuroscience", which had its inaugural meeting in September 2017 at Columbia University.

Cognitive computational neuroscience of vision
Tuesday, August 20, 2019, 9:00 – 10:00 am, Finlandia Hall
Chair: Marina Bedny

To learn how cognition is implemented in the brain, we must build computational models that can perform cognitive tasks, and test such models with brain and behavioral experiments. Modern technologies enable us to measure and manipulate brain activity in unprecedentedly rich ways in animals and humans. However, experiments will yield theoretical insight only when employed to test brain-computational models. Recent advances in neural network modelling have enabled major strides in computer vision and other artificial intelligence applications. This brain-inspired technology provides the basis for tomorrow's computational neuroscience. Deep convolutional neural nets trained for visual object recognition have internal representational spaces remarkably similar to those of the human and monkey ventral visual pathway. Functional imaging and invasive neuronal recording provide rich brain-activity measurements in humans and animals, but a challenge is to leverage such data to gain insight into the brain's computational mechanisms. We build neural network models of primate vision, inspired by biology and guided by engineering considerations. We also develop statistical inference techniques that enable us to adjudicate between complex brain-computational models on the basis of brain and behavioral data. I will discuss recent work extending deep convolutional feedforward vision models by adding recurrent signal flow and stochasticity. These characteristics of biological neural networks may improve inferential performance and enable neural networks to more accurately represent their own uncertainty.

Keynote Lecture: Dorothy Bishop

Dorothy Bishop
Professor of Developmental Neuropsychology, Department of Experimental Psychology, University of Oxford

Dorothy Bishop is a psychologist who holds a Wellcome Trust Principal Research Fellowship at the University of Oxford, where she heads an ERC-funded programme of research into cerebral lateralisation for language. She is a supernumerary fellow of St John's College Oxford, a Fellow of the Royal Society, a Fellow of the British Academy and a Fellow of the Academy of Medical Sciences. Her main research interests are in the nature and causes of developmental language difficulties, with a particular focus on psycholinguistics, neurobiology and genetics.
Her book Uncommon Understanding won the British Psychological Society's annual award in 1999, and she has published widely on children's language disorders. In 2015 Dorothy chaired a symposium on Reproducibility in Biomedical Science organised by the Academy of Medical Sciences, Wellcome Trust, MRC, and BBSRC, and she is chairing the advisory board of the recently formed UK Reproducibility Network. She has a popular blog, Bishopblog, which features posts on a wide range of topics, including those relevant to reproducibility. She is also on Twitter as @deevybee.

Individual differences in language laterality: are they meaningful?
Wednesday, August 21, 2019, 4:00 – 5:00 pm, Finlandia Hall
Chair: Mairéad MacSweeney

150 years after Broca's seminal statement "Nous parlons avec l'hémisphère gauche" ("we speak with the left hemisphere"), we still do not know how or why we have this bias. It seems reasonable to suppose that lateralisation evolved because it is adaptive, perhaps enabling complementary specialisation of different functions. However, if this were so, then we might expect to find evidence of language deficits in people who have bilateral or right-hemisphere language. Although such an association has been proposed, the evidence is not compelling. My research group has been investigating the possibility that we need to look at the pattern of lateralization for different components of language. Although language dominance is typically regarded as a unitary dimension, discrepant laterality across different language tasks is often reported. We have shown that this cannot be dismissed as simple measurement error, and task differences in laterality may represent meaningful individual variability that originates from true differences in the hemispheric organisation of different language networks. In a series of studies using functional transcranial Doppler sonography, we have investigated the functional correlates of inconsistent language laterality across tasks, to test a network efficiency hypothesis which maintains that optimal development depends on organisation of language functions within the same cerebral hemisphere.

Keynote Lecture: Frank Guenther

Frank Guenther
Professor of Speech, Language, & Hearing Sciences and Biomedical Engineering, Boston University

Frank Guenther is Professor of Speech, Language, & Hearing Sciences and Biomedical Engineering at Boston University. He also holds research appointments at the Picower Institute for Learning and Memory at the Massachusetts Institute of Technology and the Department of Radiology at Massachusetts General Hospital. Dr. Guenther's research combines theoretical modeling with behavioral and neuroimaging experiments to characterize the neural computations underlying speech production. He is the originator of the DIVA model, which provides a quantitative and neuroanatomically explicit account of the neural computations underlying speech motor control and their breakdown in communication disorders such as stuttering and apraxia of speech. He also develops brain-machine interface technology aimed at restoring speech communication to severely paralyzed individuals. These topics are the focus of his 2016 MIT Press book, Neural Control of Speech.
Neural modeling and imaging of speech production in neurotypical and disordered populations
Thursday, August 22, 2019, 8:30 – 9:30 am, Finlandia Hall
Chair: Brenda Rapp

Speech production is a highly complex sensorimotor task involving tightly coordinated processing in the frontal, temporal, and parietal lobes of the cerebral cortex. To better understand these processes, our laboratory has designed, experimentally tested, and iteratively refined a neural network model whose components correspond to the brain regions involved in speech. Babbling and imitation phases are used to train neural mappings between phonological, articulatory, auditory, and somatosensory representations. After the imitation phase, the model can produce learned phonemes and syllables by generating movements of an articulatory synthesizer. Because the model's components correspond to neural populations and are given precise anatomical locations, activity in the model's neurons can be compared directly to neuroimaging data. Computer simulations of the model account for a wide range of experimental findings, including data on acquisition of speaking skills, articulatory kinematics, and brain activity during normal and perturbed speech. Furthermore, "damaged" versions of the model are being used to investigate several communication disorders, including stuttering, apraxia of speech, and spasmodic dysphonia. The model has also been used to guide development of a brain-computer interface aimed at restoring speech output to an individual suffering from locked-in syndrome, characterized by complete paralysis with intact sensation and cognition.

Dyslexia Symposium
New Perspectives into Neurobiology of Reading and Dyslexia
Wednesday, August 21, 2019, 8:00 – 9:45 am, Finlandia Hall
Chair: Michal Ben-Shachar
Speakers: Kenneth Pugh, Franck Ramus, Paavo Leppänen, Fumiko Hoeft

Reading relies on the integration of efficient component processes, such as phonological, orthographic and semantic processes, coupled with domain-general processes such as attention. Reading phenotypes, including specific learning disabilities such as decoding-specific reading disorders (a.k.a. developmental dyslexia), affect 5-10% of all children and are complex traits requiring multiple cognitive and neural processes mediated by interacting genetic and environmental factors. In this symposium, we will present the latest research performed in four different laboratories that aims to advance our understanding of the neural mechanisms underlying dyslexia. The research presented adopts a range of neuroimaging modalities, from functional and structural MRI and MR spectroscopy to magnetoencephalography (MEG) and electroencephalography (EEG). This symposium is in collaboration with the Scientific Advisory Boards of the European and International Dyslexia Associations (EDA and IDA, respectively).

Kenneth Pugh
President and Director of Research at Haskins Laboratories, Professor at Yale University and Yale University School of Medicine, Professor at University of Connecticut

Building the literate brain: How learning to read depends upon, and changes, brain organization for spoken language

The development of skilled reading involves a major re-organization of language systems in the brain. We will present ongoing research from our lab on the genetic and neurobiological foundations of learning to read across writing systems, with particular focus on bi-directional dependencies between brain pathways that are critical in linking spoken and written language. Our research suggests that print/speech convergence in language cortex accounts for individual differences in reading outcomes in high and low risk learners. New longitudinal findings from our lab using computational models to better understand critical gene-brain-behavior connections in early language and speech motor development and reading are discussed in detail, including new findings with magnetic resonance spectroscopy and multimodal brain imaging that reveal how excitatory and inhibitory neurochemistry moderate language and reading development in high risk children. Finally, we discuss recent studies that extend this brain research into second language learning.

About Kenneth Pugh
Dr. Pugh is the President and Director of Research at Haskins Laboratories, a Yale University and University of Connecticut affiliated inter-disciplinary institute dedicated to the investigation of the biological bases of language. He also holds academic appointments in the Department of Psychology at the University of Connecticut, in the Department of Linguistics at Yale University, and in the Department of Diagnostic Radiology at Yale University School of Medicine. He serves as a member of the Scientific Advisory Board for the International Dyslexia Association, the Scientific Advisory Panel for Dyslexia International in Paris, and a member of the Board of Visitors for the Learning Research.

Franck Ramus
Senior Research Scientist, Centre national de la recherche scientifique; Adjunct Professor, Department of Cognitive Studies, Ecole Normale Supérieure

Cortical oscillations for speech processing in dyslexia

We will report the results of our MEG investigations of cortical oscillations in response to speech and nonspeech in dyslexic and normal-reading adults. We will focus on the replication of previously published results on auditory entrainment in the delta, theta and gamma bands, and on new investigations of the role of alpha band oscillations and of the coupling between alpha and other frequency bands.

About Franck Ramus
Franck Ramus is a CNRS senior research scientist and adjunct professor at the Department of Cognitive Studies, Ecole Normale Supérieure in Paris. His research bears on the development of language and social cognition in children, its disorders (developmental dyslexia, specific language impairment, autism), its cognitive and neural bases, and its genetic and environmental determinants.

Paavo Leppänen
Professor and Vice-Dean of Research, Faculty of Education and Psychology, University of Jyväskylä, Finland

Neural signatures of speech perception, attention and reading in dyslexia

Reading difficulties are linked to multiple atypical processes or deficits at multiple levels. We will report recent brain response findings from our eSeek study ("Internet and learning difficulties: multidisciplinary approach for understanding information seeking in new media"), in which we used different experimental designs and stimuli with the same 14-year-old school-age children with typical reading skills and those with reading difficulties (RD) and attentional difficulties (AD). The cross-linguistic speech perception experiment shows differences in the pattern of speech-driven brain responses to native and non-native speech sounds in both RD and AD children.
The brain response findings from the experiment measuring the attention network (ANT task) show that both dyslexic and AD children differ from control children, as well as from each other, in the neural sources of several sub-processes of attention. The eye movements and fixation-related brain responses (FRPs) of dyslexic readers, measured during a natural sentence reading task and extracted using a linear deconvolution approach, show differences compared to typical readers. We discuss our findings from the multiple deficit and shared deficit perspectives.

About Paavo H.T. Leppänen
Paavo H.T. Leppänen, PhD, is Professor of psychology and dyslexia research at the Department of Psychology, University of Jyväskylä (JYU), Vice-Dean (research) of the Faculty of Education and Psychology, and head of the EEG and behavioral cognitive psychology laboratories of the Department of Psychology (JYU). He has long experience in the research of learning disorders, especially reading and reading difficulties and related cognitive risk factors, using both brain event-related potential (ERP) and behavioral research methods with infant, child and adult populations. He currently conducts and directs research in the field of developmental cognitive neuroscience using MEG and EEG techniques, combined with eye-tracking methodology. His research themes include digital and Internet reading (with web-based, behavioral, eye-tracking and brain response measures), dyslexia, language difficulties, and problems in foreign/second language learning, their risk factors, and the neurocognitive processes of reading.

Fumiko Hoeft
Professor, Department of Psychological Sciences, and Director of the Brain Imaging Research Center (BIRC), University of Connecticut

Intergenerational Neuroimaging of Literacy and Dyslexia: A New Cognitive Neuroscience Research Paradigm

Parents have a large influence on their offspring's brain and cognitive development. The Intergenerational Multiple Deficit Model (iMDM [van Bergen et al. Front Hum Neurosci 2014]; or Cumulative Risk and Protection Model, CRAP Model) affords integration of parental influences as well as others, whether genetic or environmental, and whether risk or protective factors, to explain individual differences in reading ability and liability for developing dyslexia, a specific disorder of reading. Further, it has recently been suggested that most complex traits show intergenerational sex-specific transmission patterns, which could help uncover biological pathways of transmission. Macrocircuits studied using imaging may be an ideal target for investigations of intergenerational effects, where key causes may converge in ways that lead to complex phenotypes such as reading and dyslexia. Based on these notions, we are currently examining how parental cognitive and neuroimaging patterns are associated with offspring's reading and related imaging patterns (e.g. Black et al. NeuroImage 2012; Hosseini et al. NeuroImage 2013; Hoeft & Hancock, Geschwind-Galaburda Hypothesis, 30 Years Later, 2017; Chang et al., in preparation). We first establish the feasibility of this novel approach, intergenerational neuroimaging, by confirming matrilineal transmission patterns in the cortico-limbic system that are well established in gene expression and behavioral studies of animals and humans (Yamagata et al. J Neurosci 2016). We then interrogate network patterns related to reading, and show intergenerational transmission patterns.
We also show results indicating how paternal age may negatively predict reading outcome, and the potential neural mechanism (e.g. attention, thalamic development, de novo mutation [Xia et al., under review]). We discuss preliminary findings in light of historical and recent causal theories of dyslexia (Hancock, Pugh & Hoeft, Trends Cogn Sci 2017). We also introduce our new research program utilizing a natural cross-fostering design, which will allow us to dissociate genetic, prenatal and postnatal environmental influences; this has traditionally not been feasible in humans but is critically important in dissecting the neurobiological mechanisms underlying reading and dyslexia (Ho et al. Trends in Neuroscience 2016).

About Fumiko Hoeft
Fumiko Hoeft, MD, PhD, is Professor of Psychological Sciences, Psychiatry and Neuroscience, and Director of the Brain Imaging Research Center (BIRC) at the University of Connecticut (UConn). She also holds appointments at the UCSF Dyslexia Center and Haskins Laboratories. She is a neurophysiologist and a systems/developmental cognitive neuroscientist interested in risk and protective factors in dyslexia, as well as in developing edtech tools, such as APPRISE, that assess dyslexia risk. She received research training at Harvard, UCLA, Caltech and Stanford, and has held faculty positions at Stanford, UCSF and UConn. Honors include awards from the International Dyslexia Association (IDA; 2014), the International Mind Brain & Education Society (2018), and the Society for Neuroscience (2018). She has published over 140 articles and has delivered over 210 talks, including at TEDx and the White House. Her work has been widely covered in media such as The New York Times, CNN, and Scientific American. She Co-Chairs the Scientific Advisory Board at the IDA.

Satellite Symposium
Educational Course: Investigating Language with MEG
Monday, August 19, 2019, 8:30 am – 4:30 pm, Aalto University Otaniemi campus, Espoo (offsite)

Investigating Language with MEG is a whole-day course which will provide a firm understanding of the neural activity that generates the measured MEG signal, as well as of source localization techniques that map the measured signal to the underlying brain structures. The course gives insight into the different types of measures that can be extracted from the multidimensional MEG signal, their benefits and limitations, and how they can be used to shed light on language processing. A strong emphasis is placed on novel methods for analyzing MEG data: measures of functional connectivity as well as machine learning methods. Language development is highlighted as a particular application of MEG in studying language. The course will be conducted in English.
Keynote Speaker
Jan-Mathijs Schoffelen, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, The Netherlands

Speakers
Neural activity underlying MEG, source localization of MEG signals
Mia Liljeström, Aalto University, Finland

Different types of neural measures: evoked responses, cortical oscillations, and connectivity
Jan Kujala, University of Jyväskylä, Finland

MEG data analysis hands-on session: an introduction to MNE-Python
Marijn van Vliet, Aalto University, Finland

Using MEG to study language development
Tiina Parviainen, University of Jyväskylä, Finland

Making use of machine learning in analyzing your MEG data
Lauri Parkkonen, Aalto University, Finland

Invited Symposium
Windows into Language: Benefits and Challenges of Combining Methods
Thursday, August 22, 2019, 1:45 – 3:30 pm, Finlandia Hall
Chair and Discussant: Sonja Kotz
Speakers: Evelina (Ev) Fedorenko, Riitta Salmelin, Kate Watkins

This symposium brings together researchers studying the neurobiology of language from different methodological perspectives. Talks will be followed by a panel discussion of the benefits and challenges of combining methods to study the neurobiology of language.

Sonja Kotz
Professor, Maastricht University

Sonja A. Kotz is a translational cognitive neuroscientist who investigates predictive coding and cognitive control in speech, language, and communication in healthy and patient populations. She utilizes a wide range of behavioral and neuroimaging methods (M/EEG, EEG oscillations, and functional and structural magnetic resonance imaging). She heads the section of neuropsychology at Maastricht University, the Netherlands, and holds several honorary professorships (Manchester, Glasgow, Leipzig, and Lisbon).

Evelina (Ev) Fedorenko
Associate Professor, MIT

Language is a remarkable system for expressing intricate ideas, unparalleled by any other animal communication system. This feature of language makes it both the holy grail of research on human cognition and one of the most challenging pursuits, due to the lack of animal models. The only solution to the latter is to use the rich arsenal of tools from cognitive science, neuroscience, and computer science in the hope of getting robust converging answers about the functional architecture of language. I will use the much debated question of the relationship between lexico-semantic and syntactic representations and processes to illustrate how fMRI, ECoG, and behavioral data from neurotypical adults and patients with brain lesions can sometimes paint a remarkably clear and consistent answer: in this case, a tight integration between lexical semantics and syntax. This answer further aligns with many current theoretical linguistic frameworks. I will also briefly talk about how advances in machine learning can help shed light on this and similar architectural questions, and perhaps bring us closer to computationally precise models of different linguistic processes.

About Evelina Fedorenko
Ev Fedorenko is a cognitive neuroscientist who specializes in the study of the human language system. She received her Bachelor's degree in Psychology and Linguistics from Harvard University in 2002. She then proceeded to pursue graduate studies in cognitive science and neuroscience at MIT.
After receiving her Ph.D. in 2007, she was awarded a K99/R00 career development award from NICHD and stayed on as a postdoctoral researcher and then a research scientist at MIT. In 2014, she joined the faculty at HMS/MGH. Fedorenko aims to understand the computations we perform and the representations we build during language processing, and to provide a detailed characterization of the brain regions underlying these computations and representations. She uses an array of methods, including fMRI, ERPs, MEG, intracranial recordings and stimulation, and tools from Natural Language Processing, and works with diverse populations, including healthy children and adults, as well as individuals with developmental and acquired brain disorders.

Riitta Salmelin
Professor, Aalto University, Finland

By now, we know what kind of MEG or fMRI activation patterns to expect in basic language paradigms, such as spoken or written word perception and picture naming. Based on this groundwork, it has been possible to address the neural correlates of language development, learning and disorders, and even to begin to elucidate the brain organization of meaning and knowledge. However, the choice of imaging measures can importantly influence the way we interpret brain function. MEG evoked responses, oscillatory power and real-time connectivity, as well as fMRI activation and slow haemodynamic interareal correlations, afford complementary views of language processing. I will discuss findings from experiments where both MEG and fMRI data were recorded from the same individuals, using the exact same language paradigms. Those studies have demonstrated similarities but also highlighted distinct functional sensitivities of different neuroimaging proxies. Together, the various MEG and fMRI measures promise rich possibilities for multiview imaging that can reach beyond a mere combination of the location and timing of neural activation and help to uncover the organizational principles of language function in the human brain.

About Riitta Salmelin
Riitta Salmelin is Professor of Imaging Neuroscience at the Department of Neuroscience and Biomedical Engineering, Aalto University. Her research focuses on two complementary lines of investigation: uncovering the neural organization of human language function through the use and development of imaging methods and computational modelling, and examining the sensitivity of MEG and fMRI activation and network measures to different neural and cognitive processes. She has pioneered the use of MEG in language research, and applied multimodal MEG/fMRI and interareal connectivity in the study of human cognition. She is the senior editor of the first handbook on MEG ("MEG: An Introduction to Methods", Oxford University Press, 2010) and Associate Editor of Human Brain Mapping. Honours include membership of the Academia Europaea, the Wiley Young Investigator Award of the Organization for Human Brain Mapping, and the Justine and Yves Sergent Award.

Kate Watkins
Professor of Cognitive Neuroscience, University of Oxford

There are many tools available for studying the neurobiology of language. Traditionally, measures such as fMRI, MEG and EEG provided only correlational information regarding where or when a brain area was activated during a task. Causal inference was typically reserved for studies of patients with brain lesions or those using brain stimulation. Both sets of methods have provided valuable insights into the neurobiology of language.
In this talk, I will provide examples of how these tools can be usefully combined. For example, data obtained using different imaging modalities in the same participants can constrain analyses and provide confirmatory evidence of abnormality affecting both structure and function in developmental disorders. By combining interference and measurement tools, we can ask questions about the causal role of brain areas or explore interactions between them and their connectivity. I will demonstrate how we have done this in studies of speech perception, using TMS to temporarily perturb brain function and then EEG or MEG to measure the effects.

About Kate Watkins
Kate Watkins is a Cognitive Neuroscientist in the Department of Experimental Psychology at the University of Oxford. She is a Fellow of St. Anne's College in Oxford, where she teaches Psychology. Kate trained in neuropsychology and neuroimaging at the Institute of Child Health, where she did her PhD studying the members of the KE family who have a mutation in FOXP2. She did a postdoc at the Montreal Neurological Institute with Tomas Paus, where she learned to use non-invasive brain stimulation to study the motor system in speech perception. Kate returned to the UK, to Oxford, working initially at the FMRIB Centre and then in Experimental Psychology, where she established the Speech and Brain Research Group. The group uses brain imaging and brain stimulation to study children and adults with and without disorders affecting speech and language. Current studies in the lab involve using brain stimulation to enhance fluency in people who stutter, brain imaging of children with developmental language disorder, brain stimulation to interfere with or enhance speech motor learning, and imaging to map the laryngeal motor cortex. Kate has also looked at plasticity for auditory and language functions in people who are congenitally blind.

Student and Postdoc Career Development Panel
Wednesday, August 21, 2019, 12:30 – 2:00 pm, Balcony Level Foyer

The SNL Student and Postdoc Career Development Panel will provide our student and postdoc members with the opportunity to ask five early career researchers about their experiences in the academic and non-academic job markets. Lunch will be provided, and our current student and postdoctoral representative, Angela Grant, will moderate the session and get the ball rolling with some prepared questions. Advance registration for this event is sold out. If you are interested in attending, please check with the Registration Desk to see if additional space has become available.

Moderator

Angela Grant
Missouri Western State University

Angela Grant currently works as a Postdoctoral Researcher in the Department of Psychology at Missouri Western State University. She earned her dual-title Ph.D. in Psychology and Language Science from the Pennsylvania State University, and previously worked as a Horizon Postdoctoral Fellow at Concordia University in Montréal. Angela's research focuses on bilingual language processing and its relationship with cognitive control, using behavioral, EEG, and MRI methodologies. Some of her current projects include discourse processing in bilinguals, speech-in-noise processing in bilinguals, and computational modeling of individual differences in executive control using drift diffusion models.
Panelists

Annika Hultén
Janssen-Cilag

Annika Hultén is a licensed psychologist with an MA in Psychology from Åbo Akademi University in Turku. She received her PhD in 2011 on the topic of the neural correlates of adult language learning, for work done at the Low Temperature Laboratory at Helsinki University of Technology. After her PhD she moved to the Netherlands, where she worked as Research Staff at the Max Planck Institute for Psycholinguistics until 2013. She then returned to Finland, to Aalto University, where she worked as an Academy of Finland postdoctoral researcher until 2019. Dr. Hultén has more than 20 peer-reviewed publications, including publications in PNAS and Nature Neuroscience. She is also a founding member of the Brain Twitter Conference, which annually reaches around 400,000 Twitter users. In March 2019, she moved to the pharmaceutical industry and started as a Medical Advisor at Janssen-Cilag, part of the Johnson & Johnson family of companies, where she continues to work with science, now with a focus on mood disorders.

Laurel Lawyer
University of Essex

Laurel Lawyer received her PhD in Linguistics from the University of California, Davis in 2015. Following this, she was a Postdoc in the UC Davis Center for Mind and Brain's Cognitive Neurolinguistics Laboratory with David Corina, investigating longitudinal changes in visual and auditory perception in children with cochlear implants. In 2017 she was hired at the University of Essex as a Lecturer in Psycholinguistics, where she joined the Centre for Research in Language Development throughout the Lifespan and started the Essex Speech and Sign Lab. She uses a variety of methods, including fMRI, EEG, eye tracking and other behavioural measures, as well as corpus studies. Her broad interests in the perception of language have led to projects on phonological and morphological alternations in English, on single sign perception in ASL, and on orthographic processing by deaf adults. Most recently, she has been working on ambient language processing in children and adults, and has a new project tracking the development of regional accent awareness in British school children.

Qingqing Qu
Chinese Academy of Sciences

Qingqing Qu is an Associate Professor of Psychology at the Chinese Academy of Sciences. She received her Ph.D. in 2013 from the University of Bristol in the UK, where she worked with Markus Damian. Since 2013, she has been an associate professor at the Institute of Psychology. Her research is concerned with the cognitive processes and mechanisms underlying language processing, including language production, spoken word recognition, and cross-modal visual-spoken interaction in both the first and second languages. She has been the recipient of several awards, including the Chinese government award for outstanding students studying abroad, a University of Bristol Commendation for Excellence in a Doctoral Thesis, and membership in the Youth Innovation Promotion Association, CAS.

Benjamin Schloss
Pennsylvania State University

Benjamin Schloss's early interest in mathematics and linguistics led him to study cognitive science and eventually pursue a PhD in Cognitive Psychology and Language Sciences. He is currently in the last year of his dual-title PhD program at the Pennsylvania State University.
A series of fortunate funding events that propelled him through data collection, a mountain of debt from his undergraduate education, and a likely genetic predisposition to anxiety led him to enter the job market precociously, at the end of the fourth year of his PhD program. After a short stint working as a computational scientist in the epidemiology department of a genetics company, he currently resides in Pittsburgh, where he is simultaneously working as a full-time Machine Learning Engineer for Abridge AI, a venture-capital-backed healthcare tech startup, and writing and publishing his dissertation research.

In industry (except for the genetics research) and in academia, his work has always focused on language. He makes heavy use of behavioral, eye-tracking, MRI, and advanced cognitive models (machine learning, deep learning, artificial intelligence, etc.) to understand how meaning is represented computationally as well as in the brain. He is also interested in how semantic processing in bilingualism may differ and how this interacts with memory and cognitive control (the topic of his dissertation). As a machine learning engineer, he works primarily on Natural Language Processing and Natural Language Understanding of dialogues between patients and caregivers in medical settings.

David Thornton
Gallaudet University

David Thornton is an Assistant Professor in the Department of Hearing, Speech, and Language Sciences at Gallaudet University in Washington, D.C., USA. He started this position in 2018, while finishing up his Ph.D. at the University of Tennessee Health Science Center in Knoxville, Tennessee, USA. His research examines sensorimotor processing, with a particular focus on how listener sex influences these processes. This research is intended to investigate the higher prevalence of certain disorders in males, which are often characterized by sensorimotor deficits. Beyond research, David also serves on the Communication Sciences and Disorders Advisory Board for Longwood University. In his spare time, he enjoys a good hockey game and trying to travel the world.

Distinguished Career Award: Jeffrey Binder
The Society for the Neurobiology of Language is pleased to announce the 2019 Distinguished Career Award winner: Jeffrey Binder.

Jeffrey Binder
Professor of Neurology and Biophysics, Medical College of Wisconsin, Milwaukee

Dr. Jeffrey R. Binder is a cognitive neurologist whose research centers on the neural bases of language functions. He completed his undergraduate studies and his MD at the University of Nebraska, followed by a Neurology residency at Columbia University. As a postdoctoral fellow at the Neurological Institute of New York, his study of pure alexia provided some of the first evidence for a specific orthographic perceptual system in the left fusiform gyrus. In 1992 he joined the Neurology Department at the Medical College of Wisconsin, where he led several of the earliest fMRI studies on language and speech perception. His large body of work is characterized by detailed attention to theoretical issues, experimental control, and anatomical precision. Beyond his seminal contributions to the understanding of lexical representations in the human brain, he also conducts research on acquired language disorders and language lateralization in healthy and clinical populations, and he has made critical contributions to developing the field of presurgical fMRI.
Beginning with his pioneering use of neuroimaging to study language function, Dr. Binder has been a leader in the study of the neurobiology of language. His extensive research on lexical semantics has delineated specific contributions of sensory and motor systems to concept representation, culminating in a novel high-dimensional brain-based model of word meaning. His work has contributed significantly to longstanding debates on abstract and concrete word representation. In another line of work, Dr. Binder published the first fMRI study of speech perception in 1994, and in 2000 characterized hierarchical processing of nonspeech and speech sounds in the superior temporal cortex. In yet a separate line of study, Dr. Binder published the first validation study comparing language lateralization through preoperative fMRI and invasive Wada testing, followed by several large-scale studies examining factors associated with language lateralization and fMRI-Wada discordance. Binder and colleagues also published the first (and still the largest) study showing that preoperative fMRI is predictive of postoperative language and verbal memory outcomes. This work played a large role in recent guidelines developed by the American Academy of Neurology concerning presurgical language fMRI. Dr. Binder's contributions to the basic science of reading, speech perception and semantics, along with clinical contributions including presurgical language mapping and the understanding of acquired language disorders, exemplify the interdisciplinary ethos and mission of the Society for the Neurobiology of Language.

Language Neuroscience in the Modern Age: A personal perspective
Tuesday, August 20, 5:00 – 5:45 pm, Finlandia Hall
Chair: Karen Emmorey

Dramatic advances in our understanding of language systems in the brain have accrued over the past 25 years as a result of functional neuroimaging experiments. These data make it clear that many of the core features of classical biological models of language are no longer tenable. The highly localist view of the left posterior perisylvian region as a comprehension module, for example, has given way to a model in which language comprehension occurs over widely distributed networks for speech perception, syntactic processing, and semantic cognition. The classical view of the left inferior frontal gyrus as a module for language production has similarly been supplanted by a model involving distributed networks for concept retrieval, lexical selection, syntactic processing, phonological access, and articulatory planning. Many of these networks participate in both comprehension and production, blurring the biological distinction between these complex functions. The speaker will give an overview of these neurobiological advances and discuss some developments in the areas of phoneme perception, reading, semantic cognition, and clinical language mapping to which he and his colleagues contributed. An argument will be made for the central role of concept retrieval and selection in many facets of human cognition, including language.

The Distinguished Career Award is sponsored by [sponsor logo].

Early Career Award: Jonathan Brennan
The Society for the Neurobiology of Language is pleased to announce the 2019 Early Career Award winners: Jonathan Brennan and Vitória Piai.

Jonathan Brennan
Assistant Professor of Linguistics and Psychology, Department of Linguistics, University of Michigan
After completing post-doctoral research at the Children's Hospital of Philadelphia and the Neuroscience of Language Laboratory of NYU Abu Dhabi, he joined the faculty at the University of Michigan in 2012 as an Assistant Professor of Linguistics and Psychology. He was promoted to Associate Professor in 2019.
As director of the Computational Neurolinguistics Laboratory at Michigan, Dr. Brennan is one of the pioneers in the study of language comprehension using naturalistic stimuli, such as audiobooks. The naturalistic approach is made possible by the use of rigorous and explicit computational models that specify how different linguistic representations and computations are deployed word by word. This work has contributed importantly to our understanding of how syntactic structure is built and used by the brain during comprehension, and to our understanding of predictive processing in children with developmental disorders who may have difficulty completing standard laboratory tasks. Dr. Brennan combines EEG, MEG, and fMRI to obtain neural signals for computational modelling and has brought together a remarkably interdisciplinary group of neuroscientists, linguists, psychologists, and computer scientists as collaborators. Dr. Brennan's research is funded by the National Science Foundation. Though early in his career, Dr. Brennan has become a sought-after voice for linguistically informed cognitive neuroscience of language, with numerous invited presentations and publications aimed at making neurobiological methods accessible for linguists and linguistics accessible for neuroscientists.
The neural dynamics of naturalistic language comprehension
Wednesday, August 21, 9:45 – 10:30 am, Finlandia Hall
Chair: Manuel Carreiras
The cognitive neuroscience of language relies largely on controlled experiments that are different from the everyday situations in which we use language. I discuss an approach to studying specific aspects of sentence comprehension in the brain using data collected while participants perform an everyday task, such as listening to an audiobook story. The approach uses 'neurocomputational' models that are based on linguistic and psycholinguistic theories. These models quantify how a specific computation, such as identifying a syntactic constituent, might be carried out by a neural circuit word by word. Model predictions are tested for their statistical fit with measured brain data. By comparing the fit of the models to electrophysiological and hemodynamic data, we can tease out the spatio-temporal dynamics of specific aspects of structure-building.
Early Career Award: Vitória Piai
Vitória Piai, Associate Principal Investigator, Radboud University, Donders Centre for Cognition, and Radboud University Medical Center, Department of Medical Psychology, Nijmegen, the Netherlands
Dr. Vitória Piai received her Ph.D. in Cognitive Neuroscience from Radboud University, Nijmegen, The Netherlands, in 2014.
After completing post-doctoral research at the Helen Wills Neuroscience Institute, University of California, Berkeley, and the Center for Aphasia and Related Disorders, Veterans Affairs Northern California Health Care System, Martinez, she returned to Radboud University as an Associate Principal Investigator at the Donders Centre for Cognition and the Department of Medical Psychology at the Radboud University Medical Center, Nijmegen.
In her current role as an associate principal investigator, Dr. Piai pursues fundamental and applied research on the neurobiological basis of language in healthy adults and in adults with impairments of various etiologies. A common theme in her intuitive, creative, and highly successful investigations is the linking of language, memory, and executive functioning through uncovering shared neurobiological mechanisms. Electrophysiology forms the core of her interdisciplinary research program, which she combines with behavioral measures, non-invasive brain stimulation, lesion-symptom models, and computational modeling. This multi-method approach has resulted in great theoretical advances, as illustrated in one of her influential studies published in PNAS that aimed at understanding the neurobiological basis of lexical-semantic processing using intracranial electrophysiology. She has already published no fewer than 39 peer-reviewed papers in top journals, obtained 3 substantial grants to fund her research, and has established herself as a rising star in the neuropsychology of language.
As we speak
Wednesday, August 21, 9:45 – 10:30 am, Finlandia Hall
Chair: Manuel Carreiras
Producing words involves not only the preparation and execution of an articulatory programme, but also access to conceptual and lexical information in long-term memory, as well as controlled processes for selection and monitoring. In this talk, I will present how I have been studying language production in relation to the domains of memory and executive functioning, mainly using electrophysiology, as this technique provides a characterisation of neuronal activity at the resolution necessary to understand language processes as they occur. I will also show how this approach has allowed me to study how the brain reorganises following disruptions of normal functioning.
The Early Career Awards are sponsored by Brain & Language (Elsevier).
Abstract Merit Awards
The Society for the Neurobiology of Language Abstract Merit Awards are given to the students and postdocs who submitted the highest ranked abstracts.
Graduate Student Merit Award Winners
Nikki Janssen, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
Charlotte Wiltshire, University of Oxford, UK
Post Doctoral Merit Award Winners
Muge Ozker Sertel, New York University School of Medicine, USA
Oscar Woolnough, University of Texas Health Science Center at Houston, USA
Honorable Mention
Salomi Asaridou, University of California, Irvine, USA
Rachel Romeo, Massachusetts Institute of Technology, USA
Travel Awards
This year, the Society for the Neurobiology of Language granted 15 Travel Awards. The awards, funded by the National Institutes of Health (NIH), help to cover travel and registration costs for the 2019 Society for the Neurobiology of Language Meeting in Helsinki. Through the travel awards, SNL aims to encourage and foster the participation of junior scientists who are members of underrepresented groups.
The 2019 Travel Awards were given to:
Jose Aguasvivas, University of Basque Country UPV/EHU, Spain
Nicolás Araneda Hinrichs, University of Concepción, Chile
Melissa Baker, City University of New York, USA
Jocelyn Caballero, University of California, San Francisco, USA
Nicole Dickerson, Johns Hopkins University, USA
Shirley Dumassais, Concordia University, Canada
Xiaoxia Feng, Beijing Normal University, China
Dimitrios S. Kasselimis, School of Medicine, University of Crete, Greece
Sigfus Kristinsson, University of South Carolina, USA
Kelly Michaelis, Georgetown University, USA
Alberto Osa Garcia, Université de Montréal, Canada
Lorelei Phillip Johnson, University of South Carolina, USA
Sarah Phillips, New York University, USA
Ekaterina Stupina, National Research University Higher School of Economics, Moscow, Russia
Alex Teghipco, University of California, Irvine, USA
Elizabeth Valles Capetillo, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Mexico
Rachel Elizabeth Weissler, University of Michigan, USA
Benedikt Zoefel, MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
Andrey Zyryanov, National Research University Higher School of Economics, Moscow, Russia
Attendee Resources
Abstracts
The full text of poster and slide abstracts can be found in the SNL 2019 Abstracts Book, which can be downloaded in PDF format from www.neurolang.org.
Airport Information
Helsinki Airport is located 21.1 km (12.5 miles) from downtown Helsinki. For more information on getting from the airport to the city centre, see Transportation.
ATM
There are no ATMs within Finlandia Hall. There is an ATM located in the railway station area.
Audio-Visual
An LCD projector (for PowerPoint presentations) will be provided in Finlandia Hall; however, computers are NOT provided. Presenters must bring their own computers and set them up BEFORE the start of the session in which they are presenting. Presenters must arrive at the auditorium a minimum of 30 minutes before their talk. Poster Slam slides are collected before the meeting and loaded onto a single laptop, which will be in the talk room for your presentation.
Business Meeting
The SNL Business Meeting is Thursday, August 22 at 9:30 am. All SNL members are encouraged to attend. This is your opportunity to hear about SNL, ask questions, and give feedback.
Certificate of Attendance
To receive a Certificate of Attendance, please visit the Registration Desk. If you require any changes, we will be happy to email or mail a copy after the meeting (info@neurolang.org).
Childcare
Thanks to funding from the National Institutes of Health, SNL is pleased to offer onsite childcare at this year's meeting in Helsinki. Childcare is located in the Orchestra Lounge.
Childcare Hours
Tuesday, August 20, 8:30 am – 6:00 pm
Wednesday, August 21, 7:45 am – 7:15 pm
Thursday, August 22, 8:15 am – 6:15 pm
Code of Conduct
The Society for the Neurobiology of Language is committed to providing a safe and professional environment during our annual meeting. All attendees are expected to conduct themselves in a professional manner. It is unlawful to harass any person or employee because of that person's gender, sexual orientation, or race. In addition, we require that all questions and comments to speakers and poster presenters be respectful and collegial. Verbal aggression will not be tolerated.
Contact Us
To contact us onsite, visit the Registration Desk in the Finlandia Hall Foyer, or send an email to info@neurolang.org. We will respond to your email at our earliest opportunity.
Copying & Printing
Posters can be printed at Unigrafia (the University of Helsinki print house). You can order online for pickup at the city centre location (Fabianinkatu 28), not far from Finlandia Hall. Paper posters may take up to three working days to print.
Disclaimer
The SNL Program Committee reserves the right to make changes to the meeting program at any time without notice. This program was correct at the time of printing.
Exhibits
SNL is pleased to have the following companies exhibiting this year: Artinis Medical Systems BV, GE Healthcare, MIT Press (Neurobiology of Language), Nexstim, Rogue Research Inc., Siemens Healthineers, and Taylor & Francis (Language, Cognition & Neuroscience). All exhibits are located in the Finlandia Hall Foyer.
Exhibit Hours
Tuesday, August 20, 8:30 am – 5:00 pm
Wednesday, August 21, 8:00 am – 7:00 pm
Thursday, August 22, 8:00 am – 4:30 pm
Food Service
Complimentary food and beverage service is available to all registered attendees at the following times:
Tuesday: Morning Coffee, 8:00 – 9:00 am; Coffee Break, 10:15 – 10:45 am; Afternoon Coffee & Snack, 3:15 – 3:45 pm; Opening Night Reception, 7:00 – 8:30 pm
Wednesday: Morning Coffee, 7:30 – 8:30 am; Coffee Break, 10:45 – 11:15 am; Afternoon Coffee & Snack, 3:30 – 4:00 pm; Social Hour, 5:15 – 7:00 pm
Thursday: Morning Coffee, 7:30 – 8:30 am; Coffee Break, 10:15 – 10:45 am; Afternoon Coffee & Snack, 3:45 – 4:15 pm
Future Meetings
SNL 2020: October 21-23, 2020, Philadelphia, USA
SNL 2021: October 7-9, 2021, Brisbane, Australia
SNL 2022 and Beyond: We are currently accepting bids for hosting the 2022, 2023, and 2024 meetings. See https://www.neurolang.org/future-meeting-proposal/ to submit a proposal.
Guest Policy
Guests are allowed complimentary entry into one SNL session (to see the poster or slide presentation of the attendee they are accompanying). Guests are welcome to attend the Opening Night Reception. Guests must register at the SNL Registration Desk upon arrival and must be accompanied by the SNL attendee. Guests must wear a badge for entrance into the session they are attending.
Internet Access
Internet access is free throughout Finlandia Hall. Connect to "FinlandiaHallSecure." The password is "FinlandiaHallSecure."
Lost & Found
Please check with the SNL Registration Desk for lost and found items.
Lunch Breaks
SNL provides a 1.5-hour lunch break each day. Café Veranda, which features fresh, organic products, is located on the street level of Finlandia Hall. The glass walls and summer terrace offer an inspiring view of Töölö Bay.
In addition, attendees can venture out and discover some of the delicious local eateries nearby. A list of local restaurants is available at the Registration Desk.
Meeting Rooms
All general sessions (Keynotes, Award Talks, Slides, Slams, and the Symposia) will be held in Finlandia Hall. Posters will be presented in Restaurant Hall.
Messages
A bulletin board will be available for messages and job postings near the SNL Registration Desk.
Mobile Phones
Attendees are asked to silence their mobile phones when in sessions.
Name Badges
For security purposes, all attendees must wear their name badge to all sessions and social functions. Entrance into sessions is restricted to registered attendees only. If you misplace your name badge, please go to the Registration Desk for a replacement.
Parking
Q-Park Finlandia is located nearby and is connected to Finlandia Hall by an underground walkway during the hours that Finlandia Hall is open. The entrance to the car park is on Karamzininranta. Parking charges apply; the 650-space facility is operated by Q-Park. For more information, please contact Q-Park on +358 20 781 2400. Two hours of free parking are provided to veterans and to disabled and mobility-impaired visitors. Tickets for these spaces are available at the Finlandia Hall Service Point on Level 1.
Phone Charging Station
For your convenience, a phone charging station is located at the Registration Desk.
Poster Printing
See Copying & Printing.
Public Transportation (Free 3-day Pass)
Get around safely and sustainably using Helsinki Region Transport (HRT/HSL), Helsinki's public transit system. HRT/HSL will be supporting SNL 2019 by offering FREE public transportation for all meeting attendees. The 3-day tickets are valid in the greater Helsinki area, including the neighboring cities of Espoo, Vantaa, and Kauniainen. Detailed information about public transportation in Helsinki can be found at https://www.hsl.fi/en. In addition, the HSL app is available for Android and iPhone devices at https://www.hsl.fi/en/app.
Registration
The SNL Registration Desk is located in the Finlandia Hall Foyer.
Registration Desk Hours
Tuesday, August 20, 7:00 am – 5:45 pm
Wednesday, August 21, 7:00 am – 7:00 pm
Thursday, August 22, 7:30 am – 6:00 pm
Smoking
Smoking, including the use of e-cigarettes, is not permitted inside Finlandia Hall. There are designated smoking areas outside of the M4 and K4 entrances.
Social Events
Opening Night Reception
Join your colleagues on Tuesday, August 20 at 7:00 pm for an elegant evening of food, drinks, and stimulating conversation in the exquisite Banquet Hall of Helsinki City Hall, Pohjoisesplanadi 11–13. Designed in 1833 by the famous German architect Carl Ludvig Engel, City Hall features a beautiful white and blue façade in the imperial style and overlooks the bustling Market Square. The interior was modernized in the 1960s, but the resplendent Banquet Hall has retained its elegant 19th-century form.
To attend the Opening Night Reception, you must present your invitation at the door. Guests of registered SNL attendees are welcome to attend the reception but must also have an invitation. Invitations can be picked up at the Registration Desk.
The walk from Finlandia Hall to Helsinki City Hall is 1.8 km, about 20 minutes.
You can also take tram 4 from Finlandia Hall to the beautiful Senate Square and then walk one block to the City Hall. For guests needing extra assistance getting to the event, please contact the SNL Registration Desk.
Wednesday Evening Social Hour
Attendees are invited to enjoy a special Social Hour in Restaurant Hall during the Wednesday evening poster session. The event will feature complimentary local delicacies, and the first drink is on us! Look for your free drink ticket in the back of your badge. In addition, a cash bar will be available.
Social Media
Join the SNL discussion on Twitter! Follow @SNLmtg for meeting information, follow SNL colleagues (like @kemmorey1), tag meeting-related tweets with #snlmtg19, and join in the conversation by searching for tweets tagged #snlmtg19.
Speakers
See the Audio-Visual section above.
Transportation
Car Rental
There are six car-rental companies at Helsinki Airport: Avis, Budget, Enterprise, Europcar, Hertz, and Sixt. Their service desks can be found in the corridor between terminals 1 and 2. Rental cars can be picked up from parking hall P3.
Taxi
Airport taxis pick up in front of terminal 1 and on the ground floor at terminal 2. The following taxi agencies serve the airport as its contractual partners: Lähitaksi, Vantaan Taksi – Helsinki Airport Taxi, and Taksi Helsinki. Agency-specific price information is available in front of the terminals, on the information screens next to the taxi ranks. Finlandia Hall's taxi stand can be found on Karamzininranta, in front of the K1 entrance. Taxis can also be asked to pick up guests on the Mannerheimintie side of the building.
Train
From the Airport: The train connection between Helsinki Airport and the Helsinki city centre takes about 30 minutes. There are two services: Train I and Train P. Schedules, ticket prices, and routes are available on the HSL website. Tickets can be purchased at kiosks throughout the airport (tickets are not available for purchase on the train).
Local Travel: Free 3-day train tickets are available at the SNL Registration Desk. See Public Transportation for details.
Uber
Uber operates in Helsinki and is another option for transportation to and from the airport.
Venue Location
Finlandia Hall is located at Mannerheimintie 13 e, 00100, Helsinki.
Neurobiology of Language Journal
Neurobiology of Language is the new open access journal sponsored by the Society for the Neurobiology of Language and MIT Press. Launched in March 2019, the journal provides a new venue for articles across a range of disciplines addressing the neurobiological basis of speech and language. The journal is supported by an international, gender-balanced editorial board with wide-ranging expertise. It is fully open access, published online, has author-friendly submission, and offers rigorous double-blind peer review, including consensus reviewing. Article Processing Charges are significantly reduced to $800 per article for authors who are members of SNL. The editors are now accepting submissions for the inaugural issue on work that significantly advances the understanding of language mechanisms as implemented in the human brain. The journal's Editors-in-Chief, Steven L. Small and Kate E. Watkins, will be at the MIT Press booth during the meeting. Stop by if you have anything you would like to discuss or to learn more about the journal and how to submit articles. Find out more at: https://www.mitpressjournals.org/nol
European Dyslexia Association & International Dyslexia Association
www.eda-info.eu and www.dyslexiaida.org
We now welcome the participation of researchers in our conferences.
EDA Academic Committee
Professors Franck Ramus, Gerd Schulte-Körne, Maggie Snowling, and Karin Landerl
IDA Scientific Advisory Board
Professors Fumiko Hoeft, Richard Wagner, Jason Yeatman, Elsje van Bergen, Don Compton, Laurie Cutting, Holly Fitch, Elena Grigorenko, Charles Hulme, Lynn Fuchs, Cammie McBride, Ken Pugh, Sally Shaywitz, and Julie Washington
Sponsors and Contributors
The Society for the Neurobiology of Language thanks the following sponsors for their support of our 2019 meeting. Please visit our exhibitors in the Finlandia Hall Foyer.
Major Sponsor
National Institutes of Health
The Society for the Neurobiology of Language is generously supported by the National Institutes of Health (R13 grant #DC011445). The NIH has been supporting SNL meetings by sponsoring travel grants to under-represented minorities, daycare services, sign language interpreting services, and more, thus enhancing the accessibility of the meetings to various audiences. We are extremely grateful to the NIH for its generous support of SNL meetings over the years.
Platinum Sponsor
Neurobiology of Language (The MIT Press)
The MIT Press commits daily to re-imagining what a university press can be. Known for bold design and creative technology, the Press advances knowledge by publishing significant works from leading scholars and researchers around the globe for the broadest possible access, impact, and audience. Committed to exploring new disciplines and modes of inquiry, the Press publishes 220+ new books a year and over 35 journals in a wide range of fields. The Press is proud to partner with the Society for the Neurobiology of Language to publish Neurobiology of Language.
Award Sponsors
Brain & Language (Elsevier)
An interdisciplinary journal, Brain & Language focuses on the neurobiological mechanisms underlying human language. The journal covers the large variety of modern techniques in cognitive neuroscience, including lesion-based approaches as well as functional and structural brain imaging, electrophysiology, cellular and molecular neurobiology, genetics, and computational modeling. All articles must relate to human language and be relevant to an elaboration of its neurobiological basis. Along with an emphasis on neurobiology, journal articles are expected to take into account relevant data and theoretical perspectives from psychology and linguistics. Brain & Language (Elsevier) is the sponsor of the Early Career Awards.
Language, Cognition & Neuroscience (Routledge)
Language, Cognition & Neuroscience publishes high-quality papers taking an interdisciplinary approach to the study of brain and language. The journal publishes both theoretically motivated cognitive-behavioural studies of language function and papers that integrate cognitive theoretical accounts of language with its neurobiological foundations. Language, Cognition & Neuroscience (Routledge) is the sponsor of the Distinguished Career Award.
Silver Sponsors
European Dyslexia Association & International Dyslexia Association
The EDA and IDA are organizations for people with dyslexia, their families, professionals, and academic researchers.
EDA is a European umbrella organization for national and regional associations; the IDA has 43 branches in the U.S. and 26 global partners. Our mission is to facilitate the exchange of information and good practice through international networking and lobbying. Currently, we are increasing the participation of researchers in order to link research and practice. For more information, please visit https://www.eda-info.eu and https://dyslexiaida.org.
GE Healthcare
GE Healthcare is a leading provider of medical imaging, monitoring, biomanufacturing, and cell and gene therapy technologies. GE Healthcare enables precision health in diagnostics, therapeutics, and monitoring through intelligent devices, data analytics, applications, and services. With over 100 years of experience and leadership in the healthcare industry and more than 50,000 employees globally, GE Healthcare helps healthcare providers, researchers, and life sciences companies in their mission to improve outcomes for patients around the world. Visit our website www.gehealthcare.com for more information.
Rogue Research Inc.
Rogue Research has been your partner in non-invasive brain stimulation for almost 20 years. We pioneered neuronavigation for TMS with Brainsight and continue this leadership role by developing the most advanced TMS stimulator, the Brainsight cTMS. cTMS offers the ability to manipulate key parameters of the TMS pulse, including pulse width and directionality, and opens new avenues for stimulation research. Rogue Research also provides tools for basic science, including our Brainsight-driven microsurgical robot and a deep brain stimulator designed specifically for animal studies. We can also develop custom hardware solutions for your research needs.
Siemens Healthineers
Siemens Healthineers enables healthcare providers worldwide to increase value by empowering them on their journey towards expanding precision medicine, transforming care delivery, improving patient experience, and digitalizing healthcare. A leader in medical technology, Siemens Healthineers is constantly innovating its portfolio of products and services in its core areas of diagnostic and therapeutic imaging and in laboratory diagnostics and molecular medicine. Siemens Healthineers is also actively developing its digital health services and enterprise services.
Nexstim
Nexstim is a medical technology company pioneering the development of noninvasive brain stimulation technologies. The company's proprietary SmartFocus™ technology, with highly sophisticated 3D navigation, is the only truly personalized navigated transcranial magnetic stimulation (nTMS) approach used for both diagnostics and therapy. Nexstim's NBS system is the only FDA-cleared and CE-marked navigated TMS system for pre-surgical mapping of the speech and motor cortices of the brain. In addition, the company is commercialising its Navigated Brain Therapy (NBT®) system for the treatment of Major Depressive Disorder (MDD).
ANT Neuro
ANT Neuro is a Dutch-based, internationally established corporation specialized in the development, manufacturing, and sales of medical and research applications, including EEG, aEEG, EMG, MRI, TMS, and MEG technology. ANT Neuro specializes in being a single-source provider of innovative, high-performance products within neuroscience, neurocare, neuromodulation, and neonatology.
Event Sponsors
CICERO Learning
CICERO Learning is a network for distinguished researchers and research groups on learning, brain, and technology. The researchers of the network are based in different universities and research institutes in Finland. The network builds co-operation with research groups and units around the world. CICERO Learning is the sponsor of the Tuesday afternoon coffee break.
Lingsoft
Lingsoft is a full-service language management company and one of the leading providers of language services and solutions in Finland and the Nordic countries. We provide a wide range of services and solutions for the analysis, processing, production, and management of spoken and written language. Lingsoft is the sponsor of the Wednesday afternoon coffee break.
Local Contributors
City of Helsinki
Helsinki is the capital of Finland and the centre of the Helsinki Region, a functional urban region of about 1.48 million inhabitants and 767,000 jobs. Founded in 1550, it is the world's northernmost metropolitan area with over one million people, as well as the northernmost capital of an EU member state. Helsinki is the country's most important centre for politics, education, finance, culture, and research, and it has one of the highest urban standards of living in the world.
Helsinki Region Transport
Helsinki Region Transport is a joint local authority whose task is to develop and provide smooth, reliable transport solutions for customers' needs in the region. In 2017, a total of 375 million journeys were made on HSL's transport services. HSL strives to make the Helsinki region the most functional urban region in the world, promoting attractive, eco-friendly mobility options.
Aalto Brain Centre
Aalto Brain Centre is a strategic initiative within the Aalto University School of Science. ABC relies on the strong expertise of the school's research teams in systems neuroscience, neurotechnology, signal analysis, machine learning, network analysis, brain imaging, psychology, cognitive science, clinical neuroscience, naturalistic neuroscience, physics, and engineering.
Aalto University
Aalto University, with its main campus in Otaniemi, Espoo, is a multidisciplinary community of 20,000 bold thinkers, committed to identifying and solving grand societal challenges and building an innovative future. Aalto University was established in 2010 through the merger of three leading Finnish universities: Helsinki University of Technology, Helsinki School of Economics, and the University of Art and Design Helsinki.
University of Helsinki
Established in 1640, the University of Helsinki is the oldest and largest institution of academic education in Finland, an international scientific community of 40,000 students and researchers. In international university rankings, the University of Helsinki typically ranks among the top 100. The University of Helsinki seeks solutions for global challenges and creates new ways of thinking for the best of humanity.
Slide Sessions
Slide Schedule
Slides are held in Finlandia Hall.
Session A: Tuesday, August 20, 1:30 – 3:00 pm. Chair: Seana Coulson
Session B: Wednesday, August 21, 2:00 – 3:30 pm. Chair: Jamie Reilly
Session C: Thursday, August 22, 10:45 am – 12:15 pm. Chair: Clara Martin
Slide Session A
Tuesday, August 20, 1:30 – 3:00 pm, Finlandia Hall
Chair: Seana Coulson
Speakers: Rachel Romeo, Anna Martinez-Alvarez, Claudia Männel, Linda Lönnqvist
1:30 pm
A1 Cortical plasticity associated with a parent-implemented language intervention
Rachel Romeo1,2, Julia Leonard1,3, Hannah Grotzinger1, Sydney Robinson1,3, Megumi Takada1, Joshua Segaran1, Allyson Mackey1,3, Meredith Rowe4, John Gabrieli1,4; 1Massachusetts Institute of Technology, 2Boston Children's Hospital, 3University of Pennsylvania, 4Harvard University
Introduction: Children's early language experiences, including high-quality parent-child interactions, are related to their linguistic, cognitive, and academic development, as well as to their brain structure and function (Romeo et al., 2018). On average, children from lower socioeconomic status (SES) backgrounds receive reduced language exposure. Recently, several parent-implemented interventions have resulted in both improved home language environments and increases in children's language skills (e.g., Leech et al., 2018; Ferjan Ramirez et al., 2018). However, the neuroplastic mechanisms underlying this modification of children's language input-output relationship are as yet unknown. Methods: One hundred lower-SES 4-to-6-year-old children and their primary caregivers were randomly assigned to either a 9-week family-based intervention or a no-contact control group. The intervention centered on an interactive, culturally sensitive curriculum, during which trained facilitators led didactic small-group sessions on using responsive "meaningFULL language" to enhance children's communication, executive functioning, and school readiness, provided in either English or Spanish. Children completed pre- and post-assessments of verbal and nonverbal cognitive skills, and subsets of each participant group additionally completed two full days of auditory home language recording (with LENA) and structural neuroimaging, from which longitudinal cortical thickness changes were calculated using FreeSurfer. Results: Controlling for baseline measures, families who completed the intervention exhibited significantly more adult-child conversational turns than families assigned to the control group; however, there was still wide variation in response. A 3-way interaction revealed that, within the intervention group only, the magnitude of change in conversational turn-taking was positively correlated with increases in children's receptive and expressive language scores. Furthermore, change in turn-taking was significantly positively correlated with cortical thickening in language-related left inferior frontal regions, as well as social-related right supramarginal regions. Conclusions: This study provides the first evidence of neural plasticity as a result of perturbations in children's early language environments. Results suggest that the neural mechanisms underlying the effect of parent-implemented language interventions on children's language skills may lie in cortical plasticity of both canonical language and social regions during development. These findings have translational implications for social, educational, and clinical policies involving early intervention.
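As a rough illustration of the kind of group-dependent brain-behavior analysis described above, the following minimal sketch tests a group × turn-taking-change interaction with ordinary least squares. It uses synthetic data, and all variable names are ours, not the authors'.

```python
# Hypothetical sketch of a group x turn-taking-change interaction analysis
# (synthetic data; not the authors' code or variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
group = rng.integers(0, 2, n)                    # 0 = control, 1 = intervention
turn_change = rng.normal(0, 1, n) + 0.8 * group  # change in conversational turns
# Simulate cortical thickening that tracks turn change in the intervention group only
thick_change = 0.5 * turn_change * group + rng.normal(0, 1, n)

df = pd.DataFrame({"group": group,
                   "turn_change": turn_change,
                   "thick_change": thick_change})

# The interaction term asks whether the turn-change slope differs by group
model = smf.ols("thick_change ~ turn_change * group", data=df).fit()
print(model.summary())
```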
1:50 pm
A2 Neural networks of non-adjacent rule learning in infancy
Anna Martinez-Alvarez1, Judit Gervain1, Elena Koulaguina2,3, Ferran Pons2,3,4, Ruth De Diego-Balaguer2,3,4,5; 1CNRS-Université Paris Descartes, 2University of Barcelona, 3Cognition and Brain Plasticity Unit, 4Institute for Brain, Cognition and Behaviour, 5ICREA
One essential mechanism advocated to underlie infant grammar acquisition is rule learning. Previous research investigated the neural networks of repetition-based rule learning (ABB; e.g., "mubaba," "penana") in neonates and found increased responses to repetition sequences in temporal and left frontal regions (Gervain et al., 2008). However, the learning of rules involving non-adjacent elements in the absence of repetition-based cues (AXB; e.g., "pel wadim rud," "pel loga rud") is only observed after the first year of life (Gómez & Maye, 2005). Recent proposals account for this developmental trajectory by postulating that infants' attentional system may support language development (de Diego-Balaguer et al., 2016). The present study reports four experiments in which we test the hypothesis that prosodic cues promote the learning of non-repetition-based regularities in infancy. Prosodic cues (e.g., pitch manipulation) are used as a proxy for exogenous attention capture, which is already present in early infancy. We predict that the use of exogenous attention mechanisms will allow young infants to learn the rules. In two behavioral and two fNIRS experiments, we presented 8-10-month-old infants (n = 83) with sequences containing an AXB-type structure, where A and B predict one another with certainty ("pedibu," "pegabu"), or a random control structure ("dibupe," "bugape"). The stimuli either contained or lacked pitch cues on the dependent (A and B) elements. Infants' rule discrimination was measured behaviorally using a Central Fixation Procedure, and infants' brain activity (hemodynamic response) was measured in the temporal, parietal, and frontal lobes using functional near-infrared spectroscopy (fNIRS). In the absence of pitch cues, behavioral results show that infants are unable to discriminate rule-following from random control structures. At the neural level, larger activation (oxyHb) is observed in temporal areas, but no difference between conditions (rule vs. no rule) arises, suggesting that infants' brains process the auditory stimuli similarly in both conditions. However, in the presence of prosodic cues highlighting the elements to be learned, infants show successful rule learning behaviorally, and significantly larger activation (oxyHb) for the rule condition is observed in bilateral temporal and frontal areas. These results suggest that infants' use of prosodic cues present in the input facilitates their learning of rules. This study contributes to our understanding of the brain substrates of rule learning, suggesting that the powerful attention system infants are equipped with early in life may assist language learning.
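The core contrast reported above, larger oxyHb responses for the rule condition than the control condition, could be tested in a form like the following minimal sketch. The data, channel counts, and effect sizes are entirely invented for illustration.

```python
# Hypothetical sketch of the rule vs. control oxyHb contrast
# (synthetic data; sample size, channels, and effects are illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_infants, n_channels = 40, 8
# Mean oxyHb change per infant and channel, for each condition
oxyhb_rule = rng.normal(0.15, 0.5, (n_infants, n_channels))
oxyhb_control = rng.normal(0.00, 0.5, (n_infants, n_channels))

# Average over channels of interest, then a paired test across infants
rule_mean = oxyhb_rule.mean(axis=1)
control_mean = oxyhb_control.mean(axis=1)
t, p = stats.ttest_rel(rule_mean, control_mean)
print(f"rule vs. control: t = {t:.2f}, p = {p:.4f}")
```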
2:10 pm
A3 Basic acoustic features of the learning context shape infants' lexical acquisition
Claudia Männel1,2,3, Hellmuth Obrig1,2, Arno Villringer1,2, Merav Ahissar4, Gesa Schaadt1,2,3; 1Medical Faculty, University of Leipzig, 2Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 3Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 4Department of Psychology, The Hebrew University of Jerusalem
In language acquisition, infants benefit from salient acoustic marking and repetition of the words to be learned. However, beyond the features of the learning items themselves, the basic acoustic and phonological characteristics of the learning context have not been examined in detail for lexical acquisition. Those contextual features may be highly relevant, given that speakers tend to provide constant sentence frames and frequent words in the learning context when teaching infants new words. The current event-related brain potential (ERP) study examined, in two experiments, the processing benefit from repeated contextual information, which might act as an anchor for the encoding and later recognition of learning items. In the first experiment, we probed repeated acoustic information (i.e., constant pitch marking of syllables) as the learning context, and in the second experiment, repeated phonological information (i.e., constant syllables). In Experiment 1, infants at 6.5 months (N = 30) were familiarized with syllable pairs in two blocks with a constant acoustic context (i.e., first syllable with constant pitch marking) and two blocks with a variable context (i.e., first syllable with variable pitch marking), each comprising 40 stimulus pairs. Importantly, the second syllables represented the learning items and were identical across conditions, thus enabling the evaluation of ERP responses to the same stimuli preceded by either constant or random first stimuli. In Experiment 2, 10-month-olds (N = 28) were familiarized with syllable pairs in two blocks with a constant phonological context (i.e., constant first syllable) and in two blocks with a variable context (i.e., variable first syllable). In both experiments, each familiarization block was followed by a test phase contrasting ERP responses to familiarized versus novel syllables (Experiment 1) and to pseudo-words containing the familiarized versus novel syllables (Experiment 2). Here, differential ERP responses would indicate the recognition of previously presented stimuli and reveal whether recognition is modulated by familiarization condition (i.e., constant vs. variable context). During familiarization, ERP results across experiments revealed more pronounced responses to the second syllables presented in the constant context than in the variable context. This implies that physically identical stimuli are processed differently depending on their stimulus environment. ERP results at test revealed a modulation of familiarity recognition, as infants only showed ERP differences between novel and familiar stimuli when the latter had previously been heard under constant context conditions. Together these results indicate that repeated contextual information, whether acoustic or phonological in nature, acts as an anchor guiding infants' attention towards the processing of subsequent stimuli. Importantly, this enhanced processing seems to boost infants' later recognition of learning items, pointing to the relevance of constant contextual information in language acquisition.
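An ERP condition contrast of the kind reported above is typically computed as a mean amplitude in a post-stimulus window, compared across conditions. The sketch below shows the general recipe on synthetic epochs; the time window and effect sizes are illustrative, not the authors' parameters.

```python
# Hypothetical sketch of ERP mean-amplitude extraction by condition
# (synthetic epochs; window and effects are illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sfreq = 250                               # sampling rate in Hz
times = np.arange(-0.2, 0.8, 1 / sfreq)   # epoch from -200 to 800 ms
n_infants, n_trials = 30, 40

def mean_amplitude(epochs, tmin=0.3, tmax=0.5):
    """Average voltage in a post-stimulus window, per participant."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[..., mask].mean(axis=(-2, -1))

# Simulated single-trial ERPs: the constant context evokes a larger response
constant = rng.normal(2.0, 5.0, (n_infants, n_trials, times.size))
variable = rng.normal(0.5, 5.0, (n_infants, n_trials, times.size))

t, p = stats.ttest_rel(mean_amplitude(constant), mean_amplitude(variable))
print(f"constant vs. variable context: t = {t:.2f}, p = {p:.4f}")
```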
2:30 pm
A4 Brain Responses to Speech Sound Changes are Associated with the Development of Prelinguistic Skills in Infancy
Linda Lönnqvist1, Paula Virtala1, Eino Partanen1, Paavo H. T. Leppänen2, Anja Thiede1, Teija Kujala1; 1Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, 2Department of Psychology, University of Jyväskylä, Finland
Neural auditory processing and prelinguistic communication, such as the use of vocalizations, facial expressions, and gestures for communicative purposes, build the foundation for later language development. Children who later develop language impairments may exhibit difficulties in either or both of these abilities at an early age. However, the associations between neural auditory processing abilities and the development of prelinguistic communication skills are not well known. The interplay of these two abilities needs to be further elucidated in order to detect infants at the highest risk of language development delays and to advance preventive interventions. Optimally, this should be done using longitudinal data sets and methods suitable for longitudinal analyses. The study investigated the relationship between neural speech sound processing at six months of age and the development of prelinguistic communication skills between six and 12 months of age in approximately 90 infants. Neural speech sound processing, specifically cortical discrimination of speech-relevant auditory features, was studied using electroencephalography (EEG). We recorded mismatch responses (MMRs) to changes in the frequency, vowel duration, or vowel identity of the second syllable in the pseudoword /ta-ta/. Prelinguistic communication skills at six and 12 months of age were assessed with the parental questionnaire Infant-Toddler Checklist (ITC). To examine the association between MMR amplitudes and the change in prelinguistic skills, we used a variant of a structural equation model (SEM), the latent change score (LCS) model. We opted for a method explicitly modelling intra-individual change in prelinguistic skills in order to correctly capture the longitudinal nature of the data set. The preliminary results of the LCS model suggested that a large MMR amplitude for the frequency deviant was associated with a large positive change in prelinguistic skills between six and 12 months of age. To check the robustness of the results, we also built a simple correlational model, showing that the MMR amplitude for the frequency deviant was positively associated with the level of prelinguistic skills at 12 months of age. Overall, our results suggest that neural auditory processing of speech sounds is associated with the development and level of prelinguistic communication skills. Neural auditory processing could therefore be a promising neural marker for prelinguistic development.
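A full latent change score model would be fit with SEM software (e.g., lavaan in R or semopy in Python). As a rough, simplified stand-in for the idea, the sketch below regresses the 6-to-12-month change on the baseline score and the MMR predictor, using synthetic data and invented parameter values.

```python
# Rough stand-in for the latent change score analysis: regress the 6-to-12-month
# change in ITC scores on the baseline score and MMR amplitude (synthetic data;
# a full LCS model would be fit with SEM software such as lavaan or semopy).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 90
mmr = rng.normal(-2.0, 1.0, n)      # frequency-deviant MMR amplitude (uV)
itc_6mo = rng.normal(50, 10, n)     # prelinguistic skills at 6 months
itc_12mo = itc_6mo + 5 - 2.0 * mmr + rng.normal(0, 5, n)

df = pd.DataFrame({"mmr": mmr, "itc_6mo": itc_6mo,
                   "change": itc_12mo - itc_6mo})

# Change regressed on baseline (proportional-change term) and the MMR predictor
fit = smf.ols("change ~ itc_6mo + mmr", data=df).fit()
print(fit.params)
```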
Slide Session B
Wednesday, August 21, 2:00 – 3:30 pm, Finlandia Hall
Chair: Jamie Reilly
Speakers: Nikki Janssen, Olga Dragoy, Nitin Tandon, Roeland Hancock
2:00 pm
B1 Subtracts of the arcuate fasciculus mediate conceptually driven generation and repetition of speech
Nikki Janssen1,2, Roy P.C. Kessels1,2,5, Rogier B. Mars1,4, Alberto Llera1,4, Christian F. Beckmann1,3,4, Ardi Roelofs1; 1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 2Department of Medical Psychology, Radboud University Medical Center, 3Radboudumc, Donders Institute for Brain, Cognition and Behaviour, Department of Cognitive Neuroscience, 4Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), University of Oxford, 5Vincent van Gogh Institute for Psychiatry, Venray, the Netherlands
Recent tractography and postmortem microdissection studies have shown that the left arcuate fasciculus (AF), a major fiber tract for language, consists of two subtracts directly connecting temporal and frontal cortex. These subtracts link the posterior superior temporal gyrus (STG) and the middle temporal gyrus (MTG), respectively, to the posterior inferior frontal gyrus. It has been hypothesized that the subtracts mediate different functions in speech production, but direct evidence for this hypothesis has been lacking. To functionally segregate the two segments of the AF with their different hypothesized functions, we combined functional magnetic resonance imaging (fMRI) with diffusion tensor imaging (DTI) tractography. We determined the functional roles of the STG and MTG subtracts using two prototypical speech production tasks, namely spoken pseudoword repetition (PR) and verb generation (VG). Overt repetition of aurally presented pseudowords was assumed to activate areas involved in the sublexical mapping of sound to articulation and the STG segment of the AF. In contrast, overt generation of verbs in response to aurally presented nouns was expected to activate areas associated with lexical-semantically driven production and the MTG segment of the AF. Task-based activation nodes then served as seed regions for probabilistic tractography. Fifty healthy adults (25 women, range 19–75 years, all right-handed) underwent task-fMRI and multi-shell diffusion-weighted imaging. Within the major frontal and temporal activation clusters, the peak voxels were identified for each task, resliced to the native space of each subject's DTI data, and enlarged to spheres with a radius of 6 mm. Tractography was then performed using a probabilistic tractography algorithm implemented in FSL (probtrackx), and CSF segmentations in temporal and frontal regions, acquired through FSL FAST, were used as exclusion masks. We then zoomed in on the region where the fMRI-based tracts arc around the lateral sulcus and calculated the location of the peak sample count in the group probability maps of each tract. For validation purposes, we subsequently performed Linear Discriminant Analyses (LDA), using as features the (x, y, z) coordinates of the peak voxel count within the arc of the AF for each task and each subject, and assessed performance using Leave-One-Out (LOO) cross-validation. In the temporal lobe, PR and VG were associated with areas of activation in the left STG and left MTG, respectively, and both tasks showed activation in BA44. Fiber tracking based on these temporal and frontal fMRI-based seeds revealed a clear segmentation of the left AF into two subtracts. In addition, the LDA resulted in a mean classification accuracy of 82.9% (SD = 0.29), demonstrating that the location of the peak sample count within the AF contains discriminative power to distinguish which of the two tasks was being performed. Our findings corroborate evidence for the existence of two distinct subtracts of the AF with different functional roles, namely sublexical mapping of sound to articulation by the STG-tract and conceptually driven generation of words by the MTG-tract. Our results contribute to the unraveling of a century-old controversy concerning the functional role in speech production of a major fiber tract involved in language.
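The LDA-with-LOO validation step described above follows a standard pattern; the minimal sketch below reproduces it on synthetic peak coordinates. The coordinate values and separation between tasks are invented for illustration.

```python
# Hypothetical sketch of LDA classification with leave-one-out validation
# (synthetic peak coordinates; the real features were per-subject, per-task
# peak-sample-count locations within the arc of the AF).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
n_subjects = 50
# One (x, y, z) peak location per subject for each task, offset between tasks
peaks_pr = rng.normal([-45, -30, 10], 3, (n_subjects, 3))  # pseudoword repetition
peaks_vg = rng.normal([-48, -35, 5], 3, (n_subjects, 3))   # verb generation

X = np.vstack([peaks_pr, peaks_vg])
y = np.array([0] * n_subjects + [1] * n_subjects)          # task labels

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"LOO classification accuracy: {scores.mean():.1%}")
```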
2:20 pm
B2 Functional specificity of the left frontal aslant tract: evidence from intraoperative language mapping
Olga Dragoy1,2, Andrey Zyryanov1, Oleg Bronov3, Elizaveta Gordeyeva1, Natalya Gronskaya4, Oksana Kryuchkova5, Evgenij Klyuev6, Dmitry Kopachev7, Igor Medyanik6, Lidiya Mishnyakova8, Nikita Pedyash3, Igor Pronin7, Andrey Reutov5, Andrey Sitnikov8, Ekaterina Stupina1, Konstantin Yashin6, Valeriya Zhirnova1, Andrey Zuev3; 1National Research University Higher School of Economics, Moscow, 2Federal Center for Cerebrovascular Pathology and Stroke, Moscow, 3National Medical and Surgical Center named after N.I. Pirogov, Moscow, 4National Research University Higher School of Economics, Nizhny Novgorod, 5Central Clinical Hospital of the Presidential Administration of the Russian Federation, Moscow, 6Privolzhsky Research Medical University, Nizhny Novgorod, 7N.N. Burdenko National Scientific and Practical Center for Neurosurgery, Moscow, 8Federal Centre of Treatment and Rehabilitation of the Ministry of Healthcare of the Russian Federation, Moscow
The left frontal aslant tract (FAT), a frontal intralobular white-matter pathway connecting the posterior regions of the superior and inferior frontal gyri, has been proposed to be relevant for language, and specifically for speech initiation and fluency. Individuals with stroke (Kinkingnehun et al., 2007; Basilakos et al., 2014), tumor (Bizzi et al., 2012; Chernoff et al., 2018), and primary progressive aphasia (Catani et al., 2013; Mandelli et al., 2014) showed reduced spontaneous speech production whenever the left FAT was involved. A few recent studies combined intraoperative direct electrical stimulation (DES) with white-matter reconstructions to tap into the linguistic relevance of the left FAT (Fujii et al., 2015; Kinoshita et al., 2015; Sierpowska et al., 2015; Vassal et al., 2014). However, convincing evidence that DES of the FAT affects specifically spontaneous speech initiation, and not a general language production ability, was missing. The aim of this study was to test the linguistic functional specificity of the left FAT in awake surgery settings. Ten consecutive patients (three female; age range 25-64, M = 41 y.o.) underwent awake craniotomy with language mapping for removal of pathological brain tissue (9 primary brain tumors, WHO grade 1-4, and 1 focal cortical dysplasia) in proximity to the left FAT. Two language tasks were used in combination with cortical DES: picture naming, a standard and widely used test for intraoperative language production mapping, and sentence completion, tapping more specifically into spontaneous speech initiation. Diffusion-tensor imaging sequences were acquired for all patients preoperatively, using 3T or 1.5T scanners (64 directions, 2.5 or 3 mm isovoxel, b=1500 or 1000 s/mm2, two repetitions with opposite phase-encoding directions). After preprocessing in FSL (Jenkinson et al., 2012) and ExploreDTI (http://www.exploredti.com) using the deterministic diffusion tensor imaging approach, the left FAT of each patient was reconstructed manually in TrackVis (http://www.trackvis.org). The language-positive sites revealed during the intraoperative procedure were then mapped onto these individual reconstructions. Intraoperative stimulation of the exposed cortex in all ten cases yielded language-positive sites, with a task dissociation revealed: some sites were predominantly responsive to sentence completion, such that when they were stimulated, patients could not complete a sentence but were able to name a picture. Overlaying the language-positive sites that were specifically responsive to sentence completion, and not to picture naming, on the tractography reconstructions demonstrated that all of them were located precisely on individual cortical terminals of the FAT in the superior and/or inferior frontal gyri. Direct electrical stimulation of the left FAT was thus associated with a specific language impairment, namely an inability to complete sentences, in contrast to a spared ability to name a picture. This proves the linguistic functional specificity of the left FAT as a tract underlying spontaneous speech initiation and suggests the sentence completion task as an adequate tool for intraoperative functional mapping of the FAT. The study was supported by the Russian Foundation for Basic Research (project 18-012-00829) and by the RF Government grant (ag. No. 14.641.31.0004).
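One simple way to quantify the overlay step described above is to measure the distance from each stimulation-positive site to the nearest cortical terminal of the reconstructed tract. The sketch below does this with synthetic coordinates; the threshold and point counts are invented.

```python
# Hypothetical sketch of relating stimulation sites to tract terminals:
# count how many sentence-completion-positive sites fall within a distance
# threshold of FAT cortical endpoints (synthetic coordinates, in mm).
import numpy as np

rng = np.random.default_rng(5)
fat_endpoints = rng.normal([-40, 15, 30], 5, (200, 3))  # FAT terminal points
stim_sites = rng.normal([-40, 15, 30], 6, (12, 3))      # positive DES sites

def min_distances(sites, endpoints):
    """Distance from each site to its nearest tract endpoint."""
    diffs = sites[:, None, :] - endpoints[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

d = min_distances(stim_sites, fat_endpoints)
print(f"sites within 5 mm of a FAT terminal: {(d < 5).sum()} / {len(d)}")
```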
2:40 pm
B3 A new essential language site revealed by direct cortical recordings and stimulation
Nitin Tandon1,2, Kiefer J. Forseth1; 1McGovern Medical School, 2Memorial Hermann Hospital
Functional maps of eloquent cortex have assigned major roles to the inferior frontal gyrus, distributed substrates in the temporal lobe, and mouth sensorimotor cortex. These architectures for language production have been primarily informed by data from behavioral responses, lesion mapping, and functional imaging – methods without access to rapid, transient, and coordinated neural processes. In contrast, human intracranial electrophysiology is uniquely suited to study the network dynamics involved in cognition, with full-spectrum recordings of cortical oscillations at millimeter spatial and millisecond temporal resolution. Furthermore, these recordings afford us the opportunity to causally interact with cortex through the injection of targeted current, mimicking transient focal lesions. In a large cohort, we use both passive recordings and active modulations of cortical function to generate a comprehensive characterization of the language network. These results delineate and emphasize an under-appreciated yet essential node in the broader network: the dorsolateral prefrontal cortex. We collected data in 201 patients undergoing language mapping (awake craniotomy, n=60; subdural grids, n=49; stereotactic depths, n=92) with CSM (110mA, 50Hz, 2s) and/or intracranial electrophysiology. Language function was evaluated with a battery of tasks including visual picture naming and auditory naming to description. Stimulation-induced depolarization and electrode recording zones were transformed onto the pial surface with a current-spread model to generate subject-specific functional maps. CSM at the group level revealed five regions that consistently disrupted both auditory and visual naming. These regions were also identified using electrophysiology and are listed in their temporal sequence of engagement: middle fusiform gyrus, inferior frontal gyrus, dorsomedial prefrontal cortex, superior temporal gyrus, and posterior middle temporal gyrus. Gamma (60-120 Hz) power during task performance was strongly predictive of functional classification by CSM (p<0.001). In particular, we found that the dorsolateral prefrontal region was active prior to articulatory onset and that its stimulation was consistently disruptive to domain-general naming. This analysis, integrating essential surgical planning tools, constitutes a significant advance in large-scale, multimodal, population-level maps of human language. The results motivate further investigation of the role of dorsolateral prefrontal cortex in language production. An analysis of the impact of resections very proximate to this site on language production is underway.
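The reported link between task gamma power and CSM classification can be framed as a simple prediction problem. The following minimal sketch fits a cross-validated logistic classifier on synthetic site-level data; the effect size, site count, and labels are invented.

```python
# Hypothetical sketch of predicting CSM site classification from task-evoked
# gamma power (synthetic electrode-level data; sizes and effects invented).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_sites = 500
gamma_power = rng.normal(0, 1, (n_sites, 1))   # 60-120 Hz task power (z-scored)
# Simulate sites where higher gamma power makes a CSM-positive label more likely
p_positive = 1 / (1 + np.exp(-2 * gamma_power[:, 0]))
csm_positive = rng.random(n_sites) < p_positive

acc = cross_val_score(LogisticRegression(), gamma_power, csm_positive, cv=5).mean()
print(f"cross-validated accuracy: {acc:.1%}")
```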
3:00 pm
B4 Genetic Differentiation of Dorsal and Ventral Language Processes
Roeland Hancock1; 1University of Connecticut
A major goal of neurobiological studies of language processing is to delineate the functional architecture of the multiple, hierarchically interacting neural systems that underlie human language capacity, and to identify the biological factors that regulate the development of these systems. We used genetic correlation to investigate how shared genetic factors may contribute to covariance in language-related task fMRI activation between the left inferior frontal cortex (IFC), spanning classical Broca's area, and the rest of the left hemisphere. The results broadly provide novel support for a dorsal/ventral dual-stream model of language processing, with dorsal and ventral streams having distinct genetic influences, yet they also raise questions about the role of premotor cortex (PMC) and the anterior temporal lobe (aTL) within language networks. The IFC was partitioned into 4 clusters by spectral clustering and the Calinski-Harabasz score. A cluster spanning BA44/45 largely reproduced current models of dorsal-stream language architecture, with significant genetic similarities between IFG, posterior STS, perisylvian cortex, and angular gyrus. A cluster spanning inferior BA45 and posterior BA47 was suggestive of a ventral language stream, having significant genetic similarities with middle temporal gyrus/TE2. These results provide novel confirmation of the current understanding of language networks, showing that a broad dorsal/ventral stream model is also supported by genetic differentiation of the two streams. In contrast to the expected dorsal architecture, we found that activation in PMC was not genetically similar to adjacent BA44. Instead, PMC activity, along with a posterior temporal region, was genetically similar to a cluster spanning BA47/BA45. This result is also inconsistent with parcellations of the IFG based on genetic similarity in cortical morphology (Cui et al., 2016), which placed PMC with BA44. This suggests that functional architecture is often, but not always, consistent with underlying genetic architecture, and it points to the importance of understanding the neural architecture of language processing at multiple biologically sensitive levels. This analysis also revealed a putative parcellation of sensorimotor integration and lexico-semantic networks with distinct shared genetic variance. Methods: Functional activation (beta) values from a narrative comprehension fMRI task (Binder, 2011) were obtained from preprocessed Human Connectome Project (HCP) young adult data (Barch et al., 2013). The related individuals from the larger HCP1200 sample were split into test and validation samples (approximately 240 twin pairs in each) to verify the reliability of the clusters. Within each sample, beta values were adjusted for sex and age and z-transformed. Genetic correlations were estimated between each vertex within left BA44, BA45, BA47, and FOP and every other vertex in the left hemisphere, using a bivariate additive genetic-environmental model that partitioned variance into additive genetic and environmental components. Permutation tests were used to identify regions of significant genetic similarity at P < .05.
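The partitioning step described above, spectral clustering with the number of clusters evaluated by the Calinski-Harabasz score, can be sketched as follows. The vertex features here are synthetic stand-ins for the genetic correlation profiles used in the study.

```python
# Hypothetical sketch of partitioning IFC vertices by spectral clustering,
# choosing k via the Calinski-Harabasz score (synthetic vertex features).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(7)
# Synthetic "genetic correlation profiles" for 300 vertices, 4 latent groups
X = np.vstack([rng.normal(m, 0.5, (75, 20)) for m in (-1.0, -0.3, 0.3, 1.0)])

best_k, best_score = None, -np.inf
for k in range(2, 7):
    labels = SpectralClustering(n_clusters=k, random_state=0,
                                affinity="nearest_neighbors").fit_predict(X)
    score = calinski_harabasz_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"best k = {best_k} (Calinski-Harabasz = {best_score:.1f})")
```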
Methods: Functional activation (beta) values from a narrative comprehension fMRI task (Binder, 2011) were obtained from preprocessed Human Connectome Project (HCP) young adult data (Barch et al., 2013). The related individuals from the larger HCP1200 sample were split into test and validation samples (approximately 240 twin pairs in each) to verify the reliability of the clusters. Within each sample, beta values were adjusted for sex and age, and z-transformed. Genetic correlations were estimated between each vertex within left BA44, BA45, BA47 and FOP, and every other vertex in the left hemisphere, using a bivariate additive-environmental model that partitioned variance into additive genetic and environmental components. Permutation tests were used to identify regions of significant genetic similarity at p < .05. Slide Session C Thursday, August 22, 10:45 am – 12:15 pm, Finlandia Hall Chair: Clara Martin Speakers: Oscar Woolnough, Angela Grant, Pantelis Lioumis, Maria Spychalska 10:45 am C1 Functional Architecture of the Ventral Visual Pathway for Reading Oscar Woolnough1, Cristian Donos1, Patrick Rollo1, Simon Fischer-Baum2, Stanislas Dehaene3,4, Nitin Tandon1,5; 1University of Texas Health Science Center at Houston, 2Rice University, 3INSERM-CEA Cognitive Neuroimaging Unit, 4Collège de France, 5Memorial Hermann Hospital, Texas Medical Center Visual word reading is believed to be performed by a hierarchical system with increasing sensitivity to complexity, from letters to morphemes and whole words, progressing anteriorly along the ventral cortical surface and culminating in the visual word form area (VWFA). The VWFA has been implicated in sub-lexical processing, but its precise role remains controversial. The lack of temporal resolution in functional imaging studies and problems with source localisation in non-invasive electrophysiological measures have led to an incomplete understanding of the functional roles of visual word regions. Here, we used direct recordings across the ventral visual pathway in a large cohort to create a spatiotemporal map of visual word reading. Word reading experiments were performed in 48 patients undergoing semi-chronic implantation of intracranial electrodes for localising pharmaco-resistant epilepsy. Each patient performed a set of experiments testing sublexical processing (false-fonts, letter strings of varying sub-lexical complexity, and words), lexical processing (single word reading of words and pseudowords), and higher-order language (jabberwocky and real sentences). Broadband gamma activity (70-150 Hz) from electrodes localised to the ventral cortical surface (n>600) was used to index local neural processing. We found, (i) contrary to fMRI studies, no evidence of a posterior-to-anterior complexity gradient, but instead a sharp transition between preferential activation to false-fonts and a word-selective region in the mid-fusiform. Non-negative matrix factorisation showed two distinct response profiles, prioritising either novel, low-probability stimuli or wordlike stimuli in different spatial clusters. (ii) Contrasts of real and jabberwocky words during tasks requiring word engagement revealed two lexical processing regions: mid-fusiform and lateral occipitotemporal gyrus. However, no distinctions of this kind were seen in these regions while passively viewing the words in a pattern detection task, suggesting task-related modulation of these regions by higher language areas.
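(Before turning to the sentence-reading results, a rough sketch of the broadband gamma index from the Methods above; illustrative only, with placeholder signal and assumed sampling rate:)

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                  # assumed sampling rate in Hz
    ieeg = np.random.randn(2000)                 # placeholder single-electrode trace

    b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
    gamma = filtfilt(b, a, ieeg)                 # band-limit to 70-150 Hz
    power = np.abs(hilbert(gamma)) ** 2          # instantaneous broadband gamma power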
(iii) During sentence reading, activity in the mid-fusiform was driven primarily by word frequency and to a lesser extent by word length. These effects were seen equally in both sentences and unstructured word lists, dissociating frequency from predictability. The frequency effect was also evident, to a lesser extent, in occipitotemporal gyrus, and was established in both regions ~160 ms after word onset. Bigram frequency, orthographic neighbourhood, and number of morphemes, syllables or phonemes did not significantly affect activity in any ventral region. In conclusion, we have identified and characterised at least two spatially separable ventral word regions that perform distinct roles in reading: lateral occipitotemporal gyrus and mid-fusiform cortex. We have shown that these regions are task-modulated and sensitive to the statistics of natural language, reflecting diverse influences from bottom-up versus top-down processes. This highlights the critical need to evaluate network behaviour rather than purely local activation when characterising language processes. 11:05 am C2 From structure to function: A multimodal analysis of bilingual speech in noise processing Angela Grant1,4, Shanna Kousaie2,4, Kristina Coulter1,4, Shari Baum2,4, Vincent Gracco2,3,4, Denise Klein2,4, Debra Titone2,4, Natalie Phillips1,4; 1Concordia University, 2McGill University, 3Yale University, 4Centre for Research on Brain, Language and Music Speech comprehension in noise is difficult, especially in a second language (L2). Previous fMRI work suggests that regions such as the inferior frontal gyrus (IFG) and angular gyrus (AG) are sensitive to manipulations of both comprehensibility and predictability. In addition, the gray matter volume (GMV) of the IFG and Heschl's gyrus (HG) has been shown to correlate positively with performance on speech-in-noise tasks. In our study, we build on this literature to investigate how gray matter volume in these regions may predict an electrophysiological (EEG) signature of speech comprehension, the N400. We collected T1-weighted structural brain images and EEG recordings from a sample of 28 young (M age = 25; SD = 4.3), highly proficient English/French bilinguals (M L2 AoA = 3.9; SD = 3.5). During EEG recording, participants heard sentences that varied in their semantic constraint, such as "The secret agent was a spy" or "The man knew about the spy", in both languages and in both noise (16-talker babble) and quiet. After each sentence, participants repeated the final word. Only sentences where the final word was produced correctly were analyzed. EEG data were pre-processed in BrainVision, and N400 amplitude from 300-500 ms for each condition was extracted for statistical analysis. Structural MRI data were pre-processed using the CIVET pipeline, and gray matter volume was extracted from AG, HG, and IFG pars orbitalis, triangularis, and opercularis as defined by the AAL atlas. Extracted N400 amplitude and GMV information were combined in R, and linear mixed effect models for each ROI were estimated using lme4. Each model estimated the contextual N400 effect (Low Constraint – High Constraint) as a function of GMV in that region, Listening Condition, Language (L1/L2), and Years of L2 Experience. We additionally included total intracranial volume as a fixed effect, and included random intercepts for each participant, with slopes that varied as a function of language and listening condition.
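(The authors fit these models in R with lme4; purely as an illustration, a rough Python analogue of the model just described might look as follows, with all variable names and data hypothetical:)

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    n = 400  # hypothetical observations across 28 participants
    df = pd.DataFrame({
        "n400_effect": np.random.randn(n),              # Low - High constraint amplitude
        "gmv": np.random.randn(n),                      # ROI gray matter volume (standardized)
        "condition": np.random.choice(["noise", "quiet"], n),
        "language": np.random.choice(["L1", "L2"], n),
        "l2_years": np.random.uniform(5, 25, n),
        "tiv": np.random.randn(n),                      # total intracranial volume
        "subject": np.random.choice([f"s{i}" for i in range(28)], n),
    })
    model = smf.mixedlm("n400_effect ~ gmv * condition * language * l2_years + tiv",
                        df, groups=df["subject"], re_formula="~ condition + language")
    result = model.fit()  # random intercepts per participant, slopes for condition/language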
Our analyses found 4-way interactions between GMV, Years of L2 Experience, Language, and Listening Condition in bilateral AG and IFG pars triangularis, as well as in left IFG pars opercularis and right HG. When predicting the N400 effect in the L2, we observed two patterns. One pattern was indicative of more efficient processing, such that less GMV was associated with a larger N400 effect. The second pattern was indicative of increased sensitivity, such that more GMV was associated with a larger N400 effect. Years of L2 experience appeared to modulate which pattern was present in the data, such that participants with more L2 experience showed efficiency patterns in bilateral IFG pars triangularis, as well as in right HG and AG. Participants with less L2 experience did not show efficiency patterns in any region, but did show sensitivity patterns in bilateral IFG pars triangularis, left IFG pars opercularis and AG, as well as right HG. We interpret these data as supporting and expanding the Dynamic Restructuring Hypothesis (Pliatsikas, 2019), which predicts patterns of growth followed by contraction in GMV as a function of bilingual experience. 11:25 am C3 Real-time diffusion-MRI-based tractography-guided TMS for speech cortical mapping Pantelis Lioumis1,2, Dogu Baran Aydogan1, Risto Ilmoniemi1,2, Aki Laakso3; 1Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, 2BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital, 3Department of Neurosurgery, HUS Helsinki University Hospital At Helsinki University Hospital, we have previously developed the use of navigated transcranial magnetic stimulation (nTMS) to map cortical language areas of the brain prior to neurosurgery. This methodology is in routine use in over 40 neurosurgical centers around the world, but its clinical value could be further improved if one could distinguish between language nodes that are essential and those that are secondary. Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation method that is used to excite neuronal populations in the cortex by means of brief, time-varying magnetic field pulses. The initiation of cortical activation, or its modulation, depends on the background activation of the neurons, the characteristics of the coil, and the coil's position and orientation with respect to the head. Fiber tractography based on diffusion magnetic resonance imaging (dMRI) is a powerful technique that enables non-invasive reconstruction of structural connections in the brain. Importantly, a tractography technique developed at Aalto University can provide connections reliably in real time. The new method offers novel opportunities for brain stimulation when used with TMS, leading to a new paradigm in which TMS operators can find and target desired connections in the brain. We applied the two methods so that speech cortical mapping was based on real-time tracking of connections (fiber tractography and its calculations were performed prior to the experiments).
We tested the new approach in several healthy volunteers by stimulating traditional speech areas and observing their structural connections to a distant area not typically examined for speech functions (i.e., the supplementary motor area). We then applied stimulations along the computed tracks, which evoked several different kinds of naming errors, for example anomias and semantic and phonological paraphasias. Real-time fiber-tractography-guided TMS revealed unusual cortical sites (anterior to the left IFG and SMA) involved in speech processing during an object naming task. The features of real-time tractography allow the TMS user to decide during the experiment how to perform the cortical mapping, so that each case can be studied individually, taking the entire cortical mantle into consideration when the fiber connections so indicate. The combination of TMS and real-time tractography can highlight different potential language-related areas and at the same time validate them. Additionally, connections to the contralateral hemisphere can easily be studied for speech cortical mapping with the new technique. Our experiments indicate that the combination of TMS and real-time tractography can play an important role in studying speech, both for basic research and for clinical applications, and pave the way for fully automated algorithm-driven procedures based on future multichannel TMS systems. 11:45 am C4 Order and relevance: revising temporal structures Maria Spychalska1; 1University of Cologne Conjunctive sentences reporting two past events suggest that the events happened in the order of mentioning. This phenomenon is described as a "temporal implicature". The events may be linked in some way, i.e. we have "script" knowledge regarding the natural order in which the events normally happen, e.g. "She washed her hair and dried it". However, if the events are unrelated, the only temporal order that is implied is the order in which the events are mentioned; e.g. "Julia read a book and sang a song" reports two events that could happen in any order. Furthermore, the temporal implicature may sometimes not arise, if the order of events is not contextually relevant. It is still an open question to what extent the temporal representation of events as observed in real life modulates linguistic processing, in particular whether temporal information enters the compositional semantic representation of the linguistic input and whether it modulates predictive processing in language. I present two ERP experiments investigating the processing of reversed-order sentences in contexts where the order is relevant or irrelevant. The experimental paradigm resembles a memory game, in which participants assign points to a virtual player and read sentences describing game events. In each trial, four cards are dealt and the player flips two of them. Afterwards, the participants assign points based on the defined game rules: If the player flips two cards from the same category (animal or non-animal cards), she gets 1 point. If she flips two cards from different categories, the points depend on the cards' order: If an animal card is flipped first, the player gets 2 points; if a non-animal card is flipped first, she gets 0 points. Subsequently, a sentence is presented word-by-word describing the game trial, e.g. "Julia has flipped a cat and a flower".
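(For concreteness, the scoring rule amounts to the following minimal sketch; the category labels are an editorial choice:)

    def points(first: str, second: str) -> int:
        """Points for one trial; categories are "animal" or "non-animal"."""
        if first == second:                      # same category: 1 point
            return 1
        return 2 if first == "animal" else 0     # mixed categories: order decides

    assert points("animal", "non-animal") == 2   # e.g. cat flipped before flower
    assert points("non-animal", "animal") == 0
    assert points("animal", "animal") == 1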
In the Correct-Order condition, sentences describe the events in the order in which they happened; in the Reversed-Order condition, the events are described in the reversed order. Reversed-Order conditions show a P600 effect relative to Correct-Order conditions at the first noun, at which the order violation can be detected. This effect occurs both when the order is relevant for the points' assignment (Mixed-Category) and when the order is irrelevant (Same-Category). In addition, a modulation of the N400 by Order is observed: Reversed-Order conditions elicit a larger N400 than Correct-Order conditions. In a follow-up experiment, the points' assignment depends only on whether the cards come from the same category. A similar P600 effect is observed for the order violation, but no modulation of the N400. The experiments show that, irrespective of whether attention is directed towards the order as relevant in the given context, the violation of the order in the linguistic report engages reprocessing mechanisms, as indicated by the P600 effect, which can be linked to revision of the temporal representation. The N400 component appears to be modulated by the encoded order only if the order is contextually relevant. Poster Slam Sessions Poster Slam Schedule Poster Slams provide a fast-paced and entertaining showcase for posters. Forty presenters have been invited to provide one-minute, one-slide overviews of their posters. Poster Slams take place just prior to the beginning of each poster session. Participants will present their Slam in the main auditorium, ensuring effective exposure to the entire SNL audience. Poster Slams are held in Finlandia Hall.
Session A: Poster Slam Tuesday, August 20, 10:00 am; Mandatory Briefing Tuesday, August 20, 8:30 am
Session B: Poster Slam Tuesday, August 20, 3:00 pm; Mandatory Briefing Tuesday, August 20, 1:15 pm
Session C: Poster Slam Wednesday, August 21, 10:30 am; Mandatory Briefing Tuesday, August 20, 1:15 pm
Session D: Poster Slam Wednesday, August 21, 5:00 pm; Mandatory Briefing Wednesday, August 21, 3:45 pm
Session E: Poster Slam Thursday, August 22, 3:30 pm; Mandatory Briefing Thursday, August 22, 1:30 pm
Information for Slam Presenters All Poster Slam presenters must attend a Briefing Session at the designated time for their Poster Slam Session (see above). Please assemble at Door A (the left-most door) of the main auditorium in the Finlandia Hall Foyer. SNL staff will meet you there and explain the logistics of the Poster Slam session, including where to line up, use of the microphone, timing, and so on. This is critical for the success of this fast-paced micro-session. Presenters must arrive in Finlandia Hall a minimum of 15 minutes before their Poster Slam Session. Note that the Briefing Session for Poster Slam C is the day before the Slam session. Poster Slam Sessions For poster details, see page 40.
Poster Slam Session A Tuesday, August 20, 2019, 10:00 – 10:15 am, Finlandia Hall, Chair: Michal Ben-Shachar
A1 BDNF Genotype Specific Differences in Cortical Activation in Chronic Aphasia Sigfus Kristinsson
A13 Language learning in the adult brain: TMS-induced disruption of the left dorsolateral prefrontal cortex facilitates statistical language learning Eleonore Smalle
A16 A Neurogenetic Study of Dyslexia Risk in Neonates Stanimira Georgieva
A18 Translating computational modeling into clinical practice: BiLex as a tool to simulate treatment outcomes in bilingual aphasia Claudia Penaloza
A61 Sustained oscillations in MEG and tACS demonstrate true neural entrainment in speech processing Benedikt Zoefel
A69 Human speech cortex encodes amplitude envelope as transient, phase-locked responses to discrete temporal landmarks Katsuaki Kojima
A79 Grey and white matter correlates of verbal and nonverbal working memory Dimitrios Kasselimis
A81 MOUS, a 204-subject multimodal neuroimaging dataset to study language processing Jan Mathijs Schoffelen
Poster Slam Session B Tuesday, August 20, 2019, 3:00 – 3:15 pm, Finlandia Hall, Chair: Angela Grant
B4 Brain tumors in left frontal regions impact language laterality as determined by pre-surgical fMRI Monika Polczynska
B14 The influence of the prenatal linguistic environment on the newborn cerebral language processing: A fNIRS study Laura Caron-Desrochers
B53 Linking production and comprehension – Investigating the lexical interface Arushi Garg
B58 Neurophysiological evidence of the transformation of phonological into phonographic representations Chotiga Pattamadilok
B63 Inner speech in silent reading evokes theta phase-locked responses in the auditory cortex Bo Yao
B68 Beyond Phrase Structure: An Alternative Analysis of Brennan and Hale (2019) Using a Dependency Parser Phillip M. Alday
B71 Constraint Asyntactic Structure of Ancient Chinese Poetry Facilitates Content Parsing: A study combining MEG, RNN, and crowdsourcing Xiangbin Teng
B79 Associations between cortical surface structure and reading related skills Meaghan Perdue
Poster Slam Session C Wednesday, August 21, 2019, 10:30 – 10:45 am, Finlandia Hall, Chair: James Magnuson
C3 Gesture-Speech Integration in the Adolescent Brain Salomi Asaridou
C13 Language outcome after anterior temporal lobectomy: insights from tractography Ekaterina Stupina
C30 fMRI Representational Similarity Analysis Reveals the Information Structure Underlying Word Semantics Leonardo Fernandino
C50 The left Frontal Aslant Tract supports sentence planning: Evidence from direct electrical stimulation and longitudinal diffusion MRI in brain tumor patients Benjamin Chernoff
C51 Identifying the cognitive components of the morphological fluency task through neurocognitive correlations Galit Agmon
C58 Cortical dynamics of the speech motor control network in the non-fluent variant of Primary Progressive Aphasia Hardik Kothare
C62 Modelling incremental development of semantic prediction Hun S. Choi
C82 Universal neural anomalies in Chinese and French dyslexic children Xiaoxia Feng
Poster Slam Session D Wednesday, August 21, 2019, 5:00 – 5:15 pm, Finlandia Hall, Chair: Emily Myers
D11 Neural correlates of learning novel word forms in children with developmental language disorder Saloni Krishnan
D12 Stuttering and the Social Brain Eric S. Jackson
D16 Longitudinal atrophy of the left inferior frontal gyrus following post-stroke aphasia Natalia Egorova
D34 Navigating the turbulent seas of lesion symptom mapping: Comparative analyses of univariate and multivariate lesion symptom mapping methods Maria V. Ivanova
D55 Syllabic and phonemic effects in Chinese spoken language production: Evidence from ERPs Qingqing Qu
D63 Failure to replicate the effects of transcranial Direct Current Stimulation (tDCS) on articulation of tongue twisters: a pre-registered study Charlotte Wiltshire
D70 Shared Cortical Activation for Children's Phonological Awareness in Signed and Spoken Language: A Functional Near Infrared Spectroscopy Study Diana Andriola
D79 Neuromagnetic evoked responses to unattended speech as indices of language processing in young adults and in healthy aging Rasha Hyder
Poster Slam Session E Thursday, August 22, 2019, 3:30 – 3:45 pm, Finlandia Hall, Chair: Brenda Rapp
E1 The role of the left frontal aslant tract in lexical selection: data from picture-word interference and sentence completion tasks Andrey Zyryanov
E2 White matter tracts underlying orthographic processing: Evidence for the role of the left vertical occipital fasciculus Celia Litovsky
E9 Disentangling the contribution of family risk on reading precursors in pre-readers Lauren Blockmans
E13 Language network re-organization associated with word- and sentence-level language interventions in chronic aphasia Elena Barbieri
E36 Automatic decomposition revisited with MEG evidence from visual processing of Tagalog circumfixes, infixes, and reduplication Samantha Wray
E38 Where do code-switching constraints apply? An ERP study of code-switching in Mandarin-Taiwanese sentences Chia-Hsuan Liao
E56 Disruption of Speech Adaptation with Repetitive Transcranial Magnetic Stimulation of the Articulatory Representation in Primary Motor Cortex Ding-Lan Tang
E79 Neural characteristics of acoustic prosody during continuous real-life speech Satu Saalasti
Poster Schedule Poster sessions are scheduled on Tuesday, August 20 through Thursday, August 22. Poster sessions are 1 hour and 45 minutes long. Presenting authors are expected to be present the entire time. Posters are located in Restaurant Hall. You may post your materials on the board assigned to you starting at the scheduled "Set-up Begins" time shown below. Please note that any posters not removed by the "Teardown Complete" time will be discarded. Do not leave personal items in the poster room. IMPORTANT: Only the supplied push pins may be used to secure your poster to your board.
Poster Session A Tuesday, August 20, 10:15 am – 12:00 pm, Restaurant Hall (Setup Begins: 8:00 am; Teardown Complete: 1:00 pm)
A1-A8 Disorders: Acquired
A9-A12 Disorders: Developmental
A13-A14 Development
A15 Language Therapy
A16-A17 Language Genetics
A18-A20 Computational Approaches
A21-A22 Syntax
A23-A27 Meaning: Combinatorial Semantics
A28-A30 Meaning: Discourse and Pragmatics
A31-A39 Meaning: Lexical Semantics
A40 Morphology
A41-A44 Multilingualism
A45 Prosody
A46-A47 Control, Selection, and Executive Processes
A48 Signed Language and Gesture
A49-A50 Methods
A51-A58 Language Production
A59 Speech Motor Control
A60 Disorders: Acquired
A61-A68 Speech Perception
A69-A73 Perception: Auditory
A74-A77 Perception: Speech Perception and Audiovisual Integration
A79-A80 Phonology and Phonological Working Memory
A81-A86 Reading
A87-A88 Multisensory or Sensorimotor Integration
Poster Session B Tuesday, August 20, 3:15 – 5:00 pm, Restaurant Hall (Setup Begins: 1:00 pm; Teardown Complete: 6:00 pm)
B1 Signed Language and Gesture
B2-B3 Control, Selection, and Executive Processes
B4-B11 Disorders: Acquired
B12 Methods
B13 Language Therapy
B14-B18 Development
B19-B23 Disorders: Developmental
B24 Language Genetics
B25 History of the Neurobiology of Language
B26-B32 Meaning: Lexical Semantics
B33-B36 Meaning: Combinatorial Semantics
B37-B40 Meaning: Discourse and Pragmatics
B41-B43 Syntax
B44-B46 Morphology
B47-B52 Multilingualism
B53-B55 Language Production
B56-B57 Perception: Auditory
B58-B62 Perception: Speech Perception and Audiovisual Integration
B63 Phonology and Phonological Working Memory
B64 Prosody
B65 Multisensory or Sensorimotor Integration
B66-B67 Speech Motor Control
B68-B70 Computational Approaches
B71-B78 Speech Perception
B79-B87 Reading
Poster Session C Wednesday, August 21, 10:45 am – 12:30 pm, Restaurant Hall (Setup Begins: 8:00 am; Teardown Complete: 1:30 pm)
C1-C2 Control, Selection, and Executive Processes
C3-C6 Development
C7-C11 Disorders: Developmental
C12 Language Genetics
C13-C20 Disorders: Acquired
C21 Language Therapy
C22-C25 Meaning: Combinatorial Semantics
C26-C29 Meaning: Discourse and Pragmatics
C30-C38 Meaning: Lexical Semantics
C39 Signed Language and Gesture
C40-C41 Methods
C42-C43 Syntax
C44 Morphology
C45-C48 Multilingualism
C49 Multisensory or Sensorimotor Integration
C50-C57 Language Production
C58-C61 Speech Motor Control
C62-C63 Computational Approaches
C64-C71 Speech Perception
C72-C75 Perception: Auditory
C76-C79 Perception: Speech Perception and Audiovisual Integration
C80-C81 Phonology and Phonological Working Memory
C82-C89 Reading
Poster Session D Wednesday, August 21, 5:15 – 7:00 pm, Restaurant Hall (Setup Begins: 1:30 pm; Teardown Complete: 7:15 pm)
D1-D4 Computational Approaches
D5-D7 Control, Selection, and Executive Processes
D8-D10 Development
D11-D15 Disorders: Developmental
D16-D24 Disorders: Acquired
D25 Language Therapy
D26-D33 Meaning: Lexical Semantics
D34-D35 Methods
D36-D39 Meaning: Combinatorial Semantics
D40-D42 Syntax
D43-D44 Morphology
D45-D48 Meaning: Discourse and Pragmatics
D50 Methods
D51-D54 Multilingualism
D55-D62 Language Production
D63 Speech Motor Control
D64 Signed Language and Gesture
D65-D68 Perception: Auditory
D69 Prosody
D70-D74 Reading
D76-D77 Reading
D78 Writing and Spelling
D79-D88 Speech Perception
Poster Session E Thursday, August 22, 3:45 – 5:30 pm, Restaurant Hall (Setup Begins: 8:00 am; Teardown Complete: 6:15 pm)
E1 Control, Selection, and Executive Processes
E2 Writing and Spelling
E3-E8 Development
E9-E12 Disorders: Developmental
E13-E19 Disorders: Acquired
E20-E25 Meaning: Lexical Semantics
E26-E29 Meaning: Combinatorial Semantics
E30-E33 Meaning: Discourse and Pragmatics
E34-E35 Syntax
E36-E37 Morphology
E38-E45 Multilingualism
E46-E47 Signed Language and Gesture
E48-E55 Language Production
E56-E59 Speech Motor Control
E60 Methods
E61-E63 Perception: Auditory
E65-E66 Perception: Speech Perception and Audiovisual Integration
E67 Phonology and Phonological Working Memory
E68-E70 Computational Approaches
E71-E78 Speech Perception
E79-E80 Prosody
E81-E88 Reading
Poster Sessions Poster Session A Tuesday, August 20, 2019, 10:15 am – 12:00 pm, Restaurant Hall Disorders: Acquired A1 BDNF Genotype Specific Differences in Cortical Activation in Chronic Aphasia Sigfus Kristinsson1, Grigori Yourganov2, Feifei Xiao3, Leonardo Bonilha4, Brielle C. Stark5, Chris Rorden2, Alexandra Basilakos1, Julius Fridriksson1; 1Department of Communication Sciences & Disorders, University of South Carolina, 2Department of Psychology, University of South Carolina, 3Department of Epidemiology and Biostatistics, University of South Carolina, 4Department of Neurology, Medical University of South Carolina, 5Department of Speech and Hearing Sciences, Indiana University Introduction: The presence of a Met allele at one or both copies of codon 66 of the BDNF gene has been associated with poorer functional recovery and decreased functional brain activation in stroke patients (Johansson, 2011; Kim et al., 2016). The current study aimed to explore functional brain activation by BDNF genotype in individuals with chronic aphasia. We hypothesized that the presence of the Met allele of the BDNF gene is associated with reduced functional brain activation compared to individuals without the Met allele polymorphism. Methods: We recruited 87 individuals with chronic stroke-induced aphasia (typical BDNF genotype (Val66Val): n=53 (61%); atypical BDNF genotype (Val66Met/Met66Met): n=34 (39%)). Participants performed a naming task during fMRI scanning in which they were presented with 40 colored pictures of high-frequency nouns. To establish a baseline for the fMRI data analysis, 20 colored abstract pictures were shown at random among the real pictures. We utilized general linear modeling and a standard hemodynamic response function to generate contrast maps isolating brain activation related to naming. We then obtained the number of voxels where naming-related brain activation was significantly greater than zero (FWE=.05) for each group and compared across groups using an independent-samples t-test. Neuropsychological testing was conducted to compare language impairment between BDNF genotype groups, and WAB-AQ was used as a covariate in the analysis.
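(Schematically, and on one plausible reading of the reported t(85) statistics, the voxel-count comparison reduces to the following sketch; the thresholded maps are random placeholders, not study data:)

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    # hypothetical boolean maps of FWE-significant voxels, one row per participant
    typical = rng.random((53, 10000)) < 0.12    # Val66Val group
    atypical = rng.random((34, 10000)) < 0.05   # Met-carrier group

    t, p = ttest_ind(typical.sum(axis=1), atypical.sum(axis=1))  # compare voxel counts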
Results: Participants in both genotype groups presented with distributed cortical and subcortical lesions that covered the middle cerebral artery territory. The greatest lesion overlap was identified in the longitudinal fasciculus (MNI coordinates: -34, -36, 28) in the typical genotype group, and in the longitudinal fasciculus (MNI: -36, -8, 25) and the insula (MNI: -44, -9, 3) in the atypical genotype group. The overall activation pattern was similar across groups, with the greatest intensity of activation in the bilateral posterior temporal gyrus, pre- and postcentral gyrus, and the longitudinal fissure. We found that the number of activated voxels was greater in the typical genotype group compared to the atypical group at the whole-brain level (98,500 vs. 28,630; t(85)=18.63, p<.001), in the left hemisphere (37,290 vs. 7,000; t(85)=8.33, p<.001), and in the right hemisphere (74,830 vs. 30,630; t(85)=11.29, p<.001). Consistent with the functional MRI results, we observed clear differences in language impairment between the typical and atypical BDNF genotype groups: aphasia severity was significantly greater in the atypical than in the typical group (WAB-R AQ: 54.3 vs. 64.2, p=.033; PNT: 52.8 vs. 74.7, p=.047, respectively). Conclusion: While consistent with previous findings in the stroke population (Johansson, 2011; Kim et al., 2016), conflicting results have been reported in acute aphasia (e.g., De Boer et al., 2017; Mirowska-Guzel et al., 2013). Given the subtle effects of BDNF genotype on BDNF secretion (18-30% decrease), our results may suggest that the effects of genotype accumulate over time in the recovery process, enabling individuals with the typical genotype to experience greater recovery than their counterparts with the atypical genotype. A2 Serial position effects at different delays differentially predict patterns of brain damage in dementia Sasa Kivisaari1, Kirsten I. Taylor2, Andreas U. Monsch3, Marc Sollberger4, Nancy S. Foldi5; 1Department of Neuroscience and Biomedical Engineering, Aalto University, 2Neuroscience, Ophthalmology, and Rare Diseases (NORD), Roche Pharma Research and Early Development, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd, 3Memory Clinic, University Center for Medicine of Aging Basel, Felix Platter Hospital, 4Memory Clinic, University Center for Medicine of Aging Basel, Felix Platter Hospital, and Department of Neurology, University Hospital Basel, 5Department of Psychology, Queens College and The Graduate Center, City University of New York Introduction: Serial Position Effects (SPE) in wordlist learning provide a rich set of metrics of cognitive functioning. In particular, the recall of primacy and recency items, from the beginning and end of the word list respectively, putatively have distinct neuroanatomical bases. The two goals of this study were to (1) systematically map the neuroanatomical correlates of SPEs after Learning (across Trials 1-5), Short delay (SD) and Long delay (LD), and (2) test whether the progression of performance accuracy from Learning to LD serves as a sensitive measure of major and minor neurocognitive disorders. Method: Data of 236 individuals from longitudinal studies at the Basel Memory Clinic were examined. The data included California Verbal Learning Test (CVLT) scores at Learning, short delay (SD) and long delay (LD) from healthy control participants (NC, N = 62) and patients (PAT, N = 69) who met criteria for Alzheimer's disease (AD) or mild cognitive impairment. We acquired structural MRI scans (MPRAGE) from all participants, measured in the same 3T MRI scanner (MAGNETOM Allegra, Siemens).
All accuracy scores were submitted to a mixed-model analysis of covariance with group (NC, PAT) as the between-subjects variable, and list position (primacy, middle, recency) and time (Learning, SD, LD) as within-subjects variables. The structural MRI data were subjected to voxel-based morphometry (VBM) analyses conducted in SPM8 running in Matlab. Results: Primacy, middle, and recency SPE scores of the California Verbal Learning Test at Learning, SD, and LD in healthy controls and patients with dementia-related pathology were correlated with MRI gray matter signal intensities. As expected, the VBM analyses revealed distinct patterns of brain-behavioral correlations depending on both the SPE (primacy, middle, recency) and the time-point (Learning, SD, LD). Interestingly, we also found that the healthy controls incrementally improved recall of primacy items from Learning to SD and to LD, i.e., "primacy progression", whereas the patients did not. Moreover, the proportion of correct primacy items recalled at LD compared to Learning correlated with bilateral MTL regions, which commonly bear the brunt of Alzheimer's disease pathology. We did not observe a similar effect for either middle or recency list regions. Conclusions: The findings support the notion of a distinct functional neuroanatomy of SPEs. The results suggest that 'primacy progression' scores could be used as an accessible and sensitive measure for disease detection, progression, and therapeutic response. A3 Is the Supramarginal Gyrus a Hub for Both Spoken and Written Word Production? Venugopal Balasubramanian1, Savannah Sabu1, Julia Terrezza1; 1Seton Hall University The involvement of the supramarginal gyrus (SMG) in speech production is well attested in research (Guenther & Hickok, 2016). Recent research has added new information about the role of the SMG and inferior parietal lobe in several different processes and networks (Bikofski et al., 2016). For instance, studies strongly suggest that the SMG plays a significant role in spelling/writing (Baldo et al., 2018), that, along with the angular gyrus (AG), the SMG underlies Spanish orthographic competence (Gonzalez-Garrido et al., 2017), and that it contributes to visual word processing (Stoeckel et al., 2009). In the present context, the current study aims at answering the question 'Does the SMG serve as a hub for spoken and written word production?' Method: Subject: CBH is a 59-year-old woman with chronic aphasia and a bilateral inferior parietal lobe lesion. Procedure: CBH was tested on the Boston Diagnostic Aphasia Examination (BDAE) and the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA). Results: CBH's spelling performance revealed impairments at the levels of lexical semantics, phoneme-grapheme conversion, the orthographic output lexicon, and the graphemic buffer. The current case report offers support to the assertion that the SMG plays a significant role in spelling/writing (Baldo et al., 2018). Other studies support the view that the SMG also plays a significant role in phonological short-term memory, visual word processing, phonological planning (Luria, 1970), and grammatical processing (Schonberger et al., 2014). Taken together, these studies and the results from the current study suggest that the SMG probably serves as a hub (van den Heuvel & Sporns, 2013), pooling a variety of information and serving both spoken and written forms of production.
A4 Parieto-temporal functional connectivity supports speech comprehension: Evidence from a stroke model Lynsey Keator1, Grigori Yourganov2, Leonardo Bonilha3, Christopher Rorden2,4, Julius Fridriksson1,4; 1Department of Communication Sciences and Disorders, University of South Carolina, 2Department of Psychology, University of South Carolina, 3Department of Neurology, Medical University of South Carolina, 4McCausland Center for Brain Imaging, University of South Carolina Introduction: Auditory comprehension (AC) is often impaired after left hemisphere (LH) stroke. Traditional views propose that Wernicke's area (left posterior temporal cortex) is crucial for AC; however, current literature reveals a distributed LH network consistent with the dual-stream model of language processing (Hickok and Poeppel, 2007), in which ventral-stream areas are implicated in language comprehension. Functional connectivity (FC) analysis, which measures the temporal synchrony of activation between brain regions, is a widely used method for studying cortical networks. In the current study, we analyze functional networks within the dual-stream model in relation to AC in persons with aphasia (PWA) and introduce a novel approach to control for the potentially confounding effects of lesion volume and impaired structural connectivity. Methods: Sixty-three PWA (25 women, mean age at stroke = 56.79 ± 12.74; mean months post stroke = 56.08 ± 59.77) were assessed with the Western Aphasia Battery-Revised (WAB-R; Kertesz, 2007) to determine aphasia type and severity. The same battery was used to assess AC. Structural (T1, T2), diffusion tensor imaging, and resting-state fMRI scans were acquired, and lesions were demarcated manually. FC for each pairing of cortical regions of interest (ROIs) was estimated as the Pearson's correlation coefficient between the mean BOLD time courses measured in those regions. NiiStat (https://github.com/neurolabusc/NiiStat) was used to analyze the association between FC within dorsal- and ventral-stream ROIs and performance on the WAB, using a general linear model (GLM). ROIs were obtained by mapping ventral-stream areas (Fridriksson et al., 2016) onto the segmentation specified by the AICHA atlas (Joliot et al., 2015), yielding 41 LH ventral-stream ROIs, 26 dorsal-stream ROIs, and right-hemisphere homologues. We controlled for lesion size by regressing it out of the behavioral scores, and controlled for structural connectivity by regressing the fiber count between a pair of ROIs out of the FC value for the same pair. 5000 permutations were performed to correct for multiple comparisons. Results: We identified several functional connections within the LH ventral stream (dorsal angular gyrus and adjacent inferior parietal region, and middle temporal regions) where decreased FC was associated with impaired AC. GLM Z-scores (measuring the strength of association between FC and AC scores) ranged between 3.7 and 4.6. Functional connections within LH dorsal-stream regions were not significantly correlated with AC scores, nor were connections within right-hemisphere homologues of the ventral areas. Conclusion: Controlling for both lesion size and structural connectivity, we conclude that these effects are not solely driven by damage to ROIs, frank damage to white matter connections between ROIs, or the general behavioral impairment associated with larger lesions. Instead, poor AC correlates with a loss of synchrony in a broad posterior parieto-temporal network. These regions, independently, have been implicated in AC (Dronkers et al., 2004), and in light of the current body of literature, our findings emphasize that AC is not localized to a single cortical region but is instead supported by a widely distributed network.
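(For orientation, the FC estimate and confound regression described in the Methods above reduce to something like the following sketch; all arrays are hypothetical:)

    import numpy as np

    def roi_fc(ts_a, ts_b):
        """Pearson correlation between two ROI-mean BOLD time courses."""
        return float(np.corrcoef(ts_a, ts_b)[0, 1])

    def residualize(fc_values, confound):
        """Regress a confound (e.g., pairwise fiber count) out of FC values across subjects."""
        X = np.column_stack([np.ones_like(confound), confound])
        beta, *_ = np.linalg.lstsq(X, fc_values, rcond=None)
        return fc_values - X @ beta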
A5 Single word, sentence and discourse comprehension in patients with temporal lobe epilepsy Anna Yurchenko1,2, Vardan Arutiunian1, Alexander Golovteev2, Olga Dragoy1,3; 1National Research University Higher School of Economics, Russia, 2Epilepsy Center, Russia, 3Federal Center for Cerebrovascular Pathology and Stroke, Russia Introduction: Investigations of language comprehension in patients with temporal lobe epilepsy (TLE) are very limited and show contradictory results. Whereas Kho and colleagues (2008) did not find impairment in single word comprehension, other studies indicate that patients with left TLE can have difficulties in noun (Giovagnoli et al., 2005) and verb (Yurchenko et al., 2017) comprehension. According to Kho et al. (2008), sentence comprehension is spared in patients with TLE. However, patients with TLE may show impaired comprehension of questions and instructions of different syntactic structures (Lomlomdjian et al., 2017; Wang et al., 2011). In addition, difficulties in comprehending short stories and the mischievous behavior described in them were found in patients with both left and right TLE (Lomlomdjian et al., 2017; Giovagnoli et al., 2011). The goal of the present study was to systematically analyze single word, sentence and discourse comprehension in patients with left and right TLE. Methods: Twenty-two patients with left TLE (14 females, mean age = 29 years), 22 patients with right TLE (10 females, mean age = 30 years) and 22 healthy controls (12 females, mean age = 28 years) participated in the study. During the word comprehension tests, participants listened to nouns and verbs and chose the corresponding pictures out of four pictures presented for each trial (target, phonologically related, semantically related, and unrelated distractor). The sentence comprehension test included questions of different syntactic structures, presented auditorily together with the corresponding picture and a semantically close distractor. During the discourse comprehension test, participants listened to a narrative and answered 16 questions related to the story. Results: Statistical analysis showed no difference among the three groups of participants on the noun and verb comprehension tests. In contrast, the three groups differed significantly in sentence (p = 0.041) and discourse (p = 0.002) comprehension. Compared to healthy individuals, the percentage of correct answers was significantly lower in patients with left (94.7% vs. 98.2%, p = 0.047) and right (94.5% vs. 98.2%, p = 0.023) TLE for sentence comprehension. Similarly, patients with both left (85.8% vs. 97.5%, p = 0.004) and right (88.9% vs. 97.5%, p = 0.005) TLE gave fewer correct answers in the discourse comprehension test. The two groups of patients did not differ on any of the tests. Discussion: Our results show that language comprehension may be spared at the single-word level and impaired at the sentence and discourse levels in patients with left and right TLE. In addition, no difference was observed between the two groups of patients.
This means that language comprehension at the sentence and discourse levels may be impaired in patients with TLE even when the epileptogenic focus is localized in the hemisphere that is non-dominant for language. Difficulties in sentence comprehension and in processing discourse content in patients with left and right TLE may be related to impairments in other cognitive functions (e.g., memory), which is a subject for further research. The reported study was funded by RFBR according to research project № 18-312-00091. A6 Language disorders after adulthood cerebellar lesions: systematic review of 91 cases Emilia Malinen1, Matti Lehtihalmes1; 1University of Oulu, Finland Introduction: There has been growing interest in the role of the cerebellum in cognitive functions such as language (van Dun et al., 2016). Despite several reviews of the cerebellum's connection to linguistic processing (see e.g. De Smet et al., 2007; van Dun et al., 2016), to our knowledge no comprehensive systematic review of linguistic disorders after adulthood cerebellar lesions, including single cases, has been published. Methods: Our objective was to systematically evaluate the types of language disorders occurring after adulthood cerebellar lesions, and the association between symptom severity and time since injury. A systematic literature search was conducted on seven electronic databases. Manual and reference searches were also conducted. A total of 91 cases with a cerebellar lesion, published in 35 articles (out of 1317) between 1994 and 2017, met the inclusion and exclusion criteria. Results: The studies included in this systematic review were methodologically highly heterogeneous. Only a limited number of studies were high-quality and reliable. The results showed a large variation in linguistic disorders after cerebellar lesions. There was no clear laterality effect between language disorders and damage location. The language symptoms mentioned most often were disorders of verbal fluency (VF) and verbal working memory. In addition, disorders in naming and word finding, higher-level language deficits and agrammatism were also mentioned frequently. However, disorders were mainly restricted, affecting only single language functions, which can be partially explained by the limitations of the language assessments in the included studies. In only five cases were the symptoms defined as aphasia. Because of the limitations of the studies, we could not comprehensively determine the severity of all language disorders. Language disorders often seem to have improved over time and, in some cases, full recovery of symptoms was observed. A relationship was found between the severity of VF and naming disorders and time since lesion, as the degree of the most prominent impairments decreased over time. Notably, VF impairments seem to improve more slowly than naming. In some cases, language disorders were observed even years after damage. However, at one year or more post onset, disorders were mainly restricted and specific, and were related to other cognitive impairments. Functional abnormalities in cortical and subcortical regions outside the cerebellum seem to be a potential mechanism explaining this phenomenon. Conclusion: In most of the cases included in this systematic review, language disorders related to adult cerebellar lesions appear to be mild or restricted. Notably, language symptoms can show high individual variation among patients with a cerebellar lesion.
This systematic review confirmed earlier findings (De Smet et al., 2007) of a connection between cerebellar lesions and language disorders. However, in line with a previous view (van Dun et al., 2016), coexisting cognitive disorders and functional abnormalities in cortical and subcortical areas seem, at least partially, to explain this connection. References: De Smet, H. J., Baillieux, H., De Deyn, P. P., Mariën, P., & Paquier, P. (2007). The cerebellum and language: The story so far. Folia Phoniatrica et Logopaedica, 59, 165–170. van Dun, K., Manto, M., & Mariën, P. (2016). The language of the cerebellum. Aphasiology, 30, 1378–1398. A7 Functional connectivity underlying language reorganization in chronic post-stroke aphasia using resting-state magnetoencephalography Priyanka Shah-Basak1,2, Gayatri Sivaratnam1, Selina Teti1, Alexander Francois-Nienaber1, Aneta Kielar3, Jed Meltzer1,2,4,5; 1Rotman Research Institute, Baycrest Health Sciences, Toronto, 2Canadian Partnership for Stroke Recovery, Ottawa, 3Department of Speech, Language, and Hearing Sciences, University of Arizona, 4Department of Speech-Language Pathology, University of Toronto, 5Department of Psychology, University of Toronto Background: Post-stroke aphasia is a consequence of localized stroke-related damage as well as global disturbances in highly interactive and bilaterally distributed brain networks. It is now widely accepted that aphasia is a network disorder and that it should be treated as such when examining reorganization and recovery mechanisms after stroke. A number of functional MR imaging (fMRI) studies have explored changes in functional connectivity (FC) and network properties in post-stroke aphasia. However, the results from these studies are limited to very low frequency ranges because of fMRI's low temporal resolution. In contrast, electrophysiological methods such as magnetoencephalography (MEG) offer much higher temporal resolution. In the current study, we sought to investigate FC using resting-state MEG (rs-MEG) in the alpha frequency band (8-12 Hz) by estimating amplitude envelope correlations across nodes in the whole brain. Methods: Rs-MEG was recorded for 300 seconds in 21 chronic stroke survivors with aphasia and in 20 age- and sex-matched controls. Source-level MEG activity was reconstructed at voxels spaced 10 mm apart using a synthetic aperture magnetometry (SAM) beamformer. Principal component analysis was used to extract a representative signal from each of 72 cortical and subcortical atlas-defined regions (or nodes) from the voxel-wise source activity. The representative node signals were band-pass filtered into the alpha band (8-12 Hz), and were corrected for spatial leakage using the closest symmetric method introduced by Colclough and colleagues (2015). The amplitude envelope obtained from the Hilbert transformation was down-sampled, and pairwise Pearson's correlation coefficients were computed between each pair of nodes as a measure of FC. The 72×72 adjacency matrices of amplitude correlations were compared between groups in a Partial Least Squares (PLS) analysis, and were subjected to graph theoretical analysis. Within the stroke group, correlations between FC and the fluency measure from the Western Aphasia Battery (WAB) were computed.
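(A compressed sketch of this envelope-correlation pipeline, omitting beamforming and leakage correction; signals, sampling rate, and down-sampling factor are placeholders:)

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs, n_nodes, n_samples = 250.0, 72, 75000     # assumed sampling rate; 300 s of data
    signals = np.random.randn(n_nodes, n_samples) # placeholder node time courses

    b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
    alpha = filtfilt(b, a, signals, axis=1)       # alpha band (8-12 Hz)
    env = np.abs(hilbert(alpha, axis=1))[:, ::25] # amplitude envelope, down-sampled
    adjacency = np.corrcoef(env)                  # 72 x 72 pairwise Pearson correlations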
Results: Intra- and inter-hemispheric FC was greatly reduced in stroke among the temporal, parietal and frontal language regions. These reductions in stroke were accompanied by significant increases in FC among the right language-homolog regions, involving the right angular and inferior frontal gyri as well as the domain-general anterior cingulate gyri (p<0.001, interpreted at a bootstrap ratio of 3.0). Small-worldness was significantly reduced in stroke (p=0.025), with increases in characteristic path length (p=0.038). At the regional level, the hubs based on degree and betweenness shifted from the left hemisphere in controls to the right angular and inferior parietal regions in stroke. Betweenness was increased for the right parietal regions in stroke. Finally, correlations with WAB fluency scores indicated that better performance is associated with rich intra-right-hemisphere and interhemispheric connections among the temporo-occipital and fronto-parietal regions, especially involving the frontal orbital cortex. Conclusions: Our findings support the adaptive role of connections formed with right homolog regions and domain-general regions in language reorganization after stroke. These findings will aid in hypothesis-driven examination of changes in connectivity and network properties as a function of language and non-invasive brain stimulation therapies. A8 Propagation speed within the ventral stream predicts treatment response in chronic post-stroke aphasia Janina Wilmskoetter1, Barbara Marebwa1, Graham Warner2, Alexandra Basilakos3, Julius Fridriksson3, Chris Rorden4, Leonardo Bonilha1; 1Department of Neurology, College of Medicine, Medical University of South Carolina, 2Department of Neuroscience, College of Graduate Studies, Medical University of South Carolina, 3Department of Communication Sciences and Disorders, University of South Carolina, 4Department of Psychology, University of South Carolina Background: The preservation of direct white-matter connections between language-related brain regions has been linked to aphasia recovery after stroke. However, little is known about the importance of indirect, alternative pathways when direct connections are lost or nonexistent. We hypothesized that the propagation speed (number of steps between two regions) within the ventral and dorsal streams of language processing is predictive of the treatment response of individuals with chronic post-stroke aphasia. Specifically, we hypothesized that propagation speed within the ventral stream and dorsal stream is predictive of a change in semantic and phonological errors, respectively. Methods: We leveraged data from 69 individuals with chronic, left-hemisphere, post-stroke aphasia who were enrolled in a clinical trial assessing the futility of transcranial direct current stimulation (tDCS) in aphasia treatment. We measured individuals' naming abilities with the standardized Philadelphia Naming Test (PNT) before and one week after the three-week-long computerized anomia treatment. Using probabilistic tractography, individuals' whole-brain connectomes were constructed from diffusion tensor MRI acquired before treatment. We calculated the number of steps (nodes) between brain regions of the ventral and dorsal language processing streams as a proxy for the propagation speed of information spreading between regions. The sum of all internodal shortest connections was calculated for the ventral and dorsal streams separately. We performed multivariable linear regression modeling to predict the change in naming performance from pre- to post-treatment.
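(The step-count proxy just described can be sketched as shortest-path lengths on the connectome graph; the toy graph and region names below are illustrative, not patient data:)

    import networkx as nx

    G = nx.Graph()  # toy structural connectome
    G.add_edges_from([("ITG", "MTG"), ("MTG", "STG"), ("STG", "AG")])

    ventral_rois = ["AG", "ITG", "MTG", "STG"]
    steps = dict(nx.all_pairs_shortest_path_length(G))
    # sum of internodal shortest connections over all ROI pairs of the stream
    ventral_speed = sum(steps[a][b] for i, a in enumerate(ventral_rois)
                        for b in ventral_rois[i + 1:])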
Results: Propagation speed within the ventral stream significantly predicted the change in correct naming responses (standardized beta: -.522, p=.023) and in semantic errors (standardized beta: .654, p=.003) from pre- to post-treatment, while controlling for dorsal-stream propagation speed, lesion volume, age, education, and treatment type. The ventral stream did not predict change in phonological errors, nor did the dorsal stream predict change in correct naming, semantic or phonological errors. Conclusions: In this study we demonstrate that the propagation speed between brain regions of the language network is a crucial marker of treatment response in individuals with chronic post-stroke aphasia. Specifically, a faster propagation speed between regions belonging to the ventral stream was linked to more correct naming responses and fewer semantic errors. Propagation speed is a useful measure that complements traditional lesion-symptom mapping by assessing the integrity of residual neural networks. Disorders: Developmental A9 Inconsistency in lateralisation of language functions: a risk factor for language impairment? Abigail Bradshaw1, Zoe Woodhead1, Paul Thompson1, Dorothy Bishop1; 1University of Oxford Introduction: Language laterality (the greater involvement of the left hemisphere in language processing than the right) is generally assumed to be beneficial for language functioning. Disruption to language lateralisation has long been proposed as a cause of developmental language impairments; however, this has so far only been investigated by looking for differences in laterality strength between groups on a single task. Alternatively, it is possible that the relevance of laterality to language functioning relates more to whether different language functions show common lateralisation; inconsistent laterality across language sub-processes may represent a less efficient network that places greater demands on interhemispheric integration. This study tested the hypothesis that inconsistent lateralisation across different language functions would be associated with poorer language abilities. This was tested using an ultrasound technique (functional transcranial Doppler ultrasound, fTCD) with a large sample of individuals who varied in their language abilities. Methods: This study was preregistered on the Open Science Framework (https://osf.io/fvhxq/), and a power calculation was used to determine sample size. 104 adult participants were tested, 67 of whom had a developmental disorder affecting language (e.g. dyslexia, dyspraxia, ASD) and 37 of whom had no diagnosis relating to language. All participants were tested with a comprehensive language assessment battery and with fTCD measurement of laterality for three tasks designed to engage different language sub-processes (sentence generation, phonological decision and semantic decision). The whole sample was divided into those showing consistent and inconsistent lateralisation across the three tasks, and their language scores on the standardised assessments were compared by means of a multivariate test. We further explored whether the developmental disorders group demonstrated an increased incidence of inconsistent laterality patterns.
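(One simple way to operationalize the consistency split is sign agreement of the three laterality indices; a hypothetical sketch, not necessarily the study's exact criterion:)

    def consistent(li_sentence: float, li_phonological: float, li_semantic: float) -> bool:
        """True when all three fTCD laterality indices point to the same hemisphere."""
        signs = {li > 0 for li in (li_sentence, li_phonological, li_semantic)}
        return len(signs) == 1

    consistent(2.1, 1.4, 3.0)    # True: all left-lateralised
    consistent(2.1, -0.8, 3.0)   # False: mixed lateralisation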
Results: We identified 31 participants within our sample who demonstrated inconsistent lateralisation across the tasks; comparison of their language scores to those of the 73 individuals showing consistent lateralisation did not find significant differences on any language measure. However, the developmental disorder group demonstrated an increased incidence of inconsistent lateralisation (representing 26 of the 31 inconsistent participants) and weaker correlations between laterality indices across pairs of tasks. This suggests a higher level of within-individual variability in lateralisation in the developmental disorder group. Conclusions: The findings suggest that inconsistent lateralisation of different language sub-processes is not associated with poorer language skills. However, experience of a developmental disorder may increase the likelihood of having a more distributed organisation of language functions across the hemispheres, potentially reflective of attempts at compensatory reorganisation. This pattern of findings suggests the need to reconsider assumptions about the direction of causality between altered laterality patterns and disrupted development of language skills.

A10 Auditory window of integration relates to language disorder: An event-related potentials study of children with autism spectrum disorder Elaine Kwok1, Janis Oram Cardy1; 1Communication Sciences and Disorders, Western University, Canada

Introduction: Auditory perception results from neural integration of acoustic elements over a brief timeframe, termed the auditory window of integration (AWI) (Bregman, 1990). As children age, the AWI becomes smaller, allowing for more refined perception of rapidly changing auditory stimuli (including speech). An immature (larger) AWI has been reported in individuals with developmental language disorder (McArthur & Bishop, 2004). Few studies have explored the relation of the AWI to the language disorder associated with autism spectrum disorder (ASD). In older children and adolescents with ASD, those with co-occurring language impairment (ASD+LI) are likely to have a longer AWI compared to peers with ASD without language disorder (ASD–LI) (Oram Cardy, Flagg, Roberts & Roberts, 2008). This study explored the AWI of school-age children with ASD using electroencephalography. Methods: Participants were 6–11 year olds with ASD and at least average WASI Performance IQ. Children with ASD+LI (n=7) scored below –1.25SD on a standardized oral language measure, the CELF-4, whereas children with ASD–LI (n=12) scored within normal limits. Auditory evoked potentials (AEPs) were recorded using a 128-channel EGI system while the following four blocks of auditory stimuli (225 trials each) were presented: a single pure tone (One Tone); two tones separated by 100ms (TT100); two tones separated by 200ms (TT200); two tones separated by 400ms (TT400). For each participant, we compared a segment of the 0–400ms AEP response from each of the TT conditions to the One Tone condition using the intraclass correlation coefficient (ICC), an index of waveform similarity (Bishop & McArthur, 2004). For the TT400 condition, only the AEP response to the first of the two tones was included in this comparison, thus generating a reference ICC value. A significantly lower ICC in the TT100 and TT200 conditions relative to this reference ICC would suggest the presence of a response to the second tone.
Conversely, a similar ICC would suggest the absence of a second tone response (i.e., the two tones fall within the AWI and may be processed as one tone). Results: Children with ASD–LI and ASD+LI did not significantly differ in age or Performance IQ (p ≥ 0.541), but those with ASD+LI had significantly weaker oral language (p < 0.001). For the group with ASD–LI, the ICC values from both the TT100 (M = 0.78, SD = 0.39) and TT200 (M = 0.77, SD = 0.35) conditions were significantly lower than that of TT400 (M = 0.94, SD = 0.32), p < 0.05, suggesting a second tone AEP response was present in both conditions. By contrast, for the group with ASD+LI, only the ICC value from the TT200 condition (M = 0.92, SD = 0.33) was significantly lower than the TT400 condition (M = 1.09, SD = 0.36), p < 0.05, suggesting that a second tone AEP response was only present in the TT200 condition. Conclusions: Results suggested that the AWI was less than 100ms in children with ASD–LI and between 100ms and 200ms in children with ASD+LI. An immature AWI may be related to the language disorder associated with ASD.

A11 Gray and white matter developmental trajectories associated with persistence and recovery of childhood stuttering Simone Koenraads1, Gregory Spray2,3, Ho Ming Chow2,4, Emily Garnett2, Soo-Eun Chang2,3; 1Erasmus University Medical Center, Rotterdam, 2University of Michigan, Ann Arbor, 3Michigan State University, East Lansing, 4Nemours/Alfred I. DuPont Hospital for Children

Developmental stuttering is a complex neurodevelopmental disorder affecting 5-8% of preschool-age children. Most children recover, while 1% go on to develop chronic life-long stuttering. The neural bases for persistence versus natural recovery from stuttering remain unclear. Previous neuroimaging studies in children who stutter (CWS) have shown structural anomalies in speech-motor brain areas relative to fluent peers. Past studies, however, involved relatively small sample sizes and cross-sectional designs that limited our ability to track developmental changes linked to persistence and recovery of stuttering. In this first longitudinal study examining gray matter volume (GMV) and white matter volume (WMV) developmental trajectories, we conducted voxel-based morphometry (VBM) to compare children with persistent stuttering (pCWS), children recovered from stuttering (rCWS), and fluent controls. Based on previous findings reported by us and others, we hypothesized that pCWS and rCWS exhibit reduced GMV in the left inferior frontal gyrus (IFG) and ventral premotor cortex. We also hypothesized that rCWS would show increases in WMV with age in the superior temporal gyrus (STG), and greater GMV and WMV in structures supporting speech planning and timing (cerebellum, basal ganglia). Based on data reported on persistent stuttering, we expected pCWS to exhibit greater somatosensory area volume increases as well as right-sided motor area volume increases. A total of 286 MRI scans from 43 CWS (26 pCWS and 17 rCWS) and 44 fluent peers between 3 and 12 years of age were entered for analysis. Each participant was scanned up to four times, with an average inter-scan interval of 1 year. We examined group differences (pCWS, rCWS, fluent control) and group by age interactions in GMV and WMV, incorporating sex, IQ, brain size, and socioeconomic status as covariates of no interest. In terms of gray matter, both pCWS and rCWS exhibited significantly reduced GMV relative to controls in the left IFG.
pCWS showed greater GMV relative to controls in the left postcentral gyrus and right precentral gyrus, and greater GMV relative to rCWS in the left cerebellum and left postcentral gyrus. rCWS showed greater GMV in the left cerebellum relative to controls and in the right thalamus relative to pCWS. There were significant age-related GMV increases in the bilateral putamen for rCWS relative to the other two groups (all contrasts thresholded at p< .05 corrected). In terms of white matter, rCWS showed significantly greater WMV in the bilateral STG relative to controls and greater right STG WMV relative to pCWS. They also showed significant age-related increases in the left STG and bilateral cerebellum (IX) relative to controls. In sum, we found structural changes that suggest increased motor and somatosensory involvement in pCWS, and greater auditory, subcortical and cerebellar involvement in rCWS. Both pCWS and rCWS exhibited less GMV in the left IFG, replicating earlier reports that pointed to reduced GMV in left IFG in CWS as a group. Greater bilateral STG and subcortical involvement may reflect structural compensatory developmental changes conducive to recovery from stuttering. These findings provide novel insights into possible neural mechanisms of persistence and recovery of childhood stuttering.

A12 Neuroimaging-genetics profiles in persistent developmental stuttering Soo-Eun Chang1,2, Claudia Benito-Aragón3,4, Ho Ming Chow1,5, Jorge Sepulcre3,6; 1Department of Psychiatry, University of Michigan, Ann Arbor, 2Cognitive Imaging Research Center, Department of Radiology, Michigan State University, East Lansing, 3Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 4University of Navarra School of Medicine, University of Navarra, Pamplona, Spain, 5Nemours/Alfred I. DuPont Hospital for Children, Wilmington, DE, USA, 6Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School

The neurobiological underpinnings of developmental stuttering, a speech disorder characterized by disrupted speech fluency, remain unclear. While recent studies have identified several genetic profiles associated with stuttering, how these specific genetic backgrounds impact brain structure and neuronal circuits, and how they influence the emergence of stuttering, remains unknown. Here we present two studies that aimed to identify the topological relationships between Allen Brain Atlas genetic expression profiles and structural/functional brain signatures in stuttering. In study 1, we conducted voxel-based morphometry (VBM) to examine brain structural changes associated with stuttering, and, in study 2, we identified the large-scale functional connectivity network that underlies stuttering using graph theory principles. In both studies, we performed spatial similarity analysis to examine whether the distinctive brain features of stuttering intersect with the protein-coding transcriptome data of the Allen Human Brain Atlas, using both a priori knowledge of previously reported stuttering genes in the literature and data-driven approaches. Using these complementary strategies in Studies 1 and 2, we found that GNPTG – a gene involved in the lysosomal enzyme targeting pathways – was significantly co-localized with both the cortical volume changes and the cortical network associated with stuttering.
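In general form, a spatial similarity analysis of this kind correlates a brain effect map with a gene expression map parcel by parcel. A minimal sketch, with random placeholder data standing in for the stuttering map and the Allen atlas expression values, is shown below; a naive shuffle null is used for brevity, whereas real analyses require spatial-autocorrelation-preserving nulls (e.g., spin tests).

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical inputs: one value per cortical parcel.
vbm_effect = rng.normal(size=180)        # e.g., stuttering GMV effect map
gene_expression = rng.normal(size=180)   # e.g., AHBA expression map for a gene

r_obs, _ = spearmanr(vbm_effect, gene_expression)

# Naive permutation null (illustration only; respects no spatial structure).
null = np.array([
    spearmanr(vbm_effect, rng.permutation(gene_expression))[0]
    for _ in range(1000)
])
p_perm = (np.sum(np.abs(null) >= abs(r_obs)) + 1) / (len(null) + 1)
print(f"rho={r_obs:.3f}, p={p_perm:.3f}")
```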
Moreover, an enrichment analysis demonstrated that the genes identified with the stuttering cortical network shared a significantly overrepresented biological functionality of neurofilament cytoskeleton organization, mitochondrial adenosine triphosphate (ATP) synthesis, and detoxification of reactive oxygen species (ROS). Gene ontology enrichment analysis of the genes identified with the volumetric differences showed significant overrepresentation of genes involved in mitochondrial energy metabolism. These findings suggest that lysosomal dysfunction may be related to changes in specific metabolic pathways that impart deleterious effects on neurofilament organization in neuronal circuits of the stuttering speech network. While the connection between mitochondrial functions and stuttering remains to be elucidated, emerging evidence has shown that lysosomes and mitochondria interact physically and functionally, and that these interactions play an important role in modulating the metabolic functions of the two organelles. In sum, based on parallel analyses of functional and structural MRI data associated with stuttering and gene expression maps, we report that stuttering-related functional connectivity networks and brain volume changes co-localize with gene expression of the lysosomal trafficking gene GNPTG. Mutations in this gene and other similar genes embedded within the same cortical topology of cerebral networks suggest that these mutations could modulate the function of these networks. Our findings point to the auditory-motor integration network as highly vulnerable to neuronal circuit dysfunctions associated with GNPTG mutations. These novel findings help bridge structural and functional neural network anomalies and gene mutation findings previously linked to stuttering.

Development

A13 Language learning in the adult brain: TMS-induced disruption of the left dorsolateral prefrontal cortex facilitates statistical language learning Eleonore Smalle1, Riikka Mottonen2; 1Department of Experimental Psychology, Ghent University, 2School of Psychology, University of Nottingham

Adults do not learn languages as easily as children do. It has been hypothesized that the late-developing prefrontal cortex that supports executive functions competes with the implicit, procedural learning mechanisms that are also important for language learning (Poldrack & Packard, 2003). In agreement with this hypothesis, we found in our previous TMS study that disruption of the left dorsolateral prefrontal cortex (DLPFC) improved procedural learning of novel words in adults using a Hebb-learning paradigm (Smalle et al., 2017). Here, we used a statistical learning paradigm, which has been widely used to investigate implicit language learning in infants (Saffran & Kirkham, 2018). Thirty-six young adults were exposed to continuous streams of auditory syllables that included repeating three-syllable word-forms while watching a silent film. Inhibitory continuous theta-burst stimulation was applied to the left DLPFC (N = 18) or to a control site (N = 18) before exposure. During the exposure, EEG was recorded using a TMS-compatible EEG system. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to the individual syllables. Learning was also tested offline (15 min after the exposure) with a forced-choice recognition task. Overall, the DLPFC-disrupted group showed enhanced learning of the novel word-forms relative to the control group.
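The frequency-tagging logic behind such an entrainment measure can be sketched as a ratio of spectral power at the word rate versus the syllable rate. The rates below (3.3 syllables/s, hence 1.1 words/s for three-syllable words) are illustrative assumptions, not necessarily the study's exact parameters.

```python
import numpy as np
from scipy.signal import welch

def entrainment_index(eeg, fs, word_hz=1.1, syll_hz=3.3):
    """Ratio of spectral power at the word rate vs the syllable rate.

    A ratio rising above baseline suggests the brain is tracking the
    three-syllable word-forms, not just the individual syllables.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 10))  # 0.1 Hz resolution
    word_power = psd[np.argmin(np.abs(freqs - word_hz))]
    syll_power = psd[np.argmin(np.abs(freqs - syll_hz))]
    return word_power / syll_power

# Toy signal with energy at both rates plus noise.
fs = 250
t = np.arange(0, 120, 1 / fs)
eeg = np.sin(2 * np.pi * 1.1 * t) + 0.5 * np.sin(2 * np.pi * 3.3 * t)
eeg += np.random.default_rng(1).normal(scale=1.0, size=t.size)
print(entrainment_index(eeg, fs))
```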
The results support the hypothesis that the mature prefrontal cortex competes with the implicit statistical learning mechanisms that support auditory language learning. References: Poldrack, R. A., & Packard, M. G. (2003). Competition among multiple memory systems: converging evidence from animal and human brain studies. Neuropsychologia, 41, 245–251. Saffran, J. R., & Kirkham, N. Z. (2018). Infant statistical learning. Annual Review of Psychology, 69, 181–203. Smalle, E. H. M., Panouilleres, M., Szmalec, A., & Möttönen, R. (2017). Language learning in the adult brain: disrupting the dorsolateral prefrontal cortex facilitates word-form learning. Scientific Reports, 7, 13966.

A14 Temporal language areas appear necessary to wire up frontal cortex for language Greta Tuckute1,2, Zachary Mineroff2, Idan Blank2,3, Hope Kean2, Evelina Fedorenko2,4,5; 1University of Copenhagen, 2Massachusetts Institute of Technology, 3University of California, Los Angeles, 4Harvard Medical School, 5Massachusetts General Hospital

High-level language processing is supported by a bilateral fronto-temporal brain network, which is lateralized to the left hemisphere (LH) in most individuals. How this network emerges ontogenetically remains debated. There is general agreement that frontal cortex exhibits protracted development (e.g., Fuster, 2002), suggesting that frontal language areas must emerge later and/or mature more slowly than temporal language areas. But are temporal areas necessary for the development of the language areas in the frontal lobe, or do frontal language areas instead emerge independently? We shed light on this question through a case study of an individual (EG) born without a left temporal lobe, likely as a result of a pre/perinatal stroke. We investigated the high-level language network and a control network (the domain-general multiple demand (MD) network; Duncan, 2010) in EG relative to two control groups (n=96 and n=57) using functional localizer tasks in fMRI (Fedorenko et al., 2010). First, we asked whether the language network in the language-dominant hemisphere in EG is similar to that of control participants. As expected in cases of early LH damage (e.g., Lenneberg, 1967), EG had a fully functional language network in her right hemisphere (RH), comparable in topography, extent, and strength of activation to the LH language network in the control groups. Moreover, EG performed within the normal range on standardized language assessments. Second, the critical question was whether EG's intact left frontal lobe would contain language-responsive areas despite her biologically nonintact left temporal lobe. If so, that would suggest that frontal language areas can emerge without temporal language areas in the same hemisphere. If not, it would suggest that temporal language areas are critical for the emergence of the frontal ones, and that frontal inter-hemispheric connections are not sufficient to wire up the frontal cortex for language. We found that EG's RH frontal language areas have no LH homologs: no reliable responses to language were detected anywhere on the lateral surface of EG's left frontal lobe. To ensure that EG's left frontal lobe was functional aside from its lack of engagement in language, we investigated the MD network, a bilateral network implicated in executive functions and goal-directed behaviors.
The MD network was robustly present in EG's right and left frontal lobes, suggesting that her left frontal cortex is capable of supporting non-linguistic cognitive functions. Thus, even though EG's left frontal cortex supports high-level cognitive functions, no language-responsive areas were present there. Temporal language areas, and presumably the intra-hemispheric fronto-temporal connections, therefore appear to be critical for the emergence of frontal language areas, and frontal inter-hemispheric connections do not appear sufficient to wire up left frontal cortex for language.

Language Therapy

A15 The influence of post-learning high-intensity exercise on consolidation of novel word forms in healthy older adults: a randomized controlled trial Marie-Pier McSween1,2,3,6, Katie L. McMahon2, Megan L. Isaacs1,6, Kylie Maguire3, Amy D. Rodriguez4, Kirk I. Erickson5, Jeff S. Coombes3, David A. Copland1,6; 1School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, 2School of Clinical Sciences, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, 3School of Human Movement and Nutrition Sciences, The University of Queensland, St Lucia, 4Centre for Visual and Neurocognitive Rehabilitation, Atlanta, Georgia, 5University of Pittsburgh, 6The University of Queensland Centre for Clinical Research, Herston

There is increasing evidence suggesting that language learning and memory performance can be modulated by a single bout of high-intensity exercise. In young adults, exercising at high intensity after learning new words has shown positive effects, including greater retention of newly learnt words when assessed 24 hours after the initial learning. In older adults, the potential benefits of post-learning high-intensity exercise on retention of newly learnt words are yet to be investigated. Thus, the aim of the current study was to investigate the effects of post-learning high-intensity exercise, in comparison to stretching exercises, on consolidation of novel word forms in healthy older adults. Fifteen healthy older adults (mean age= 67; range= 60-79; gender= 8F/7M) participated in this between-group randomized controlled trial and attended three sessions within a three-week period. During the first session, participants completed a baseline neuropsychological assessment and a cardiorespiratory fitness assessment. Participants returned one week later to perform an associative novel word-learning task prior to either engaging in a 33-minute bout of high-intensity interval cycling (i.e. 5-minute warm-up, 4x4 minutes at 85-95% of maximum heart rate interspersed with 3x3 minutes at 50-65% of maximum heart rate, and a 3-minute cool-down) or stretching exercises (i.e. attention control). The associative novel word-learning task comprised five exposures to 15 pictures of familiar objects, each paired with a nonword, interspersed with five recalls of the newly learnt nonwords. Retention of words was assessed within five minutes post-exercise using a recall task, and one week later using a recall and recognition task. Eight participants (5F/3M) performed stretching exercises (mean heart rate= 63bpm) and seven participants (3F/4M) performed a single bout of high-intensity interval cycling (mean heart rate= 132bpm) following the word learning task. All proportional data were arcsine transformed prior to analysis.
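The arcsine (angular) transform mentioned here is the standard variance-stabilizing step for proportion data, and reduces to a single line; a minimal sketch:

```python
import numpy as np

def arcsine_transform(proportions):
    """Variance-stabilizing arcsine-square-root transform for proportions."""
    p = np.clip(np.asarray(proportions, dtype=float), 0.0, 1.0)
    return np.arcsin(np.sqrt(p))

# e.g., recall accuracies from the naming probes
print(arcsine_transform([0.0, 0.25, 0.5, 1.0]))  # [0., 0.5236, 0.7854, 1.5708]
```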
Proportional pre-exercise recall data (trials 1-5) were submitted to a repeated measures ANOVA, which showed no significant difference between exercise groups (F(1, 13)= 0.108, p= 0.748, η²= 0.008). Proportional post-exercise recall data were submitted to univariate ANOVAs, which showed no significant difference between exercise groups when recall was assessed within 5 minutes post-exercise (F(1, 13)= 0.395, p= 0.541, η²= 0.029) or one week later (F(1, 13)= 0.447, p= 0.515, η²= 0.033). Proportional one-week delayed recognition accuracy data were submitted to a univariate ANOVA, which showed no significant difference between exercise groups (F(1, 13)= 0.502, p= 0.491, η²= 0.037). This study was the first to investigate the effects of post-learning high-intensity exercise on novel word consolidation in healthy older adults. Preliminary results from this study do not show any benefit of high-intensity exercise over stretching exercises for consolidation of newly learnt words in older adults. Further investigations with larger sample sizes are necessary to verify this result. Investigations into the impact of individual differences in baseline learning abilities and baseline fitness levels on word learning are also warranted in order to better understand the effects of post-learning exercise.

Language Genetics

A16 A Neurogenetic Study of Dyslexia Risk in Neonates Stanimira Georgieva1, Topun Austin2, Gusztav Belteki3, Zoe Kourtzi1, Victoria Leong1; 1Department of Psychology, University of Cambridge, 2Department of Paediatrics, University of Cambridge, 3Department of Obstetrics & Gynaecology, University of Cambridge

Dyslexia is a neurodevelopmental difficulty in learning to read. Several candidate risk genes have now been identified, but their relationship to the neurobiological etiology of dyslexia remains unknown. It has been suggested that polymorphisms in these susceptibility genes (which are involved in neurodevelopmental processes in utero) produce abnormalities in cortical microcircuitry, as well as in structural and functional connectivity, that lead to deficits in neural oscillatory processing of speech sounds. Here, we assess whether the predicted associations exist between single nucleotide polymorphisms (SNPs) from four major dyslexia susceptibility loci (DCDC2, KIAA0319, ROBO1 and DYX1C1) and neural speech-processing EEG indices in 85 neonates at high or low familial risk for dyslexia. Genotyping revealed that, for 82 of the 85 infants, Stepwise Discriminant Function Analysis identified two SNPs (rs333491 on ROBO1 and rs793862 on DCDC2) which classified infants' familial dyslexia risk status significantly above chance. Preliminary neural analysis further revealed higher values of 0-lagged phase-locking at a theta (4.7 Hz) rate to infant-directed speech for infants carrying the homozygous major allele of the ROBO1 SNP, as compared to both heterozygous and homozygous minor allele carriers. New data collection and neural analyses are on-going.

A17 gnikaepS sdrawkcaB: from extreme language traits to mechanisms Hayley Mountford1, Stefan Prekovic2, Nayeli Gonzalez-Gomez1, Isabel Bermudez-Diaz1, Dianne Newbury1; 1Oxford Brookes University, 2Netherlands Cancer Institute

Language ability is highly heritable and therefore has a strong genetic component. Despite decades of study, we understand little of the genetic and molecular mechanisms which underpin typical language development. Insights into the genetic basis of language have primarily come from the study of language disorders, implicating over thirty-five genes in language function. A key assumption is that the genes and mechanisms contributing to language disorders are the same factors that underpin typical language development. An alternative approach is offered through the neurological and genetic characterisation of individuals with extreme ability, rather than disability. We leverage this approach to better understand mechanisms relevant to typical language development. Here we report on a novel molecular mechanism identified through the investigation of an extreme language trait – the ability to speak backwards. We identified a father and daughter who can rapidly and accurately reverse phonemes to speak backwards fluently. We showed that this remarkable phenotype is underpinned by exceptional working memory. Using fMRI, we established that their extreme ability is supported by visual semantic loops within the left fusiform gyrus. Our analysis of this family identified a novel coding variant (c.G262A, p.G88R) in the gene RIC3, a chaperone of the α7 subunit of the nicotinic acetylcholine receptor (nAChR), which is critical for cholinergic synaptic transmission. The α7 nAChR is one of the most abundant receptors in the mammalian brain and plays a pivotal role in brain development. Traditionally, these receptors are associated with neuropsychiatric disorders such as schizophrenia and Alzheimer's disease. Through electrophysiological studies using Xenopus laevis oocytes as a model for nAChR activity, we detected a positive functional effect of the RIC3 variant upon cholinergic synaptic transmission. This provides direct evidence for the role of RIC3 as an nAChR chaperone, and for its mediatory role in memory-related circuits. Expanding on these findings, we are currently recruiting and testing additional participants with the ability to speak backwards. Preliminary assessment suggests that this trait is heterogeneous, and not necessarily driven by superior working memory in all cases. This indicates that backwards speech is a complex trait with many neurological and genetic drivers likely to play a role. The successful identification of RIC3 and the characterisation of its role in working memory demonstrate the utility of extreme traits for uncovering novel mechanisms that underpin language. By expanding this successful approach to a wider cohort of backwards speakers, we aim to characterise the cognitive processes that underpin backwards speech and use this to reveal novel neuromolecular pathways involved in language acquisition.

Computational Approaches

A18 Translating computational modeling into clinical practice: BiLex as a tool to simulate treatment outcomes in bilingual aphasia Claudia Penaloza1, Uli Grasemann2, Maria Dekhtyar1, Risto Miikkulainen2, Swathi Kiran1; 1Boston University, 2The University of Texas at Austin

Background. Bilinguals with aphasia (BWA) present varying degrees of impairment and recovery in their two languages. Thus, identifying the language that should be targeted in treatment is a current challenge in clinical practice. Computational models that accurately simulate rehabilitation outcomes in BWA may help predict individual response to therapy provided in each language. Aims.
We used BiLex (Peñaloza et al., under review), a computational model that can simulate L1 and L2 naming ability in healthy bilinguals, to simulate treatment effects on the treated language and cross-language transfer effects on the untreated language in BWA. Behavioral methods. We employed a retrospective behavioral dataset of 13 Spanish-English BWA (mean age = 55.61 years) reported elsewhere (Kiran et al., 2013). All BWA received naming treatment based on semantic feature analysis in English (n = 6) or Spanish (n = 7). Individual scores on naming probes for treated words and untreated translations were collected to measure single-session effects and were used as the target for simulations. Effect sizes were also computed to determine whether treatment effects were significant in each language. Computational approach. First, an individual instance of the BiLex model was trained to simulate prestroke naming abilities, using each participant's age at testing, L2 age of acquisition, and L1 and L2 prestroke exposure and use as model training parameters. Next, each BiLex model was lesioned to simulate the L1 and L2 naming deficits measured by standardized language tests in each BWA. Finally, each BiLex model was retrained to simulate treatment outcomes in the treated and the untreated language. Each treatment session received by a BWA was replicated as a retraining cycle for the corresponding BiLex model. After each retraining cycle, the model's naming performance was tested in each language to measure retraining effects in both languages, as was done behaviorally with each BWA. Simulated naming performance in each language was then compared to the actual treatment effects for each BWA using cross-correlations. Results: Treatment gains in the treated language were significant for 10 BWA. Three BWA also showed significant transfer effects to the untreated language. Cross-correlations between the behavioral treatment and computational simulation time-series data ranged between 0.48 and 0.96 for the treated language and between -0.15 and 0.63 for the untreated language. Conclusions. Overall, our results indicate that BiLex can simulate therapy effects in the treated language for most BWA, and transfer effects to the untreated language when those were observed in the BWA. These findings support the potential of BiLex to predict individual treatment outcomes in BWA by comparing overall simulated treatment effects when treatment is provided in one versus the other language. Future research with BiLex could inform clinical decisions on the language that should be targeted in treatment to achieve maximum gains in BWA.

A19 Resolving dependencies during naturalistic listening Jonathan Brennan1, Andrea Martin2, Donald Dunagan3, Lars Meyer4, John Hale3; 1University of Michigan, 2Max Planck Institute for Psycholinguistics, 3University of Georgia, 4Max Planck Institute for Human Cognitive and Brain Sciences

Language comprehension requires the listener to determine how non-adjacent words in an utterance relate to each other. For example, given the utterance "Mary gave the book she finished to Eleanor.", listeners rapidly recognize that Mary finished a book and that Eleanor is receiving the book from Mary. The working memory mechanisms that process these dependencies have been associated with a number of evoked responses and oscillatory power changes in EEG [1,2,3,4].
The present study aims to test the generalizability of these prior findings in two ways: (1) we test a broad range of dependency types, rather than assessing dependencies between noun phrases and verbs only; (2) we examine evoked responses and oscillatory power changes in an everyday setting, as our participants performed the naturalistic task of simply listening to an audiobook story. STIMULUS AND MODELING: We use an openly available dataset of N=33 EEG recordings collected while participants listened to an audiobook story [5]. The text of the story was parsed using a dependency parser [6]. With this annotation in hand, we define two sets of word-by-word metrics that dissociate two different sub-processes of working memory during dependency processing: a "storage" metric sums across the dependencies that are unresolved at each particular word—thus mimicking working memory storage demands; a "retrieval" metric sums across all dependencies that are completed at a particular word, weighted by the time passed since word encoding—thus mimicking working memory retrieval demands. We define "storage" and "retrieval" for all dependencies in the text, as well as for linguistically coherent sub-sets (i.e., filler-gap, passive, embedding, sub-categorization). EEG DATA AND STATISTICAL ANALYSIS: The raw data are divided into epochs from –0.3–1 s around the onset of content words. Epochs and channels are visually inspected for artifacts, and ICA is applied to remove ocular signals. We analyze evoked data (0.1–40 Hz band-pass) and alpha-band (8–12 Hz) power. We then query these data using within-subject linear regression models containing the dependency measures alongside a set of lexical and sub-lexical control variables (e.g., word frequency, sound power, word order) at each electrode and time-point. Beta coefficients from these regressions are clustered at the group level using a non-parametric permutation test. RESULTS: The evoked signal shows an early anterior negativity that varies as a function of the total retrieval demands per word. This is consistent with [7,8]. The evoked signal also shows a late anterior positivity associated with "storage" of filler-gap dependencies. In support of [4], "storage" of filler-gap dependencies also correlates with increased left anterior alpha power. These findings confirm, in the context of naturalistic story listening, three different neural indices of working memory processes that are familiar from sentence-level experimental paradigms. In doing so, they contribute evidence from an ecologically natural domain in favor of the hypothesized link between grammatical relationships and memory processes. A supporting figure and a reference list are available at https://tinyurl.com/y2h5pckx.

A20 Predictability of semantic representations during pre-utterance planning Maryam Honari-Jahromi1, Brea Chouinard2, Esti Blanco-Elorrieta3,4, Liina Pylkkänen3,4, Alona Fyshe2; 1University of Victoria, 2University of Alberta, 3New York University, 4New York University Abu Dhabi

Decoding techniques have uncovered the neural semantic representations of words and phrases during comprehension, but not during pre-utterance planning. Such techniques use machine learning to predict stimulus properties from brain recordings, using pre-established word vectors. Here, we used decoding to investigate neural representations of color words and nouns before the utterance, and to compare semantic representations of isolated words to words in phrases.
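The general decoding logic can be sketched as a regression from brain data onto word vectors, scored by how often the predicted vector is closest to the correct word's vector on held-out trials. The data shapes and the nearest-neighbor scoring rule below are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)

# Hypothetical shapes: 80 trials x 208 sensor features (e.g., one
# averaged time window), decoded onto 300-dim skip-gram word vectors.
meg = rng.normal(size=(80, 208))
word_vectors = rng.normal(size=(80, 300))

def decode_accuracy(X, Y, n_splits=5):
    """Fraction of held-out trials whose predicted vector correlates
    most strongly with the true word vector rather than any other."""
    correct, total = 0, 0
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(X):
        model = Ridge(alpha=1.0).fit(X[train], Y[train])
        pred = model.predict(X[test])
        for i, trial in enumerate(test):
            sims = [np.corrcoef(pred[i], Y[j])[0, 1] for j in test]
            correct += int(test[np.argmax(sims)] == trial)
            total += 1
    return correct / total

print(decode_accuracy(meg, word_vectors))  # ~chance on random data
```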
Methods: We used 208-channel MEG (magnetoencephalography) data from twenty right-handed, native English speakers naming coloured pictures on a coloured background. Condition was determined by instruction: Isolation, "Say object colour" (or object); Phrase, "Say object name and colour" (a red bag on a green background yields "Red bag"); List, "Say background colour then object name" (a white bag on a red background yields "Red bag"). Stimuli appeared for 1500ms or until the participant's response. We applied existing machine learning methods to the 0-700ms of MEG data after stimulus presentation, using Skip-gram vectors compiled from the Google news-articles dataset (vocabulary size: 692,000). The resultant 300-dimensional vectors and the MEG data were combined to track neural representations across time (0-700ms) and condition. We used a sliding 100ms time-window, every 5 ms, to obtain decoding accuracies, then computed temporal generalization matrices (TGMs). TGMs determine how well the learned weights from a certain time or condition map onto other times or conditions, thus providing information about decodability across time. Training and testing on the same time-window provides information about decoding accuracy, while training and testing on different time windows (e.g., train ~100ms, and test ~200, 350, or 600ms) provides information about the decodability of representations across time. TGMs can also compare across conditions. Results: Naming accuracy was >97%, with no latency differences between the list and phrase conditions. Overall decoding was significantly above chance (p<.05) for adjectives (71.03%, 64.41%) and nouns (62.97%, 70.5%) in the list and phrase conditions respectively. Adjective TGMs: Adjective findings were relatively consistent, with nearly all TGMs identifying robust mental representations immediately prior to utterance (i.e., ~450-700ms) and no maintained or repeated representations. Uniquely, the list TGM indicated dissimilar early (~100ms) versus late (~600ms) representations, possibly differentiating an early visual representation from a later semantic or motor one. There were no notable across-condition results. Noun TGMs: Generally, noun representations were decodable earlier than adjectives (i.e., ~100ms) and were more protracted (i.e., ~150-450ms), but were not decodable just before the utterance. Unlike adjectives or the list condition, noun representations in the phrase condition were decodable at different times, indicating a delayed repetition of the noun representation. Conclusion: We found ample information in MEG data for decoding during pre-utterance planning. We found robust mental representations of adjectives immediately prior to their utterance, but no maintenance or repetition of those representations. In contrast, noun representations were prolonged in the phrase condition compared to the list condition and adjectives, and reappeared at later times, indicating delayed repetition of the noun representation. Future research could investigate regions of interest instead of whole-brain analysis, and evaluate adjectives that meaningfully alter the noun (i.e., rotten tomato, broken kettle).
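The train-at-one-time, test-at-another logic of a TGM reduces to a double loop over time points. The sketch below uses a simple classifier on toy data and a single train/test split for brevity; real analyses cross-validate, and the shapes are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def temporal_generalization(X, y):
    """Train a decoder at each time point, test it at every other one.

    X: trials x channels x times; y: binary labels. Returns a
    times x times accuracy matrix (diagonal = time-resolved decoding).
    """
    n_times = X.shape[2]
    tgm = np.zeros((n_times, n_times))
    half = X.shape[0] // 2  # single split for brevity; cross-validate in practice
    for t_train in range(n_times):
        clf = LogisticRegression().fit(X[:half, :, t_train], y[:half])
        for t_test in range(n_times):
            tgm[t_train, t_test] = clf.score(X[half:, :, t_test], y[half:])
    return tgm

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 208, 20))  # toy: 40 trials, 208 sensors, 20 times
y = np.tile([0, 1], 20)             # alternating labels so both halves have both classes
print(temporal_generalization(X, y).shape)  # (20, 20)
```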
Syntax

A21 Investigating the role of inter-hemispheric communication in the age-related increase in the right-hemisphere P600 grammaticality effect: a combined ERP and DTI study Po-Heng Chen1, Jen-Shiang Wong1, Wan-Ting Lin1, Wen-Yih Isaac Tseng1, Joshua Oon Soo Goh1, Chia-Lin Lee1; 1National Taiwan University

Syntactic processing is strongly lateralized toward the left hemisphere (LH) in young right-handers but tends to additionally engage the right hemisphere (RH) with age. The present study investigated how this age-related bilateralized syntactic processing is moderated by inter-hemispheric communication. Event-related potentials to grammatical or ungrammatical two-word pairs (a centrally-presented syntactic cue followed by a target word presented to either visual field) were collected from 35 right-handed healthy older adults and were time-locked to the laterally-presented target word. Consistent with the aging literature, bilateral syntactic processing was observed, with significant P600 grammaticality effects in both visual-field (VF) presentations. All participants underwent behavioral tasks to measure inter-hemispheric inhibition and coordination. Results from the bilateral flanker task, in which the target arrow and the distracting arrow were presented in different VFs and were either congruent or incongruent in direction, showed a reliable flanker effect regardless of which VF the distractor was presented to, suggesting symmetrical inter-hemispheric inhibition in older adults. In addition, older adults performed better in the word matching task when two semantically related or unrelated words were presented bilaterally rather than unilaterally, suggesting a trend of benefits from cross-hemispheric coordination. However, individual differences in these tasks did not correlate with the magnitude of the RH P600 effect. Finally, for a subset of 20 participants, diffusion tensor imaging (DTI) data were obtained and the microstructural tissue integrity of the corpus callosum (CC) was estimated with fractional anisotropy (FA). Extant neuroimaging data suggest that the anterior and posterior parts of the CC, the genu and splenium, may affect the extent of functional lateralization differentially. While weaker structural integrity of the genu is associated with greater RH activity, implicating a role of inter-hemispheric inhibition for the genu, larger size of the splenium is associated with better performance in demanding language tasks, implicating a role of inter-hemispheric excitation for the splenium. Hence, examining the associations between the microstructure of the genu and splenium and RH P600 effects could help clarify the modulating factors for the additional RH syntactic processing. Our results showed that FA of the splenium was predictive of the RH P600 effects, with larger FA values associated with larger P600 effects. The correlation between FA of the genu and the RH P600 effect, however, was not reliable. Together, this study replicated the bilateral P600 grammaticality effect in healthy older adults. We did not observe evidence supporting a relation between reduced inter-hemispheric inhibition and increased RH syntactic processing in these older adults. However, our results suggest that age-related additional syntactic processing in the non-dominant RH may be associated with cross-callosal excitation via the splenium.
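A brain-behavior test of this kind reduces to computing a per-subject P600 effect (mean ungrammatical minus grammatical amplitude over a late window) and correlating it with callosal FA. The window, data, and FA values below are placeholders for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
fs, n_sub = 250, 20
times = np.arange(-0.2, 1.0, 1 / fs)
win = (times >= 0.5) & (times <= 0.8)  # assumed P600 window

# Hypothetical ERPs: subjects x conditions x times (0 = grammatical,
# 1 = ungrammatical), e.g. at a centro-parietal site, in microvolts.
erps = rng.normal(size=(n_sub, 2, times.size))
p600_effect = (erps[:, 1, :][:, win] - erps[:, 0, :][:, win]).mean(axis=1)

splenium_fa = 0.5 + 0.05 * rng.normal(size=n_sub)  # hypothetical DTI values
r, p = pearsonr(splenium_fa, p600_effect)
print(f"r={r:.2f}, p={p:.3f}")
```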
A22 ERP profiles for Chinese sentence processing: Relevant factors in noun-noun-verb structures with BA and BEI and effects of adjective placement violations Max Wolpert1,2, Hui Zhang3, Shari Baum2,4, Karsten Steinhauer2,4; 1Integrated Program in Neuroscience, McGill University, 2Centre for Research on Brain, Language and Music, Montreal, 3School of Foreign Languages and Cultures, Nanjing Normal University, 4School of Communication Sciences and Disorders, McGill University

Mandarin Chinese and English both have poor inflectional morphology and canonical subject-verb-object word order, but are otherwise largely dissimilar languages. We took advantage of the cross-linguistic similarities and differences between the languages to conduct two EEG sentence processing experiments with Mandarin monolinguals in China, with the goal of confirming structural targets for a future study on English-Mandarin bilingualism and first language attrition in Canada. In our first experiment, we investigated the effect of sentence structure and animacy on argument structure interpretation in noun-noun-verb (NNV) sentences. While English speakers rely on word order as their primary cue, Mandarin speakers give greater consideration to semantic knowledge to determine who did what to whom (Liu, Bates, & Li, 1992). For instance, the two sentences "the apple the boy eats" and "the boy the apple eats" are both grammatical and have the same meaning in Mandarin regardless of word order, namely that the boy eats the apple. Mandarin also has two coverbs, BA and BEI, which unambiguously assign undergoer and actor status, respectively, to their subsequent noun phrase. Participants' (n=25) EEG was recorded while they read NNV sentences with or without BA and BEI. After each sentence, participants indicated which noun they interpreted as the actor. Behavioral judgments showed main effects of structure (F=532, p<0.001) and direction of semantic plausibility (F=77, p<0.001) and a significant interaction between structure and semantic direction (F=11, p<0.001); post-hoc t-tests showed that bare NNV sentences without coverbs were the most affected by semantic direction (all ps<0.001). We analyzed the mean amplitude of ERPs at the verb at Cz and Pz electrodes from 300 to 500 ms, and preliminary results showed a greater N400 for the semantically anomalous compared to the semantically congruent condition for BA sentences (t(20)=2.3, p=0.03), with a marginally significant effect for BEI sentences (t(25)=2.0, p=0.06). This contrasts slightly with Bornkessel-Schlesewsky et al. (2011), who investigated similar structures in spoken Chinese (with 25 native speakers of Mandarin in Germany) and found an N400 effect only for implausible BEI (but not BA) sentences. These differences could be due to our participants being monolingual Mandarin speakers with very limited foreign language exposure, or to experimental design differences (e.g., materials, task, or modality). In our second experiment, we investigated Mandarin monolinguals' (n=18) processing of adjective-noun order violations. Because adjectives generally precede nouns in both English and Mandarin, we predicted that these structures would be resistant to language attrition, but it was unclear whether violations in Chinese would elicit the same ERP effects as in English.
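The mean-amplitude analysis used in both experiments reduces to averaging each subject's ERP over a time window per condition and comparing conditions with paired t-tests. A minimal sketch on toy data, with an assumed sampling rate and the 300-500 ms N400 window from the abstract:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
fs = 250
times = np.arange(-0.2, 1.0, 1 / fs)
n400_win = (times >= 0.3) & (times <= 0.5)

# Hypothetical subject averages: subjects x conditions x times, already
# pooled over Cz and Pz (condition 0 = congruent, 1 = anomalous).
erps = rng.normal(size=(25, 2, times.size))

congruent = erps[:, 0, :][:, n400_win].mean(axis=1)
anomalous = erps[:, 1, :][:, n400_win].mean(axis=1)
t, p = ttest_rel(anomalous, congruent)
print(f"t({len(congruent) - 1})={t:.2f}, p={p:.3f}")
```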
Mean amplitude at Cz and Pz electrodes showed a greater N400 in the 300 to 500 ms time window (t(17)=3.4, p=0.004) and a P600 approaching significance in the 550 to 900 ms time window (t(17)=1.7, p=0.1) for sentences with adjective-noun order violations, in line with results from English native speakers and Chinese learners of English (Steinhauer, 2014). These data not only shed new light on Mandarin monolingual sentence processing, but also identify promising targets for current efforts to study cross-linguistic influences in bilingual sentence processing and first language attrition.

Meaning: Combinatorial Semantics

A23 Predictive Abilities During Visual Narrative Comprehension Emily Coderre1, Elizabeth O'Donnell1, Emme O'Rourke1, Neil Cohn2; 1University of Vermont, 2Tilburg University

Prediction during language processing remains a debated topic, but is thought to free up neural resources and facilitate processing of subsequent material. Prediction is typically measured by examining how the amplitude of the N400 event-related potential (ERP) component is modulated by cloze probability, the expectancy of a specific word given the contextual constraints of a preceding sentence. Highly predictable words in semantically constraining "high cloze" sentences generate reduced N400 amplitudes compared to words in less-constraining "low cloze" sentences. Importantly, non-verbal stimuli also elicit N400s, offering the potential to examine the role of prediction in other modalities, such as visual narratives, which convey meaning across image sequences. Visual narrative comprehension relies on similar cognitive mechanisms as language comprehension. However, it remains unknown whether cloze probability modulates visual narrative sequences in the same way as in language. Here, we test this question using a cloze probability paradigm with visual narratives. In a pre-test, 200 healthy adults were shown the starting panels of a comic strip and asked "what comes next?" Cloze ratings for each strip corresponded to the percentage of participants who agreed on the predicted ending (ranging from 0-100%). From these ratings we generated high cloze (>40%, M=60%) and low cloze (<25%, M=20%) conditions. Anomalous strips were also created by replacing critical panels (cloze ratings 0-83%, M=40%) with semantically incongruous panels from different strips. The anomalous condition served as a control and should elicit the highest N400 amplitudes of the three conditions. For the EEG experiment, 22 adults (who did not provide pre-test ratings) viewed 60 strips in each condition during EEG recording using a 128-channel Geodesic Sensor Net and NetStation 5. ERPs were time-locked to the onset of the critical panel in each strip. Analyses were conducted at nine scalp sites representing frontal, central, and parietal regions over the left hemisphere, midline, and right hemisphere. Results demonstrated enhanced negativities for the anomalous condition compared to both the high and low cloze conditions at fronto-central sites from 500-700 ms. Differences also occurred between all three conditions from 500-600 ms in the expected pattern (anomalous sequences showing the largest amplitudes, followed by low cloze and high cloze). Across all items, cloze ratings correlated significantly with amplitude from 400-900 ms. Correlations were strongest over midline and right centro-parietal scalp and were all positive, indicating that as cloze rating decreased (i.e. as sequences became less predictable), amplitude decreased (i.e.
became more negative). These results indicate that predictability has an analogous effect on visual narrative sequences as on sentences. High cloze sequences showed reduced ERP amplitudes compared to low cloze sequences from 500-600 ms. This is analogous to an N400 effect, although occurring slightly later (perhaps due to the more complex visual information inherent in this modality). Amplitude also correlated with cloze ratings such that less predictable sequences generated larger N400 amplitudes. Prediction thus appears to play a similar role in visual narrative comprehension as in language comprehension, providing further evidence that sequential images are processed using similar cognitive and neural mechanisms as language.

A24 Domain-general neural mechanisms responsible for integrating incoming semantic content against multiple contextual aspects: evidence from semantic and nonsemantic tasks Francesca Martina Branzi1, Matthew Lambon Ralph1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge

The integration of spoken or written narratives into a meaningful structure has been linked to a neural network that includes perisylvian temporal and left inferior frontal regions, but also parietal cortex and the cerebellum. However, the question of whether activity in these regions reflects mechanisms for integrating information into a relevant context that are not uniquely implicated in language processing has received much less attention. In this functional magnetic resonance imaging (fMRI) study we addressed this question by testing healthy participants on semantic and nonsemantic tasks (between-subjects design) reflecting two important aspects of semantic integration during naturalistic language processing. The first is its time-extended dimension: integration processes were measured across two paragraphs of text (narrative reading task) and two paragraphs of number sequences (number pattern task). The second is the dynamic nature of integration, which sometimes requires information to be updated when the context or situation changes. We therefore compared brain regions involved in integration when the second paragraph (target) was preceded by a low-congruent (LC) context paragraph (disruption of semantic or numerical coherence) versus when the same target paragraph could be integrated into a highly congruent context paragraph (HC). To reveal brain regions involved in time-extended integration processes in the two tasks, we compared these two conditions against a no-context control condition (NC), in which the same target paragraph could not be integrated with contextual support. In both tasks, the resultant fMRI data were analysed only for the identical target paragraph, ensuring that any observed difference must reflect the influence of the preceding contexts. Key brain areas and networks involved in domain-general integration processes were established using both univariate and multivariate (independent component analysis, ICA) analyses. In accord with previous evidence (Duncan, 2010), we hypothesised that domain-general mechanisms for semantic integration (LC&HC>NC) would be supported by a fronto-parietal network, maximally engaged during the updating of contextual information (LC>HC). Furthermore, in line with evidence on prediction error (Moberget et al., 2016), we also hypothesised that the cerebellum would be sensitive to integration, and particularly to shifts of context (the LC condition).
The univariate results revealed that activity in the inferior frontal gyrus (IFG), left dorsal angular gyrus (dAG) and cerebellum was similarly modulated by integration (LC&HC>NC) in the semantic and nonsemantic tasks. Accordingly, the ICA results revealed a fronto-parietal network similarly engaged in the two tasks. Furthermore, we found that the IFG, left precentral gyrus, right insula, and superior parietal lobe were sensitive to shifts of context (LC>HC) in both tasks. Finally, although some regions were similarly modulated by shifts of context in the two tasks (e.g. IFG), they were recruited by different networks in the semantic and nonsemantic tasks (ICA). The present study established the core neurocomputations supporting integration processes that are not limited to language processing. These findings are relevant not only for basic research, but also for understanding co-occurring deficits in neurological patients.

A25 Composition without syntax or plausibility: LATL conceptual combination occurs in the absence of syntactic phrase closure or semantic plausibility Alicia Parrish1, Liina Pylkkänen1,2; 1New York University, 2New York University, Abu Dhabi Institute

The interplay of syntactic, logico-semantic and conceptual routines in sentence processing is a central question for the neurobiology of language. The combinatory role of the left anterior temporal lobe (LATL) has been relatively well characterized and appears to be conceptual in nature. At the same time, though, the LATL also correlates with syntactic processing steps during narrative comprehension. Here we asked whether combinatory effects in the LATL can be obtained independent of syntax. -- METHODS -- 25 participants read sentences via RSVP in a picture verification task during a magnetoencephalography (MEG) recording. Critical stimuli were phrases in which (i) syntactic merge was blocked by word category cues on the modifier ('pleasant sunny' vs. 'pleasantly sunny'), or (ii) syntactic closure of the highest phrasal node was blocked by number mismatch between a determiner and noun ('these herbal tea' vs. 'this herbal tea'). We additionally manipulated the semantic plausibility of each combination, yielding the full paradigm of Semantic Plausibility x Syntactic Closure x Word Category (target word in brackets): (a) 'this/these herbal [tea] drinker(s)…' (plausible, closure/non-closure, noun); (b) 'this/these impolite [tea] drinker(s)…' (implausible, closure/non-closure, noun); (c) 'pleasantly/pleasant [sunny] days…' (plausible, closure/non-closure, adjective); (d) 'sharply/sharp [sunny] days…' (implausible, closure/non-closure, adjective). If conceptual combination is sensitive to morphological agreement or animacy cues and only occurs once a predicted syntactic/semantic feature becomes available, then (i) a sentence with a syntactic agreement mismatch will fail to combine 'herbal' with 'tea' due to the lack of a predicted plural feature, and (ii) a semantically implausible sentence will fail to compose due to a mismatch in predicted animacy features. Non-combinatory control conditions were included using numeral modification ("one tea", "two tea"), as this environment has been shown not to elicit an early LATL combinatory effect.
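Spatiotemporal cluster statistics of the kind reported below can be computed with a cluster-based permutation test such as the one in MNE-Python. The sketch uses random data with an injected effect and the function's default grid adjacency; the authors' actual source-space pipeline may differ.

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

rng = np.random.default_rng(6)

# Hypothetical data: per-subject activity difference (combinatory minus
# non-combinatory control), shaped subjects x times x sources.
diff = rng.normal(size=(25, 60, 100))
diff[:, 20:30, 40:60] += 0.8  # injected "effect" for illustration

t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
    diff, n_permutations=1000, seed=0)
for clu, p in zip(clusters, cluster_pv):
    if p < 0.05:
        print(f"significant cluster, p={p:.3f}")
```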
-- RESULTS -- [LATL composition effect]: Across all experimental stimuli, we observed increased activity in a significant spatiotemporal cluster widely distributed within the LATL at 200-290ms for all critical target words in (a-d) as compared to the non-combinatory controls (p = 0.028). Thus the LATL composition effect occurred both in the absence of plausible meaning and in the absence of closure of the largest syntactic phrase. [Plausibility effect]: For noun stimuli, we observed an increase in activation for the implausible condition in posterior temporal cortex (190-320ms, p = 0.03), indicating that the parser was sensitive to the plausibility manipulation. [Agreement mismatch effect]: We found greater activation for the syntactic-mismatch condition compared to the non-mismatched condition in the LIFG at 390-430ms (p = 0.04) and left STC at 350-430ms (p = 0.025), an effect much later than the conceptual composition effect. [Word category effect]: The largest word category effect was observed in the left angular gyrus, with greater activation for nouns compared to adjectives at 130-230ms (p < 0.01). -- CONCLUSION -- Effects of conceptual composition in the LATL occurred despite contradictory evidence from earlier semantic and syntactic feature-based predictions. This study supports a processing model in which LATL composition is blind to syntax and to the local plausibility of the combination.

A26 Disentangling syntactic and semantic components in basic adjective-noun composition: an MEG study Arnold Kochari1,2, Ashley Lewis1,3, Herbert Schriefers1; 1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 2Institute for Logic, Language and Computation, University of Amsterdam, 3Haskins Laboratories

The possibility of combining smaller units of meaning (e.g. words) to create new and more complex meanings (e.g., phrases and sentences) is a fundamental feature of human language. While natural language utterances are clearly more complex, composition of a minimalistic phrase can serve as a starting point for investigating the brain dynamics supporting linguistic combinatory processing. Specifically, in this project we investigated semantic and syntactic composition of adjective-noun phrases using MEG data. Processing a noun in a basic compositional context ("white horse") as opposed to a non-compositional context ("zgftr horse") has been found to be supported by the left anterior temporal lobe (LATL), with its activity peaking at 200-250 ms after noun onset (e.g., Bemis & Pylkkänen, 2011; Westerlund & Pylkkänen, 2014). Such early activity reflecting compositional processing was found to be modulated by adjective type: when the adjective is scalar (meaning it depends on the noun, like "large" in "large house" vs. "large mouse"), semantic composition seems to happen later than when the adjective is intersective (like "wooden", whose meaning does not depend on the noun; Ziegler & Pylkkänen, 2016). In our study, we first attempted a conceptual replication of this finding in Dutch. Our secondary goal was to isolate syntactic composition in such adjective-noun phrases, which has not yet been investigated in similar MEG studies. In Dutch, adjectives have to agree with the grammatical gender of the noun that they modify ("een klein paard" [a small horse], but "een kleine vogel" [a small bird]), and we made use of this feature.
A26 Disentangling syntactic and semantic components in basic adjective-noun composition: an MEG study Arnold Kochari1,2, Ashley Lewis1,3, Herbert Schriefers1; 1Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 2Institute for Logic, Language and Computation, University of Amsterdam, 3Haskins Laboratories The possibility of combining smaller units of meaning (e.g., words) to create new and more complex meanings (e.g., phrases and sentences) is a fundamental feature of human language. While natural language utterances are clearly more complex, we can take composition of a minimalistic phrase as a starting point for investigating the brain dynamics supporting linguistic combinatory processing. Specifically, in this project we investigated semantic and syntactic composition of adjective-noun phrases using MEG data. Processing a noun in a basic compositional context (“white horse”) as opposed to a non-compositional context (“zgftr horse”) has been found to be supported by the left anterior temporal lobe (LATL), with its activity peaking at 200-250 ms after noun onset (e.g., Bemis & Pylkkänen, 2011; Westerlund & Pylkkänen, 2014). Such early activity reflecting compositional processing was found to be modulated by adjective type: when the adjective is scalar (meaning it depends on the noun, like “large” in “large house” vs. “large mouse”), semantic composition seems to happen later than when the adjective is intersective (like “wooden”, whose meaning does not depend on the noun; Ziegler & Pylkkänen, 2016). In our study, we first attempted a conceptual replication of this finding in Dutch. Our secondary goal was to isolate syntactic composition in such adjective-noun phrases, which has not yet been investigated in similar MEG studies. In Dutch, adjectives have to agree with the grammatical gender of the noun that they modify (“een klein paard” [a small horse], but “een kleine vogel” [a small bird]), and we made use of this feature. In one of the experimental conditions, instead of real adjectives, nouns were combined with nonwords that had an ending agreeing with the grammatical gender of the noun - pseudoadjectives (“een #derige paard”). In these phrases syntactic composition was possible based on morphosyntactic features, but semantic composition was impossible since the pseudoadjectives lacked meaning (established in a pre-test). Participants (N=40) saw an ‘adjective’ followed by a noun word-by-word and answered a comprehension question after each trial. We presented 80 nouns in 4 adjective conditions (scalar, intersective, pseudoadjective, letter string [no composition]). We recorded MEG signals using a whole-head MEG system with 275 axial gradiometers and obtained individual MRI scans for each participant. Source activity was estimated using minimum norm estimates. We looked at the ROI (BA21) and time-windows where the original study reported effects. Compatible with basic compositional processing taking place at 200-250 ms, we observed larger activity for compositional as opposed to non-compositional contexts, although in the scalar adjective condition rather than the intersective condition as in the original study. However, we did not observe more LATL activity for intersective as opposed to scalar adjectives; this effect thus does not seem to be robust. Given that the previous studies often reported an effect with a slightly different ROI and time-window, our failure to replicate these effects highlights the need for a systematic investigation of the regions that support composition in adjective-noun phrases.
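Minimum-norm source estimation with an anatomical ROI, as used in the study above, can be sketched in MNE-Python. The file names, the parcellation, and the parameter values are illustrative assumptions rather than the authors' settings.

```python
# Sketch: minimum-norm source estimate and ROI time course (MNE-Python).
# File names, label choice, and parameters are hypothetical.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("sub-01-ave.fif", condition="scalar")
noise_cov = mne.read_cov("sub-01-cov.fif")
fwd = mne.read_forward_solution("sub-01-fwd.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")

# Average source activity within a label such as left BA21/MTG;
# assumes a Brodmann parcellation is available for this subject.
labels = mne.read_labels_from_annot("sub-01", parc="PALS_B12_Brodmann",
                                    subjects_dir="subjects")
ba21_lh = [l for l in labels if l.name.startswith("Brodmann.21") and l.hemi == "lh"][0]
roi_tc = stc.extract_label_time_course(ba21_lh, inv["src"], mode="mean")
```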
A27 Effects of semantic binding and plausibility on ERPs and oscillatory power in the theta, alpha and beta band Katrien Segaert1,2, Roksana Markiewicz1, Ali Mazaheri1,2; 1School of Psychology, University of Birmingham, 2Centre for Human Brain Health, University of Birmingham Beyond accessing single-word meaning, language users must compute a representation for the message-level meaning of phrases and sentences. Using EEG, we investigated the neural processes involved in semantic binding and examined both the evoked response (i.e. ERPs: activity phase-locked to the word) and the induced response (i.e. oscillatory activity time-locked but not phase-locked to the word). We compared word lists for which no binding occurs to two-word phrases for which semantic binding takes place. We moreover manipulated whether semantic binding was plausible versus implausible. More plausible words might be easier to integrate in a message-level interpretation (e.g. Hagoort, Hald, Bastiaansen & Petersson, 2004), even in a two-word phrase task where the effects of predictability are minimized. We measured the EEG of 29 participants who read two-word phrases. Target words (e.g. monkey, yacht) were presented in 3 conditions: a no binding condition (e.g. mknwkjw monkey, bjkwd yacht), an implausible semantic binding condition (e.g. lavish monkey, elastic yacht) and a plausible semantic binding condition (e.g. naughty monkey, sailing yacht). Half of the target words were animate and the other half inanimate. The list of plausible and implausible adjectives was matched for frequency, number of syllables and length. Our results were as follows. There was an N400 effect for target words for which no binding could take place, compared to target words in the semantic binding conditions (p<.05, time interval 0.3 to 0.46 s from the onset of the target word). Target words for which semantic binding takes place (compared to no binding) elicited a smaller theta (4-7 Hz) increase (p<.05, time interval 0.5 to 0.8 s from the onset of the target word) and a smaller alpha (8-12 Hz) increase (p<.05, time interval 0.95 to 1.15 s from the onset of the target word). There were no significant ERP effects for the plausibility manipulation. However, binding plausible target words elicited less beta (15-20 Hz) suppression (p<.05, time interval 0.2 to 0.35 s from the onset of the target word) than binding implausible target words. All statistics were computed using cluster-based permutation tests, which accounted for multiple comparisons (Maris & Oostenveld, 2007). These findings are in line with previous studies linking the N400 and oscillatory power differences in the alpha and beta range to semantic integration, with modulations as a function of the ease of integration (e.g. Wang et al., 2012). This suggests that the nature of the binding process is invariant and that a minimal paradigm with two-word phrases can be used to study binding. Pushing the complexity down to two words offers the advantage of isolating the binding process from the contribution of other cognitive processes, such as working memory load. The paradigm might therefore be ideally suited to study the effects of healthy ageing on binding (data collection currently ongoing).
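Induced oscillatory power of the kind analyzed above (theta/alpha/beta changes relative to baseline) can be computed with Morlet wavelets after subtracting the evoked response. A minimal MNE-Python sketch, with hypothetical file and condition names and arbitrary parameter choices:

```python
# Sketch: induced (non-phase-locked) theta/alpha/beta power via Morlet
# wavelets. Names and parameters are illustrative assumptions.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

epochs = mne.read_epochs("sub-01-epo.fif")["binding"]
epochs.subtract_evoked()           # remove phase-locked activity -> induced power

freqs = np.arange(4.0, 21.0, 1.0)  # spans theta (4-7), alpha (8-12), beta (15-20)
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False, average=True)
power.apply_baseline(baseline=(-0.5, -0.1), mode="percent")

# Mean theta change in the reported 0.5-0.8 s window
fmask = (power.freqs >= 4) & (power.freqs <= 7)
tmask = (power.times >= 0.5) & (power.times <= 0.8)
print(power.data[:, fmask][:, :, tmask].mean())
```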
Meaning: Discourse and Pragmatics A28 Definitely saw it coming? An ERP study on the role of article gender and definiteness in predictive processing Damien Fleur1, Monique Flecken1,2, Joost Rommers2, Mante S. Nieuwland1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour People sometimes anticipate specific words during language comprehension. Consistent with word anticipation, pre-nominal articles elicit differential neural activity when they mismatch the gender of a predictable noun compared with when they match (e.g., for Dutch, Otten and Van Berkum, 2009; Van Berkum et al., 2005; for Spanish, Foucart et al., 2014; Gianelli & Molinaro, 2018; Martin et al., 2018; Molinaro et al., 2017; Wicha et al., 2003, 2004). However, the functional significance of this pre-nominal effect is unclear. A minimal interpretation is that people predict the noun (with or without its gender) and then use article gender, once available, to confirm or change the noun prediction (e.g., Otten & Van Berkum, 2009; see also Otten et al., 2007; Otten & Van Berkum, 2008; Van Berkum et al., 2005). However, a stronger claim has been made, namely that people predict a specific article-noun combination including the gender-marked form of the article (Kutas et al., 2011; Wicha et al., 2003, 2004; DeLong et al., 2005). We contrasted these accounts in an ERP study (N=48) on Dutch mini-story comprehension, with pre-registered data collection and analyses (https://osf.io/6drcy), capitalizing on gender-marking on Dutch definite articles and the lack thereof on indefinite articles. Participants read mini-story contexts that strongly suggested either a definite or indefinite noun phrase (e.g., ‘het/een boek’, the/a book) as its best continuation, followed by a definite noun phrase with the expected noun or an unexpected, different-gender noun (‘het boek/de roman’, the book/the novel). If gender-mismatch effects reflect the prediction of specific article-noun combinations, including article form, then we expect to observe an interaction effect: a gender-mismatch effect for expectedly definite articles but not for unexpectedly definite articles. Alternatively, if gender-mismatch effects only reflect the incremental use of article gender information to update a prediction, rather than the consequences of an article form misprediction, we expect to observe no interaction. We observed an enhanced negativity (N400) for articles that were unexpectedly definite or mismatched the expected gender, with the former effect being strongest. This negative ERP effect extended into the 500-700 ms time window. Pre-registered analyses and exploratory Bayesian analyses did not yield convincing evidence that the effect of gender-mismatch depended on expected definiteness. Although these results do not constitute clear evidence against prediction of a specific article form, they suggest that article form prediction is not required to elicit a pre-nominal effect. An additional finding of interest was a much larger N400 effect of unexpected definiteness than of unexpected gender; this may reflect the fact that unexpected gender may signal a potentially very small change in upcoming meaning, whereas unexpected definiteness may signal a strong change in meaning because it changes the information structure of the discourse. Such changes in semantic processing, and the potential meanings they afforded, may be reflected in N400 activity (Bornkessel-Schlesewsky & Schlesewsky, 2019; Kutas & Federmeier, 2011; Rabovsky et al., 2018; Van Berkum, 2009). Pre-print: https://www.biorxiv.org/content/10.1101/563783v1
A29 Brain Irony Processing Developmental Trajectory Gloria Avecilla-Ramirez1, Silvia Ruiz Tovar1, Hugo Corona Hernández1, Karina Hess1, Lucero Díaz Calzada1, Josué Romero1; 1Autonomous University of Queretaro Verbal irony is a contextually inappropriate and intentional linguistic expression characterized by contradicting the information provided by the speaker. It is a linguistic phenomenon that is acquired later rather than early in life; it is not until approximately the age of 8 to 9 that children are successful in recognizing any aspect of the communicative function of irony, and it is not until adolescence that they are able to understand its discursive function. Through different approaches, research on verbal irony has provided evidence about the social, cognitive and linguistic abilities that might make children able to progressively both convey and grasp ironic intended meanings. Searching for N400 and P600 effects, Event-Related Potentials (ERPs) have been used to study verbal irony processing in adults using reading paradigms. The objective of this study was to examine the development of irony processing using the ERP technique. To do this, brain electrical activity associated with the processing of irony was analyzed in Mexican children and adolescents at two specific stages of development: ages 9 and 15. The participants were eleven 9-year-olds and twelve 15-year-olds, who were asked to read stories with ironic content. All participants were interviewed after the EEG recording to determine whether they could understand and explain verbal irony. A total of fifty brief stories (twenty ironic, twenty non-ironic and ten fillers) were used for the ERP paradigm. Each story has a context and a target sentence. Each target sentence has two target words that represent the stimuli to which ERPs were synchronized: the critical word (always the second word of the sentence), whose ironic or literal meaning depends on context, and the final word of the sentence. The N400 and P600 components were analyzed both at the critical word and at the final word of the ironic statement. No N400 effect was observed at either word position (critical or final). This result supports previous findings in adults, suggesting that no semantic integration difficulty arises during ironic comprehension. Independent repeated measures ANOVAs, with three within-subject factors (2 Conditions x 8 Scalp regions x 2 Hemispheres), were carried out for a 550-850 msec window. The 9-year-old participants showed no P600 effect for the critical word, but a statistical tendency in the P600 window for the final word (F(1,10)=4.83, p=.053). The 15-year-old group showed a P600 effect only at the critical word (F(1,11)=5.83, p<.05), but no statistical differences at the final word. The 9-year-old participants seem to achieve the verbal irony integration process only once they read the ironic sentence’s final word, while the 15-year-old participants were able to process the verbal irony by the time they read the critical word of the ironic sentence. These results suggest that the two age groups are at different moments in the development of irony processing. A30 Neural Mechanisms of Language Use in Economic Decision-making Siyuan Zhou1, Xialu Bai1, Yu Zhai1, Kaiyu Li1, Faxin Zhou1, Yuhang Long1, Hui Zhao1, Jinglu Chen1, Chunming Lu1; 1Beijing Normal University While consumption is an evolutionarily important social behavior, language plays a key role in precisely communicating complex information between a seller and a buyer. However, the neural mechanism by which a seller successfully persuades a buyer to purchase her/his products remains unclear. Previous studies have addressed this question mainly from a single-person perspective. The present study addressed it by employing an fNIRS-based hyperscanning approach. One hundred and fifty-six participants were recruited and randomly split into 52 three-member groups. Two members of a group were assigned the role of seller, whereas the third member was assigned the role of customer. In the experiment, the two sellers took turns to introduce their products to a customer and to persuade her/him to buy their product. The customer was allowed to ask questions. Moreover, the customer’s decision was dynamically assessed during the selling process. Brain activity was collected from the three persons simultaneously. Interpersonal neural synchronization (INS) was computed for each pair of participants. Results showed significantly enhanced INS at the TPJ between the customers and the sellers who succeeded in selling compared to that between the customers and the sellers who failed in selling. The enhancement of the INS positively correlated with the difference in selling performance between successful and unsuccessful sellers. Moreover, the enhancement of the INS also positively correlated with the customer’s dynamic purchase intention. As the TPJ is a key brain area for theory of mind, our findings suggest that people gradually make decisions in the process of verbal communication, and that this process is modulated by mentalizing. These findings provide important insights into the neural mechanisms of language use in economic decision-making.
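Interpersonal neural synchronization in fNIRS hyperscanning is commonly quantified with wavelet transform coherence; the toy sketch below uses magnitude-squared coherence from SciPy as a simpler stand-in. The sampling rate, band limits, and signals are assumptions for illustration only.

```python
# Toy sketch of INS between two fNIRS channel time series; real studies
# typically use wavelet transform coherence. Signals are simulated.
import numpy as np
from scipy.signal import coherence

fs = 10.0                       # assumed fNIRS sampling rate (Hz)
rng = np.random.default_rng(0)
shared = rng.standard_normal(6000)
seller = shared + 0.8 * rng.standard_normal(6000)    # seller's TPJ channel
customer = shared + 0.8 * rng.standard_normal(6000)  # customer's TPJ channel

f, Cxy = coherence(seller, customer, fs=fs, nperseg=256)
band = (f >= 0.01) & (f <= 0.1)  # slow band often examined in fNIRS
print(f"mean INS (coherence) in 0.01-0.1 Hz: {Cxy[band].mean():.2f}")
```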
Meaning: Lexical Semantics A31 Acquisition Of Concrete Vs. Abstract Semantics: ERP Evidence Nadezhda Mkrtychian1, Daria Gnedykh1, Diana Kurmakaeva1, Evgeny Blagovechtchenski1, Svetlana Kostromina1, Yury Shtyrov1,2; 1St. Petersburg State University, 2Aarhus University The brain underpinnings of the storage and processing of concrete vs. abstract semantics remain poorly understood. Whereas numerous studies have demonstrated the so-called «concreteness effect» – processing advantages for concrete over abstract semantics, manifest in e.g. faster and more accurate behavioral responses – its neural origins are unclear. Here, we address this question by recording the brain’s EEG responses to novel concrete and abstract words in the process of their acquisition. Twenty novel Russian words with new concrete or abstract meanings, matched for length and bi- and tri-gram frequency, were presented to 20 healthy adult Russian speakers in the context of sentences (5 per word) that helped reveal the new word meaning. To test brain activity for the newly acquired words, high-density EEG was recorded in a passive reading block immediately after the learning session, which included the novel words, their lexical competitors, matched control words and pseudowords, and infrequent target stimuli. Global Field Power analysis indicated three most prominent peaks at ~100 ms, 200 ms, and 332 ms after stimulus onset. ERP amplitudes were extracted from 40-ms wide windows around these peaks and compared between conditions using rmANOVAs as well as Wilcoxon signed-rank tests and cluster-based permutation statistics, with corrections for multiple comparisons. Behaviourally, both semantic types were acquired with similar efficiency. There were no significant differences between novel concrete and abstract words in behavioral results (reaction times and accuracy scores) for the recall, lexical decision, and semantic definition tasks, although the recognition task showed faster responses for abstract words than for concrete ones. Crucially, the ERP analysis revealed significant effects at all main response peaks. The first time window showed a more positive response to concrete concepts than abstract ones over central areas. There was also generally reduced activity for concrete concepts compared to control pseudowords. The second peak exhibited reduced negativity in the right hemisphere for novel concrete concepts as opposed to both abstract concepts and control pseudowords. The most profound differences were observed in the third interval, which revealed more negative-going activity at the central and prefrontal electrodes of both hemispheres in response to new abstract words in comparison to concrete ones. These results suggest neurally more efficient contextual acquisition of concrete semantics specifically, leading to reduced ERP amplitudes, similar to what was shown previously for word vs. non-word comparisons. In the same time window, the comparison with untrained control pseudowords showed a more positive-going ERP in the central and prefrontal areas for the new concrete concepts, whereas no such difference was found between the new abstract concepts and control pseudowords in any of the intervals. Thus, despite the overall similar behavioural efficiency of controlled contextual acquisition of concrete and abstract semantics, the underpinning brain activation diverged, suggesting distinct brain mechanisms for their processing. Crucially, while novel concrete items exhibited different ERP patterns from unfamiliar pseudowords, suggesting their successful integration into the neural lexicon, the new abstract words did not show this pattern, indicating an advantage of concrete over abstract semantics at the neural level. Supported by RF Government grant contract No. 14.W03.31.0010.
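Global Field Power peaks and peak-window mean amplitudes, as in the analysis above, reduce to a few lines of NumPy/SciPy. The array shapes and data below are simulated stand-ins:

```python
# Sketch: Global Field Power (GFP) and peak-window amplitude extraction.
# `erp` is a hypothetical (n_channels, n_times) average-referenced ERP.
import numpy as np
from scipy.signal import find_peaks

def gfp(erp):
    """GFP = spatial standard deviation across electrodes at each sample."""
    return erp.std(axis=0)

def mean_amplitude(erp, times, center, width=0.040):
    """Mean amplitude per channel in a 40-ms window around a GFP peak."""
    mask = (times >= center - width / 2) & (times <= center + width / 2)
    return erp[:, mask].mean(axis=1)

sfreq = 500.0
times = np.arange(-0.1, 0.6, 1.0 / sfreq)
erp = np.random.default_rng(1).standard_normal((64, times.size))  # stand-in data

peaks, _ = find_peaks(gfp(erp), distance=int(0.05 * sfreq))
for p in peaks[:3]:  # e.g., peaks near ~100, ~200 and ~330 ms
    amps = mean_amplitude(erp, times, times[p])
    print(f"GFP peak at {times[p] * 1000:.0f} ms; mean amplitude {amps.mean():.3f}")
```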
A32 Embedding (im)plausible clauses in propositional attitude contexts: Modulatory effects on the N400 and late components Lia Călinescu1, Anna Giskes1, Mila Vulchanova1, Giosuè Baggio1; 1Norwegian University of Science and Technology How do comprehenders track dependencies between the plausibility of embedded clauses and of the sentences in which they occur? We investigated the processing of Norwegian propositional attitude sentences with plausible or implausible complement clauses (‘Magnus {knows/believes/dreams/doubts/imagines} that mosquitos live off {blood/vodka}’). Using ERPs, we tested the hypothesis that the amplitude of the N400 component is sensitive to the plausibility of the attitude sentence as a whole, as opposed to that of the complement clause. If the hypothesis is correct, the N400 should be larger for implausible clauses (vodka>blood) under know and believe, it should be canceled (vodka~blood) under dream, and it should be reversed (vodka<blood) under doubt and imagine. We found N400 effects for implausible clauses (vodka>blood), except for imagine (innbille in Norwegian has a counterfactual sense), where the effect was suppressed, but not reversed. Moreover, we observed modulations of late ERP components for implausible clauses (vodka>blood) under doubt and imagine, i.e., when the plausibility of the attitude sentence requires an implausible complement clause. We propose that (a) the N400, in general, reflects processing of semantic relations in phrasal and clausal contexts, and it is not necessarily sensitive to sentence plausibility, and (b) later components index integration of information from embedded clauses into a sentence model. Keywords: Semantics; Language processing; Propositional attitudes; Intensionality; N400; EEG; ERP; Norwegian
A33 Spatio-temporal dynamics of noun and verb naming in early bilinguals Shuang Geng1, Lucia Amoroso1,2, Nicola Molinaro1,2, Manuel Carreiras1,2,3; 1BCBL, Basque Center on Cognition, Brain and Language, 2Ikerbasque, Basque Foundation for Science, 3University of the Basque Country UPV/EHU Despite decades of research, the question of how conceptual knowledge is represented and retrieved from memory remains controversial. In particular, there is still an ongoing debate on whether words representing objects (nouns) and words representing actions (verbs) recruit different or similar functional networks in the brain. Furthermore, within this context, little is known about how words from different languages are represented and accessed in bilingual speakers. Here, we aimed to shed light on these two aspects by analyzing brain oscillations during picture naming of nouns and verbs in a group of early Spanish-Basque bilinguals. We recorded neuromagnetic signals with a 306-sensor Elekta Neuromag system while 20 early high-proficient Spanish-Basque bilinguals performed a noun and verb picture-naming task in both languages (i.e., separate blocks). We performed time-frequency analysis to examine how power varied with category and language. Differences between conditions were assessed using cluster-based permutation tests, and sensor-level effects were source-reconstructed using beamforming techniques. The analysis revealed power increases for verbs as compared to nouns in the theta (4-8 Hz), alpha (8-12 Hz) and beta (13-28 Hz) frequency bands. However, when comparing categories across languages, no differences were observed for either noun or verb naming. Source reconstruction of the sensor-level effects showed the involvement of different neural networks for nouns and verbs depending on frequency band. Briefly, theta differences between categories were mainly located in inferior occipito-temporal regions, while alpha and beta differences were localized in the angular gyrus, the posterior middle temporal gyrus, the anterior temporal pole and ventral premotor regions. In this study, we aimed to address how oscillatory dynamics mediate the lexico-semantic retrieval of nouns and verbs and whether this process differs across languages. Overall, our results suggest the recruitment of partially different networks during noun and verb processing. Interestingly, our findings support the existence of common brain networks and similar oscillatory dynamics across languages in early bilinguals.
A34 Overlapping patterns during semantic processing of abstract and concrete concepts revealed with Granger Causality analysis Mansoureh Fahimi Hnazaee1, Elvira Khachatryan1, Sahar Chehrazad2, Miet De Letter3, Marc M. Van Hulle1; 1Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven, 2Numerical Analysis and Applied Mathematics Section, Department of Computer Science, KU Leuven, 3Department of Speech, Language and Hearing Sciences, Ghent University Abstract nouns reflect the invisible world of qualities which cannot be touched or sensed, unlike concrete nouns, which refer to perceptible things. Even though linguistic theories about concrete and abstract words are rather indecisive, neuroscience has found clear evidence of a “concreteness” effect, arising from a superior perceptive ability for one category over the other in patients with language impairments due to brain injury or developmental disorders. Neuroimaging studies of healthy subjects have also given a spatial and temporal account of processing differences, but the results are inconclusive. A description of the neural pathways during abstract word reading, of how the pattern of information flow develops over the different stages of lexical and semantic processing, and of how this dynamic connectivity pattern compares to that of concrete word processing still needs to be laid out. We conducted a high-density EEG study of 24 healthy young volunteers using a categorization task in which the evaluation of the concreteness of the words was implicit, meaning that subjects were not aware of the purpose of the task. An implicit task has been shown to reduce effects of concreteness; nevertheless, it is closer to the natural state of understanding concepts in real life. The word dataset in this task was controlled for word frequency and number of letters, and additionally for the three dimensions of affective meaning as defined by Osgood (valence, arousal, potency). Using source reconstruction, we obtained data with high spatiotemporal resolution and, at the same time, reduced the effect of the signal mixing that occurs at the scalp level. Next, we employed two different techniques of statistical analysis to identify regions on the cortical surface that exhibited both common and differential activity between the two types of concepts. A multivariate, time-varying and directional method of analyzing connectivity based on the concept of Granger Causality (Partial Directed Coherence) revealed a dynamic network that transfers information from the right superior occipital lobe along the ventral and dorsal streams, toward the anterior temporal and orbitofrontal lobes in both hemispheres. Some regions along these pathways were primarily involved in receiving or sending information. A clear difference in information transfer between abstract and concrete word processing was observed during the time window of semantic processing, specifically for information transferred towards the left anterior temporal lobe. Further exploratory analysis confirmed a generally stronger connectivity pattern for the processing of concrete concepts. We believe our study lays the experimental groundwork necessary for the future development of a refined theory of the processing of abstract concepts.
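Partial Directed Coherence can be computed from the coefficients of a fitted multivariate autoregressive (MVAR) model: with A(f) = I - sum_r A_r e^(-i2*pi*f*r/fs), the PDC from source j to source i is |A_ij(f)| normalized by the column norm. A sketch using statsmodels' VAR on simulated data follows; the model order and frequency grid are arbitrary choices, not those of the study.

```python
# Sketch: Partial Directed Coherence (PDC) from a fitted MVAR model.
# Data are simulated stand-ins for source time series.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
data = rng.standard_normal((2000, 3))   # (samples, sources)
data[1:, 1] += 0.5 * data[:-1, 0]       # source 0 drives source 1

model = VAR(data).fit(maxlags=5)
A = model.coefs                         # shape (p, k, k): A[r] is lag r+1

def pdc(A, freqs, fs=1.0):
    p, k, _ = A.shape
    out = np.empty((len(freqs), k, k))
    for fi, f in enumerate(freqs):
        Af = np.eye(k, dtype=complex)
        for r in range(p):
            Af -= A[r] * np.exp(-2j * np.pi * f * (r + 1) / fs)
        # Column-normalize: out[fi][i, j] = influence of source j on source i
        out[fi] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0, keepdims=True))
    return out

P = pdc(A, freqs=np.linspace(0.01, 0.5, 50))
print(P.mean(axis=0))                   # average PDC over frequencies
```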
A35 Mapping Multimodal Convergence Zones Using Representational Similarity Analysis and a High-Dimensional Grounded Model of Conceptual Content Jiaqing Tong1, Leonardo Fernandino1, Colin Humphries1, Lisa Conant1, Joseph Heffernan1, Jeffrey Binder1; 1Medical College of Wisconsin The nature of conceptual representations in the brain is a topic of ongoing debate. There is strong evidence that concept retrieval entails varying degrees of activation of sensory-motor cortical areas, depending on the experiential content of the concept. Presumably, however, these experiential representations become progressively more abstract at higher processing levels as featural information is combined within and across modalities. The aim of the current study was to test whether this abstraction process involves loss of experiential information, i.e., conversion of a multimodal representation to an amodal one. We used representational similarity analysis (RSA) of fMRI data to identify cortical areas that encode conceptual similarity structure as defined by a combination of 65 sensory-motor-affective-spatial-temporal-cognitive experiential dimensions (Binder et al., 2016). The existence of such regions would call into question the assumption that multimodal abstraction necessarily entails a complete loss of experiential information. Methods: Nineteen healthy, right-handed, native English speakers were shown 242 English words (141 nouns, 62 verbs, and 39 adjectives) during 3T BOLD fMRI using a fast event-related design. Each stimulus was presented visually 6 times over 2 scanning sessions. Participants were asked to think about the meaning of each word. To encourage compliance, on 10% of trials a semantic decision task was presented after the stimulus, in which the participant saw 2 words and had to choose the word most similar in meaning to the previous stimulus. A single general linear model of the BOLD signal included each of the words as 242 regressors of interest. This analysis generated beta coefficient maps for each word in each participant, which were then registered to the Human Connectome Project surface template. A surface-based RSA searchlight procedure was used to identify cortical regions in which activation patterns encoded information about the words. Neural dissimilarity matrices (DSMs) were computed for 5-mm radius patches around each vertex on the cortical surface. The conceptual DSM, based on the 65-dimensional experiential model of concept representation, was computed from the pair-wise cosine similarities between the vector representations of the 242 test concepts. The Pearson correlation between the neural DSM and the conceptual DSM was computed for each surface patch. Correlation values were then converted to Fisher Z scores, and a group-level t-test of these values (against zero) was computed. The thresholded (p < .001) t map was corrected for multiple comparisons via permutation testing. Results: Semantic similarity computed from the experiential model was significantly correlated with neural similarity in multiple heteromodal regions, including bilateral lateral and ventral temporal cortex, bilateral angular gyrus, left inferior frontal gyrus pars triangularis, bilateral superior frontal gyrus, left medial prefrontal cortex, and bilateral posterior cingulate gyrus and precuneus. Discussion: The neural representation of concepts in heteromodal cortical areas reflects multi-dimensional experiential information. These results call into question the existence of completely amodal concept representations. Binder, J. R., Conant, L. L., Humphries, C. J., Fernandino, L., Simons, S. B., Aguilar, M., & Desai, R. H. (2016). Toward a brain-based componential semantic representation. Cogn Neuropsychol, 33: 130-174.
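The core RSA computation described above, a model DSM from cosine similarities, a neural DSM from one searchlight patch, and their Pearson correlation converted to a Fisher Z score, can be sketched directly. All arrays below are simulated stand-ins for the 242-word data:

```python
# Sketch of one RSA searchlight step: model DSM vs. neural DSM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
features = rng.standard_normal((242, 65))  # experiential model vectors
patterns = rng.standard_normal((242, 80))  # betas for one 5-mm patch

model_dsm = pdist(features, metric="cosine")      # 1 - cosine similarity
neural_dsm = pdist(patterns, metric="correlation")

r, _ = pearsonr(neural_dsm, model_dsm)
z = np.arctanh(r)  # Fisher Z, entered into the group-level t-test
print(f"patch correlation r={r:.3f}, z={z:.3f}")
```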
A36 Anterior temporal lobe regions necessary for naming individuals: Voxel-based lesion-symptom mapping in patients undergoing left temporal lobe resections Jeffrey R. Binder1, Sara B. Pillay1, Jia-Qing Tong1, William L. Gross1, Wade M. Mueller1, Manoj Raghavan1, Lisa L. Conant1, Linda Allen1, Christopher T. Anderson1, Chad Carlson1, Colin J. Humphries1, Leonardo Fernandino1, Lisa Schwartz1, Robyn M. Busch2, Mark Lowe2, John T. Langfitt3, Madalina Tivarus3, Daniel L. Drane4, David W. Loring4, Monica Jacobs5, Victoria Morgan5, Jerzy P. Szaflarski6, Leonardo Bonilha7, Sara J. Swanson1; 1Medical College of Wisconsin, Milwaukee, 2Cleveland Clinic Foundation, 3University of Rochester, 4Emory University, 5Vanderbilt University, 6University of Alabama, 7Medical University of South Carolina Debate concerning the role of the anterior temporal lobe (ATL) in semantic cognition has led recently to proposals that this large structure may contain multiple functionally distinct regions. fMRI studies suggest a role for the left dorsolateral ATL (anterior superior temporal sulcus and surrounding cortex) in processing knowledge about social concepts and individual people. We showed recently that surgical resections in the left ATL impair naming of specific individuals more than naming of entities in other categories in an auditory description naming (ADN) task (Swanson et al., 2018; American Epilepsy Society Meeting). Here we use voxel-based lesion-symptom mapping (VLSM) to localize the critical region within the ATL that causes this deficit, using pre- to postoperative change scores as the outcome measure. Change on an ADN test involving non-unique living things was used as a covariate to control for language, semantic, and domain-general task requirements not specific to naming unique individuals. Method: Participants were 32 people (18 women) with left language dominance confirmed by either fMRI or Wada testing who underwent partial left temporal lobe resection for drug-resistant temporal lobe epilepsy. They completed pre- and 6-month postoperative ADN testing. The ADN test required production of a name in response to a brief orally-presented description and included 24 trials testing naming of famous individuals with proper noun names (Proper Animate; e.g., “the reindeer with a red nose”) and 24 trials testing naming of living things with common noun names (Common Animate; e.g., “a bird that drills holes in trees”). Surgical lesions were mapped manually using high-resolution postoperative MRI, then mapped to a common template using nonlinear morphing of non-lesioned structures. Lesions varied widely in location and extent, including standard ATL resections with variable caudal and superior temporal gyrus extension, focal lateral and ventral resections, selective temporal pole removals, and selective hippocampal ablations. VLSM analyses identified the lesion correlates of pre- to post-surgery change scores on the Proper Animate ADN task, with change scores on the Common Animate ADN task as a covariate to control for general language (speech perception, speech production) and executive (attention, working memory) processes. The resulting maps were thresholded at voxel-wise p < .005 and cluster-corrected at FWE p < .05 as determined by randomization testing. Results: Patients showed greater declines (p < .001) on the Proper Animate (mean percent change = -19.3%) than on the Common Animate (mean percent change = -4.7%) condition. Post-operative declines in Proper Animate retrieval (independent of change in Common Animate retrieval) were associated with lesions in a focal region centered on the left anterior superior temporal sulcus posterior to the temporal pole. There was no relation to lesions in the temporal pole, ventral temporal lobe, or medial temporal lobe. Summary: Left ATL regions critically necessary for naming specific animate individuals are located in the dorsolateral sector of the ATL, centered in the anterior superior temporal sulcus. We hypothesize that this region combines social, affective, and language-based knowledge central to concepts of animate individuals.
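At its core, VLSM is a mass-univariate analysis: at each voxel, the behavioral change score is regressed on lesion status, here with the covariate described above. A toy sketch with simulated data; real analyses restrict testing to sufficiently lesioned voxels and correct via randomization, as in the study:

```python
# Toy VLSM sketch: per-voxel regression of Proper Animate change scores
# on lesion status, with Common Animate change as a covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_sub, n_vox = 32, 5000
lesion = rng.integers(0, 2, size=(n_sub, n_vox))  # 1 = voxel lesioned
proper_change = rng.standard_normal(n_sub)
common_change = rng.standard_normal(n_sub)

t_map = np.zeros(n_vox)
for v in range(n_vox):
    if lesion[:, v].std() == 0:
        continue  # voxels lesioned in no one (or everyone) are skipped
    X = sm.add_constant(np.column_stack([lesion[:, v], common_change]))
    fit = sm.OLS(proper_change, X).fit()
    t_map[v] = fit.tvalues[1]  # t-statistic for the lesion regressor
```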
A37 Here and there, what and where: investigating the role of the dorsal visuospatial stream in linguistic spatial encoding Roberta Rocca1,2, Marlene Staib1, Kristian Tylén1,2, Kenny Coventry3, Torben Lund4, Mikkel Wallentin1,2,4; 1Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, 2Interacting Minds Center, Aarhus University, 3School of Psychology, University of East Anglia, 4Center of Functionally Integrative Neuroscience, Aarhus University Hospital INTRODUCTION: Literature on spatial language suggests that the neural underpinnings of spatial expressions and non-linguistic spatial processing overlap along dorsal visuospatial pathways. Taken together, these results might suggest that language processing is generally organized along a ventral-dorsal divide between neural substrates for semantics and spatial relations, mirroring the distinction between object identification and location in vision. This hypothesis has been formulated on a theoretical basis, but the existence of a dorsal stream for language processing has never been addressed empirically. In our study, we aimed to test this question explicitly using a naturalistic fMRI dataset with near word-level temporal resolution (TR=388 ms). METHODS: We conducted a fast-fMRI experiment in which 28 participants listened to a dialogue with highly frequent occurrences of wh-words (“where”, “what”, “who”) and spatial demonstratives (“here”, “there”). Demonstratives are purely spatial words, used to direct attention towards specific locations in the environment on the basis of contrastive distance cues (near vs. far). Where, what, and who prime the processing of spatial information, object identity, and personal identity respectively, thus functioning as proxies for the divide between semantics and spatial relations in language. We modelled the neural response to each word of interest using FIR models (20 bins, 500 ms lags), which yielded 28 (participants) x 20 (time bins) x 5 (word types) beta maps. Each of these maps represented the response to a specific word, at a specific time point after stimulus onset, for a specific participant. We computed Pearson’s correlations between beta maps for demonstratives and beta maps for each wh-word for each subject and time point, both at the whole-brain level and in 60 AAL regions. This yielded 28 (participants) x 20 (time bins) x 6 (word combinations) correlation values representing similarity in response to each pair of words over 10 s after stimulus onset. RESULTS: A linear mixed effects analysis of whole-brain similarity values showed that, as expected, topographical similarity was significantly higher between spatial demonstratives and where compared to what and who. Zooming in on local patterns, we found that the higher global similarity was driven by response patterns in superior parietal and frontal areas belonging to the dorsal processing stream, thus speaking in favour of a functional specialization of the dorsal stream for linguistic spatial encoding. DISCUSSION: We interpret our results as suggestive of a functional role of the dorsal stream and related pathways in the linguistic encoding of space, as opposed to ventral structures (the “what” stream) supporting semantic and conceptual processing. While a dorsal-ventral divide had been hypothesized before on theoretical grounds, our study is the first to provide direct evidence for the role of the dorsal stream in language processing. Moreover, direct involvement of the visuospatial dorsal stream in linguistic spatial encoding supports distributed accounts of the neurobiology of language processing. Rather than relying on specialized circuitry, language engages a non-segregated architecture, where neural structures supporting perceptual tasks are dynamically recruited in a context-dependent fashion.
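The similarity analysis described above boils down to per-time-bin Pearson correlations between whole-brain beta maps for pairs of words. A sketch with simulated data; the word list matches the study, but the array sizes are arbitrary:

```python
# Sketch: Pearson correlation between FIR beta maps for word pairs,
# per time bin, for one participant. `betas` is a hypothetical dict
# mapping each word to a (n_time_bins, n_voxels) array.
import numpy as np

rng = np.random.default_rng(5)
words = ["here", "there", "where", "what", "who"]
betas = {w: rng.standard_normal((20, 10000)) for w in words}

def map_similarity(b1, b2):
    """Per-time-bin Pearson r between two series of whole-brain beta maps."""
    return np.array([np.corrcoef(b1[t], b2[t])[0, 1] for t in range(b1.shape[0])])

sim_where = map_similarity(betas["here"], betas["where"])
sim_what = map_similarity(betas["here"], betas["what"])
# Prediction above: sim_where > sim_what, driven by dorsal-stream regions
print(sim_where.mean(), sim_what.mean())
```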
A38 Can Language Cue the Visual Detection of Biological Motion? Ksenija Slivac1, Alexis Hervais-Adelman2, Peter Hagoort1,3, Monique Flecken1,3; 1Max Planck Institute for Psycholinguistics, 2University of Zurich, 3Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Motion perception is an evolutionarily salient part of our visual system, with a robust processing dichotomy between general and biological motion, reflected behaviourally and neurally (Howard et al., 1996). The perception of both of these motion patterns has been widely investigated using point-light animations, which highlight the compositional nature of their processing, and a strong reliance on Gestalt principles and prior knowledge for their successful detection (Pastukhov, 2017). The salience of motion is also reflected in language, with fine-grained distinctions in motion type and level of specificity. Studying the interaction between these two systems, we test the extent to which the detection of biological motion in a multistable, dynamic, point-light environment can be cued by language. Specifically, we test the degree of specificity required to achieve a linguistic cueing effect for biological form-from-motion detection. To this end, we carried out two psychophysical experiments (N=40 each), with motion detection (Do you see human motion, yes/no?) and motion discrimination (Is the human figure in the upright or upside-down position?) tasks respectively. The experiments use a cueing paradigm to probe the detection of a human point-light figure (PLF; walker, rower, etc.) concealed within a random dot motion (RDM; predominantly upward or downward motion) mask. In addition to coherent PLFs, the participants saw either scrambled (Exp. 1) or inverted (Exp. 2) figures, shown to disrupt biological motion detection. To ensure the priming of biological motion perception, rather than biological form or general horizontal movement, Exp. 2 included a condition with human figures frozen in a canonical pose, translated back and forth at a speed matching the RDM speed (gliders), in addition to the naturally moving PLFs (naturals). Linguistic cues varied in specificity regarding features of (biological) motion: (in)congruent biological motion (rower, dancer); biological form (brother, father); general motion (snow, smoke). Before each experiment, individual motion detection thresholds were obtained using a Bayesian adaptive staircase procedure at the 75% accuracy level. During the thresholding, the coherence of the RDM was adjusted as a function of PLF detection (the less coherent the RDM was, the harder it was to detect the PLF hidden among its dots). Results show that only linguistic cues fully congruent with the biological motion of the (natural) masked PLFs enhanced motion detection (Exp. 1: accuracy, RT, C, Beta, BPPD; Exp. 2: RTs). Furthermore, general motion cues, reminiscent of the general motion patterns of the RDM mask itself, decreased the detection of biological motion in Exp. 1. In Exp. 2, incongruent biological motion cues interfered with the detection of biological motion. No effect of linguistic input was found for the gliders. In the context of these results, we discuss preliminary fMRI and representational similarity analysis (RSA) data as well. Our findings suggest that motion language is capable of triggering our highly specialised biological motion detection mechanism, enhancing recognition of human-like stimuli in a dynamic and noisy setting. Specifically, we show that such effects are only obtained when the language fully overlaps in degree of specificity on core biological motion features: kinematics and (human) form.
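The signal detection measures reported in Exp. 1 (criterion C and Beta, alongside accuracy and RT) can be computed from hit and false-alarm counts. A sketch with made-up counts; the correction for extreme proportions is a common choice, not necessarily the authors':

```python
# Sketch: signal detection measures (d', criterion C, beta) from counts.
from scipy.stats import norm

def sdt(hits, misses, fas, crs):
    half = 0.5  # log-linear correction avoids infinite z-scores at 0 or 1
    hr = (hits + half) / (hits + misses + 1)
    far = (fas + half) / (fas + crs + 1)
    zh, zf = norm.ppf(hr), norm.ppf(far)
    d_prime = zh - zf
    c = -(zh + zf) / 2
    beta = float(norm.pdf(zh) / norm.pdf(zf))  # likelihood ratio at criterion
    return d_prime, c, beta

print(sdt(hits=42, misses=18, fas=12, crs=48))  # made-up trial counts
```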
A39 Visual and verbal narrative comprehension in children and adolescents with autism spectrum disorders: an ERP study Mirella Manfredi1, Neil Cohn2, Pemella Sanchez Pinho3, Elizabeth Fernandez3, Paulo Sergio Boggio3; 1Department of Psychology, University of Zurich, 2Department of Communication and Cognition, Tilburg University, 3Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo It has long been claimed that children with autism face deficits in comprehending verbal materials, while comprehension of visual materials remains intact. However, verbal materials are often presented with sequencing (sentences, narratives), while visual materials use a single image. What if visual materials were also sequential, as in the visual narratives found in comics? We thus examined semantic processing during the presentation of both spoken sentences and visual narratives in children with ASD compared to typically developing (TD) children. In addition, this work sought to observe the ERPs evoked by visual narratives in children at all, given that no prior studies appear to have examined the neurocognitive processing of such ubiquitous materials. We presented auditory sentences with congruent or incongruent final words (a common noun), and separately, we presented 3-panel-long visual narratives with congruent or incongruent final panels. The experimental group comprised twenty-four school-aged children with ASD (mean age = 12.6) and sixteen age-matched TD children (mean age = 11.4). In the auditory sentences, we observed a focal central N400 effect to incongruent words as compared to congruent ones, which was slightly attenuated for the children with ASD compared to the TD children. Following the N400, a sustained negativity was larger to incongruous than congruous words, but only for the TD children, possibly reflecting a cost of further processing the inconsistent auditory information that was absent for children with ASD. Critical panels in visual narratives evoked a greater N400 amplitude to incongruent than congruent panels with a fronto-central scalp distribution. No differences were observed between the N400 effects in the ASD and control groups. However, as in sentences, a fronto-central negativity was maintained after the N400 to incongruent critical panels, reflecting sustained processing following the preceding N400. In addition, incongruent panels evoked a centro-parietal late positivity (a P600/LPP), but only for the TD children. This late response could indicate updating processes evoked only by neurotypical children, suggesting that children with ASD may not have been as sensitive to integrating the discontinuities of the incoming visual information. Overall, our results suggest that children with ASD differ more from TD children in the later time windows (550-750 ms) than in the early stages of processing, across both verbal (late negativity) and visual (P600/LPP) modalities. This suggests that later “interpretive” or “updating” processing may be slower for children with ASD in the processing of meaning. These findings suggest that children with ASD face processing deficits in both verbal and visual materials when integrating meaning across sequential units, though such impairments may arise in different parts of the interpretive process, depending on the modality.
Morphology A40 Eliciting ERP components for morphosyntactic agreement mismatches in grammatical sentences Émilie Courteau1,3, Lisa Martignetti1,2, Phaedra Royle1,3, Karsten Steinhauer2,3; 1School of Speech Language Pathology and Audiology, Faculty of Medicine, University of Montreal, 2School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, 3Centre for Research on Brain, Language and Music (CRBLM) French subject-verb agreement has specific properties relevant to the study of agreement processing, which have not been systematically studied in the ERP literature. Furthermore, there is increasing interest in ERP methods that do not rely on violation paradigms. Therefore, we examined whether the auditory presentation of a grammatical sentence in French combined with a picture that doesn’t match its morphosyntactic features would elicit the same ERP components as in classic error-based paradigms. We created various cross-modal number (singular/plural) mismatches to elicit LAN-P600 responses for agreement mismatches (Molinaro et al., 2011; Royle et al., 2013) and semantic verb/action (rather than noun/object) mismatches to elicit N400s (Royle et al., 2013; Willems et al., 2008). Twenty-eight French-speaking adults listened to sentences describing depicted scenes while their EEG was recorded. We varied the type and amount of number cues available in each sentence using two manipulations. First, we manipulated the verb type, using either verbs whose number cue was audible through subject (clitic) pronoun liaison (LIAISon verbs: e.g., elle/s aime/nt [ɛlɛm]/[ɛlzɛm] ‘she/they love’), or verbs whose number cue was audible on the verb ending (CONSonant-final verbs, e.g., il/s rugi-t/-ssent [ilʁyʒi]/[ilʁyʒɪs] ‘he/they roars/roar’). Second, we manipulated the sentence-initial context: each sentence was preceded either by a neutral context (e.g., In the evening) providing no number cue, or by a subject noun phrase (NP, e.g., Les lions [lelijɔ̃] ‘The lions’) containing a subject number cue on the determiner. Number mismatches were created through mismatches between the number of visually-presented agents and the morphosyntactic number cues in the auditory stimuli. Accuracy for acceptability judgments was nearly at ceiling across our conditions (86.5% to 97.6%). As expected, the semantic action/verb mismatch elicited classic N400s followed by additional negativities. Number mismatches in sentence-initial contexts elicited broadly distributed N400s followed by a P600, suggesting that non-linguistic visual information can be used immediately (in less than 500 ms) to make strong predictions about appropriate linguistic representations. For number mismatches disambiguated on LIAIS verbs, we observed an early-onset sustained anterior negativity (eAN), followed by a centro-parietal N400 and a P600, indicating that eANs are not specific to phrase structure violations (Hasting & Kotz, 2008; contra Friederici, 2002, 2011). CONS verbs elicited an eAN which faded due to an overlapping P600 and reappeared after the P600, a pattern previously described for various syntactic violations in auditory ERP studies (Steinhauer & Drury, 2012). Thus, the eAN and P600 temporarily cancelled each other out. The fact that the frontal negativity lasted beyond the P600 duration (as in previous auditory agreement studies, e.g., Hasting & Kotz, 2008) suggests that the P600 does not always reflect the final stage of sentence evaluation processes. The present study demonstrates for the first time that perfectly grammatical sentences can elicit classic ERP components usually found in morphosyntactic violation paradigms. We discuss how distinct psycholinguistic processes modulated the ERPs as a function of (1) number (singular vs plural mismatch) and (2) type of mismatch disambiguation (determiner, LIAIS and CONS verbs). Possible applications of this new cross-modal paradigm in developmental research will also be addressed.
Multilingualism A41 Right hemisphere dominates tonal bilingualism: multimodal imaging evidence Zhao Gao1; 1University of Electronic Science and Technology of China Research techniques for studying bilinguals have largely depended on task-related functional connectivity. However, the intrinsic mechanism of bilingualism is still unclear, and multimodal evidence is insufficient. With the development of resting-state functional magnetic resonance imaging (rfMRI), we can non-invasively investigate intrinsic changes in functional connectivity patterns. In this study, 30 Bai-Han Chinese simultaneous bilinguals and 28 Han Chinese monolinguals, matched for gender and age, were scanned in a resting state. Resting-state functional connectivity was compared between bilinguals and monolinguals to explore changes in the functional connectivity of the bilingual network. Voxel-Based Morphometry (VBM) was additionally used to examine differences in gray matter density, and Diffusion Tensor Imaging (DTI) to discover structural variance in white matter. Resting-state functional connectivity analyses found significantly increased functional connectivity in Bai-Han Chinese bilinguals between the right pars orbitalis and three other regions: the right caudate, right pars opercularis, and left inferior temporal gyrus. The volume of gray matter in the right pars triangularis was also found to be greater in bilinguals than in monolinguals. Consistent with previous literature, the mean fractional anisotropy of the right superior longitudinal fasciculus (parietal bundle) was higher in bilinguals than in monolinguals. Our findings suggest that the intrinsic language network in simultaneous bilinguals has been shaped distinctively from that of monolinguals in terms of both function and structure, providing a comprehensive reference for bilingual studies and the brain mechanisms of language. A42 EEG Resting-State Indices as Markers of Foreign-Language Aptitude in Older Adults Maria Kliesch1,2,3, Martin Meyer3,4; 1Zurich Center for Linguistics, University of Zurich, 2Chair of Romance Linguistics, Institute of Romance Studies, University of Zurich, 3Chair of Neuropsychology, Department of Psychology, University of Zurich, 4Cognitive Neuroscience, Department of Psychology, University of Klagenfurt, Austria Foreign language (FL) aptitude is a construct that presupposes a specific talent for learning languages, and is used to explain why some people learn a new language more easily, faster or better than others. Most scholars agree that the concept covers a range of different cognitive factors, and that it is a result of genetic, developmental and experiential factors. However, cognitive measures interact strongly with motivational aspects (Dörnyei, 2010). Recent studies by Prat et al. (2016, 2019) showed that EEG resting-state indices (i.e., mean spectral power and coherence metrics) are also able to predict the learning rate of FL learners, thus suggesting a paradigm-free way of assessing FL aptitude.
With increasing age, individual differences in both cognitive capacity and resting-state indices are exacerbated (Christensen, 2001), so investigating FL learning in old adulthood can help shed light on the processes underlying FL aptitude. This study investigates the role of EEG resting-state indices, cognitive capacities and motivational aspects in predicting individual FL attainment, and assesses whether resting-state indices change as a function of FL proficiency. A sample of 27 Swiss German adults between 64 and 74 years of age participated in a 7-month Spanish course for beginners. The training consisted of one weekly hour of classroom-based instruction, 30 weekly minutes of FL testing and 3 weekly hours of individual learning using language learning software at home. Before and after the training, 12 minutes of resting-state EEG were recorded, from which mean spectral power and functional networks were estimated. In addition, training motivation and cognitive capacities, including working memory capacities and verbal fluency in the native language, were assessed each week throughout the training. A passive control group performed the same cognitive and electrophysiological measures. To assess the weight of each predictor (EEG indices, cognitive capacities, motivation) on FL proficiency (a concatenated score of three FL tests), we conducted multilevel model analyses with random effects per subject. To investigate group differences, two learner groups were formed based on L2 attainment, and good learners were contrasted with poor learners and the passive control group. Our analyses showed that, in line with previous theories of FL aptitude, working memory capacities and verbal fluency in the native language significantly predicted individual FL attainment. The effects, however, were mitigated when controlling for training motivation and overall wellbeing. At the same time, better learners showed higher mean spectral power in the beta band and higher network density before the training. Comparison of post-training EEG indices revealed a reorganization of beta-band functional connectivity in better language learners as compared to slower learners and participants in the passive control group. Our findings confirm that individual differences in FL aptitude persist in older learners, and corroborate former studies in that EEG indices lend themselves as paradigm-free markers of future FL proficiency. Further, our results show that successful FL learning reorganizes beta-band functional connectivity, which is commonly reduced in patients suffering from mild cognitive impairment.
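Analyses of this kind, per-subject resting-state band power entered as a predictor of repeated FL test scores in a multilevel model with random effects per subject, can be sketched with SciPy and statsmodels. Everything below (sampling rate, durations, simulated data) is an illustrative assumption:

```python
# Sketch: Welch-based beta power per subject as a predictor of weekly FL
# scores in a mixed model with random intercepts per subject.
import numpy as np
import pandas as pd
from scipy.signal import welch
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
fs = 250.0
rows = []
for sub in range(27):
    eeg = rng.standard_normal(int(fs * 60))   # 1 min of one channel, simulated
    f, psd = welch(eeg, fs=fs, nperseg=1024)
    beta_power = psd[(f >= 13) & (f <= 30)].mean()
    for week in range(28):                    # weekly FL test scores
        rows.append({"sub": sub, "week": week, "beta": beta_power,
                     "score": rng.standard_normal()})
df = pd.DataFrame(rows)

m = smf.mixedlm("score ~ beta + week", df, groups=df["sub"]).fit()
print(m.summary())
```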
A43 Grammatical gender inhibition in learning of a second language vocabulary: ERP evidence Cheryl Frenck-Mestre1,2,3, Ana Zappa1,3, Jean Mari Pergandi1,4, Daniel Mestre1,2,4; 1Aix-Marseille University, 2Centre National de la Recherche Scientifique, 3Laboratoire Parole et Langage, 4Centre de Réalité Virtuelle de la Méditerranée Herein we present electrophysiological evidence of extremely rapid learning of new labels in an L2 for existing concepts, and of their extension to related concepts, via computerized games, which, however, was constrained by grammatical gender congruence. This adds to ample evidence of the parallel activation of a bilingual’s two languages at the phonological, orthographic and lexical levels. Less examined is the question of how grammatical gender overlap in the L1 and L2 influences either lexical activation in proficient bilinguals or the establishment of a new L2 lexicon. A body of studies has reported gender congruency effects in proficient bilinguals both in speech production tasks (Bordag & Pechmann, 2007; Morales, Paolieri & Bajo, 2011) and in translation tasks (Pechmann et al., 2008; Salamoura & Williams, 2007), although the results of these studies are complex and not always replicated in production (Costa, 2003). We recorded ERPs both prior to exposure to the second language and 4 days later, following a 3-day training session. Results show rapid changes in cortical activity associated with learning. Prior to exposure, no modulation of the N400 component was found as a function of the correct match vs. mismatch of the audio presentation of words and their associated images. Post training, a large N400 effect was found for mismatch trials compared to correctly matched audiovisual trials. In addition, images that were semantically related to learned words (e.g. for the learned word “spider”, the image of a web was presented) produced a reduction of the N400 compared to mismatched pairs. However, these results obtained, for learners, only for L2 labels that had the same grammatical gender in their L1. For control participants, no effect of gender was observed; i.e. the effect obtained in the L2 learner group was not due to any particularities of the to-be-learned lexicon in the L2. Our results highlight the rapid instantiation of new L2 labels and the influence of grammatical gender on lexical access. They contrast with previous results at the sentential level, showing either lesser (Foucart & Frenck-Mestre, 2010) or no effects (Sabourin & Stowe, 2006) of grammatical gender concord violations when grammatical gender is not the same across the two languages. A44 The impact of language learning experience on phonological working memory: A functional magnetic resonance imaging study Shanna Kousaie1,2, Shari Baum2,3, Natalie Phillips2,4,5, Vincent Gracco2,3,6, Debra Titone2,7, Jen-Kai Chen1,2, Denise Klein1,2; 1Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, 2Centre for Research on Brain, Language and Music, McGill University, 3School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, 4Department of Psychology/Centre for Research in Human Development, Concordia University, 5Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research and Jewish General Hospital/McGill University Memory Clinic, Jewish General Hospital, 6Haskins Laboratories, 7Department of Psychology, McGill University Phonological working memory (PWM) is a component of executive function that is important for storing and manipulating speech sounds. The processing of speech sounds in PWM facilitates the acquisition of vocabulary and grammar, making PWM an important building block for language learning.
A44 The impact of language learning experience on phonological working memory: A functional magnetic resonance imaging study Shanna Kousaie1,2, Shari Baum2,3, Natalie Phillips2,4,5, Vincent Gracco2,3,6, Debra Titone2,7, Jen-Kai Chen1,2, Denise Klein1,2; 1Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, 2Centre for Research on Brain, Language and Music, McGill University, 3School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, 4Department of Psychology/Centre for Research in Human Development, Concordia University, 5Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research and Jewish General Hospital/McGill University Memory Clinic, Jewish General Hospital, 6Haskins Laboratories, 7Department of Psychology, McGill University Phonological working memory (PWM) is a component of executive function that is important for storing and manipulating speech sounds. The processing of speech sounds in PWM facilitates the acquisition of vocabulary and grammar, making PWM an important building block for language learning. A network of fronto-parietal brain regions, including the left inferior frontal cortex and anterior insula, left inferior parietal lobule and bilateral superior temporal gyri, has been implicated in PWM [1]. Furthermore, phonological units differ across languages and the brain becomes fine-tuned to the units of an individual's native language (L1) early in development [e.g., 2]. This raises questions about how bilingualism impacts PWM processes and whether this varies as a function of when the second language (L2) is learned. Previous research finds that left anterior insula recruitment is a marker of L2 attainment [3] and that early exposure to a language shapes the brain for PWM, even if that language is discontinued [4]. The current investigation further examines the implications of the timing of language learning for PWM processes. Participants underwent functional magnetic resonance imaging (fMRI) while they completed an auditory n-back task comprising three conditions (0-back, 1-back, 2-back) in both English and French. The three conditions varied in terms of PWM demands, with the 0-back condition imposing minimal PWM demands (participants were required to identify a target stimulus) and the 2-back condition imposing increased PWM demands (participants were required to decide whether the current stimulus matched the stimulus presented two positions earlier in the sequence). Participants were native speakers of English and varied with respect to their age of L2 (French) acquisition (AoA). Simultaneous (n=10; mean AoA=0), early sequential (n=8; mean AoA=4.6), and late sequential (n=7; mean AoA=7.7) bilinguals were included, and the groups were matched in terms of chronological age, years of education, and proficiency in L1 and L2. Behaviourally, there were no differences in performance across the two languages for any of the groups. In terms of the fMRI results, simultaneous bilinguals showed similar recruitment of brain regions implicated in PWM in both of their languages. In relation to previous research that compared PWM in bilinguals and monolinguals [i.e., 4], the pattern of activation in the insula region in the simultaneous bilinguals in the current study resembles the pattern observed in monolinguals, while the pattern of activation in response to increased cognitive load (i.e., 2-back vs. 0-back) in cognitive control regions (i.e., bilateral parietal and middle frontal regions) resembles the pattern observed in bilinguals in the previous research. Ongoing analyses including the sequential bilinguals, relating activation patterns to AoA, point to age of language learning as impacting the networks involved in PWM. Overall, the current findings increase our understanding of how the brain is set up for phonological language processing. [1] Fiez, J.A. (2016). In G. Hickok and S.L. Small (Eds.), Neurobiology of Language, Academic Press: San Diego. [2] Kuhl, P.K., et al. (2005). Lang Learn Dev, 1, 237. [3] Chee, M.W., et al. (2004). PNAS, 101, 15265. [4] Pierce, L.J., et al. (2015). Nat Commun, 6, 10073.
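For readers unfamiliar with this type of load contrast, the following is a minimal sketch of a first-level 2-back vs. 0-back analysis using nilearn; the filenames, TR, and event labels are hypothetical assumptions rather than the study's actual parameters.

```python
# Sketch of a first-level load contrast (2-back minus 0-back) with
# nilearn; filenames, TR, and event labels are hypothetical.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.read_csv("nback_events.csv")  # onset, duration, trial_type
# trial_type values assumed: "zeroback", "oneback", "twoback"

model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6.0)
model = model.fit("sub-01_task-nback_bold.nii.gz", events=events)

# Increased PWM load: regions more active under 2-back than 0-back.
z_map = model.compute_contrast("twoback - zeroback", output_type="z_score")
z_map.to_filename("sub-01_twoback_minus_zeroback_zmap.nii.gz")
```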
Prosody A45 Clause-type prediction and detection based on prosody: Earlier than you thought Yang Yang1, Leticia Pablos2,3, Stella Gryllia2, Niels Schiller2,3, Lisa Cheng2,3; 1Guangdong University of Foreign Studies, 2Leiden University Center for Linguistics, 3Leiden Institute for Brain and Cognition [INTRODUCTION] Clause-type (wh-question or declarative) has been shown to be prosodically marked in Mandarin, and listeners can identify clause-type based on prosody. Nevertheless, little is known about how early prosody plays a role in clause-type detection and prediction. We fill this gap with an auditory ERP study on Mandarin wh-questions and their string-identical wh-declaratives (declaratives containing wh-words), preceded by contexts biasing towards wh-questions or wh-declaratives. We chose this paradigm for the following reasons: 1) Mandarin wh-questions and wh-declaratives offer a good test case for prosody's role in clause-typing, as Mandarin is not only a wh-in-situ language, where wh-words remain in their base positions, but also a wh-indeterminate language, where the same wh-word such as shénme can have both interrogative ('what') and non-interrogative ('something') interpretations. 2) Our behavioral studies show that wh-questions and wh-declaratives are marked by different prosody starting from the subject (the first word of the sentence); therefore, the clause-type predicted from context and an unpredicted target clause-type marked by prosody should give rise to clear clause-type incongruities for listeners. [PRESENT STUDY] An ERP study was conducted with 24 Mandarin native speakers listening to sentences preceded by contexts (2-3 sentences) biasing towards either wh-questions or wh-declaratives. The contexts in each set differ only in the final sentence (i.e., 'XX asked:' biases towards a question, whereas 'This is something XX is sure of:' biases towards a declarative). By combining contexts and target sentences, we obtain four conditions in each set: (a) declarative-biased context, wh-declarative prosody (subject-adverb-verb-diǎnr-shénme-prepositional phrase) (D-D in short); (b) wh-question-biased context, wh-question prosody (sub.-adv.-verb-diǎnr-shénme-pp) (Q-Q); (c) declarative-biased context, wh-question prosody (D-Q); (d) wh-question-biased context, wh-declarative prosody (Q-D). By comparing the context-target incongruent condition (c) with (a), and the incongruent condition (d) with (b), at one or more critical words (time-locked to the subject, the verb and shénme), we expected to find clause-type prediction and detection effects such as negativities (Szewczyk & Schriefers, 2013), or unexpectedness- or integration-difficulty-related effects such as the N400 or P600. [RESULTS & DISCUSSION] Opting for an unbiased analysis, omnibus ANOVAs were performed repeatedly using sliding 100-ms windows to localize potential effects relative to the onset of each critical word. Results showed that, when comparing (c) (D-Q) with (a) (D-D) at the subject position, (c) elicited negativities in the 300-400 ms time-window over left and midline sites. These 300-400 ms left-lateralized negativities can be interpreted as early detection that the prosody of the subject is not the one predicted from context, indicating the early and essential role of prosody in clause-typing.
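The sliding-window omnibus analysis described just above can be sketched as follows; the array shapes, sampling rate, file name, and windowing details are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of a sliding-window omnibus analysis: a repeated-measures
# ANOVA on mean ERP amplitude in successive 100-ms windows.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# amplitudes: subjects x conditions x time samples (hypothetical file)
amplitudes = np.load("erp_subject_condition_time.npy")
n_subj, n_cond, n_samp = amplitudes.shape
sfreq, win = 500, 50  # 50 samples = 100 ms at an assumed 500 Hz

for start in range(0, n_samp - win, win // 2):  # 50% window overlap
    mean_amp = amplitudes[:, :, start:start + win].mean(axis=2)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_cond),
        "condition": np.tile(np.arange(n_cond), n_subj),
        "amplitude": mean_amp.ravel(),
    })
    res = AnovaRM(df, depvar="amplitude", subject="subject",
                  within=["condition"]).fit()
    print(start / sfreq, res.anova_table["Pr > F"].iloc[0])
```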
It is of interest that we did not find any significant differences at the subsequent critical regions (the verb, shénme), perhaps because, as the speech unfolds, the incongruent clause-type becomes somewhat 'expected' or 'familiar'. Also, we did not find significant differences between (d) (Q-D) and (b) (Q-Q), indicating an asymmetry in accommodating different clause-types. Control, Selection, and Executive Processes A46 Validating the Survey of Experience in Code-Switching Environments (SECSE) Angelique Blackburn1, Brenda Guerrero1; 1Texas A&M International University The way bilinguals use language in daily life has been shown to elicit long-term differences in neural responses, in particular in the event-related potential linked to interference suppression, namely the N2 effect during the Flanker and Simon tasks [Blackburn (2018), SAGE Research Methods Cases. 10.4135/9781526440976]. These neural differences can be explained by the Adaptive Control Hypothesis, which holds that bilinguals communicate in three different contexts that each require different amounts of interference suppression: a single-language context, in which one language is used and interference from the other language is suppressed; a dual-language context, in which bilinguals must suppress high levels of interference as they strategically switch between languages with changes in conversation; and a dense code-switching context, in which bilinguals do not need to suppress interference because they can use more than one language within a conversation [Green & Wei (2014). Lang Cogn Neurosci, 29(4), 499-511]. To fill the need for a measure of a bilingual's environment-based language habits, we created an assessment tool that measures how much time a bilingual spends in each context. The Survey of Experience in Code-Switching Environments (SECSE) is a 5-10 minute assessment of the relative time spent in each environment across the lifespan. The survey was found to be valid when comparing the calculated percentage of time spent in each context to the self-reports of participants who were trained to recognize the three bilingual language contexts (N = 47, p < .05). The percentage of respondents who agreed with their score increased from 76.6% to 91.5% when survey scores were converted from percentages into nominal categories, indicating that participants may report categories (e.g., rarely in a context) more consistently than percentages (e.g., 10% of the time in a context). We are currently testing reliability in a larger sample and confirming survey validity by assessing whether a bilingual's primary language context on the SECSE predicts interference suppression ability, measured as the amplitude of the N2 effect during the Flanker task. Evidence of a larger N2 effect in participants categorized by the SECSE as dual-language bilinguals, as predicted by the Adaptive Control Hypothesis, would further validate the survey. The survey is available for free download.
A47 Investigating the semantic control network and its structural decline in mild cognitive impairment and mild dementia Anna Dewenter1,2, Joanna Sierpowska1,2, Roy P. C. Kessels1,2, Vitória Piai1,2; 1Donders Centre for Cognition, Radboud University, 2Department of Medical Psychology, Radboudumc Although the episodic and semantic memory systems have long been studied separately, recent literature has revealed shared neural substrates, including the medial temporal lobe (MTL). One important aspect of semantic memory is semantic control, which tailors the automatic spreading of activation between highly related concepts during concept retrieval to suit the current context. One way to explore the role of the MTL in the semantic memory system is through individuals with mild cognitive impairment and mild dementia, who show episodic memory deficits due to MTL atrophy (Korf et al., 2004). Some individuals show additional language impairment and executive dysfunction, but it is unknown what role semantic control plays in the observed deficits. Most evidence on the brain network for semantic control comes from stroke-induced lesions to perisylvian language areas or from healthy participants undergoing stimulation of these areas, which has led to a corticocentric neuroanatomical model comprising two central hubs located in the left inferior frontal gyrus (LIFG) and the posterior middle temporal cortex (Whitney et al., 2010). However, little is known about the role of the MTL and of the white-matter circuitry subjacent to these hubs in semantic control. Importantly, ventral white-matter pathways have been shown to contribute to semantic processing (e.g., Sierpowska et al., 2019). The present study examines a possible extension of the cortical semantic control network by investigating the role of the MTL in individuals with pre-dementia. Semantic control was investigated with a word-picture verification task: participants indicated as quickly and accurately as possible whether a word matched a subsequently presented picture. Congruent ("pilot" - picture: pilot) and incongruent word-picture pairs were presented, the latter being either semantically related ("pilot" - picture: airplane) or unrelated ("knife" - picture: airplane). Semantic control was operationalized as the reaction time and error percentage difference between the related and unrelated conditions. MTL atrophy was rated by two independent raters on T1-weighted images on a scale from 0 (no atrophy) to 4 (severe hippocampal volume loss). Reaction times (RTs) were analyzed using linear mixed modelling and error rates using generalized linear mixed modelling. Results from patients (n = 7; aged 60-87) and matched controls (n = 18) confirm longer reaction times in related relative to unrelated trials (p < 0.001), attesting to the increased need for control, with patients making disproportionately more errors in the related condition (p = 0.02). Beyond MTL atrophy consistent with healthy aging, no pathological atrophy was found in controls (scores ranging from 0-2, versus 0-4 in the pre-dementia group). Further investigation of the high error percentage in the pre-dementia group revealed two individuals with severe MTL atrophy (score of 4) scoring 6.7 and 8.0 standard deviations above the mean error percentage of controls in the related trials requiring enhanced control. These results tentatively indicate a contribution of semantic control to the deficits observed in pre-dementia patients and place the MTL as a potential node in the semantic control network. Additionally, the outcome of ongoing analyses of the diffusion-weighted data, focusing on the ventral white-matter pathways, will add neuroanatomical specificity to these findings.
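The deviation analysis reported above, in which patients' semantic-control costs are expressed in standard deviations of the control distribution, amounts to a simple z-score computation, sketched here with hypothetical data.

```python
# Each patient's semantic-control cost (related minus unrelated error
# percentage) expressed in control-group standard deviations. All
# values are hypothetical placeholders.
import numpy as np

control_cost = np.array([2.1, 0.0, 1.5, 3.0, 0.8, 1.2, 2.4, 0.5, 1.9,
                         1.1, 0.3, 2.8, 1.6, 0.9, 2.2, 1.4, 0.6, 1.8])
patient_cost = np.array([3.5, 21.0, 4.2, 26.3, 5.1, 2.9, 6.4])

mu, sd = control_cost.mean(), control_cost.std(ddof=1)
z_scores = (patient_cost - mu) / sd
print(np.round(z_scores, 1))  # values >> 0 flag a disproportionate cost
```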
Signed Language and Gesture A48 Reducing the visual resolution of lexical signs increases working memory load Josefine Andin1, Emil Holmer1, Krister Schönström2, Mary Rudner1; 1Linköping University, Sweden, 2Stockholm University Degrading the sound quality of words increases working memory (WM) load in the speech domain. To investigate whether this effect generalizes across the language modalities of sign and speech, we studied the effect of reducing the visual resolution of lexical signs on WM load. In a functional magnetic resonance imaging (fMRI) study, 16 deaf early proficient users of Swedish Sign Language (SSL) and 22 hearing non-signers performed an n-back WM task based on signs lexicalized in SSL. The resolution of the video-recorded sign stimuli was manipulated, so that sign stimuli were either clear or degraded. There were three levels of WM load in the n-back task, achieved by setting n = 1, 2 or 3. All participants had normal or corrected-to-normal visual acuity and contrast sensitivity. Due to recruitment constraints, the signers included in the analysis were significantly older than the non-signers; however, there was no difference between groups in non-verbal intelligence. ANOVA of n-back performance collected in the scanner showed no difference between groups. However, performance was poorer across groups when stimuli were degraded compared to when they were clear, and when WM load was greater. There was a significant interaction between visual resolution and WM load, such that the effect of stimulus degradation was greater when load was greater, but there was no interaction with group. These behavioural results generalize the effect of stimulus degradation to sign language. Whole-brain fMRI analysis showed increasing activation of the fronto-parietal WM network as load increased. WM processing of clear compared to degraded stimuli led to greater activation of the ventral visual stream, and the opposite contrast led to greater activation of the dorsal visual stream. Further, non-signers compared to signers showed greater activation in the dorsal visual stream, while signers compared to non-signers showed more activation in the superior temporal lobe. This pattern of results shows that WM for signs is sensitive to both load and visual resolution irrespective of sign language knowledge. In particular, it suggests that WM for degraded or less well represented signs (in non-signers) is less reliant on identification in the ventral stream and more reliant on localization in the dorsal stream. The results also confirm previous findings of differences between deaf signers and hearing individuals in the engagement of superior temporal cortex during a visual cognitive task.
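A minimal sketch of the resolution-by-load repeated-measures ANOVA reported above, using pingouin; the data table and its columns are hypothetical placeholders.

```python
# Two-way repeated-measures ANOVA (visual resolution x WM load) on
# n-back performance; the file and column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("nback_performance.csv")
# expected columns: subject, load (1/2/3), resolution (clear/degraded),
# accuracy

aov = pg.rm_anova(data=df, dv="accuracy",
                  within=["resolution", "load"], subject="subject")
print(aov)  # the resolution x load row tests the reported interaction
```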
Methods A49 Revisiting the connectional neuroanatomy of pars opercularis and the ventral precentral fiber intersection area using data from the Human Connectome Project Vatche Baboyan1, Gregory Hickok1; 1University of California, Irvine Evidence from diffusion imaging, postmortem dissection, and direct electrical subcortical stimulation studies indicates that a variety of association fibers within the language network converge near the ventrolateral prefrontal cortex. More recently, these studies suggest that the fiber organization of the ventral precentral cortex (vPMC) differs from classical descriptions, and that it is a critical projection site not only for pyramidal pathways but also for motor and auditory association fibers involved in speech function. Despite this, this densely connected area - recently referred to as the ventral precentral fiber intersection area - remains poorly understood due to its complex organization. Using data from 100 subjects from the Human Connectome Project, we sought to revisit the connectional neuroanatomy of two central hubs within the language network: vPMC and Brodmann Area 44 (BA 44). Probabilistic fiber tractography was performed on the multi-shell dMRI data for both BA 44 and vPMC, bilaterally. To restrict the analysis to dorsal speech production fibers, the tracking algorithm discarded any streamlines passing through the temporal stem region, thereby removing ventral-stream pathways involved in the widely distributed semantic network. Results were then projected onto standardized surfaces to characterize and compare the connectivity of vPMC and BA 44. First, we performed a Principal Components Analysis (PCA) along the 2D surface mesh, which, for each subregion, revealed a single component accounting for 70% of the total variance in connectivity across each group, while no other component accounted for more than 2% of the total variance. The principal component for vPMC comprised weights attributed to nodes within the ventral precentral sulcus, the supplementary motor area (SMA), the posterior insular cortex, the supramarginal gyrus, and the planum temporale. Conversely, the principal component for BA 44 comprised weights attributed to nodes in the inferior frontal sulcus (i.e., dorsal BA 44), the pre-SMA, the middle insular cortex, and a stronger contribution from the posterior superior temporal sulcus (pSTS) than was seen for vPMC. This finding suggests that BA 44, and not vPMC, is the primary projection site of the arcuate fasciculus, contrary to previous reports. Second, we directly compared the multivariate connectivity patterns of the two subregions by computing the average connectivity within each of the 180 brain areas comprising the HCP Multimodal Parcellation atlas. The multivariate matrix for each subregion was then fed into a Linear Discriminant Analysis (LDA) classification algorithm to establish the linear combinations of brain areas producing the best separation between the two subregions. The LDA algorithm, by assigning negative weights to areas mapping onto BA 44 and positive weights to areas mapping onto vPMC, revealed sets of regions comprising parallel dorsal streams: vPMC was associated with the somatosensory cortex, SMA, and the planum temporale, while BA 44 showed associations with distributed areas along the superior temporal sulcus, the pre-SMA, and area 55b. This finding was corroborated by a third analysis, in which a univariate GLM comparing connectivity differences at each node of the surface revealed two distinct clusters, with vPMC showing a fronto-parietal-SMA preference and BA 44 a fronto-temporal-pre-SMA preference in connectivity.
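The two multivariate steps described above can be sketched as follows with scikit-learn; the connectivity matrices here are random stand-ins for the HCP-derived data.

```python
# PCA on seed-to-target connectivity, then LDA to separate the two
# seed regions. Shapes follow the abstract; the data are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, n_targets = 100, 180  # HCP multimodal parcellation areas

conn_vpmc = rng.random((n_subjects, n_targets))   # vPMC connectivity
conn_ba44 = rng.random((n_subjects, n_targets))   # BA 44 connectivity

# One dominant component per seed region.
for name, conn in [("vPMC", conn_vpmc), ("BA44", conn_ba44)]:
    pca = PCA().fit(conn)
    print(name, pca.explained_variance_ratio_[:3])

# LDA weights indicate which target areas best separate the seeds.
X = np.vstack([conn_vpmc, conn_ba44])
y = np.array([1] * n_subjects + [0] * n_subjects)  # 1 = vPMC, 0 = BA 44
lda = LinearDiscriminantAnalysis().fit(X, y)
top_areas = np.argsort(lda.coef_[0])  # most negative -> BA 44-weighted
print("most vPMC-weighted areas:", top_areas[-5:])
```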
A50 GEREC protocol for interactive mapping of language and memory processes in temporal lobe epilepsy Sonja Banjac1, Elise Roger1, Emilie Cousin1,2, Marcela Perrone-Bertolotti1, Cédric Pichat1,2, Laurent Lamalle2, Lorella Minotti3, Philippe Kahane3, Monica Baciu1; 1Université Grenoble Alpes, CNRS LPNC UMR 5105, 2Université Grenoble Alpes, UMS IRMaGe CHU Grenoble, 3Université Grenoble Alpes, GIN 'Synchronisation et modulation des réseaux neuronaux dans l'épilepsie' & Neurology Department Introduction: In patients with temporal lobe epilepsy (TLE), the benefit of temporal surgery must be carefully weighed against the risk of inducing impairments, since surgery can lead to postoperative memory (Baxendale et al., 2006) and language deficits (Davies et al., 1998). A number of protocols have been proposed for evaluating language and memory preoperatively in these patients (e.g., Aldenkamp et al., 2003); however, they tested language and memory separately. Considering that mesial temporal regions are implicated in both of these processes, it is essential to evaluate the neural basis of language and memory jointly in TLE patients who are being considered for surgery. The present protocol was designed to do so. Methods: The GEREC protocol consists of three runs: (1) a block-based run, Generation (GE), during which participants covertly generate a sentence after hearing a word; (2) an event-based run, Recognition (REC), during which, after being presented with an image, participants respond whether or not they heard the pictured object named in the first run; and (3) a block-based run, Recall (RA), in which participants, after hearing the same words as in the first run, remember and covertly generate the same sentences they produced in GE. The GE and RA runs are designed to activate an intermixed language-and-memory network by engaging episodic memory encoding and retrieval, respectively. Moreover, the protocol is ecological and adapted to patients (easy to perform and short in duration). Before being used with patients, the protocol needed to be tested in healthy participants, which is the objective of this work. Twenty healthy adults aged 18 to 30 completed the experimental protocol. The fMRI data were acquired at 3T using the manufacturer-provided gradient-echo/T2*-weighted EPI sequence. Images were first spatially pre-processed. Results: The statistical analyses revealed that the GE run (P<0.05, FWE-corrected) activated the expected fronto-temporal network, including left inferior frontal, bilateral middle and superior temporal regions and the temporal pole, as well as the contralateral cerebellum, specifically lobule 6 and Crus 1. Encoding activated (P<0.001, uncorrected) the left hippocampus and middle temporal gyrus. Although initially designed as a memory task, REC activated (P<0.05, FWE-corrected) a large language-and-memory network, including bilateral inferior occipito-temporal, left parietal and hippocampal regions, but also the left inferior frontal region, bilateral SMA and the cerebellum. It could be that, despite not being explicitly instructed to do so, participants automatically named the pictures when seeing the images. The RA run, on the other hand, more strongly activated (P<0.05, FWE-corrected) the language network, consisting of left inferior frontal and bilateral temporal regions. Conclusion: Our findings support the utility of the GEREC protocol for mapping a shared language-and-memory network. The main intention of this protocol is to reliably map this network in patients with TLE; in conjunction with the results of neuropsychological testing, it can provide valuable information for surgical planning. Finally, it can also be considered a practical foundation for exploring the interconnection of language and memory.
Language Production A51 Testing the unitary theory of language lateralisation using functional transcranial Doppler ultrasound in adults Zoe Woodhead1, Abigail Bradshaw1, Alexander Wilson1, Paul Thompson1, Dorothy Bishop1; 1University of Oxford Introduction: Hemispheric dominance for language is often assumed to be unidimensional and consistent across language domains, but this assumption can be questioned. Discrepant laterality across different language tasks could simply reflect measurement error; alternatively, task differences may represent meaningful individual variation in the hemispheric organization of different language networks. It has been difficult to distinguish these possibilities, because relatively little is known about the reliability of lateralisation in individuals. In this study we used an ultrasound technique (functional transcranial Doppler sonography, fTCD) to assess the strength and test-retest reliability of language lateralisation on a range of language tasks. We tested the hypotheses that laterality would vary across tasks but be stable across sessions. Furthermore, we predicted that more than one latent factor would account for individual differences in laterality across tasks, which would provide evidence against the theory that language lateralisation is a unidimensional construct. Methods: Methods were preregistered prior to data collection (https://osf.io/tkpm2/). Laterality within the middle cerebral artery territory was tested using fTCD in 37 adults with typical language development (7 left-handers). Each participant was tested twice using six different language tasks, including tests of speech production and comprehension, and phonological, semantic and syntactic decisions. We compared the strength and reliability of lateralisation for each task, and used Structural Equation Modelling to test whether individual differences in laterality across tasks could be explained by a unidimensional model (where laterality on all tasks covaries), or whether a model with two factors (where the covariances cluster into two independent factors) was a better fit to the data. Results: Significant left lateralisation was observed for all tasks except a syntactic decision task. Lateralisation was strongest for production of meaningful speech (a sentence generation task). Test-retest reliability was good for all tasks (R = 0.57 to 0.84) except a task involving production of automatic speech sequences, e.g., days of the week (R = 0.13). The Structural Equation Modelling showed that, for most people, a single lateralised factor explained most of the covariance between tasks. A minority, however, showed dissociation of asymmetry across tasks, giving rise to a second factor. Interestingly, these participants were all left-handed. The second factor had the strongest loadings from receptive tasks (sentence comprehension and syntactic decision). Conclusion: The results suggest that variation in the strength of language lateralisation reflects true individual differences and not just measurement error. In general, they support the idea of language lateralisation as a unidimensional construct - individuals and tasks may vary in their strength of lateralisation, but lateralisation measurements across tasks tend to correlate. However, this pattern may be broken in a minority of left-handed individuals.
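For illustration, a common way to compute an fTCD laterality index of the kind used here is sketched below with simulated velocity envelopes; the epoch timing and baseline/period-of-interest choices are assumptions, not the study's exact parameters.

```python
# Laterality index (LI) from left/right MCA blood-flow velocity:
# percent signal change relative to baseline, then the mean L-R
# difference in a period of interest. Signals are simulated.
import numpy as np

rng = np.random.default_rng(1)
fs = 25  # Hz, downsampled velocity envelope
t = np.arange(0, 20, 1 / fs)  # one 20-s epoch

left = 60 + 3 * np.exp(-((t - 8) ** 2) / 4) + rng.normal(0, 0.3, t.size)
right = 60 + 1 * np.exp(-((t - 8) ** 2) / 4) + rng.normal(0, 0.3, t.size)

# Percent signal change relative to a pre-stimulus baseline (0-2 s).
base = slice(0, 2 * fs)
left_pc = 100 * (left - left[base].mean()) / left[base].mean()
right_pc = 100 * (right - right[base].mean()) / right[base].mean()

# Mean L-R difference in a period of interest (6-10 s); positive
# values indicate left lateralisation.
poi = slice(6 * fs, 10 * fs)
li = (left_pc - right_pc)[poi].mean()
print(f"LI = {li:.2f}")
```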
A52 Picture naming yields highly reproducible cortical activation patterns: test-retest reliability of MEG recordings Heidi Ala-Salomäki1, Jan Kujala1,2, Mia Liljeström1, Riitta Salmelin1; 1Aalto University, 2University of Jyväskylä, Finland Language production deficits occur frequently in patients with brain injury, e.g., following stroke. Magnetoencephalography (MEG) provides a direct measure of cortical processing, unaffected by the neurovascular disruption that occurs after stroke, and is thus well suited for the assessment of functional brain damage in post-stroke patients. For this purpose, reliable measures of brain activity related to language production in individual participants are paramount. It is, however, unclear how reproducible MEG activations are across various language-related tasks. In this MEG study, the aim was to identify source-localized naming-related evoked activity and modulations of cortical oscillatory activity that show high test-retest reliability across measurement days in healthy individuals. For patients with a severe speech production disorder, picture naming can be a challenging task; therefore, we also determined whether a semantic judgment task would suffice to induce comparably consistent activity. We analyzed MEG data collected from 19 healthy participants on two separate days. To identify activity related to the picture naming ("Name the object in the picture") or semantic judgment ("Is this a picture of a living object (yes/no)?") task, we compared these tasks with a visual control task ("Say 'yes' when there is a red cross in the middle of the picture"). We identified task-related activity that was statistically significant (p<0.001, uncorrected, repeated-measures t-test) on both measurement days, and further used the intraclass correlation coefficient (ICC; Shrout and Fleiss, 1979) to determine the task-related activity that was consistent (ICC>0.4) in individual participants across measurement days. Analyses were performed for selected time windows (200-400 ms, 400-600 ms and 600-800 ms) time-locked to stimulus presentation, based on a neurocognitive model of speech production (Indefrey and Levelt, 2004) and earlier MEG findings (Salmelin et al., 1994; Vihla et al., 2006). The source-localized evoked activity related to the picture naming task was highly consistent across measurement days in the left frontal (400-600 ms), central (400-800 ms), parietal (200-800 ms), temporal (200-800 ms), occipital (600-800 ms) and cingulate (600-800 ms) regions, as well as in the right parietal (600-800 ms) region. In the semantic judgment task, consistent evoked activity was spatially and temporally more limited, occurring in the left parietal (600-800 ms), temporal (600-800 ms) and occipital (400-600 ms) areas, and in the right parietal region (600-800 ms). Consistent modulations of oscillatory activity were observed mainly in the occipital cortex (400-800 ms) but not in cortical regions typically associated with language processing. The present study emphasizes the individual-level reproducibility of MEG evoked activity dynamics in picture naming. Owing to this high test-retest reliability, the task shows great promise for clinical evaluation of language function, as well as for longitudinal MEG studies of language production in clinical and healthy populations. References: Indefrey and Levelt (2004), Cognition, 92, 101-144. Salmelin, Hari, Lounasmaa, and Sams (1994), Nature, 368, 463-465. Shrout and Fleiss (1979), Psychological Bulletin, 86, 420-428. Vihla, Laine, and Salmelin (2006), Neuroimage, 33, 732-738.
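The across-day consistency criterion (ICC > 0.4) can be computed as sketched below with pingouin's implementation of the Shrout and Fleiss (1979) ICCs; the data frame here is a hypothetical placeholder.

```python
# Intraclass correlation across two measurement days per participant
# for one region/time-window of interest; the values are placeholders.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject": list(range(19)) * 2,
    "day": ["day1"] * 19 + ["day2"] * 19,
    "activation": [0.8, 1.1, 0.9, 1.4, 0.7, 1.0, 1.2, 0.6, 1.3, 0.9,
                   1.1, 0.8, 1.0, 1.2, 0.7, 1.4, 0.9, 1.0, 1.1,
                   0.9, 1.0, 1.0, 1.3, 0.8, 1.1, 1.1, 0.7, 1.2, 1.0,
                   1.0, 0.9, 1.1, 1.1, 0.8, 1.3, 1.0, 0.9, 1.2],
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="day",
                         ratings="activation")
print(icc[["Type", "ICC"]])  # e.g., retain regions with ICC > 0.4
```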
A53 A computational mechanistic account of hemisphere differences in language processing Ya-Ning Chang1, Matthew Lambon Ralph1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge A classic view of the neural basis of language postulates that the left hemisphere is dominant for language. However, a growing number of neuroimaging studies have demonstrated contributions, albeit weaker, from language regions in the non-dominant right hemisphere. While neuroimaging studies in healthy subjects increasingly show that the language network is leftward-asymmetric but bilateral, patient studies commonly report that aphasia results predominantly from left-hemisphere damage in stroke patients. These seemingly contradictory findings are difficult to reconcile, and there is as yet no clear mechanistic account of the changes to the language network across the two cerebral hemispheres. Using neural network modelling, we constructed a bilateral model of language processing to explore how the left and right hemispheres contribute to normal and impaired language processing. Specifically, we implemented the model with reference to hemispheric structural asymmetry, to investigate whether the bilateral but asymmetric (left > right) activation seen in the majority of healthy subjects could result from greater computational resources being available for language processing in the left hemisphere than in the right. With this structural asymmetry in place, we also investigated whether damage to the language regions in the left hemisphere would be more likely to result in impaired language performance (aphasia) because it removes the major language processing resources. Lastly, we examined whether the dynamic changes in activation patterns between the two hemispheres during aphasia recovery could be related to the differential capacity of the left and right hemispheres. The bilateral model demonstrated a link between differential computational resources and language lateralisation. In particular, damaging the model at different levels of severity reproduced the changes in brain activation patterns observed in post-stroke aphasia. The resulting patterns suggest that the unequal distribution of resources available for language processing across the hemispheres is key to accounting for language lateralisation in normal and impaired populations. The simulations provide insights into the neural machinery underlying hemispheric language lateralisation and its shifts during recovery from post-stroke aphasia.
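To make the resource-asymmetry idea concrete, here is a toy reconstruction (not the authors' implementation): two pathways that differ only in hidden-unit count are trained on the same mapping and then "lesioned".

```python
# Toy bilateral model: a larger "left" and a smaller "right" pathway
# learn the same input-output mapping; silencing hidden units mimics
# lesioning. Illustrative sketch only, not the published model.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 20, 20
n_left, n_right = 80, 20  # more hidden resources on the "left"

X = rng.normal(size=(500, n_in))
Y = np.tanh(X @ rng.normal(size=(n_in, n_out)))

def train_pathway(n_hidden, lr=0.05, epochs=500):
    """One-hidden-layer pathway trained with plain gradient descent."""
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        err = H @ W2 - Y
        W2 -= lr * H.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
    return W1, W2

def mse(pathway, lesion=0.0):
    """Error of a pathway with a fraction of hidden units silenced."""
    W1, W2 = pathway
    H = np.tanh(X @ W1)
    H[:, :int(lesion * H.shape[1])] = 0.0  # "damage" the pathway
    return np.mean((H @ W2 - Y) ** 2)

left, right = train_pathway(n_left), train_pathway(n_right)
for name, p in [("left", left), ("right", right)]:
    print(name, round(mse(p), 4), round(mse(p, lesion=0.5), 4))
```

The larger pathway fits the mapping better (standing in for stronger left-hemisphere contribution), and damaging it costs more, which is the qualitative pattern the abstract links to lateralisation and aphasia.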
A54 Dynamics of Word Production in the Transition from Adolescence to Adulthood: an adult-like behaviour in spite of some child-like neurophysiological patterns Tanja Atanasova1, Raphaël Fargier2, Pascal Zesiger1, Marina Laganaro1; 1University of Geneva, 2Aix-Marseille University To date, studies have focused on childhood language development or on language changes related to ageing, while the transition period from childhood to adulthood has received only marginal attention. This is at odds with the knowledge that adolescence is a key period in cognitive, social and cortical maturation. A vivid example is the lexical-semantic network, which grows throughout life, with approximately 40,000 new words learnt between the first decade and adolescence. Lexical selection also becomes more efficient, with production latencies progressively shortening. Variations in the mental processes involved in word production and their time-course are likely to accompany these behavioural changes. Previous EEG/ERP studies have shown functional and temporal differences in speech planning processes between school-age children and young adults in picture-naming tasks (Laganaro et al., 2015), with the recruitment of different neural networks in the P100 and the N2/N170 time windows in 10- to 12-year-old children versus young adults. The N2/N170 component is likely associated with lexical-semantic processes in picture naming (Indefrey, 2011). Our aim here was to fill this gap and investigate when and how youngsters develop adult-like activation in single word production. We performed a picture-naming experiment with high-density EEG/ERP recording in participants from three age groups: 20 children (10-12 years old), 20 adolescents (15-17 years old) and 20 young adults (20-30 years old). Production latencies and accuracy did not differ significantly between adolescents and adults, but both groups differed from children. Waveforms of the three groups were analogous, with boosted and delayed amplitudes in children. We performed microstate analysis of the ERP signal, which allowed us to simultaneously track functional and time-course changes in word encoding processes during development. Children and adults confirmed their qualitative and quantitative distance, and only minor topographic differences between adolescents and young adults were found; however, only some adolescents displayed a pattern similar to that of adults, while others presented a child-like activation pattern in the time-window following the P100 component. Chronological age could not be related to any of these variations. Our results show mostly adult-like patterns in picture naming in the group of adolescents, but with functional and temporal changes still occurring in the specific time-windows underlying lexical-semantic word encoding. The stabilization of production latencies, adult-like in adolescents, and the electrophysiological data illustrate that the acceleration of word production in picture naming occurring in adolescence is not a mere speeding up of mental processes but is also mediated by qualitatively different neural underpinnings. References: Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2(255). Laganaro, M., Tzieropoulos, H., Fraunfelder, U. H. & Zesiger, P. (2015). Functional and time-course changes in single word production from childhood to adulthood. NeuroImage, 111, 204-214.
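Microstate analysis, mentioned above, can be sketched in simplified form as follows; note that real microstate pipelines use a polarity-invariant modified k-means, whereas this toy uses plain k-means on simulated data.

```python
# Simplified ERP microstate sketch: cluster scalp topographies at
# global field power (GFP) peaks, then back-fit templates over time.
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
erp = rng.normal(size=(64, 600))  # channels x time, placeholder ERP

gfp = erp.std(axis=0)                    # global field power
peaks, _ = find_peaks(gfp)               # quasi-stable topographies
maps = erp[:, peaks].T                   # one topography per GFP peak
maps /= np.linalg.norm(maps, axis=1, keepdims=True)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(maps)
# Back-fit: label every time point with its best-matching template.
labels = np.argmax(kmeans.cluster_centers_ @ erp, axis=0)
print(np.bincount(labels))  # time points assigned to each microstate
```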
A55 Oxytocinergic Modulation of Speech Production Charlotte Vogt1, Mareike Floegel1, Suzanna Gispert-Sanchez2, Christian A. Kell1; 1Brain Imaging Center and Department of Neurology, Goethe University Frankfurt, 2Molecular Neurogenetics, Goethe University Frankfurt Speech is a means of human social communication. It has been proposed that many socio-affective behaviors, such as speech, are modulated by the neuropeptide oxytocin (Heinrich et al., 2009). While oxytocin is known to modulate speech perception (Tops et al., 2011), it is not known whether it also modulates speech production. There are different approaches to investigating the physiological role of oxytocin. One consists in the nasal administration of the neuropeptide; another is to investigate the effect of functional polymorphisms in the oxytocin receptor gene. Here we investigated drug*polymorphism interactions in a double-blind within-subject design. We performed a cue-target fMRI reading paradigm in 50 healthy male participants. In two overt reading conditions, participants read a sentence with either neutral or happy intonation; a covert reading condition served as a common baseline. Participants were studied once under the influence of intranasal oxytocin and, in another session, after receiving placebo (in randomized order). For group analyses, participants were divided into two groups, AA/AG and GG, based on their rs53576 oxytocin receptor polymorphism (Riem et al., 2011). In carriers of the A allele, the oxytocin receptor is thought to be less effective than in GG homozygotes (Tops et al., 2011). Neither the administration of oxytocin or placebo nor receptor genotype significantly affected the intensity contours or fundamental frequencies of participants' utterances. However, there were drug*polymorphism interactions in speaking-related brain activity, independent of whether prosody was neutral or happy. Under placebo, carriers of the A allele showed decreased activation when preparing to speak in the pre-supplementary motor area and cingulate motor area in comparison to GG homozygotes. The administration of oxytocin increased activation in this preparatory set of regions in carriers of the A allele. In this group, oxytocin administration decreased speaking-related activation in auditory feedback processing regions and primary somatosensory cortex compared to placebo. Administration of oxytocin had a less prominent effect on speaking-related brain activity in GG homozygotes. Our results suggest that oxytocin assists in preparatory processes for speech production. Gating of preparatory processes through oxytocin administration in people with less efficient oxytocinergic modulation reduces the need for speech feedback monitoring. This suggests that oxytocin facilitates feedback control of verbal communication, ensuring that what was planned to be said is actually conveyed. Such control is essential in social communication.
A56 The Role of Cortico-Subcortical Loops in the Learning of Speech Motor Sequences Snežana Todorović1,2,3, Valérie Chanoine1,2,3, Andrea Brovelli4, Jean-Michel Badier5, Sonja A. Kotz6, Elin Runnqvist1,3; 1Aix-Marseille Université, 2Institute of Language, Communication and the Brain, Aix-en-Provence, 3Laboratoire Parole et Langage (UMR 7309), Aix-en-Provence, 4Institut de Neurosciences de la Timone, Marseille, 5Institut de Neurosciences des Systèmes (UMR 1106), Marseille, 6Basic & Applied NeuroDynamics Laboratory, Maastricht University Despite their central role in speaking, the neural mechanisms sustaining speech motor sequence learning are still not fully understood. The medial frontal cortex (MFC), the basal ganglia (BG), and the cerebellum (CB) seem to play a central role in the acquisition of skills that are related to, part of, or a prerequisite for speech motor sequence learning. Interestingly, one of the few studies that directly examined the neural correlates of speech motor sequence learning (Segawa et al., 2015) observed activation in the BG and MFC, while the lack of cerebellar activation was attributed to limited statistical power. One plausible cognitive account of the common activation of BG, CB, and MFC is that all three are involved in learning but at different stages: there is evidence that the CB is more active in earlier phases and the BG in the consolidation phase of motor sequence learning, and the pre-supplementary motor area in MFC is more strongly activated during the production of speech sequences in earlier phases of learning. An alternative explanation for the co-activation of the three areas is that they form a functional network, consistent with a growing body of evidence on the reciprocal structural and functional connectivity between BG, CB, and MFC. This empirical background raises the question of the extent to which the three brain areas work independently, in concert, or alternate in their contribution to learning. To answer these questions, we are testing the following hypotheses: 1) all three areas will be activated during the learning of new speech motor sequences; 2) BG and CB are expected to form two pathways of ascending neuromodulatory input to MFC during speech motor sequence learning; these two cortico-subcortical loops might have distinct, non-interacting impacts, or BG and CB may also be functionally connected; 3) we will assess whether the roles of the pathways remain static over time or evolve dynamically. In an MEG experiment, participants overtly pronounce previously unknown sequences of the type CCVCC (C=consonant, V=vowel) that are either legal (control condition) or illegal in their native language, after audiovisual exposure. In a behavioural experiment, we established a learning curve by comparing learning over 30 or 60 repetitions in either chunked or interleaved sequences. Behavioural data consisted of utterance transcriptions, which allowed correct and incorrect utterances to be categorized, as well as utterance duration and onset time. For the MEG data, single-trial time courses of high-gamma activity (HGA) are estimated at the level of brain areas of interest (source level), and cortical networks are mapped using HGA. At the behavioural level, learning of illegal speech motor sequences can occur after 45 repetitions (approximately one hour). This allows the process of learning to be observed at the neurocognitive level from trial to trial and sheds light on the brain structures, and their cooperation, that underpin the learning of speech motor sequences.
A57 Hemispheric lateralization during speech production is modulated by articulatory-phonological task demands Heather Payne1, Eva Gutierrez-Sigut1,2, Mairéad MacSweeney1; 1University College London, 2University of Valencia Functional transcranial Doppler sonography (fTCD) provides a measure of hemispheric lateralization by monitoring relative changes in blood flow in the left and right middle cerebral arteries during cognitive tasks (Aaslid et al., 1991; Deppe et al., 2004). As a portable and non-invasive method, it has been used to assess the lateralization of language processing in typical and atypical populations across a range of language tasks (Knecht et al., 1998, 2001; Vingerhoets & Stroobant, 2002; Bishop et al., 2009; Badcock et al., 2012; Chilosi et al., 2012; Gutierrez-Sigut et al., 2015, 2016; Hodgson & Hudson, 2017; Payne et al., 2019). However, the factors underlying within- and between-subject variability in the extent of lateralization have not been established (Bishop, 2013; Woodhead et al., 2019). Here, we probe the sensitivity of fTCD by comparing the strength of lateralization during two well-matched speech production tasks. Drawing on psycholinguistic and neurobiological models of word production (e.g., Indefrey & Levelt, 2004; Hickok, 2012) and single word reading (Jobard et al., 2003), we predicted that increasing reliance on phonological planning and manipulation would result in greater activity from left posterior temporo-parietal to anterior perisylvian regions, including the left inferior frontal gyrus. These are core areas in the vascular territory of the MCA and should therefore affect the extent of lateralization measured by fTCD. Participants (n = 19) were shown a series of single words and asked either to read each word aloud or to generate a legal rhyming word. In line with our predictions, rhyme generation was more strongly left-lateralized than reading (t(18) = 2.70, p = .015, d = 0.6). While 14/19 participants showed no significant lateralization in either direction during single word reading, 13/19 showed significant left lateralization during rhyme production. This study highlights the contribution of task demands to individual variability in lateralization, as well as emphasizing that 'language dominance' is not a unitary concept.
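The group comparison and the per-participant lateralization counts reported above can be sketched as follows with SciPy; the laterality-index values are simulated placeholders.

```python
# Paired t-test on per-participant laterality indices (LIs) for the
# two tasks, plus a per-participant significance check; data simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
li_reading = rng.normal(0.5, 1.5, 19)             # weakly lateralized
li_rhyme = li_reading + rng.normal(1.0, 1.0, 19)  # stronger LIs

t, p = stats.ttest_rel(li_rhyme, li_reading)
diff = li_rhyme - li_reading
d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired data
print(f"t(18) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")

# A participant counts as left-lateralized if the LI estimated across
# trial epochs differs significantly from zero (one-sample test).
epoch_lis = rng.normal(1.2, 2.0, (19, 23))  # participants x epochs
tvals, pvals = stats.ttest_1samp(epoch_lis, 0.0, axis=1)
print(int(np.sum((pvals < 0.05) & (tvals > 0))), "of 19 left-lateralized")
```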
A58 Complete Cortical Dynamics of Single Word Articulation Nitin Tandon1,2, Kiefer J Forseth1, Xaq Pitkow3,4; 1McGovern Medical School, 2Memorial Hermann Hospital, 3Baylor College of Medicine, 4Rice University Speech production involves an integrated multistage process that seamlessly translates conceptual representations in the brain into an articulatory plan. Evaluating this progression of cognitive operations requires high-resolution recordings combined with an analytic approach that can model discrete neural states. We used a large-scale electrocorticographic dataset with complete coverage of cortical structures in the language-dominant hemisphere to identify the timing of activation of regions relative to articulation, and subsequently applied an autoregressive hidden Markov model (ARHMM) to resolve trial-by-trial state transition sequences in distributed networks. Intracranial electrodes (n=22,311; 129 patients), including both surface grid electrodes and penetrating stereotactic depth electrodes, were implanted for the evaluation of drug-resistant epilepsy. Patients performed picture naming of common objects from the Snodgrass and Vanderwart image set. A surface-based mixed-effects multilevel analysis of broadband gamma activity in the language-dominant hemisphere was used to identify loci with significant activity. This revealed 12 regions, listed here in order of activation: early visual cortex, fusiform gyrus, intraparietal sulcus, parahippocampal gyrus, supplementary motor area, pars triangularis, superior frontal sulcus, pars opercularis, subcentral gyrus, posterior insula, superior temporal gyrus, and posterior middle temporal gyrus. In a second analysis, we included additional psycholinguistic parameters in a mixed-effects model to capture broadband gamma power driven by complexity specific to the visual, semantic (familiarity), lexical (frequency), and phonological (weighted phonemic positional probability) domains. This was used to ground the observed network nodes in extant theories of speech production. To investigate the dynamic trial-by-trial evolution of neural states from stimulus presentation to articulation, we implemented an ARHMM. This framework combines the interpretability of multivariate autoregressive analysis with the sophistication of nonlinear analysis embedded in the switching Markov characteristic. Each latent network state was defined by a 3rd-order autoregressive tensor and an associated covariance matrix, encoding dynamics conserved across the entire patient population. Importantly, we present a major novel advance in stitching datasets across patients, solving a pernicious problem in electrocorticography: sparse cortical sampling in each individual patient. With this framework, we identified 5 distinct cortical states in picture naming (as well as a baseline state). These states broadly correspond to visual recognition, conceptualization, formulation, articulation, and monitoring. A consistent, though not strictly serial, sequence of states was observed across trials; furthermore, the durations of the formulation and articulation states predicted reaction time (p<0.01). The latent dynamics of each state were quantified using pairwise partial directed coherence, revealing distinct networks driven by (Granger-)causal outflow from a sparse subset of network nodes. Our work pairs large-scale electrocorticography with sophisticated analyses to answer long-standing questions about models of language production.
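As a simplified illustration of latent-state decoding in the spirit of the ARHMM analysis above: hmmlearn has no autoregressive observation model, so a plain Gaussian HMM serves as a stand-in here, on simulated high-gamma features.

```python
# Toy latent-state decoding with a Gaussian HMM (a simplified stand-in
# for the 3rd-order ARHMM described in the abstract); data simulated.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(5)
n_nodes, n_states = 12, 6  # 5 task states + 1 baseline

# Simulated single-trial high-gamma features: time points x nodes.
X = np.concatenate([rng.normal(loc=k, size=(100, n_nodes))
                    for k in range(n_states)])

model = hmm.GaussianHMM(n_components=n_states, covariance_type="full",
                        n_iter=50, random_state=0)
model.fit(X)
states = model.predict(X)  # most likely state at each time point

# Dwell time per decoded state (analogous to the state durations that
# predicted reaction time in the abstract).
print(np.bincount(states, minlength=n_states))
```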
Speech Motor Control A59 Time course and neural signature of speech phonetic planning as compared to non-speech motor planning Monica Lancheros1, Anne-Lise Jouen1, Marina Laganaro1; 1University of Geneva The relationship between speech abilities and non-speech oral abilities has been discussed for many years, given that they share the same anatomical structures. However, only recently has the speech motor control literature investigated whether they also share the same neural substrates (e.g., Salmelin & Sams, 2002; Price, 2009). A clear notion of the link between the speech and non-speech systems is relevant for understanding human motor behavior in general and speech motor planning in particular, as well as for understanding motor speech disorders. Here we capitalize on the comparison between the production of monosyllabic words (high-frequency syllables), monosyllabic pseudo-words (low-frequency syllables) and non-speech oral sequences to investigate the spatio-temporal dynamics of the 'latest' stages of production processes, in which the linguistic message is transformed into the corresponding articulated speech. The non-speech oral sequences were closely matched to the words and pseudo-words in terms of acoustic and somatosensory targets. In order to separate linguistic encoding from speech encoding, we used a delayed speech and non-speech production task, in which speakers prepare an utterance but produce it overtly only after a cue appearing after a short delay (Experiment 1). Additionally, to prevent preparation of the phonetic speech plan during the delay, the delay was filled with an articulatory suppression task in half of the participants (Laganaro & Alario, 2006) (Experiment 2). We compared, both behaviorally and with high-density electroencephalographic (EEG) evoked potentials (ERPs), the production of these voluntary non-speech vocal-tract gestures to the production of French syllables. Experiment 1 revealed significant differences between the production of non-speech stimuli and pseudo-words, both in reaction times (RTs) and in stable electrophysiological patterns in an early time-window. More importantly, however, we did not find any difference in the electrophysiological patterns in the 300 ms preceding articulation across the three stimulus types. Experiment 2 (with articulatory suppression) revealed significantly longer RTs for non-speech stimuli than for words. ERP results revealed the same global electrophysiological patterns across stimulus types; however, the microstate durations and the global explained variance (GEV) in two late time windows, encompassing the 300 ms before articulation, differed significantly between non-speech and words, and between non-speech and pseudo-words. The results of Experiment 2 thus suggest that the latest stages of production of these three stimulus types recruit the same brain networks, but that these networks are involved differently, in terms of duration and timing, when producing non-speech versus pseudo-words and non-speech versus words. Given that Experiment 1 did not yield any significant differences in late ERPs before articulation, the results of Experiment 2 were not driven solely by late pre-articulatory processes, but likely by the encoding processes disabled by the articulatory suppression task.
Disorders: Acquired A60 Enhancing Speech Motor Learning Recovery With Noninvasive Brain Stimulation: Behavioral and neurobiological evidence Adam Buchwald1, Nicolette Khosa1, Stacey Rimikis1, E. Susan Duncan2; 1New York University, 2Louisiana State University Introduction. The potential to enhance stroke recovery using transcranial direct current stimulation (tDCS) has been the topic of active research and controversy. While tDCS has been studied as a tool to facilitate word production in aphasia (Fridriksson et al., 2018), it has not previously been tested as a tool to facilitate recovery from acquired speech impairment. We report a single-subject intervention design successfully using tDCS applied to perilesional precentral gyrus as an adjunct to a speech motor learning treatment, with enhanced behavioral outcomes supported by changes in resting-state functional connectivity. Methods. P1 (64, M, RH) suffered an extensive LH MCA CVA >10 years ago, affecting fronto-parietal regions. He presented with moderate-severe AOS, with slowed, dysfluent speech containing frequent distortions. Treatment design. We employed a multiple-baseline/multiple-probe crossover design with two 9-session treatment phases treating different targets. Motor learning-based treatment sessions (~40 mins) included pre-practice (~5 mins) followed by practice (feedback on 25% of items). Probes were obtained weekly during treatment, and baseline and maintenance sessions tested all items from both phases (N=192). Three sessions at each time point evaluated short-term (2 wks post each phase) and long-term (6 months post-treatment) maintenance. tDCS protocol. Electrode placement was determined using individualized current modeling based on structural MRI (anode: T3, cathode: F4), which maximized current flow to residual premotor and motor cortices. P1 received active tDCS in Phase 1 and sham tDCS in Phase 2. Current (1 mA) was administered at the beginning of each treatment session (active: 20 minutes; sham: 30 seconds). Neuroimaging. MRI data were acquired on three occasions (3T Siemens Prisma). Session 1 (pre-treatment) included the structural T1 used for targeting (1 mm3 voxels). Sessions 2 and 3 were performed two days after the end of each treatment phase. Each session included twelve minutes of resting-state fMRI acquisition (TR=1650 ms, TE=35 ms, FA=72º, FoV=216x216 mm, 50 axial slices [2 mm], in-plane voxel size=2.4x2.4 mm). To evaluate connectivity changes, we selected six homologous regions associated with speech production using the Fan et al. (2016) parcellation. Regions corresponded to pars opercularis (x2), ventral precentral gyrus (x2), ventral postcentral gyrus and anterior insula. Functional connectivity was calculated using interregional time-series correlations: (i) within the left hemisphere, (ii) within the right hemisphere, and (iii) inter-hemispherically. Results. Behavioral data were scored for accuracy by two blinded independent coders. For both phases, all maintenance sessions (short-term and long-term) were more accurate than baseline (no overlapping data). Effect sizes (Cohen's d) were computed at maintenance and follow-up. Both treatment phases exhibited small effect sizes at maintenance (active: 2.06; sham: 2.90), with a larger long-term retention effect size for active (4.67) vs. sham (2.20), suggesting enhanced retention of speech motor learning with active stimulation. Between Session 1 (baseline) and Session 2 (following active tDCS), resting-state connectivity increased both within the left hemisphere (p<0.0001, t=5.34, 95% CI [0.101, 0.237]) and inter-hemispherically (p<0.0001, t=6.86, 95% CI [0.105, 0.194]). No other comparisons between time points yielded significant changes. Discussion. While further exploration and replication are needed, we interpret these behavioral and neurological findings as preliminary support for adjuvant effects of tDCS paired with traditional speech motor learning intervention.
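The connectivity measure described in the Methods above (interregional time-series correlations summarized within and between hemispheres) can be sketched as follows with simulated ROI time series.

```python
# Within- and between-hemisphere functional connectivity from ROI
# time series (6 regions: first 3 left, last 3 right). Data simulated;
# the time-point count assumes ~12 min at TR = 1.65 s.
import numpy as np

rng = np.random.default_rng(6)
ts = rng.normal(size=(436, 6))  # time points x ROIs

r = np.corrcoef(ts.T)           # 6 x 6 correlation matrix
left, right = slice(0, 3), slice(3, 6)

iu = np.triu_indices(3, k=1)    # unique within-hemisphere pairs
within_left = r[left, left][iu].mean()
within_right = r[right, right][iu].mean()
inter = r[left, right].mean()   # all left-right pairs
print(within_left, within_right, inter)
```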
Speech Perception

A61 Sustained oscillations in MEG and tACS demonstrate true neural entrainment in speech processing Benedikt Zoefel1, Sander van Bree1, Ediz Sohoglu1, Matthew H Davis1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge

A range of existing findings in the literature are consistent with oscillatory neural mechanisms contributing to speech perception and comprehension. Intelligible speech evokes more coherent brain responses at low frequencies (~4 Hz) than unintelligible speech (e.g., Peelle et al., 2013). Furthermore, fMRI responses to intelligible speech (Zoefel et al., 2018) and word report scores (Riecke et al., 2018) depend on the phase lag between transcranial alternating current stimulation (tACS) and the rhythm of heard speech. These (and other) studies suggest changes in the alignment of neural oscillations to speech rhythm, termed "neural entrainment"; this neural entrainment is often argued to underlie successful speech processing and comprehension (e.g., Giraud and Poeppel, 2012; Peelle and Davis, 2012). Nonetheless, putative oscillatory effects on speech processing can also be explained as due to neural activity evoked by speech; neural responses will appear to be rhythmic simply because they are evoked by a rhythmic stimulus (Zoefel et al., 2018b). To distinguish these views, we performed two experiments in which we tested whether rhythmic sensory (Experiment 1) or electrical (Experiment 2) stimulation produces sustained oscillatory responses which continue after the end of the stimulus. Effects of previous rhythmic stimulation which continue in the absence of stimulation provide stronger evidence for true involvement of oscillatory activity (cf. Kösem et al., 2018). In Experiment 1 (N = 20), we measured brain responses with magnetoencephalography (MEG) during and after presentation of rhythmic intelligible and unintelligible speech sequences, presented at one of two different rates (2 Hz and 3 Hz). We found that auditory MEG responses, specific to the stimulation rate, briefly outlast the offset of the speech stimulus. However, this sustained response did not depend on the intelligibility (16- vs. 1-channel vocoded speech) or the length (2 s vs. 3 s) of the preceding speech sequences. These findings suggest brief sustained oscillations produced by rhythmic speech, though these do not appear to be affected by cognitive factors (such as intelligibility and prediction) that we might expect to influence neural oscillations. In Experiment 2 (N = 19), many of the same participants were asked to report spoken words embedded in noise. These words were presented at six different phase delays, during or immediately after bilateral tACS at 3 Hz focussed on the STG. We found that word report accuracy fluctuates rhythmically for spoken words presented after the offset of tACS, at a frequency corresponding to the stimulation. This sustained, phasic modulation of performance was numerically larger than, but not significantly different from, that observed for tACS that continued at the time of speech presentation. These findings show that modulation of neural oscillations produces a perceptual effect that is sustained beyond the end of tACS.
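One standard way to quantify whether rate-specific phase alignment outlasts stimulation, in the spirit of this design, is inter-trial phase coherence (ITC) compared before and after stimulus offset; the sketch below uses simulated trials and illustrative parameters, not the authors' MEG pipeline.

```python
# Hedged sketch: ITC at the stimulation rate, during vs. after the stimulus.
# All signal parameters (rates, noise levels, decay) are invented for illustration.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs, rate = 250, 3.0                            # sampling rate (Hz), stimulus rate
t = np.arange(0, 4, 1 / fs)                    # 3 s stimulus + 1 s silence
rng = np.random.default_rng(2)
# 60 trials: a 3 Hz response during stimulation that decays after offset
signal = np.sin(2 * np.pi * rate * t) * np.where(t < 3, 1.0, np.exp(-(t - 3) * 5))
trials = signal + 0.8 * rng.standard_normal((60, t.size))

b, a = butter(4, [rate - 1, rate + 1], btype="band", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
itc = np.abs(np.exp(1j * phase).mean(axis=0))  # phase consistency across trials

print("ITC during stimulus:", itc[(t > 1) & (t < 3)].mean().round(2))
print("ITC after offset   :", itc[t > 3.2].mean().round(2))
```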
A62 Sex-related differences in sensorimotor processing of different speakers David Thornton1, David Jenson2, Tim Saltuklaroglu3, Ashley Harkrider3; 1Gallaudet University, 2Washington State University, 3University of Tennessee Health Science Center

Growing evidence suggests that speech perception tasks elicit sensorimotor activity and that this activity varies with context, sex, cognitive load, and cognitive ability. However, it is unknown whether the sex of the speaker and the demands of the task differentially affect males and females during speech perception tasks. This study investigated whether speaker sex and task demands (i.e., passive listening or active discrimination) influence sensorimotor and auditory cortical activity in males and females differently. Raw EEG data were collected from 27 males and 29 females during passive listening to, and discrimination of, /ba/ and /da/ syllable pairs spoken by a synthetic female or male speaker. Independent component analysis identified sensorimotor and auditory components characterized by alpha/beta and alpha peaks, respectively. Time-frequency decomposition revealed no significant differences between male and female groups in any testing condition. Both males and females displayed stronger mu activation in response to male speakers compared to female speakers before, during, and after stimulus presentation. Findings from this study suggest that speaker sex does influence at least anterior dorsal stream activity in a similar fashion for both males and females, but that task demands differentially alter anterior dorsal stream activity in each sex group. These findings may at least partially explain the high variability in findings across neuroimaging studies that include males and females in the same population.

A63 Decoding of second-language proficiency level from multivariate neural data Giovanni Di Liberto1, Jingping Nie2, Jeremy Yeaton1, Bahar Khalighinejad2, Shihab Shamma1,3, Nima Mesgarani2; 1Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, Ecole Normale Supérieure, PSL University, 2Department of Electrical Engineering, Columbia University, 3Institute for Systems Research, Electrical and Computer Engineering, University of Maryland

Speech comprehension requires our brain to process a variety of acoustic and linguistic properties. Recent research has provided detailed insights into how, where, and when some of those properties are processed. However, there remains considerable uncertainty about how these mechanisms differ in the case of second-language listening. In this work, we recorded electroencephalography (EEG) signals from native English (L1 group; N = 22) and native Chinese speakers (L2 group; N = 50) as they listened to English sentences. Behavioural measures of proficiency were derived by means of a standardised language test, which assigned each participant to a language level according to the Common European Framework of Reference for Languages (CEFR). Multivariate linear regression was used to quantify the coupling between EEG signals from each participant and the corresponding speech stimulus properties at the level of acoustics, phonemes, and semantics. This coupling allowed us to investigate the effect of language skills on the brain responses to speech at various processing levels. We found that cortical responses to speech differ between L1 and L2 listeners and change with English proficiency level within the L2 group. The similarity of high-level cortical responses (semantic level) between L2 and L1 participants increased with language proficiency, while significant but more complex effects emerged for responses to lower-level properties of speech. Crucially, classifiers fitted on both low- and high-level cortical responses could correctly identify the proficiency of an L2 participant (low vs. high) with over 80% accuracy. We contend that the present finding provides a novel perspective on the cortical processing of a second language and sets the basis for a novel procedure to decode language proficiency from multivariate brain data.
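The decoding step can be pictured as follows: per-participant stimulus-EEG coupling values at several processing levels feed a cross-validated classifier of proficiency. The sketch below simulates those features; the real pipeline derives them from the multivariate regression described above.

```python
# Hedged sketch of proficiency decoding from coupling features. Feature
# columns (acoustic, phoneme, semantic coupling) and effect sizes are invented.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 50                                          # L2 participants
y = np.repeat([0, 1], n // 2)                   # 0 = low, 1 = high proficiency
# semantic-level coupling made the most informative, mirroring the abstract
X = rng.standard_normal((n, 3)) + y[:, None] * np.array([0.2, 0.5, 1.0])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```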
A64 Cortical entrainment in the alpha rather than the theta frequency range predicts speech comprehension in noise Antje Strauß1,2, Vincent Aubanel2, Anne-Lise Giraud3, Jean-Luc Schwartz2; 1Zukunftskolleg, University of Konstanz, Germany, 2CNRS, GIPSA-Lab, Université Grenoble Alpes, 3University of Geneva, Campus Biotech

Cortical entrainment to the syllable rhythm has repeatedly been shown to correlate with speech comprehension; however, these demonstrations have usually relied on highly controlled, artificial speech stimuli, which is why the exact nature of the process remains unclear. In the current EEG study (N=16), we therefore used more naturalistic speech material, contrasting natural, isochronous, and randomly rhythmic sentences in noise that all had a mean imposed syllable frequency of 5 Hz. First, we show that in noise, natural sentences are better understood than isochronous or randomly rhythmic sentences. This finding suggests that any temporal manipulation of natural speech is detrimental to comprehension, even if temporal predictability as such is increased in isochronous sentences. Second, we find that overall Brain-Audio Coherence shows a 5 Hz peak only for isochronous sentences over central electrodes, suggesting a strong acoustic byproduct. Third, we find a positive correlation of single-trial Brain-Audio Coherence with single-trial speech intelligibility scores over right frontotemporal electrodes in the alpha frequency range (10.7 ± 0.7 Hz), irrespective of the rhythmic condition. Since previous models and experimental work rather suggested that syllable entrainment (as reflected by Brain-Audio Coherence at 5 Hz in the current data) plays a crucial role in speech comprehension, we conducted post-hoc analyses in the theta frequency range (5 ± 0.7 Hz). These analyses reveal, finally, that correlations with speech intelligibility scores are significant for both isochronous and natural but not for randomly rhythmic sentences. However, individual slopes of linear regression lines are steeper for isochronous than for natural sentences. These results suggest that speech rhythm-independent alpha oscillations are used for speech comprehension in noise. Additionally, tracking the syllabic rhythm to improve performance is only employed if temporal predictability is a reliable bottom-up acoustic cue, as in isochronous sentences. In over-learned natural speech rhythms, syllable entrainment presumably plays a secondary role. In randomly rhythmic sentences, decent speech intelligibility scores are achieved even without syllable entrainment. Altogether, these data help clarify the respective roles and limitations of temporal predictions vs. speech intelligibility in cortical entrainment. This work was supported by the European Research Council under the 7th European Community Program (FP7/2007–2013 Grant Agreement No. 339152 – "Speech Unit(e)s").
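Brain-audio coherence of the kind reported here is conventionally computed as magnitude-squared coherence between an EEG channel and the speech amplitude envelope, read out in the theta (5 Hz) and alpha (~10.7 Hz) ranges; a minimal sketch with simulated signals follows (band edges and signal construction are assumptions for illustration).

```python
# Hedged sketch: magnitude-squared coherence between EEG and speech envelope.
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
env = 1 + np.sin(2 * np.pi * 5 * t)                   # 5 Hz speech envelope
eeg = 0.3 * env + rng.standard_normal(t.size)         # EEG partially tracking it

f, coh = coherence(eeg, env, fs=fs, nperseg=fs * 4)   # 0.25 Hz resolution

def band_mean(lo, hi):
    return coh[(f >= lo) & (f <= hi)].mean()

print("coherence around 5 Hz (theta):", round(band_mean(4.3, 5.7), 2))
print("coherence around 10.7 Hz (alpha):", round(band_mean(10.0, 11.4), 2))
```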
A65 Animacy in relative clauses differentially affects older adults' speech processing Ira Kurthen1, Martin Meyer1,2,3, Matthias Schlesewsky4, Ina Bornkessel-Schlesewsky4; 1University of Zurich, 2Cognitive Psychology Unit (CPU), University of Klagenfurt, Austria, 3Tinnitus-Zentrum, Charité – Universitätsmedizin Berlin, 4Centre for Cognitive and Systems Neuroscience, University of South Australia, Adelaide

Age-related hearing loss (ARHL) is a condition affecting not only the inner ear, but also brain structure and function. ARHL impacts speech understanding at different levels, from transmission of the acoustic signal over its encoding to the extraction of meaning, but cognitive ability is consistently found to alleviate these negative effects. Most research focuses on the influence of ARHL and cognition on low-level features of the acoustic signal, e.g., background noise. However, ARHL has also been shown to influence the understanding of embedded clauses. This research has focused mostly on comparing object-relative clauses (ORCs) to subject-relative clauses (SRCs), but ORCs are not per se more complex than SRCs. Indeed, the so-called 'SRC advantage' is only found with an animate subject in the main clause. Therefore, this study investigates the influence of ARHL and cognition on the processing of ORCs with respect to animacy. A sample of older monolingual native English speakers (aged 60-76) was presented with acoustic sentences while their electroencephalogram was recorded. The sentences contained SRCs and ORCs with an animacy manipulation: 50% of each sentence type had an animate main clause subject and an inanimate relative clause subject, while the other 50% had an inanimate main clause subject and an animate relative clause subject. Participants rated the sentences on their acceptability. For the ORCs, event-related potentials (ERPs) time-locked to the onset of the relative clause subject were computed. Participants also performed cognitive tests tapping into working memory and inhibition, as well as a hearing threshold test. Significant differences in ERPs were investigated via cluster-based permutation tests. For analyses investigating the influence of participant-level factors (age, hearing loss, cognition), mean ERP voltage values for a baseline period and for the N400 time window (300-500 ms after onset) were extracted, and linear mixed-effects models with random intercepts for participants and items were computed. Preliminary analyses with the first 17 participants showed that acceptability ratings were significantly lower for ORCs with an animate main clause subject than for all other conditions. There was a significant difference at medial parieto-occipital electrodes between the ERPs in the N400 time window, with ERPs time-locked to the onset of the animate relative clause subject being more negative than ERPs time-locked to the onset of the inanimate relative clause subject. It was further investigated whether the N400 effect could be explained by participant-level factors. There were significant interaction effects between condition and age and between condition and working memory, with higher age and lower working memory being related to higher N400 amplitudes. The 'SRC advantage' was present only for relative clauses with an animate main clause subject. Also, ORCs with an animate main clause subject elicited an N400 compared to ORCs with an inanimate main clause subject. This provides converging evidence for ORCs with an animate main clause subject being harder to process. Interestingly, age as well as working memory predicted N400 amplitude, suggesting that animacy in object-relative clauses differentially affects older adults' speech processing, depending on their age and cognitive ability.
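A minimal, single-channel sketch of the cluster-based permutation logic used for the ERP comparison (in the style of Maris & Oostenveld, 2007) follows; the study used the multi-channel analogue, and the data here are simulated.

```python
# Hedged sketch: one-sample cluster-mass permutation test over time.
# Subject count, time grid, and the injected effect are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_sub, n_time = 17, 200
diff = rng.standard_normal((n_sub, n_time))
diff[:, 80:120] -= 0.8                         # injected "N400-like" negativity

def max_cluster_mass(x, thresh):
    tvals = stats.ttest_1samp(x, 0).statistic  # t per time point (axis 0 default)
    sig = np.abs(tvals) > thresh
    best = run = 0.0
    for s, tv in zip(sig, tvals):              # sum |t| within contiguous runs
        run = run + abs(tv) if s else 0.0
        best = max(best, run)
    return best

thresh = stats.t.ppf(0.975, n_sub - 1)
observed = max_cluster_mass(diff, thresh)
null = [max_cluster_mass(diff * rng.choice([-1, 1], size=(n_sub, 1)), thresh)
        for _ in range(1000)]                  # sign-flip permutations
p = np.mean(np.array(null) >= observed)
print(f"cluster mass = {observed:.1f}, p = {p:.3f}")
```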
A66 Spectro-temporal prediction errors support perception and perceptual learning of degraded speech: evidence from MEG encoding Matthew H Davis1, Ediz Sohoglu1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge

Speech perception can be improved in three ways: 1) providing higher-fidelity speech, i.e., improving signal quality; 2) providing supportive contextual cues, or prior knowledge; and 3) providing relevant prior exposure that leads to perceptual learning (Sohoglu and Davis, 2016). Predictive coding (PC) theories provide a common framework to explain the neural impact of these three changes to speech perception. According to PC accounts, neural representations of expected sounds are subtracted from bottom-up signals, such that only the unexpected parts are represented, i.e., 'prediction error' (Rao and Ballard, 1999). Previous multivariate fMRI data (Blank and Davis, 2016) show that when listeners' predictions are weak or absent, neural representations are enhanced for higher-fidelity speech sounds. However, when listeners make accurate predictions (e.g., after matching text), higher-fidelity speech leads to suppressed neural representations despite better perceptual outcomes. These observations are uniquely consistent with prediction error computations, and challenge alternative accounts (sharpening or interactive activation) in which all forms of perceptual improvement should enhance neural representation. In the current work, we applied forward encoding models (Crosse et al., 2016) to MEG data and tested the time-course of cross-over interactions between signal quality and prior knowledge or perceptual learning on neural representations. We analysed data from a previous MEG study (N=21, English speakers) which measured evoked responses to degraded spoken words (Sohoglu and Davis, 2016). Listeners heard noise-vocoded speech with varying signal quality (spectral channels), preceded by matching or mismatching written text (prior knowledge). Consistent with previous findings (Sohoglu et al., 2014), ratings of speech clarity were enhanced by greater spectral detail and matching text. Exposure to speech following matching text has been shown to promote perceptual learning (Hervais-Adelman et al., 2008), and we similarly observed improved recognition accuracy for vocoded words presented in isolation from before to after this exposure. We report three main MEG findings: (1) MEG responses to speech were best predicted using a spectro-temporal modulation representation (outperforming envelope, spectrogram, and phonetic feature representations). (2) We observed a cross-over interaction between clarity and prior knowledge consistent with prediction error representations: if matching text preceded speech, then greater spectral detail was associated with reduced forward encoding accuracy, whereas increased encoding accuracy was observed with greater spectral detail following mismatching text. This interaction emerged in MEG responses before 200 ms, consistent with early computations of prediction error proposed by PC theories. (3) Analyses of model weights (temporal response functions) show that perceptual learning reduced the sensitivity of MEG responses to spectro-temporal modulations in speech; however, this effect did not depend on the amount of spectral detail presented (i.e., we did not observe the same cross-over interaction for perceptual learning as for prior knowledge).
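For readers unfamiliar with forward (encoding) models of this kind, the sketch below shows the core logic: time-lagged ridge regression from a stimulus feature space to a neural signal, with held-out prediction accuracy used to compare feature spaces. Signals are simulated, and this is not the authors' implementation (which follows Crosse et al., 2016).

```python
# Hedged sketch: comparing feature spaces via encoding accuracy.
# Feature dimensions, lag range, and the simulated MEG signal are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def lagged(X, max_lag):
    """Stack time-lagged copies of features X (n_times, n_feat); np.roll
    wrap-around is a simplification acceptable for a toy example."""
    return np.hstack([np.roll(X, lag, axis=0) for lag in range(max_lag)])

rng = np.random.default_rng(6)
n = 5000
spec = rng.standard_normal((n, 8))            # 8-band "spectrogram" features
env = spec.mean(axis=1, keepdims=True)        # envelope = band average
meg = np.roll(spec @ rng.standard_normal(8), 10) + rng.standard_normal(n)

half = n // 2
for name, X in [("envelope", env), ("spectrogram", spec)]:
    Xl = lagged(X, 25)                        # lags 0..24 samples
    model = Ridge(alpha=1.0).fit(Xl[:half], meg[:half])
    r = np.corrcoef(model.predict(Xl[half:]), meg[half:])[0, 1]
    print(f"{name:11s} encoding accuracy r = {r:.2f}")
```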
Further analyses will compare the coding of the specific spectro-temporal modulations that are preserved (slow, broadband) or degraded (fast, narrow-band) by noise-vocoding. We predict that perceptual learning will down-weight prediction errors for spectro-temporal cues that are degraded by noise-vocoding and up-weight prediction errors for cues preserved by noise-vocoding. These findings contribute towards the detailed specification of a computational model of speech perception based on PC principles.

A67 Auditory Language Processing in the Visual Cortex of Blind Individuals Kiera O'Neil1, Aaron Newman1; 1Dalhousie University

Introduction: Early and sustained lack of visual input leads to an adaptive re-shaping of brain organization (Kupers & Ptito, 2014). For example, during speech (auditory) language processing, people who are blind show activation of the visual cortex, specifically the primary visual cortex (V1)—an area not typically activated in sighted people. This has been observed for tasks involving verb generation and verbal memory (Amedi, Raz, Pianka, Malach, & Zohary, 2003), semantic and phonological processing (Burton, Diamond, & McDermott, 2003), and sentence comprehension (Bedny, Pascual-Leone, Dodell-Feder, Fedorenko, & Saxe, 2011). It is not clear what functional role or roles V1 plays in language processing in people who are blind. The purpose of this study was to clarify the role of the visual cortex in auditory language processing in early blind adults by comparing activation across tasks involving different types of language processing (semantic/phonological) and across tasks that vary in their verbal working memory demands. Methods: We have collected fMRI data from two groups of participants, early blind adults (n = 6, mean age = 44.1) and sighted controls (n = 10, mean age = 44.6). Participants made decisions about the relationship between auditorily presented pairs of words in three categories – semantic (meaning), phonological (rhyme), and perceptual control (speaker). In addition, two n-back conditions were presented, in which participants indicated whether a word in a list matched the immediately preceding word (1-back) or the word two before (2-back). Results: Both blind participants and sighted controls showed activity in similar left frontal and temporal language regions for the semantic (middle frontal gyrus, Broca's area, inferior temporal gyrus, temporal fusiform gyrus), phonological (middle and inferior frontal gyrus, precentral gyrus, fusiform gyrus), and perceptual conditions (middle frontal gyrus, Broca's area, inferior temporal gyrus, fusiform gyrus). Similar activity in both groups was also observed for the 1-back (inferior temporal gyrus, superior frontal gyrus, supramarginal gyrus, superior parietal lobe) and 2-back conditions (frontal pole, superior frontal gyrus, supramarginal gyrus, angular gyrus, superior temporal gyrus). However, only the blind group reliably recruited the visual cortex (both primary and extrastriate regions)—and only for the semantic, phonological, and n-back conditions, not the perceptual control condition. No significant difference in visual cortex activity between the semantic and phonological conditions was found. To determine whether the visual cortex is primarily involved in verbal working memory, we compared activity in the 1-back condition to the 2-back condition (which increases the verbal working memory demands).
For the blind participants, no difference in visual cortex activity between the 1-back and 2-back conditions was observed. Conclusion: Our results to date indicate that the visual cortex is involved in both semantic and phonological processing in people who are blind, but not in auditory tasks that are perceptual rather than linguistic in nature. Additionally, while previous work has suggested the visual cortex may primarily be involved in verbal working memory in those who are blind, our results provide no evidence that verbal working memory load modulates visual cortex activity.

A68 The network architecture for natural language in the brain is not fixed Sarah Aliko1,2, Florin Gheorghiu1, Jeremy I Skipper1; 1Experimental Psychology, University College London, 2London Interdisciplinary Doctoral Program

Introduction. What is the network architecture of spoken language comprehension, and how is it integrated into whole-brain, or global, network dynamics? Most current models of the neurobiology of language assume modularity and domain specificity, with terms like 'the language network' or 'dual-streams' and a corresponding set of static brain regions. These models are based on data that 'assume the antecedent' in that they derive from unnatural language stimuli and tasks or 'language localisers' and rely on averaging and task-subtractions. We take a step back to observe the organisation of natural language in the context of the whole brain. Under these conditions, we propose that language is situated in a core-periphery global network architecture, involving a set of densely connected core regions and a set of sparsely but well-connected peripheral regions. We predict that the core is not comprised of traditional language regions, with those regions instead occupying a dynamic and spatially unfixed periphery. Methods. Thirty participants watched two full-length movies ('500 Days of Summer' and 'Citizenfour') during 1.5T functional magnetic resonance imaging. Movies were labelled for speech (among other things) using machine learning approaches. After preprocessing, dynamic functional connectivity was computed in running windows of one minute, advanced in one-second (TR) steps. Voxel-wise graph theoretical measures were sorted into windows of high (M=11.68) and low (M=2.19) numbers of words. Spatial and temporal independent component analysis (ICA) were used to locate stable states. Results were also correlated with 'gold-standard' meta-analyses from the neurosynth database. Results. The global network structure had a core-periphery architecture, with primary motor/auditory/visual core regions for high-speech windows, and primary visual and prefrontal regions for low-speech windows. Consistent with a core-periphery arrangement, networks changed dynamically over time, with hundreds of possible temporal states in the brain but no stable language networks. Confirming this, only spatial ICA resulted in language-related labels from meta-analyses, and this at low correlation values (max r < 0.35). Furthermore, we found that there were 2-9 network communities during high-word windows originating from the 'language' meta-analysis, and 6-16 in the rest of the brain. On average, 99% of 'language' communities were part of a community outside of those regions, activating 57% of the rest of the brain in varying spatial configurations.
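The dynamic-connectivity step can be sketched as follows: one-minute windows advanced one TR at a time, a connectivity matrix per window, and a simple graph measure per node per window. Node counts, thresholds, and data below are illustrative assumptions, not the study's voxel-wise analysis.

```python
# Hedged sketch: sliding-window connectivity and per-node degree over time.
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_trs, win = 40, 600, 60                # TR = 1 s, one-minute windows
ts = rng.standard_normal((n_nodes, n_trs))

degrees = []
for start in range(0, n_trs - win):              # advance one TR per step
    r = np.corrcoef(ts[:, start:start + win])
    np.fill_diagonal(r, 0)
    degrees.append((np.abs(r) > 0.3).sum(axis=1))  # thresholded degree per node
degrees = np.array(degrees)                      # (n_windows, n_nodes)

# Nodes whose degree stays high across windows resemble a "core"; nodes with
# volatile, lower degree resemble a dynamically reconfiguring "periphery".
print("mean degree, first five nodes:", degrees.mean(axis=0).round(1)[:5])
```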
Discussion. The neurobiology of natural language comprehension is situated in a global core-periphery network architecture whose core does not involve traditional language regions. The periphery involves many dynamically-reconfiguring communities that only partially overlap with, and extend well beyond, those regions. Indeed, these communities encompass most of the rest of the brain in a pattern that is not spatio-temporally fixed. We suggest this core-periphery arrangement is the kind of architecture required to support a complex and dynamic behaviour like language, composed of many (putative) subcomponents (e.g., phonology, semantics, etc.), each of which is arguably contextually determined. As these are not the usual conditions under which the neurobiology of language has previously been examined, it is perhaps unsurprising that the idea of fixed language regions emerged.

Perception: Auditory

A69 Human speech cortex encodes amplitude envelope as transient, phase-locked responses to discrete temporal landmarks Katsuaki Kojima1, Yulia Oganian1, Chang Cai1, Anne Findlay1, Edward Chang1, Srikantan Nagarajan1; 1University of California San Francisco

The slow (1-10 Hz) temporal amplitude envelope of speech reflects acoustically and perceptually relevant information about speech temporal structure and content, including syllabic structure. It is well-established that the phase of neural activity in the delta-theta bands (1-10 Hz) is aligned to the phase of the speech amplitude envelope during listening. This has been taken as evidence for continuous entrainment of endogenous low-frequency oscillations to the speech envelope, possibly driven by phase-reset at some landmark event in the envelope, such as local peaks in the envelope (peakEnv) or times of rapid increases in amplitude (peakRate). We recently showed using direct electrocorticography (ECoG) that local neural populations in speech cortical areas selectively encode peakRate events in continuous speech. This finding suggests that peakRate is the primary envelope cue represented in speech cortex. However, it leaves open whether the phase-locking of low-frequency oscillatory activity observed with M/EEG reflects these transient responses. Alternatively, it might reflect the phase-reset of endogenous oscillatory activity by either peakRate or peakEnv events. We predicted that if phase-locking reflects transient responses, it would 1) rapidly diminish between consecutive acoustic edge events and 2) cover a frequency range reflective of the temporal structure of the speech stimulus envelope. In contrast, if phase-alignment reflects phase-reset of ongoing oscillatory activity, it should continue for several cycles between consecutive acoustic edges and its frequency range should be independent of stimulus envelope dynamics. To contrast these predictions, we recorded neural activity using MEG while participants (n = 6) listened to regularly-paced and 1/3-slowed continuous speech. We analyzed the phase of neural activity in the delta-theta band (1-10 Hz) over bilateral temporal regions, aligned to acoustic edges and peaks in the speech envelope. Phase-locking was greater when neural activity was aligned to peakRate events than when it was aligned to peakEnv events, replicating our intracranial results. Crucially, phase-locking in lower frequency bands increased for slowed speech compared to regular speech. Finally, phase-locking peaked after peakRate events and diminished within a single cycle.
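A minimal sketch of how peakEnv and peakRate landmarks are typically extracted from an acoustic signal follows: a low-passed amplitude envelope, its positive first derivative, and the local maxima of each. Parameter values and the toy signal are illustrative, not the authors'.

```python
# Hedged sketch: peakEnv and peakRate event detection from a toy signal.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

fs = 16000
t = np.arange(0, 2, 1 / fs)
audio = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 4 * t))  # toy "speech"

env = np.abs(hilbert(audio))                     # amplitude envelope
b, a = butter(2, 10, btype="low", fs=fs)
env = filtfilt(b, a, env)                        # keep slow (1-10 Hz) dynamics

rate = np.clip(np.gradient(env, 1 / fs), 0, None)  # positive rate of change
peak_env, _ = find_peaks(env)                    # peakEnv: local envelope maxima
peak_rate, _ = find_peaks(rate)                  # peakRate: acoustic-edge events
print(f"{peak_env.size} peakEnv and {peak_rate.size} peakRate events in 2 s")
```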
This pattern of phase-locking is suggestive of an underlying transient response, rather than continuous oscillatory entrainment. These data confirm and extend our previous intracranial findings to low-frequency activity and provide a link between results from intracranial electrophysiology and non-invasive MEG recordings. Taken together, our results demonstrate that the speech envelope induces a series of evoked responses at times of rapid increases in the speech amplitude envelope, rather than continuous alignment of intrinsic oscillatory activity.

A70 Modeling the EEG from the audio signal: a methodological investigation Katerina Danae Kandylaki1, Athanasios Lykartsis2, Sonja A. Kotz1; 1Maastricht University, 2Technische Universität Berlin

In the study of the neurobiology of language, researchers use artificial and manipulated stimuli, or they quantify existing features in texts, such as word frequency. Even if one models the continuous EEG response based on word features, modelling is limited to one value per word. In the current project, we investigated how the auditory signal can provide a higher level of granularity for modelling EEG responses in a continuous fashion. This level of granularity is needed to model rapid neurocognitive processes such as beat perception in speech. Inspired by computational algorithms for text processing, we employed audio content analysis algorithms, previously used to model speech rhythm in automatic language identification (Lykartsis & Weinzierl, 2015), on our speech signals. As this method performs significantly better on noise-free speech, we recorded our stimuli based on existing poems and stories. Additionally, as this method is sensitive to voice characteristics, we recorded stimuli with four different voices (two male, two female). This way, we created a set of 33 files: 19 files with regular rhythm, based on poems, and 14 files with irregular rhythm, based on prose. Regular rhythm in speech is quantified as equal distance between strong syllables, as for example in isochronous poetry, whereas irregular speech rhythm has varied distances between stressed syllables. First, we calculated the following six novelty functions: Spectral Flux, Spectral Flatness, RMS Energy, Pitch (F0), Spectral Centroid, and a novel compound feature denoting vowel/non-vowel excerpts. Then, we extracted beat histograms and features from them as in Lykartsis and Weinzierl (2015), in segments of 10 s (with 50% overlap) and in full files. Next, we performed two classifications using all the features with two classifiers (a 1-NN and a Support Vector Machine, SVM) as a proof of concept. First, we classified the contrast male vs. female and found that the 10 s segments performed better at this classification, with up to 84.8% accuracy (SVM), whereas both classifiers performed at chance level on the full files. The reverse effect was found for the contrast poem vs. story, with the 1-NN classifier performing at 93.9% accuracy on the full files and both classifiers performing at chance level on the 10 s segments. The next step is to use these novelty functions, or combinations thereof, as predictors for the EEG signal. Importantly, we focused on neurocognitively relevant features of the auditory signal, specifically the ones related to beat perception (Kotz, Ravignani, & Fitch, 2018). Phonological theory suggests that strong syllables in language are realized by alterations in pitch, loudness, and duration.
We therefore calculated a theoretical beat (theobeat) novelty function to be our factor of interest, by combining F0, RMS Energy, and Spectral Flux in equal proportions (33% each). As a control factor, we will use the Spectral Centroid novelty function, which, as it only denotes the spectral center of weight of the signal, is not expected to encode beat- or rhythm-related information. This exploratory analysis could provide promising results and open new ways of analyzing speech for modeling neurocognitive data.
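A minimal sketch of the equal-weights combination described above, assuming the frame-level novelty functions have already been computed (here they are simulated placeholders, not outputs of the authors' audio-analysis toolchain):

```python
# Hedged sketch: "theobeat" as an equal-weights sum of normalized novelty
# functions, with Spectral Centroid novelty kept aside as the control factor.
import numpy as np

def norm01(x):
    """Min-max normalize so each novelty function contributes on equal footing."""
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(8)
n_frames = 1000
f0_nov = rng.random(n_frames)          # pitch (F0) novelty, simulated
rms_nov = rng.random(n_frames)         # RMS energy novelty, simulated
flux_nov = rng.random(n_frames)        # spectral flux novelty, simulated
centroid_nov = rng.random(n_frames)    # control: spectral centroid novelty

theobeat = (norm01(f0_nov) + norm01(rms_nov) + norm01(flux_nov)) / 3.0
control = norm01(centroid_nov)
print(theobeat[:5].round(2), control[:5].round(2))
```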
A71 The effect of melody training on brain responses in Cantonese-speaking congenital amusics: Evidence from event-related potentials Jing Shao1,2, Yubin Zhang1, Caicai Zhang1,2; 1The Hong Kong Polytechnic University, 2Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences

Congenital amusia (amusia hereafter) is an inborn pitch disorder. Individuals with amusia show deficits in pitch processing in both music and language domains. Regarding whether intervention could remedy the behavioural symptoms of amusia, some findings suggested a null or limited effect of training through daily music listening or vocal training, while other findings suggested that amusics improved their pitch processing abilities behaviourally after targeted and explicit training. However, at the neural level, whether the amusic brain's responses to melodies can be reshaped remains unclear. To better understand the neuroplasticity of the amusic brain, we examined event-related potentials (ERPs) in 12 Cantonese-speaking amusics before and after melody training, and also tested 12 matched controls. During the six training sessions, participants heard identical and different melody pairs. In the different pairs, a single tone was detuned from its counterpart by one of five magnitudes (2, 1.5, 1, 0.5, and 0.25 semitones). Each melody was simultaneously accompanied by a visual contour, which was congruent with the pitch contour. Prior to and after the training, ERPs were recorded during a deviant pitch sequence detection task presented in a piano timbre. There were two kinds of sequences: "standard" and "deviant". The "standard" sequences were composed of five standard tones. In the "deviant" sequences, the first three notes were standard tones, followed by a deviant tone and a standard tone. The standard tone was played at a pitch level of C6. The rare deviant tones were either lower or higher in pitch than the standard tone by 25, 50, or 100 cents. Based on previous findings that the amusic brain is impaired in consciously processing small pitch differences, we predicted that in the pre-test, the P3 amplitude elicited in the amusic brain should be reduced compared with controls, especially in the small pitch deviation conditions (25/50 cents). If melody discrimination training could significantly influence the amusics' brain responses to musical pitch, the P3 activities elicited by the deviant tone should be stronger in the post-test than in the pre-test. The behavioural results demonstrated that after melody discrimination training, the amusics improved significantly in the detection of deviants in the 25 cents condition, suggesting an effect of training at the behavioural level. The ERP results showed that in the pre-test, the P3 amplitude elicited by the amusic group was significantly smaller than that of the control group. In the post-test, however, the group difference between amusics and controls was diminished, implying that after training, the neural activities in the amusic brain became similar to controls. Our results are the first to report that with targeted melody discrimination training, the neural activities of amusics could be significantly improved, which suggests that the amusic brain has the plasticity for learning in the music domain.

A72 How semantic association influences word processing: an MEG study Alexandra Razorenova1,2, Boris Chernyshev1,2,3, Anna Butorina1, Anastasia Nikolaeva1, Andrey Prokofyev1, Nikita Tyulenev1, Tatiana Stroganova1; 1Moscow State University of Psychology and Education, 2National Research University Higher School of Economics, Moscow, 3Lomonosov Moscow State University

What makes a word a word? The word is commonly believed to be a structural unit of any human language, and word learning underlies language acquisition. However, it is rather challenging for neuroscience to describe how lexicality is established in the human brain, and how phonological word representations relate to lexicality. Any word may be considered within two domains: phonological and semantic. Correspondingly, there are two groups of studies dedicated to novel word learning: some of them deal with phonological familiarization with pseudowords, while other studies investigate mechanisms of word meaning acquisition via passive associative learning that adds semantics to pseudowords. Still, there is no consistent understanding of whether semantic association is strictly required to integrate new pseudowords into the lexical domain, or whether the brain can treat pseudowords as 'empty' lexical entries. The current study aimed to clarify the influence of semantics on pseudoword processing. We used a novel lexical trial-and-error learning paradigm to establish associations between pseudowords and actions. We inquired when and where processes associated with semantics take place in the human brain. The MEG neuroimaging technique was used, as it provides high spatial and temporal resolution. Participants were presented with eight pseudowords; during learning blocks, four of them were assigned to specific body part movements through commencing actions with one of the participants' left or right extremities and receiving feedback. The other pseudowords did not require actions and were used as controls. The magnetoencephalogram was recorded during passive listening to the pseudowords before and after the learning blocks. The cortical sources of the magnetic evoked responses were reconstructed using distributed source modeling. All 24 participants reached successful performance on the task. The phase-locked neural response selectively increased for pseudowords that acquired an association, compared with control pseudowords. Using a data-driven approach, we localized the significant differential activation to the left hemisphere, including the insula, Broca's complex, the intraparietal sulcus, and anterior STS-MTG. Differential activation started 150 ms after the uniqueness point. These areas can be viewed as both low-tier (STS-MTG) and higher-tier (intraparietal sulcus, temporal pole) structures involved in speech processing. We report rather widespread brain activity induced by the processing of newly learnt words.
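The abstract does not name its source-modeling software, but one common distributed-source route (dSPM in MNE-Python) looks roughly as follows; the file paths are hypothetical placeholders for one subject's preprocessed data, so treat this purely as an illustration rather than the authors' pipeline.

```python
# Hedged sketch: distributed source estimation with MNE-Python (dSPM).
# "subj-*.fif" are hypothetical placeholder files, not real paths.
import mne

evoked = mne.read_evokeds("subj-ave.fif", condition=0)   # evoked response
cov = mne.read_cov("subj-cov.fif")                       # noise covariance
fwd = mne.read_forward_solution("subj-fwd.fif")          # forward model

inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, cov,
                                             loose=0.2, depth=0.8)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0,
                                     method="dSPM")      # source estimate
print(stc.data.shape)                                    # (n_sources, n_times)
```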
The relatively large size of the areas involved may be explained by the fact that the experiment lasted less than two hours and no consolidation could have occurred within such a short time. The results show that acquisition of word semantics induced enhanced activation within brain areas involved in phonological word processing. Notably, we observed a widespread involvement of articulatory zones, i.e., Broca's complex, which accords with Liberman's motor theory of speech perception. It is likely that words that associatively acquired meaning gained high priority to be recognized and remembered. Thus, brain circuits responsible for phonological decoding became attuned to new sets of phonemes. Importantly, this mechanism, known as receptive field tuning, was revealed in studies dedicated to operant conditioning. We observed essentially similar effects within classical speech areas; thus, a mechanism analogous to receptive field tuning may be a principal mechanism of word learning. Supported by RFBR grant 17-29-02168.

Perception: Speech Perception and Audiovisual Integration

A74 Grammatical Expectations of American English Dialects Rachel Elizabeth Weissler1, Jonathan Brennan1; 1University of Michigan Computational Neurolinguistics Lab

We test how listeners alter their grammatical expectations when listening to different American English varieties in two EEG experiments. Ample neurolinguistic evidence from event-related potentials (ERPs) shows that people invoke prediction during sentence processing (Kutas et al., 2014), and that these predictions are conditioned by the identity of the speaker. For example, Van Berkum et al. (2008) observed an N400 response for sentences like "I like a glass of wine before bed" when uttered in a child's voice. This supported the hypothesis that listeners rapidly take in perceived speaker information when processing sentences. The present studies aim to distinguish whether speakers of a mainstream variety have specific knowledge of multiple grammars, or whether they lump all other stigmatized dialects into non-specific "other" categories with relaxed grammatical expectations. The grammatical phenomenon of "auxiliary dropping" is a feature of African American Language (AAL) but not Mainstream U.S. English (MUSE) (e.g., "My brother, {he is/he's/he} working today"). Listeners heard sentences with auxiliaries present and absent in MUSE and AAL. They also heard matched sentences that are ungrammatical in all English varieties (e.g., "My brother, he'll working today"). We predicted that if listeners form specific expectations, the presence of the ungrammatical "'ll" feature should elicit a P600 response when hearing both MUSE and AAL, whereas auxiliary dropping should elicit a P600 in MUSE, but not in AAL. Alternatively, if listeners group all non-standard dialects into an "other" category with relaxed grammatical expectations, neither auxiliary absence nor the ungrammatical condition should show a P600 for AAL speech. Experiment 1 used stimuli from one bidialectal Midwestern black speaker of both MUSE and AAL, yielding a within-subject 2 (language varieties) by 3 (grammatical features) design. EEG was recorded using 61 active electrodes, and ERP analysis targeted the P600. Results for AAL show a P600 response only to the "'ll" condition, and no P600 for the auxiliary present or absent conditions.
This supports the dialect-specific hypothesis: listeners might be expecting speakers of AAL to use either of these constructions. Surprisingly, no P600s were found for the MUSE dialect; this may reflect listeners recognizing that the stimuli all came from a single speaker, or possibly an unwillingness to grant dialect fluidity to a speaker. Experiment 2 sought to clarify this issue by recording the MUSE stimuli from a Caucasian American male with a similar demographic background. MUSE results show a P600 for the auxiliary absent condition and the ungrammatical "'ll" stimuli, but no P600 for those same conditions in AAL. Through analysis of American English dialects, this work contributes to further understanding of how social information interfaces with online processing, and of the expectations that may be formed depending on the perceived identity of a voice. The impact of this work is paramount, as perceptions of stigmatized language varieties can lead to dialect discrimination that negatively affects the way those speakers are treated (Rickford, 1999).

A75 Tolerance to audiovisual asynchronies for early cross-modal interactions during speech processing: An electrophysiological study Alexandra Jesse1, Elina Kaplan1; 1University of Massachusetts Amherst

To recognize speech during face-to-face conversations, listeners evaluate and combine speech information obtained from hearing and seeing a speaker talk. Audiovisual speech is generally recognized more reliably than auditory-only speech. The audiovisual benefit arises in part because listeners integrate perceptual information arriving within a certain time window from the two modalities into a unitary percept. Cross-modal interactions can already occur early during auditory perception. Prior studies measuring event-related potentials (ERPs) have shown cross-modal interactions in early auditory processing, in that the first negative peak (N1) typically found around 100 ms after an acoustic onset is smaller when auditory speech is accompanied by visual speech than when it is not. However, information from the two modalities does not necessarily have to arrive at the same time to be integrated or to be perceived as synchronous. Listeners tolerate a certain degree of physical audiovisual asynchrony in their recognition of speech and in their judging of synchrony. Additionally, they allow for a larger temporal lead of visual information than of auditory information. The current study tested the time window within which visual and auditory information must occur to produce cross-modal interactions in the early auditory processing of speech. The prediction was for neural cross-modal interactions at the N1 to become less likely as auditory and visual inputs were more separated in time. While their ERPs were measured, young adults heard and saw a female speaker saying the syllable /pa/. This audiovisual speech stimulus was either presented as originally recorded (i.e., the synchronous condition) or with systematically induced stimulus-onset asynchronies (SOAs). For two of the selected SOAs (-300 ms auditory lead; +500 ms auditory lag), the auditory and visual events have commonly been reported as being out of sync, and for the others as being in sync (-67 ms auditory lead; +233 ms auditory lag). After each presentation, participants categorized the audiovisual stimulus by button press as being presented in sync or out of sync.
On additional trials with auditory-only (A) or visual-only (V) speech, participants simply pressed any button after the presentation. Auditory-only trials showed a randomly pixelated square spectrally matched to the video of the speaker. The N1 mean amplitude was measured as the mean activity between 90-140 ms after acoustic onset. As expected, overall, the N1 amplitude was reduced for audiovisual presentations (AV-V) compared to the auditory-only presentation (A), indicating multisensory interactions. Compared to the synchronous condition, the N1 amplitude was significantly larger when the sound was presented 300 ms before its natural occurrence. All other asynchronous SOA conditions (-67, +233, and +500 ms) had cross-modal interactions similar in size to those found in the synchronous condition. Visual and auditory speech information therefore does not have to be physically synchronous for cross-modal interactions to occur. Perceptual tolerance to asynchronies is already observable in early multisensory interactions during the processing of audiovisual speech.
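A minimal sketch of the N1 measurement and the additive cross-modal test implied above, mean amplitude 90-140 ms after acoustic onset with (AV - V) compared against A, using simulated single-channel epochs (all epoch parameters are assumptions):

```python
# Hedged sketch: N1 mean-amplitude readout and the AV - V vs. A comparison.
import numpy as np

fs, t0 = 500, 0.2                          # sampling rate; epoch starts -200 ms
win = slice(int((t0 + 0.090) * fs), int((t0 + 0.140) * fs))  # 90-140 ms

def mean_n1(epochs):
    """epochs: (n_trials, n_samples) at one fronto-central channel."""
    return epochs[:, win].mean()

rng = np.random.default_rng(9)
A = rng.standard_normal((80, 500)) - 1.0   # auditory-only, with an "N1"
V = rng.standard_normal((80, 500))         # visual-only (no auditory N1)
AV = rng.standard_normal((80, 500)) - 0.6  # attenuated N1 under audiovisual

interaction = (mean_n1(AV) - mean_n1(V)) - mean_n1(A)
print(f"(AV - V) - A difference: {interaction:.2f} (> 0 implies a reduced N1)")
```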
A76 Noninvasive neurostimulation reveals a causal role for left superior temporal lobe in speech adaptation Ja Young Choi1,2, Tyler Perrachione2; 1Harvard University, 2Boston University

Speech perception requires listeners to map acoustic speech signals onto abstract phonemic representations despite the variability in acoustic-to-phonemic mapping across talkers. To facilitate this mapping, listeners use preceding speech to adapt to talker-specific phonetic idiosyncrasies. Previous studies have reported increased neural activity in the superior temporal lobe in the presence of talker variability. However, it is unknown how this region is causally involved in processing variability in speech. In this study, we used high-definition transcranial direct current stimulation (HD-tDCS) to investigate the causal involvement of the left superior temporal lobe in talker adaptation. Participants (N = 60 right-handed native English-speaking adults) performed a speech processing task in which they identified the target word they heard ("boot" / "boat") as quickly and as accurately as possible. The task factorially manipulated talker variability (single vs. mixed talkers) and speech context (isolated words vs. connected speech). Throughout the experiment (~13 min), participants received anodal (n = 20), cathodal (n = 20), or sham (n = 20) stimulation in a between-subjects design. For the active stimulation groups (anodal and cathodal), current intensity was linearly ramped up to 2 mA over 30 seconds before the experiment started, and the intensity was ramped back down to 0 over 30 seconds upon completing all trials. For the sham stimulation group, current intensity was linearly ramped up to 2 mA over 30 seconds before the experiment started, immediately followed by a linear ramp-down to 0 over 30 seconds, which generated the initial dermal sensation of active stimulation but did not actually stimulate the neurons underneath the scalp during the task. Stimulation sites were chosen so that maximum current intensity was localized to the left superior temporal lobe. Our results consistently showed that listeners were slower to identify target words in the mixed-talker condition than in the single-talker condition overall (p ≪ 0.0001), reflecting the additional processing cost associated with talker variability. We also observed a significant talker variability × speech context interaction (p ≪ 0.0001), demonstrating that the magnitude of the additional processing cost incurred by talker variability was reduced by connected speech – i.e., speech preceding a target word helped listeners resolve cross-talker variability. While overall response time did not differ across the three stimulation groups (p = 0.36), the effect of connected speech was significantly attenuated under both anodal and cathodal stimulation compared to sham (anodal vs. sham p < 0.01; cathodal vs. sham p < 0.01), showing that active stimulation disrupts the brain's ability to use connected speech to rapidly adapt to talkers. However, active stimulation did not have a modulatory effect on talker adaptation in the isolated-word condition (anodal vs. sham p = 0.45; cathodal vs. sham p = 0.90), showing a dissociation in the effect of tDCS on talker adaptation at two different timescales. This set of results suggests that stimulation of the left superior temporal lobe disrupts the brain's ability to use speech context to rapidly adapt to a talker, revealing this region's causal role in talker adaptation.

A77 Assessing continuous speech processing in adults and normal hearing children Laurel Lawyer1, Sharon Coffey-Corina2, Andrew Kessler2, Lee Miller2, David Corina2; 1University of Essex, 2Center for Mind and Brain, University of California, Davis

As listeners, we are adept at 'tuning out' irrelevant auditory information to focus only on the signals of interest. Despite this, we know that some elements of speech perception are automatic regardless of attention, for instance sensitivity to phonetically relevant contrasts (cf. Shtyrov, 2010). In this study, we use task-irrelevant continuous speech in an EEG paradigm to assess lexical and syntactic processing in both adults (N=21, mean age=19) and children (N=21, ages 1;9-8;8; mean=5;2). Data were analysed using mixed-effects models to predict mean EEG amplitudes following the onset of open-class words in sentence-medial positions. Two analysis windows were chosen for analysis of lexical and syntactic effects: 300-500 ms (N400) and 500-700 ms (P600).
Lexical frequency information for the adult group was gathered from CELEX (Baayen, Piepenbrock, & Gulikers, 1995) and for the children from a corpus of child-directed speech in CHILDES (MacWhinney, 2000; Li & Shirai, 2000). Syntactic surprisal and entropy were quantified for each word in the stimulus set using an incremental top-down parser (Roark, 2001). In adults, results show a robust N400 response to low-frequency words in frontal sites (p < .01), which attenuates after sentence stimuli are repeated four or more times. In the later window, two separate P600 effects were observed: an anterior effect showing a larger response for items that reduce rather than increase entropy (p < .001), and a posterior effect showing larger responses for syntactically surprising items (p < .001). These results expand on those reported by Kaan and Swaab (2003) and Frank (2015), suggesting frontal P600s may be related to ambiguity resolution, whereas the more canonical posterior P600 is found in cases of syntactic integration difficulty (Friederici, 2002; Osterhout & Holcomb, 1992). In children, we observe a similar frontal N400 (p < .001), which gets larger with age (p < .001). In the P600 window, however, we find qualitatively different responses than were observed for adults. First, a P600 is found for syntactically surprising items in anterior rather than posterior sites (p < .001), which diminishes with age (p < .01). For entropy reduction, we find a frontal negativity (p < .01) rather than the expected positivity. This response also interacts with age (p < .001), such that the model predicts a positive rather than a negative response in cases of entropy reduction only for our oldest subjects. Overheard speech has been suggested as a useful source of information for the acquisition of single words (Akhtar, 2005) as well as higher-level functions such as narrative development (Blum-Kulka & Snow, 2002). Our results suggest that children do compulsively access lexical information, indexed by the N400, when presented with task-irrelevant speech. However, our data show that markers of automatic speech processing tied to information structure, such as syntactic category prediction and entropy, are only beginning to emerge in our oldest group of children.
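A toy sketch of the word-level predictors used here, surprisal and entropy reduction from an incremental language model, follows. A bigram model stands in for the Roark (2001) top-down parser actually used, and the probabilities are invented for illustration.

```python
# Hedged sketch: per-word surprisal and entropy reduction from a toy bigram
# model (the study derived these from an incremental syntactic parser).
import numpy as np

# toy conditional next-word distributions P(w_next | w_prev), invented values
lm = {"the": {"dog": 0.5, "cat": 0.4, "ran": 0.1},
      "dog": {"ran": 0.7, "the": 0.3},
      "cat": {"ran": 0.6, "the": 0.4}}

def surprisal(prev, word):
    return -np.log2(lm[prev][word])            # bits of unexpectedness

def entropy(context):
    p = np.array(list(lm[context].values()))
    return -(p * np.log2(p)).sum()             # uncertainty over continuations

sent = ["the", "dog", "ran"]
for prev, word in zip(sent, sent[1:]):
    # unknown contexts treated as zero entropy, a toy simplification
    h_after = entropy(word) if word in lm else 0.0
    print(f"{word}: surprisal = {surprisal(prev, word):.2f} bits, "
          f"entropy reduction = {entropy(prev) - h_after:.2f} bits")
```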
Phonology and Phonological Working Memory

A79 Grey and white matter correlates of verbal and non-verbal working memory Dimitrios Kasselimis1,2, Georgia Angelopoulou1, Panagiotis Simos2, Dimitrios Tsolakopoulos1, Georgios Papageorgiou1, Michel Rijntjes3, Marco Reisert4, Georgios Velonakis5, Efstratios Karavasilis5, Nikolaos Kelekis5, Cornelius Weiller6, Michael Petrides7, Constantin Potagas1; 1Neuropsychology and Language Disorders Unit, 1st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens, 2Division of Psychiatry and Behavioral Sciences, School of Medicine, University of Crete, 3Department of Neurology and Neurophysiology, University Hospital Freiburg, 4Medical Physics, Department of Diagnostic Radiology, Faculty of Medicine, University Freiburg, 5Radiology and Medical Imaging Research Unit, University of Athens, 6Department of Neurology, University Medical Center, University of Freiburg, 7Montreal Neurological Institute, Department of Neurology and Neurosurgery and Department of Psychology, McGill University

Introduction. Several studies indicate an association between working memory (WM) and language functions (e.g., Potagas, Kasselimis, & Evdokimidis, 2011). However, the hypothesized common neurological substrate has not yet been established, either through imaging data derived from healthy subjects or based on evidence from aphasia. The purpose of the present study is to investigate possible associations between WM performance and grey and white matter indices of the left perisylvian region, which is traditionally considered to support language functions. Methods. 60 healthy participants (33 women), 19-65 years old, with 6 to 24 years of formal schooling, were assessed with the Digit Span and the Corsi Block-Tapping Task. 3D T1-weighted and 30-directional DTI protocols were acquired for all participants on a 3T Philips Achieva-Tx MR scanner (Philips, Best, The Netherlands) equipped with an eight-channel head coil. Whole-brain cortical reconstruction of T1 MR images was first accomplished using the automated pipeline of FreeSurfer 6.0.0 (http://www.surfer.nmr.mgh.harvard.edu/) (Fischl & Dale, 2000). We ran separate whole-brain general linear models for WM scores and three brain metrics (surface area, cortical thickness, grey matter volume). Monte Carlo simulations were used to correct all vertex-wise results at an individual vertex level of p < 0.05 (Hagler, Saygin, & Sereno, 2006). White matter fibers were reconstructed using the global tractography approach implemented in DTI&Fibertools (Reisert et al., 2011). Results. Whole-brain general linear model analysis revealed significant associations between verbal WM scores and thickness of the pars opercularis (p = 0.0136), as well as surface area of the angular gyrus and the superior temporal sulcus (p = 0.0122). Corsi performance was associated with the insula's surface area (p = 0.049) and with the grey matter volume of the insula and temporal pole (p = 0.0001). Discussion. Even though there are several functional brain imaging studies focusing on associations between specific brain regions and performance on WM tasks (Wager & Smith, 2003), no study thus far has investigated structural correlates of WM in healthy participants. Our findings show that anatomical indices such as cortical thickness and surface area may play a role in cognition, and more specifically in the temporary storage and manipulation of verbal and visuospatial information. Interestingly, clusters significantly associated with WM performance included inferior frontal, inferior parietal, and superior temporal regions of the left hemisphere, which have been shown to be involved in the perisylvian language network (Petrides, 2013). Overall, our findings stress the importance of brain structural variables in cognitive performance and further support the notion of a relationship between language functions and WM at an anatomical level. The involvement of white matter tracts is also discussed.

A80 Recognizing words by their neighbors: Neural decoding evidence for overlapping lexical representations Seppo Ahlfors1, Adriana Schoenhaut1, David Gow1; 1Massachusetts General Hospital

Introduction: While a growing body of research has examined the neural representation of phonetic features, very little work has examined the neural representation of phonological wordforms. One fundamental question is whether lexical representations are compositional or holistic. If lexical representations are compositional (e.g., made up of subunits such as segments, syllables, or N-phones), then words with overlapping phonological patterns may have overlapping neural representations. However, if lexical representation is holistic, phonologically similar words may not have similar neural representations. To examine these predictions, we trained classifiers to discriminate between sets of words and nonwords from different phonological neighborhoods, and tested their ability to discriminate between untrained words that define those neighborhoods, using spatiotemporal activation patterns from brain regions associated with phonological wordform representation. Methods: MEG/EEG data were collected simultaneously while subjects performed an auditory lexical decision task. The stimuli included monomorphemic CVC seed words, and a combination of real words and nonwords derived by changing a single feature of one phoneme of those seed words (e.g., pig -> big, peg, pick, tig, poog, pid). Stimuli were recorded by male and female talkers, and those stimuli were digitally manipulated to create tokens of all stimuli in 8 different voices.
Trials were blocked by voice, with all seed words heard twice and phonological neighbors heard once per block, and all voices presented in two separate blocks. Results: Subjects were able to perform the task with high accuracy. MRI-constrained minimum norm source estimates of MEG/EEG data were created for each trial, and automatically parcellated into ROIs including regions associated with wordform representation (supramarginal gyrus, posterior middle temporal gyrus – see Gow, 2012). We trained a support vector machine to discriminate between trials based on neighborhood membership (e.g. neighbors of pig versus neighbors of bike), and then used those classifiers to discriminate between neural responses to tokens of seed words (e.g. pig versus bike). Analyses showed strong transfer of neighbor training to seed discrimination. Moreover, transfer did not depend on the lexical status of the neighbors used to train the classifiers. Conclusion: The results show that activation patterns associated with phonological neighbors of seed words are sufficiently similar to those produced by seed words to support classification. This is consistent with compositional representation of wordform, with overlapping elements of seed and neighbor words evoking common patterns of activation. An alternative interpretation involving obligatory parallel activation of lexical wordforms that overlap with heard forms is also considered. Implications of these results for understanding both behavioral neighborhood effects and nonword wordlikeness effects are discussed. Reading A81 MOUS, a 204-subject multimodal neuroimaging dataset to study language processing Jan Mathijs Schoffelen1, Robert Oostenveld1,3, Nietzsche Lam1,2, Julia Uddén1,2,4, Annika Hultén1,2,5, Peter Hagoort1,2; 1Radboud University, Donders Institute, 2Max Planck Institute for Psycholinguistics, 3Karolinska Institutet, Stockholm, 4Stockholm University, 5Aalto University Here we present an open access dataset, colloquially known as the Mother Of Unification Studies (MOUS) dataset, which contains multimodal neuroimaging data that has been acquired from 204 healthy human subjects. The neuroimaging protocol consisted of magnetic resonance imaging (MRI) to derive information at high spatial resolution about brain anatomy and structural connections, and functional data during task and at rest. In addition, magnetoencephalography (MEG) was used to obtain high temporal resolution electrophysiological measurements during task and at rest. All subjects performed a language task, during which they processed linguistic utterances that either consisted of normal or scrambled sentences. Half of the subjects read the stimuli; the other half listened to them. The resting state measurements consisted of 5 minutes eyes-open for the MEG and 7 minutes eyes-closed for fMRI. The neuroimaging data, as well as the information about the experimental events, are shared according to the Brain Imaging Data Structure (BIDS) format. This unprecedented neuroimaging language data collection allows for the investigation of various aspects of the neurobiological correlates of language.
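Since the dataset is shared in BIDS format, its contents can be queried programmatically; a minimal sketch using the pybids library is shown below (the local path is hypothetical).

    from bids import BIDSLayout

    layout = BIDSLayout("/data/MOUS")  # hypothetical download location
    subjects = layout.get_subjects()   # subject labels found in the dataset
    event_files = layout.get(suffix="events", extension=".tsv",
                             return_type="filename")
    print(len(subjects), "subjects;", len(event_files), "event files")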
A82 Readers are parallel processors: ERP evidence from the RPVP paradigm Yun Wen1, Jonathan Mirault1, Joshua Snell1, Jonathan Grainger1; 1Laboratoire de Psychologie Cognitive, Aix-Marseille Université and Centre National de la Recherche Scientifique, Marseille A central issue addressed in reading research is whether skilled reading involves one-word-at-a-time incremental processing or parallel, cascaded, and interactive processing. Here we show that simultaneous processing of multiple words is possible, by combining EEG recordings with the novel rapid parallel visual presentation (RPVP) paradigm. In the RPVP paradigm, a sequence of four horizontally aligned words was briefly presented (for 200 ms). In Experiment 1, the four words could either represent a grammatically correct sequence (e.g., “she can sing now”) or an ungrammatical scrambled sequence of the same words (e.g., “now she sing can”). The experimental task was to identify one post-cued word within the sequence (e.g., the post-cued word is “sing” in the previous examples). In Experiment 2, the key manipulation was the same as Experiment 1 (grammatically correct sequences vs. ungrammatical scrambled sequences), and a grammaticality judgement task was used. Experiment 3 also used a grammaticality judgement task, and the critical comparison was between two types of ungrammatical sequences, i.e., the transposed-word sequences (e.g., “you that read wrong”, where transposing two adjacent central words can form a grammatical sequence) and the control sequences (e.g., “you that read worry”, where transposing two adjacent central words still forms an ungrammatical sequence). We report three key findings from this set of EEG experiments: (1) a reduced N400 effect was obtained in the grammatically correct sequences compared to the ungrammatical scrambled sequences, and this N400 sentence superiority effect was independent of the experimental tasks; (2) an N400 reduction was observed in the transposed-word sequences relative to the control sequences (the N400 transposed-word effect), and both types of ungrammatical sequences elicited the N400 effect relative to grammatical filler sequences; and (3) the ERP responses to word sequences presented in the RPVP paradigm are strikingly similar across experiments, revealing the consistent neural signature of parallel processing of multiple words. Our results suggest that an initial representation of sentence structure can be quickly generated on the basis of partial information extracted from several words in parallel, and this syntactic representation then constrains on-going identification processes through feedback to word identities as evidenced by the N400 sentence superiority effect. Furthermore, this elementary syntactic representation also constrains the range of possible word candidates at each position, and such top-down constraints together with the noisy bottom-up encoding of word order drive the N400 transposed-word effect. Taken together, our study provides converging evidence in favour of highly interactive processing operating between word-level and sentence-level representations during sentence reading, and demonstrates that the RPVP paradigm is a methodological advance that complements other widely-used paradigms in electrophysiological investigations of reading comprehension. A83 Localizing activation of component processes of reading in adult struggling readers Rachael Harrington1,2, C. Nikki Arrington1, Lisa C.
Krishnamurthy1,2, Venkatagiri Krishnamurthy2,3, Bruce Crosson1,2,3, Robin Morris1; 1Georgia State University, 2Center for Visual and Neurocognitive Rehabilitation, Department of Veterans Affairs, 3Emory University School of Medicine Reading is an essential skill necessary for many activities of daily living. Dyslexia most commonly occurs neurodevelopmentally and persists into adulthood; adults in whom it persists are referred to here as adult struggling readers (SR). Adult SR typically have poor fluency, receptive vocabulary and reading comprehension, along with poor phonological skills when compared to age-matched and reading age-matched controls. This represents the cumulative result of early phonological reading deficits (Greenberg et al., 1997, 2002, 2011). Functional imaging of children with developmental dyslexia shows decreased activation in the left-hemisphere visual system and dorsal stream with increased activation of right hemisphere visual and language systems (Shaywitz & Shaywitz, 2005). This project aims to understand the altered or compensatory neural networks underlying adult SR. Eleven adult SR and 11 age-matched controls were given a battery of reading tests and the Fast fMRI localizer task, a covert reading task that is highly effective at identifying brain regions that are sensitive to orthographic, phonological or semantic properties of words. Scans were performed on a Siemens 3T Trio (TR = 2 sec, TE = 30 ms, 3x3x4 mm). We performed standard image preprocessing and processing with in-house pipelines. Trial types were regressed to obtain contrast images and we applied continuous brain/behavior correlation analysis between subject reading skill and activation within specified ROIs. Following t-tests (p < 0.05, cluster size = 50) for adult SR relative to controls, we found increased activity in right inferior parietal lobe (r-IPL) during reading of phonologically related word sets and in right inferior frontal gyrus (r-IFG) and right middle temporal gyrus (r-MTG) during reading of semantically related word sets. Within the control group, we found positive correlations between left fusiform gyrus (l-FG) activation during reading of semantically related word sets and Woodcock Johnson 3: Reading Fluency subtest scores (WJ3RF) (r=0.607, p=0.048). We also found positive correlations when reading phonologically related words between l-IPL activation and the WJ3: Letter Word ID subtest (WJ3LWID) (r=0.828, p=0.002). Within the SR adult group, we found positive correlations between the WJ3RF and beta weights during reading of semantically related words in regions similar to those of the control group, such as l-IPL (r=0.661, p=0.027) and l-FG (r=0.858, p=0.001), but also in right hemisphere regions like r-IPL (r=0.652, p=0.03), right inferior frontal gyrus (r-IFG) (r=0.694, p=0.018), and r-FG (r=0.869, p=0.001). Positive correlations also were present between WJ3LWID and reading of phonologically related word sets, again in l-IPL (r=0.88, p=0.001) and l-FG (r=0.75, p=0.008), but also in right hemisphere regions such as r-IPL (r=0.79, p=0.004) and r-FG (r=0.71, p=0.013). While the increased right hemisphere activation as compared to controls is consistent with previous literature from children with dyslexia, we also found that right hemisphere activation correlates positively with behavioral testing in adult SR. These results need to be explored in a larger group and further analysis is needed to determine if this activation is adaptive or maladaptive for reading.
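The continuous brain/behavior correlation analysis used above amounts to correlating per-subject ROI activation with test scores; a minimal sketch follows, with placeholder values purely for illustration (they are not the study's data).

    import numpy as np
    from scipy.stats import pearsonr

    # placeholder values for illustration only
    roi_betas = np.array([0.8, 1.2, 0.5, 1.9, 1.1, 0.7])  # contrast estimates per subject
    wj3rf = np.array([92, 105, 88, 118, 101, 95])         # reading fluency scores

    r, p = pearsonr(roi_betas, wj3rf)
    print(f"r = {r:.2f}, p = {p:.3f}")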
A better understanding of the neurobiology of reading can help inform treatment of both adult SR and other non-developmental reading disorders in adults. A84 Effects and interactions of orthographic depth and lexicality in Arabic visual word recognition: A lexical decision ERP study Ali Idrissi1, R. Muralikrishnan2, Eiman Mustafawi1, Tariq Khwaileh1, John Drury1; 1Qatar University, 2Max Planck Institute for Empirical Aesthetics INTRODUCTION. The Arabic script uses independent letters to represent consonants and long vowels, typically leaving short vowels unmarked. However, short vowels can be marked with diacritics, which provide studies of visual word recognition with degrees of deep/shallow orthography that can be examined in tightly matched minimal pairs. Using fMRI, Bourisly et al. (2013) show distinct activations suggestive of additional lexical search when diacritics are absent, and greater involvement of mappings to phonology/semantics when they are present (cf. Weiss et al. 2015). An Arabic semantic priming ERP study (Mountaj et al. 2015) found larger early negativities when diacritics were present (~N1 peak) but, surprisingly, no influence on N400 semantic relatedness effects (cf. Bar-Kochav & Breznitz 2012). PRESENT STUDY & METHODS. Here we employed a 2x3 design crossing LEXICALITY (real-/pseudo-words) with three levels of DIACRITIC DENSITY (fully-marked (FULL), minimally-marked (MIN), non-marked (NON)). In minimally-marked cases a single diacritic was included where it could ensure disambiguation. Additionally, violations were tested with diacritics either indicating attested word patterns that do not occur with the given real-roots, or unattested patterns attached to either real- or pseudo-roots. Adult Qatari Arabic native speakers (n=33; all female) judged 720 items (half real-words) for LEXICALITY while EEG was continuously recorded. ERPs were examined in three sets of time-windows probing early evoked responses, the N400, and late positivities. Peak and (50% fractional peak) latencies were examined to probe N400 onset and offset timing. Lexical decision response latency/accuracy was also examined. Here we focus on early evoked potentials and on the N400. RESULTS. First, around the N1 peak, ERP amplitudes varied parametrically with DIACRITIC DENSITY but in the opposite direction of previous studies (negativities largest for NON>MIN>FULL). Second, DIACRITIC x LEXICALITY interactions emerged in the form of earlier N400 onset for NON than FULL cases for real-words only, while the opposite pattern held for N400 offset, which was later for NON than FULL for pseudo-words only. Also, diacritics indicating unattested word patterns yielded the largest early negativity of any condition. This violation response was followed by a sustained negativity for real-words only. Real-word mismatches indicating possible word patterns did not show this large early negativity, though the subsequent sustained effect emerged later (~N400 peak). DISCUSSION. We suggest our opposite-direction early effects are due to our relatively smaller proportion of NON-marked cases and the presence of diacritic violations/mismatches. However, this finding, coupled with the robust violation response for unattested diacritic patterns, clearly shows these initial stages of processing are sensitive to more than just mere visual load/complexity (contra previous studies).
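As a methodological aside, the 50% fractional peak latency measure used above to probe N400 onset can be sketched as follows; the waveform here is an illustrative toy signal, not recorded data.

    import numpy as np

    def fractional_peak_latency(erp, times, frac=0.5):
        # latency at which a negative deflection first reaches frac * peak
        peak_idx = np.argmin(erp)          # most negative point (N400 peak)
        target = frac * erp[peak_idx]
        onset_idx = np.flatnonzero(erp[:peak_idx + 1] <= target)[0]
        return times[onset_idx]

    times = np.linspace(0, 0.8, 401)                 # seconds
    erp = -np.exp(-((times - 0.4) ** 2) / 0.01)      # toy Gaussian "N400"
    print(fractional_peak_latency(erp, times))       # ~0.32 s for this toy wave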
Further, our possible/attested diacritic mismatch cases provide information about the timing of root and word-pattern integration, which we discuss relative to findings from previous masked priming studies of Arabic (e.g., Boudelaa & Marslen-Wilson, 2005). Finally, the N400 latency effects offer the promise that manipulations of diacritics could aid in adjudicating disputes about the etiology of this ERP component in addition to providing hitherto lacking evidence for the effect of orthographic depth on access/retrieval of semantic LTM during lexical decision. A85 Chinese Two-character-words Overcome Interocular Suppression Faster than Chinese Two-character-nonwords Jian’e Bai1,2,3, Jinfu Shi1,2,3, Yiming Yang1,2,3,4; 1School of Linguistic Sciences and Arts, Jiangsu Normal University, 2Jiangsu Collaborative Innovation Center for Language Ability, 3Jiangsu Key Laboratory of Language and Cognitive Neuroscience, 4Institute of Linguistic Science, Jiangsu Normal University Previous studies showed that many kinds of stimuli can be processed unconsciously under interocular suppression, such as faces, numbers, and even the meaning of words. In the Chinese writing script, a single Chinese character is often a morpheme and has meanings by itself. However, in reading materials, most Chinese words are composed of two Chinese characters. Word level processing plays a very important role in Chinese reading. In this study, we focused on whether word level information of Chinese could be processed unconsciously under interocular suppression. We conducted a behavioral experiment with the Continuous Flash Suppression (CFS) paradigm to investigate the question above. Ten college students took part in this experiment. During the experiment, in each trial, a Chinese two-character-word or a Chinese two-character-nonword was presented to one eye of the subjects while a high-contrast dynamic noise pattern was presented to the other eye. Due to the strong suppression induced by the dynamic noise pattern, the word or the nonword was not visible at the beginning of each trial. We asked the subjects to press a response key as soon as they saw any part of a Chinese character, indicating the location of the stimulus. We compared the reaction time to words with that to nonwords and found that words overcame the interocular suppression significantly faster than nonwords. Then we recruited another eleven college students to take part in a control experiment. In the control experiment, the same stimuli were used and the parameters were essentially the same as for the CFS experiment described above. Unlike the CFS experiment, in the control experiment, subjects viewed the stimuli binocularly (non-rivalry) rather than dichoptically. In each trial, the pattern noise was presented to both eyes and a Chinese two-character-word or a two-character-nonword was blended into the noise pattern and gradually came into view out of the noise. The ramping time of the stimuli was set at 10 s in order to keep the detection time in a similar range to the suppression time in the main experiment. We also measured the response time for the subjects to detect the presence of the two-character-words or two-character-nonwords and make a button press to indicate the location of each stimulus. The result showed that there was no significant difference in reaction time between words and nonwords.
This result indicated that the shorter suppression time for words in the main experiment was specific to the interocular competition, and was not due to a general advantage in detection. In summary, these results from the interocular suppression experiment and the control experiment indicated that word level information of the Chinese writing script could be processed unconsciously under interocular suppression. Acknowledgement: This project was supported by the National Natural Science Foundation of China (31400866, 31671170), the Natural Science Foundation of Jiangsu Province (14KJB180006), the Jiangsu Provincial Foundation for Philosophy and Social Sciences (12YYC015), the Foundation for Doctors of Jiangsu Normal University (12XLR009), and the Jiangsu Province Postdoctoral Foundation. Correspondence: Jiane Bai (baije9972@gmail.com) and Yiming Yang (yangym@jsnu.edu.cn). A86 Bilingual dyslexic children show language-universal deficits in the brain Qing Zhang1, Xiaohui Yan1, Fan Cao1; 1Department of Psychology, Sun Yat-Sen University Previous work has shown that Chinese-speaking children with developmental dyslexia (DD) show phonological deficits behaviorally and neurologically (Cao et al., 2016). If the underlying cause for DD is language-universal, bilingual speakers with DD could shed light on the neural basis of DD across languages. The existing results on the neural signature of bilingual (Chinese and English) DD are not consistent (Siok et al., 2008; Hu et al., 2010; You et al., 2010), mainly because none of these studies examined both L1 and L2 simultaneously. To this end, we collected data from bilingual dyslexic children, and directly compared their deficits in both languages. Seventy-six Chinese children were recruited, with 20 children with DD in both Chinese and English, 17 age-matched controls for the Chinese task (AC), 21 reading-matched controls for the Chinese task (RC), and 18 age-matched controls for the English task (EC). A visual rhyming judgment task was adopted in the fMRI, in which two words were presented sequentially in the visual modality, and participants were asked to determine whether they rhyme or not. In a full factorial ANOVA of 2 groups (dyslexia, control) by 2 languages (Chinese, English), we found a main effect of group at two clusters in the left IFG, the left precuneus, and the left ITG, with greater activation in AC and EC than in DD. At the ROI level, we compared RC and DD, and found that RC was similar to AC and greater than DD at the left precuneus and left ITG (t(1,35)=2.84, p=0.008 for the precuneus; t(1,35)=2.50, p=0.017 for the ITG). This suggests that these two deficits are associated with dyslexia core deficits, since RC showed the same pattern as AC. At the two clusters in the left IFG, however, RC did not differ significantly from DD, suggesting these deficits might accumulate with the experience of having DD. No interaction was found. These deficits at the left IFG overlap with the language core areas from another study by Liu et al. (in progress), which proposed a language core network associated with the abstract and symbolic nature of language. The overlaps lie in the left IFG and the middle part of the precentral gyrus, very close to the location of the laryngeal motor cortex (LMC) (Simonyan, 2013), suggesting that a speech production deficit might play a more important role in the phonological deficit experienced by children with DD. Correlation analysis also found different patterns for the dyslexia core areas and dyslexia-accumulated areas.
Within children with DD, we found a negative correlation between accuracy on the task and activation in the left precuneus, and a positive correlation between reading skill and activation in the left IFG. This suggests that children with more severe DD showed greater activation in the left precuneus and less activation in the left IFG. We hypothesize that the core deficits at the left precuneus and ITG are associated with the origin of DD, which then interferes with the development of the IFG. Multisensory or Sensorimotor Integration A87 Changes in brain activity during learning grapheme-phoneme associations: An MEG study Weiyong Xu1,2, Orsolya Kolozsvari1,2, Jarmo Hämäläinen1,2; 1Department of Psychology, University of Jyväskylä, Finland, 2Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Finland Learning to associate written letters with speech sounds is crucial for the initial phase of reading acquisition. However, little is known about the cortical reorganization supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations and the effects of memory consolidation during sleep. In the present study, we trained 29 Finnish participants (mean age: 24.12 years, SD: 3.44 years) to learn foreign letters and speech sounds on two consecutive days (first day ~ 50 minutes; second day ~ 25 minutes), while neural activity was measured using magnetoencephalography (MEG). Visual stimuli consisted of 12 Georgian letters (ჸ, ჵ, ჹ, უ, დ, ჱ, ც, ჴ, ნ, ფ, ღ, წ). Auditory stimuli consisted of 12 Finnish speech sounds ([a], [ä], [e], [t], [s], [k], [o], [ö], [i], [p], [v], [d]). The audiovisual learning experiment consisted of 12 alternating learning and testing blocks on Day 1 and 6 learning and testing blocks on Day 2. All participants learned 6 audiovisual pairs based on the feedback about the congruency information of the audiovisual stimuli, while another 6 audiovisual pairs were non-learnable due to the lack of congruency information in the feedback. Overall, the participants learned the correct associations within half an hour of training on Day 1, as indicated by their reaction time and accuracy. On Day 2, although accuracy had already reached ceiling level, there was a decrease in reaction time compared to the last block of Day 1 (Day 1, block 12: 1117 ± 436 ms vs. Day 2, block 1: 825 ± 153 ms). Event-related fields (ERFs) were obtained by separately averaging audiovisual congruent and incongruent trials before and after learning of the letter-speech sound associations. Audiovisual congruency effects (audiovisual incongruent > audiovisual congruent) were observed in the late time window (300-600 ms) in left temporal regions after successful learning of the grapheme-phoneme associations on both Day 1 and Day 2. In addition, compared to the brain responses to non-learnable audiovisual stimuli, there was a suppression in the brain responses to the learnable audiovisual congruent stimuli and an enhancement in responses to the incongruent stimuli. Our findings indicate the important role of the left temporal region in the initial phase of learning letter-speech sound associations, as well as the role of memory consolidation during sleep in improving learning outcomes.
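For readers unfamiliar with the averaging step, event-related fields of this kind are commonly obtained with MNE-Python; a minimal sketch is given below, with a hypothetical file name and trigger codes (an illustration, not the authors' pipeline).

    import mne

    raw = mne.io.read_raw_fif("learning_day1_raw.fif", preload=True)  # hypothetical file
    events = mne.find_events(raw)
    event_id = {"AV_congruent": 1, "AV_incongruent": 2}  # hypothetical trigger codes

    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                        baseline=(None, 0), preload=True)
    erf_congruent = epochs["AV_congruent"].average()
    erf_incongruent = epochs["AV_incongruent"].average()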
A88 Effortful verb retrieval from semantic memory drives beta suppression in higher-order motor areas Boris Chernyshev1,2, Anna Pavlova1,2, Anna Butorina1, Anastasia Nikolaeva1, Andrey Prokofyev1, Maxim Ulanov1,2, Denis Bondarev1,3, Tatiana Stroganova1; 1Moscow State University of Psychology and Education, 2National Research University Higher School of Economics, Moscow, 3National Research Center “Kurchatov Institute”, Moscow The participation of the motor cortex in the semantic processing of verbs remains a subject of debate in neuroscience. The aim of the present study was to test the hypothesis that the motor circuitry contributes to semantic access to verb representations. To this end, we examined whether verb retrieval from semantic memory engages the motor cortex and whether this engagement is stronger when a more demanding memory search is required. We asked 33 participants to overtly produce related verbs in response to presented noun cues. The noun cues were either strongly associated with a single verb and prompted fast and effortless verb generation (mean RT = 1.22 sec), or were weakly associated with multiple verbs and more difficult to respond to (mean RT = 1.89 sec). We used suppression of MEG beta oscillations (15-30 Hz) as an index of cortical activation and performed a whole-brain analysis to identify the cortical regions sensitive to the difficulty of verb semantic retrieval. The verb generation task induced a widespread beta suppression which started around 250 ms after noun cue presentation and was sustained until the overt production of the verb (p < 0.0001, FWE-corr.). The suppression was localised to a widely distributed left-hemispheric cortical network that included the higher-order motor areas of the frontal lobe, classical auditory speech areas of the temporal lobe and memory-related structures at the mesial temporal surface (p < 0.05, Bonferroni-corr.). Crucially, despite the spread of beta suppression over the entire left hemisphere, the only cortical regions where beta suppression was sensitive to semantic difficulty were the premotor and supplementary motor areas on the lateral and medial surfaces of the frontal lobe. Stronger activation of the higher-order motor areas was observed under the more difficult task condition between −700 and −550 ms before the response (p < 0.05, FWE-corr.), in a time window which substantially precedes the preparation of the vocal response and likely overlaps with the semantic search for the target verb in both conditions. Given that greater allocation of processing resources in the higher-order motor areas was observed during the effortful memory search, we proposed that the re-activation of verb-related motor plans in higher-order motor circuitry serves to promote the semantic retrieval of target verbs. Thus, our findings support the “embodied cognition” view that motor associations contribute to verb semantic processing. Work on this study was supported by the Russian Science Foundation (grant 14-28-00234) and core funding of the MEG centre from the Ministry of Science and Education of the Russian Federation. Poster Session B Tuesday, August 20, 2019, 3:15 – 5:00 pm, Restaurant Hall Signed Language and Gesture B1 Cross-modal plasticity in secondary auditory cortex Emil Holmer1, Josefine Andin1, Mary Rudner1; 1Linköping University In deaf early signers (DES), secondary auditory cortex (TE3) is reorganized to support visual and cognitive processing.
In the present work, we investigated plastic reorganization in TE3 for visual working memory (WM) in DES. In an fMRI experiment, 16 DES and 22 hearing non-signers (HNS) performed a sign-based n-back WM task. In the task, sequences of video-recorded lexicalized signs from Swedish Sign Language (SSL) were presented in high or low visual resolution. Participants judged whether the present video was the same as the video n steps back in the sequence, and WM load was manipulated by varying n from one to three. Region of interest analysis was performed by obtaining mean activation separately for left and right TE3 for each participant. A two (group) by three (load) by two (resolution) by two (hemisphere) mixed ANOVA was performed. A main effect of group showed stronger relative activation across both hemispheres for DES compared to HNS. Corroborating previous studies, there was consistent deactivation compared to rest in the HNS, indicating suppression of non-relevant auditory stimuli. In contrast, for the DES, a relative activation compared to rest was observed. Further, for both groups activation varied as a function of WM load and visual resolution. For HNS, there was a consistent deactivation across all conditions. For DES, significant activation was found for 1-back at both resolutions and for 2-back in high resolution, whilst 2-back in low resolution and 3-back in both resolutions showed neither clear activation nor deactivation. There was also a significant interaction between hemisphere and group. DES showed stronger relative activation compared to HNS in both hemispheres. HNS showed more deactivation in right compared to left hemisphere, whereas no difference across hemispheres was found for DES. In conclusion, these results suggest that in DES, secondary auditory cortex supports WM processing when task demands are low. Thus, the present study supports the notion of cross-modal plasticity in secondary auditory cortex in DES. Control, Selection, and Executive Processes B2 Performance on Word Generation in Schizophrenia: A Linguist’s View Anna Rosenkranz1, Tilo Kircher2, Arne Nagels3; 1University of Cologne, 2Philipps University Marburg, 3Johannes Gutenberg University Mainz Background and Aim: In verbal fluency (or word generation) tasks, a well-established measure for both executive functions and lexical-semantic abilities, participants are asked to generate as many words as possible from a specific semantic category (semantic fluency) or a specific letter (lexical fluency). Deficits on these tasks are commonly observed in patients with schizophrenia with both negative and positive formal thought disorder symptomatology. On a neuropsychological level, a number of studies suggest an involvement of executive functions, attention, short-term memory, cognitive flexibility, inhibition and verbal intelligence, although the particular role of these neuropsychological domains in verbal fluency task performance is still a topic of critical debate. From a linguistic perspective, verbal fluency involves both intact access to representations and efficient word retrieval processes. With regard to patients with schizophrenia, some studies report disproportionate impairment on semantic fluency, indicating semantic access deficits, while others report normal patterns of verbal fluency task performance, with better performance in semantic fluency as compared to lexical fluency, indicating general retrieval difficulties.
Typically, only the number of correctly generated words is used as a metric to quantify performance, which does not allow a differentiation between the different cognitive components involved. In this research, we implement additional analysis techniques (such as error types, clustering, switching, word frequency and temporal measures) to compare the lexical contribution to semantic and lexical fluency performance in patients with schizophrenia. Method: We tested semantic (animals) and lexical (letter p) fluency (each for 60 s) as well as executive functions, attention, working memory and verbal intelligence in patients with schizophrenia (n=50) and age-matched healthy controls (n=36). As valid audio records were needed for the additional analysis, we analysed the verbal fluency performance of a subset of 36 patients with schizophrenia with respect to error types, the number of switches from one subcategory to another and the size of the clusters produced within subcategories, word frequency and temporal measures (e.g. the number of correct words was evaluated as a function of four 15-s time intervals). Results and Conclusion: A strong relationship was found between attention deficits and both semantic and lexical fluency in patients with schizophrenia, suggesting that verbal fluency deficits in general are mainly driven by attention dysfunctions. The disproportionate impairment on semantic fluency reported in previous research was not reflected in our results. However, patients with schizophrenia generated fewer words in semantic fluency in comparison to healthy controls. Regarding the additional analyses, results showed that patients with schizophrenia produced more errors in comparison to healthy controls. Furthermore, only positive formal thought disorder was related to the number of errors. Interestingly, regarding clustering and switching, word frequency as well as temporal measures, patients with schizophrenia showed no abnormal pattern. We discuss these findings in light of the role of linguistic processes involved in generating and encoding a response in verbal fluency tasks in patients with schizophrenia. B3 Prediction: when, where & how? An investigation into spoken language prediction in naturalistic virtual environments Eleanor Callaghan1, David Peeters1,2, Peter Hagoort1; 1Max Planck Institute for Psycholinguistics, 2Tilburg University A longstanding question remains as to how language is processed so efficiently and so rapidly. Recent evidence suggests that this fast processing is assisted by the prediction of upcoming linguistic input. Visual world paradigms have been fundamental in providing evidence for prediction in spoken language comprehension, by demonstrating that participants make anticipatory eye movements towards depicted objects before their associated noun is spoken. However, it is questionable whether this type of prediction also occurs in more naturalistic, everyday environments, which are often intrinsically rich and visually complex. It is equally unclear to what extent subtle cues in speech are used to update predictions. Recent work from our Virtual Reality laboratory supports the view that prediction does indeed occur in naturalistic environments.
In a series of four experiments, we observed anticipatory eye movements to objects in rich virtual environments, even when increasing the number of distractor objects in the scene and the proportion of filler sentences included in the experiment (Heyselaar et al., under review). The number of different verbs used in the experiments was rather low, however, thereby questioning the generalizability of the findings. We therefore continued to investigate these findings in a novel virtual reality experiment, in which participants listened to sentences spoken by a virtual agent during a virtual tour of eight scenes (e.g., an office, a living room, a canteen). The agent discussed her relation to each scene while participants’ eye movements were continuously recorded. Spoken stimuli, produced by the agent (incl. lip sync and gaze to the participant), consisted of 128 sentence pairs that contained a subject-verb-object clause. Sentences within each pair were identical apart from the verb, one of which was restrictive (related to a single object in the scene), while the other was unrestrictive (related to multiple objects in the scene). Sentence pairs were separated into two lists that participants were randomly assigned to, so that no participant heard both sentences from a pair. Only 50% of the sentences referred to an object that was present in the scene. The remaining 50% of sentences served as filler trials. Preliminary results confirm that, in a critical time window that precedes noun onset, the mean proportion of fixations towards the target object was greater in the restrictive compared to the unrestrictive condition. These findings indicate that prediction in language comprehension occurs in naturalistic, real-life situations. Ongoing work additionally investigates how disfluencies in speech, for example hesitations, influence predictions. In light of the importance of prediction for efficient language comprehension, we aim to establish which elements of speech are used to predict upcoming utterances in more ecologically valid scenarios. Disorders: Acquired B4 Brain tumors in left frontal regions impact language laterality as determined by pre-surgical fMRI Monika Polczynska1, Lilian Beck1, Taylor Kuhn1, Kevin Japardi1, Christopher Benjamin2, Timothy Ly1, Susan Bookheimer1; 1University of California, Los Angeles, 2Yale University Assessing hemispheric dominance before brain surgery is an important guide in surgical planning that can minimize the occurrence of new, surgically-induced language impairments (Kundu et al., 2013). Language dominance can be assessed with the Laterality Index (LI) measure. In functional magnetic resonance imaging (fMRI), LI quantifies information from blood oxygen-level dependent (BOLD) activation in language tasks in each cerebral hemisphere, or in selected regions of interest (ROIs) (Benjamin et al., 2018). Generally, individuals with more language activation in the left hemisphere have higher language LI, while individuals with more bilateral language activity have lower language LI. We address the following two questions: (1) Does a brain tumor in different parts of the language system impact the stability of LI differently? and (2) Are changes in LI due to a decrease in functional activation around the lesion or an increase in activation in brain areas on the contralateral side? We accessed a database of over 1100 patients to perform exact matching.
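A voxel-count laterality index of the usual form LI = (L − R) / (L + R) can be written in a few lines; the counts below are purely illustrative suprathreshold voxel counts for homologous regions.

    def laterality_index(left_count, right_count):
        # LI in [-1, 1]; positive values indicate left lateralization
        return (left_count - right_count) / (left_count + right_count)

    print(laterality_index(420, 180))  # 0.4 -> left-lateralized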
We divided our subjects (n=60; 25 females; 18 left-handed and four ambidextrous; average age=47 years, SD=14.1) into four tumor groups with 15 subjects per group: 1. Left anterior hemisphere, 2. Left posterior hemisphere, 3. Right anterior hemisphere, and 4. Right posterior hemisphere. Patients from groups 1 and 2 were our target sample, with brain tumors within the language regions of the left dominant hemisphere. Groups 3 and 4 served as controls, with tumors within the right language homologs. There were both low- and high-grade tumor patients in each group. We matched the patients based on significant factors that are known to affect LI: tumor location and volume, gender, handedness, and age. We evaluated LI in three language tasks during pre-operative fMRI: object naming, verbal, and auditory responsive naming. We calculated active voxel counts in each hemisphere and in four ROIs: Broca’s area, Wernicke’s area, and their right hemisphere homologs. We found that patients with a brain tumor in the left anterior hemisphere had lower language laterality than patients with a brain tumor in the right anterior hemisphere based on active voxel counts in each hemisphere (p=0.020). Evaluating LI in specific ROIs enabled us to observe that the left anterior patients had decreased language laterality in Broca’s area (p=0.020) but not in Wernicke’s area. Further, compared to the three other groups, these patients displayed lower active voxel counts in Broca’s area and higher voxel counts in the right hemisphere homolog of Broca’s area. To conclude, a brain tumor located in the left anterior hemisphere affected the stability of language LI. Specifically, patients with a left anterior tumor had less activity in the left frontal language regions (Broca’s area) and elevated activity in the contralateral areas. An important clinical implication of this study is that in patients with brain tumors in the left anterior hemisphere, it is essential to assess language LI using an ROI approach, specifically in the posterior language areas. B5 Using Vascular Territories to Estimate Disconnection Profiles in Post-Stroke Aphasia Natalie Busby1, Ajay Halai1,2, Ying Zhao2, Geoff Parker3,4, Matt Lambon Ralph1,2; 1Division of Neuroscience and Experimental Psychology, University of Manchester, 2MRC Cognition and Brain Sciences Unit, University of Cambridge, 3Centre of Medical Image Computing, UCL, 4Bioxydyn Ltd. White matter disconnection is important for understanding disorders of higher-order functions such as language; however, as diffusion data are rarely collected clinically, recent studies have attempted to predict disconnection patterns using other factors. Lesions are often overlaid onto white matter atlases to estimate disconnection. However, although damage sustained to the brain post-stroke appears random, it is actually constrained by the underlying neurovasculature: brain regions supplied by the occluded arterial branch will be affected. Therefore, accounting for this by investigating which vascular territories were damaged may yield an interesting way to predict disconnection. Consequently, the aims of this study were: (a) to identify disconnection profiles associated with each vascular territory of the middle cerebral artery (MCA), (b) to determine whether territories can be combined to ‘build’ a lesion, and (c) to predict disconnection patterns in post-stroke aphasia patients by summing the damage associated with each vascular territory used to ‘build’ their lesion.
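Aims (b) and (c) — combining binary territory masks to ‘build’ a lesion and scoring the match — can be sketched as a mask union scored with a Dice coefficient, one plausible similarity measure; the file names below are hypothetical.

    import numpy as np
    import nibabel as nib

    def dice(a, b):
        # Dice overlap between two binary masks
        a, b = a.astype(bool), b.astype(bool)
        return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    lesion = nib.load("patient_lesion.nii.gz").get_fdata() > 0
    territories = [nib.load(f).get_fdata() > 0
                   for f in ("territory_m2_sup.nii.gz", "territory_m2_inf.nii.gz")]

    built = np.logical_or.reduce(territories)  # union of selected territories
    print(f"similarity = {dice(built, lesion):.2f}")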
Anatomical Connectivity Mapping (ACM) assesses long-range disconnection and may be a complementary alternative to local connectivity measures. Using probabilistic tractography, ACMs are obtained by initiating streamlines from all voxels. Cumulative trajectories of streamlines are saved, providing a brain map indicating how many times a streamline passed through each voxel (i.e. the global connectivity of each voxel). This identifies widespread disconnection, as fewer streamlines would pass through any voxel connected to damaged regions, and may enable the prediction of disconnection profiles accounting for damage away from the lesion. This also allows for ‘pseudo-lesioning’: the selective removal of each vascular territory from healthy controls. Disconnection profiles can then be calculated for each territory in every individual. Vascular territories were combined to match lesions in 62 individuals with aphasia following a left hemispheric MCA stroke. ACM was used to estimate disconnection in each individual using the territories that best matched their lesion. This was compared back to real patient connectivity. On average, 4.58 vascular territories were combined to ‘build’ the lesion. High similarity scores were found between the lesion and combined territories. There was a significant positive correlation between lesion volume and similarity scores (r = 0.665, p < 0.001). High similarity scores were found between actual and predicted patient connectivity. A better match between the territories and the lesion positively correlated with a more accurate prediction of disconnection. Individuals with lesions smaller than 10,000 voxels had a significantly less accurate prediction of connectivity than individuals with larger lesions (t(62) = 7.72, p < 0.001). Selectively removing each vascular territory revealed disconnection associated with damage to each territory. Strikingly, disconnection extended far beyond the removed region. The high similarity between each lesion and combined territories suggests the underlying neurovasculature can explain damage sustained following a left hemispheric MCA stroke resulting in aphasia. The high similarity scores between predicted and real patient connectivity suggest that connectivity can be predicted using the underlying neurovasculature, particularly for larger lesions. This novel methodology demonstrated that disconnection following a left-hemispheric stroke can be explained by the underlying neurovasculature of the MCA. This may enable a better understanding of language deficits where there is no scope for the collection of diffusion data in the patients themselves. B6 White-matter bottleneck in small vessel disease: A lesion-symptom mapping study of executive-language functions. Ileana Camerino1, Joanna Sierpowska1, Nathalie H. Meyer1, Anil Tuladhar2, Roy P.C. Kessels1,3, Frank-Erik de Leeuw2, Vitória Piai1,3; 1Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Donders Centre for Cognition, 2Radboud University, Donders Institute for Brain, Cognition and Behaviour, Centre for Neuroscience, Department of Neurology, 3Radboud University Medical Center, Department of Medical Psychology Cerebral small vessel disease (CSVD), characterized by the presence of white matter lesions (WML), is among the main causes of vascular cognitive impairment.
The best-studied cognitive domains in CSVD are executive functioning and processing speed, which are correlated with total WML volume (Prins et al., 2005; Wardlaw et al., 2013). By contrast, the domain of language has received much less attention (Herbert et al., 2014; Welker et al., 2012). Recent studies indicate that WML location might be more informative than total WML volume in explaining the cognitive profile of CSVD (Biesbroek et al., 2017). However, these studies only investigated tasks of executive function and processing speed, whereas other brain functions that might be more dependent on WML location, such as language, have remained understudied (Biesbroek et al., 2016; Duering et al., 2013; Smith et al., 2011). In addition, these studies used global compound scores of executive function and processing speed with and without language involvement, precluding inferences regarding whether there is a core network underlying executive and language tasks. The present study investigates whether WML location is associated with poorer performance in executive-language tasks, as analyzed at a single task level. This study included a cohort of 445 CSVD patients without dementia, with varying burden of WML. The Stroop (word reading, color naming, and color-word naming) and verbal fluency tests were used as measures of language production with varying degrees of executive demands. The digit symbol modalities test (DSMT) was used as a control task, as it does not require verbal abilities. A voxel-based lesion-symptom mapping (VLSM) approach (Bates et al., 2003) was used. Analyses were limited to those voxels where at least 4% (N = 18) of the individuals had a lesion, with the goal of minimizing biased parameter estimates. To correct for multiple comparisons, permutation testing was used. The cut-off for a significant cluster size was determined based on 6000 iterations, with a voxel-wise threshold set at an alpha level of 0.05 (Kimberg et al., 2007). All VLSM analyses were corrected for age, gender, education, and lesion size. Additionally, to control for the processing speed component in language-related tasks, all VLSM analyses were corrected by a “processing speed” score. The VLSM analyses revealed statistically significant clusters for verbal fluency, and Stroop word reading, color naming and color-word naming, but not for DSMT. Worse scores in all tests were associated with WML in the forceps minor, bilateral thalamic radiations and the caudate nuclei. This set of brain areas was similar across all tests. The lesion-symptom associations remained the same once the scores of the verbal fluency and Stroop color-word naming tests were corrected for processing speed. In conclusion, a relationship was found between WML in a core frontostriatal network and executive-language functioning in CSVD, independent of lesion size. This circuitry, formed by the caudate nuclei, forceps minor and thalamic radiations, seems to underlie executive-language functioning beyond the role of general processing speed. Finally, the contribution of this circuitry seems to be stronger for tasks requiring language functioning.
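A heavily simplified sketch of the VLSM logic — a per-voxel test relating lesion status to behavior, thresholded by permutation — is given below. The study used cluster-size permutation with covariates; this toy version uses a max-statistic threshold, placeholder data, and far fewer iterations.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_patients, n_voxels = 445, 1000                    # toy dimensions
    lesions = rng.random((n_patients, n_voxels)) < 0.1  # binary lesion maps
    scores = rng.normal(size=n_patients)                # behavioral scores

    # test only voxels lesioned in at least 4% of patients
    testable = lesions.mean(axis=0) >= 0.04

    def voxel_t(scores):
        t = np.full(n_voxels, np.nan)
        for v in np.flatnonzero(testable):
            t[v] = ttest_ind(scores[lesions[:, v]],
                             scores[~lesions[:, v]]).statistic
        return t

    t_obs = voxel_t(scores)

    # permutation null: max |t| over voxels under shuffled scores
    null_max = [np.nanmax(np.abs(voxel_t(rng.permutation(scores))))
                for _ in range(100)]                    # 6000 in the study
    threshold = np.percentile(null_max, 95)
    print((np.abs(t_obs) > threshold).sum(), "voxels above threshold")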
B7 Resting state functional connectivity changes in transient post-operative aphasia Christian Alexander Kell1, Laura Hansmeyer1, Marie-Therese Forster1, Ines Kropff1, Silke Fuhrmann1, Pavel Hok1,2, Johann Philipp Zöllner1; 1Goethe University Frankfurt, 2Palacký University of Olomouc, Czech Republic Aphasia results from a dysfunctional language network in the speech-dominant hemisphere. Compared to matched controls, aphasic stroke patients over-recruit right homologous brain regions and re-lateralize language-related activity to the left hemisphere in the course of recovery (Saur et al., 2006; Hartwigsen et al., 2013). However, stroke research does not provide the opportunity to compare the aphasic and recovered state with a prelesional asymptomatic state in individual patients. This is a prerequisite to identify recovery-related neuroplasticity as normalization of peri-lesional functional connectivity. We studied transient aphasia following tumor surgery (Wilson et al., 2015) in patients who did not show language deficits presurgically. We analyzed changes in fMRI resting state functional connectivity associated with the development of and recovery from transient aphasia, which identifies task-independent network signatures. Repeated resting state fMRI measurements were performed in 20 well-characterized patients undergoing awake tumor surgery of their speech-dominant left temporal (n=13) or left frontal (n=7) lobe. None had lasting speech or language symptoms prior to surgery, and transient postoperative aphasia was present in all patients. Patients were assessed and measured before surgery, during maximal aphasia in the postoperative week, and six months later. fMRI datasets were analyzed separately in the two groups (frontal/temporal tumor resection) using a seed-based whole brain functional connectivity analysis with six seeds in a priori defined language regions that were not directly affected by the tumor (inferior frontal gyrus pars triangularis, pars opercularis, articulatory motor cortex, dorsal premotor cortex, superior parietal-temporal area, superior temporal sulcus) and their contralateral homologues. Paired t-tests comparing functional connectivity between the three time points were thresholded at p < 0.05 (FWE-corrected at cluster level, cluster-forming threshold p < 0.001, uncorrected). Because all tumor tissue/resection sites were masked out of the results to exclude spurious findings based on structural changes, connectivity between the left frontal and temporal lobe could not be assessed. In transient aphasia, the left dorsal premotor cortex/preSMA showed reduced resting state functional connectivity with the left inferior frontal gyrus. In addition, connectivity between the left superior temporal sulcus and the left precuneus, which is an important hub of the default mode network, was reduced. Transient aphasia was also associated with reduced interhemispheric resting state functional connectivity between left language areas and their homologues. Recovery from aphasia was accompanied by a quasi-normalization of resting state functional connectivity, as only connectivity between the left primary articulatory motor area and the right planum temporale increased when comparing recovery with pre-surgery measurements. No other connection showed significant changes between pre-surgical and long-term measurements.
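The core of a seed-based analysis like the one above is correlating a seed's mean time course with every voxel and Fisher z-transforming the resulting map; a minimal sketch with placeholder arrays follows.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(size=(200, 5000))   # timepoints x voxels (placeholder)
    seed_voxels = [10, 11, 12]            # hypothetical seed-region indices

    seed_ts = data[:, seed_voxels].mean(axis=1)
    seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()
    z = (data - data.mean(axis=0)) / data.std(axis=0)

    r_map = z.T @ seed_ts / len(seed_ts)                 # Pearson r per voxel
    fc_map = np.arctanh(np.clip(r_map, -0.999, 0.999))   # Fisher z-transform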
Our results demonstrate that aphasia is indeed associated with reduced coupling of the left language network and reduced interhemispheric connectivity, even in the absence of overt language processing. This observation could reflect alterations of inner speech during aphasia, particularly because Wernicke’s area coupled less strongly with the introspection-related default mode network during rest. Recovery was related to a return to baseline connectivity rather than to large-scale re-organization of language networks, indicating that local neuroplasticity could be key in restoring language function in the distributed speech and language network. B8 White Matter Hyperintensity predicts naming treatment outcomes in aphasia Claudia Penaloza1, Maria Varkanitsa1,2, Andreas Charidimou2, David Caplan2, Swathi Kiran1; 1Boston University, 2Massachusetts General Hospital Introduction: Predicting treatment-induced recovery in people with aphasia (PWA) is essential in rehabilitation research given the large outcome variability observed and the complex interplay between multiple factors modulating treatment response. Although stroke-related factors, including lesion site and volume, and aphasia severity substantially predict aphasia recovery, general brain health markers may independently modulate individual response to language therapy. Here, we examined whether white matter hyperintensity (WMH) predicts treatment outcomes in aphasia beyond stroke-related factors. Methods: Participants were 30 chronic PWA (10F, age: mean = 61 years, range = 40–80 years, education: mean = 15 years, range = 12–18, time post stroke: mean = 52 months, range = 8–170 months) with a single left hemisphere stroke (volume: mean = 135.21 cm3, range = 11.66–317.07 cm3) who completed up to 12 weeks of semantic feature analysis treatment. Mean baseline aphasia severity (AQ quotient) from the Western Aphasia Battery–Revised was 59.83 (range = 11.7–95.2). T2–FLAIR MRI scans at baseline were scored for WMH severity on the right hemisphere using the Fazekas scale. Periventricular WMH (PVWMH), deep WMH (DWMH), and deep WMH lesion count (DWMHcount) were rated on a 4-point scale (0 = absent to 3 = severe) by two independent raters (inter-rater reliability κ: PVWMH = 0.79, DWMH = 0.90, DWMHcount = 0.95). We additionally computed two composite scores, the WMH global load (PVWMH score + DWMH score) and the WMH count-based load (PVWMH score + DWMH score + DWMHcount score). The proportion of the potential maximal gain (PMG; Lambon Ralph et al., 2010), assessed immediately after treatment [(mean post-treatment score – mean pre-treatment score)/(total number of trained items – mean pre-treatment score)], was the primary dependent variable, and the WMH scores were the predictors. Because none of the measures was normally distributed, we dichotomized the WMH scores into mild and moderate/severe cases and split PMG into quartiles to develop four ordinal regression models using STATA, one for each WMH score, including AQ, total lesion volume and age as covariates. Results: Both DWMH severity and DWMH lesion count independently predicted treatment outcome; going from mild to moderate/severe DWMH or DWMH lesion count was associated with lower odds of moving to a higher PMG quartile (DWMH: odds ratio = 0.11, SE = 0.11, z = -2.23, p = 0.02; DWMHcount: odds ratio = 0.13, SE = 0.11, z = -2.35, p = 0.01). Similar results were found for the two composite scores (WMH global load: odds ratio = 0.14, SE = 0.13, z = -2.03, p = 0.04; WMH count-based load: odds ratio = 0.19, SE = 0.16, z = -2.00, p = 0.04).
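For concreteness, the PMG outcome measure used in these models follows directly from the formula given in the Methods; the numbers in this sketch are purely illustrative.

    def pmg(pre, post, n_trained):
        # proportion of potential maximal gain (Lambon Ralph et al., 2010)
        return (post - pre) / (n_trained - pre)

    print(pmg(pre=40, post=100, n_trained=180))  # 0.43 of the possible gain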
PVWMH severity was not a significant predictor. Conclusion: Our findings indicate that WMH in the RH predicts language treatment outcomes in PWA above and beyond age and stroke-related indexes of brain damage. WMH has been associated with varying neuropathological processes including demyelination and axonal loss. Because neural structural integrity is essential for brain plasticity, our findings suggest that individual differences in treatment response may depend on white matter integrity in the contralesional (RH) hemisphere. In addition, given that WMH is a chronic progressive brain pathology that tends to be symmetrical and exists even in stroke-free healthy individuals, our findings suggest that pre-morbid markers of brain health may affect treatment and functional outcome. These results highlight the utility of examining biomarkers of neural integrity in aphasia recovery and rehabilitation research. B9 Long-range fiber damage in small vessel brain disease affects aphasia severity Janina Wilmskoetter1, Barbara Marebwa1, Alexandra Basilakos2, Julius Fridriksson2, Chris Rorden3, Brielle C. Stark4, Lisa Johnson2, Gregory Hickok5, Argye E. Hillis6, Leonardo Bonilha1; 1Department of Neurology, College of Medicine, Medical University of South Carolina, 2Department of Communication Sciences and Disorders, University of South Carolina, 3Department of Psychology, University of South Carolina, 4Department of Speech and Hearing Sciences, Indiana University, 5Department of Cognitive Sciences, University of California, Irvine, 6Department of Neurology, Johns Hopkins University Background: We sought to determine the underlying pathophysiology relating white matter hyperintensities (WMH) to post-stroke outcomes. We hypothesized that: 1) WMH are associated with damage to a higher percentage of long-range compared to medium- and short-range intracerebral white matter fibers, and 2) the proportion of long-range fibers mediates the relationship between WMH and chronic post-stroke aphasia severity. Methods: We measured severity of periventricular (PVH) and deep white matter hyperintensities (DWMH), calculated percentages of short-, mid- and long-range white matter fibers, and determined aphasia severity of 48 individuals with chronic post-stroke aphasia. Correlation and mediation analyses were performed to assess the relationship between WMH, connectome fiber-length measures and aphasia severity. Results: We found that more severe PVH and DWMH correlated with a lower proportion of long-range fibers (r = -0.423, p = 0.003; and r = -0.315, p = 0.029; respectively), counterbalanced by a higher proportion of short-range fibers (r = 0.427, p = 0.002; and r = 0.285, p = 0.050; respectively). Mediation analyses revealed: 1) a significant total effect of PVH on WAB-AQ (standardized beta = -0.348, p = 0.008), 2) a nonsignificant direct effect of PVH on WAB-AQ (p > 0.05), 3) a significant indirect effect of more severe PVH on worse aphasia severity mediated by the lower percentage of long-range and higher percentage of short-range fibers (effect = 6.5078, bootstrapping: SE = 3.5797, lower limit 95%-CI = -0.5672, upper limit 95%-CI = 15.0571). Conclusions: We conclude that small vessel brain disease seems to affect chronic aphasia severity through a change of the proportions of long- and short-range fibers.
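As an aside, a product-of-coefficients mediation test with a bootstrapped confidence interval, of the kind reported above, can be sketched as follows; the data are placeholders and covariates are omitted, so this illustrates the logic rather than the authors' pipeline.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 48
    pvh = rng.normal(size=n)                      # predictor (toy PVH severity)
    long_pct = -0.5 * pvh + rng.normal(size=n)    # mediator (toy % long-range fibers)
    wab_aq = 0.6 * long_pct + rng.normal(size=n)  # outcome (toy severity score)

    def indirect(x, m, y):
        a = np.polyfit(x, m, 1)[0]                    # path a: x -> m
        X = np.column_stack([np.ones_like(x), m, x])  # y ~ m + x
        b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # path b: m -> y given x
        return a * b

    boot = []
    for _ in range(5000):
        idx = rng.integers(0, n, n)                   # resample with replacement
        boot.append(indirect(pvh[idx], long_pct[idx], wab_aq[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")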
These observations provide insight into the pathophysiology of small vessel brain disease and its relationship with brain health, stroke recovery and aphasia severity.

B10 Relation between diffusion measures of the arcuate fasciculus and initial severity of aphasia in the acute phase after stroke Klara Schevenels1, Robin Lemmens3,4,5, Inge Zink1, Bert De Smedt2, Maaike Vandermosten1; 1Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, 2Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, 3Experimental Neurology, Department of Neurosciences, KU Leuven, 4Laboratory of Neurobiology, Center for Brain and Disease Research, VIB, Leuven, 5Department of Neurology, University Hospitals Leuven

The arcuate fasciculus (AF) is a white matter bundle connecting temporal, parietal and frontal regions. The direct long segment connects the inferior frontal and temporal lobes; two indirect shorter segments connect the inferior frontal with the inferior parietal lobe (anterior segment) and the inferior parietal with the temporal lobe (posterior segment) (Catani et al., 2005). The AF has been related to phonological language skills (Yeatman et al., 2011), reading (Vandermosten et al., 2012b) and syntactic skills (Friederici et al., 2011). Therefore, the properties of this tract after stroke might be related to initial aphasia severity, particularly for phonology and syntax. We used diffusion imaging tractography to obtain an estimated reconstruction of the AF in vivo and derived the fractional anisotropy (FA) index to evaluate the tract. We consecutively recruited 15 stroke patients (9 males, 6 females; mean age = 71.2 years, SD = 9.1) with left-hemispheric or bilateral lesions and language impairment from the stroke unit at the University Hospitals Leuven. Patients were screened for language disorders with the ScreeLing (Doesborgh et al., 2003) on average 4.5 days after stroke (SD = 5.8 days) and underwent a diffusion MRI scan on average 4.5 days after stroke (SD = 2.1 days). The ScreeLing provides a global measure of aphasia severity as well as subtest scores for semantic, phonological and syntactic processing. MRI data for all but one subject were acquired with a 3T scanner equipped with a 32-channel head coil and a single-shot EPI pulse sequence. Diffusion weighting of b=700, 1000 and 2000 s/mm2 in 20, 32 and 60 directions, together with 7 non-diffusion-weighted images, led to the acquisition of 119 images. For deterministic whole-brain tractography, 5000 streamlines were propagated from all brain voxels with a step size of 1 mm, FA values above 0.2 and a maximum angle of 40 degrees (Basser et al., 2000). Tractography dissections were obtained in TrackVis using a region-of-interest approach in the patients’ native FA map, following the protocol described in Wakana et al. (2007). We obtained volume and FA values for the different segments of the AF in both hemispheres and related these measures to the total and subtest scores of the ScreeLing using Holm-corrected Kendall pairwise correlations. The results indicate a significant positive correlation between the FA in the left posterior AF and the total ScreeLing score (r = 0.54, p = .048), the semantic score (r = 0.56, p = .039) and the syntactic score (r = 0.60, p = .025). Against our expectations, there were no significant correlations with the phonological scores.
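The correlation step just described can be sketched in Python with simulated data standing in for the FA and ScreeLing values; scipy's kendalltau and statsmodels' Holm correction are used here, which is an assumption about tooling, not the authors' pipeline:

    import numpy as np
    from scipy.stats import kendalltau
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    fa_left_posterior_af = rng.uniform(0.2, 0.6, 15)   # FA per patient (toy values)
    screeling = {name: rng.normal(size=15)             # total and subtest scores
                 for name in ["total", "semantic", "phonological", "syntactic"]}

    # One Kendall correlation per score, then Holm correction over the family
    taus, pvals = zip(*(kendalltau(fa_left_posterior_af, score)
                        for score in screeling.values()))
    reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")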
To further clarify our results, we will look at the relation between the AF and the different tasks for each linguistic component and examine what is explained by the size and location of the lesion. In addition, we will integrate new data to investigate how different diffusion measures are related to language recovery from the acute to the subacute stage after stroke, and whether more advanced diffusion models explain our data better.

B11 The effect of left and right hemisphere lesions on discourse production Dimitrios Tsolakopoulos1, Georgia Angelopoulou1, Georgios Papageorgiou1, Krystalli Gryllou1, Sofia Vassilopoulou2, Dionysios Goutsos3, Dimitrios Kasselimis1,4, Constantin Potagas1; 1Neuropsychology and Language Disorders Unit, 1st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens, 2Stroke Unit, 1st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens, 3Department of Linguistics, School of Philosophy, National and Kapodistrian University of Athens, 4Division of Psychiatry and Behavioral Sciences, School of Medicine, University of Crete

Introduction: Data derived from functional brain imaging (Troiani et al., 2008) and lesion studies (Ulatowska, North, & Macaluso-Haynes, 1981) strongly suggest that the left hemisphere is involved in discourse production. Nevertheless, the right hemisphere has also been shown to be involved in language, especially when it comes to discourse production (Alexandrou, Saarinen, Mäkelä, Kujala, & Salmelin, 2017). Contemporary lesion studies seem to be in accordance with this notion, highlighting deficits related to the macrostructure of discourse production after right hemisphere damage (Bartels-Tobin & Hinckley, 2005; Marini, 2012). To further investigate this issue, we attempted to assess possible deficits in discourse production in two groups of post-stroke patients: individuals with right hemisphere damage and individuals with left-lateralized lesions resulting in aphasia. Methods: 10 patients (3 women) with aphasia due to left hemisphere stroke (LHS), 9 patients (5 women) with right hemisphere stroke (RHS) and 10 healthy participants were recruited for the study. Patients with aphasia were recruited on the basis of a minimum speech rate criterion (≥50 words per minute). All participants were native Greek speakers. The groups were matched for age and formal years of schooling. The Cookie Theft picture from the Boston Diagnostic Aphasia Examination was used to elicit speech samples. Speech samples were then transcribed and evaluated with regard to semantic content. The evaluation of the main concepts of discourse was based on methodology previously reported by Nicholas & Brookshire (1995). In particular, the picture corresponded to seven Content Units (CU). Each CU was marked as AC (accurate), AI (accurate incomplete), IN (incomplete), or AB (absent). Results: An analysis of variance showed significant differences among the three groups with regard to AC: F(2,26)=4.601, p=.019, η_p²=.261. Post-hoc pairwise comparisons using Scheffé tests revealed significant differences between controls and LHS (p=.019), but not between LHS and RHS (p=.371) or controls and RHS (p=.332). It should, however, be noted that RHS patients demonstrated lower performance than controls and were superior to LHS patients. ANOVAs with AI, IN, and AB as dependent variables did not yield significant results.
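A one-way ANOVA with a partial eta-squared effect size of this kind can be reproduced in a few lines of Python; the Content Unit counts below are simulated for illustration, not the study's data:

    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(3)
    # Hypothetical counts of accurately produced Content Units per participant
    controls = rng.poisson(6, 10)
    lhs = rng.poisson(3, 10)
    rhs = rng.poisson(5, 9)

    F, p = f_oneway(controls, lhs, rhs)

    # Partial eta squared (for a one-way design, identical to eta squared):
    # SS_between / (SS_between + SS_within)
    groups = [controls, lhs, rhs]
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    eta_p2 = ss_between / (ss_between + ss_within)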
Conclusions: As expected, our results show a clear-cut negative effect of left brain damage on the macrostructure of discourse production. Interestingly, individuals with RHS were superior to LHS patients but demonstrated lower performance compared to controls. However, in contrast with recent studies, the difference between controls and RHS failed to reach significance. This could be attributed either to the small sample or to the fact that description of the Cookie Theft picture poses low demands in terms of cognitive load. In the latter case, RHS patients could face difficulties in organizing semantic aspects in order to provide a coherent narrative, but these are revealed only when the task is of increased complexity, and thus more demanding. Associations with lesion sites are also discussed.

Methods

B12 A model to investigate effective connectivity in the speech network with TMS-evoked cortical potentials Pantelis Lioumis1,2, Karita Salo1,2, Selja Vaalto1,3, Risto Ilmoniemi1,2; 1Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, 2BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital, 3Department of Clinical Neurophysiology, HUS Medical Imaging, Helsinki University Hospital

Introduction: Our aim was to observe how the spatial distribution of transcranial magnetic stimulation (TMS)-evoked potentials can be utilized to study connectivity originating from the right-hemispheric homologue of Broca’s area. Methods: The data were collected by combining navigated TMS (nTMS) and electroencephalography (EEG) in three subjects. The right-hemispheric homologue of Broca’s area (opIFG) was stimulated by targeting 150 pulses at a stimulation intensity inducing an electric field corresponding to 90% of the motor threshold of the left APB in M1. The EEG datasets were preprocessed with novel artifact-removal algorithms (Mutanen et al., NeuroImage 2016). The first peak and its latency were determined from the global mean-field amplitude (GMFA). At that latency, minimum-norm estimates (MNE) indicated the sites of most prevalent cortical activity. Results: The new artifact-removal method proved to be useful as the first step in the data analysis. As estimated with MNE, neuronal activity spread from the right opIFG into the contralateral hemisphere very early. Conclusions: Our combination of experimental settings, data processing tools, and data-analysis methods can be used to evaluate effective connections between speech-related areas and their right-hemispheric homologues. This may be especially important when speech areas cannot be localized with direct stimulation due to a lesion. Because of muscle artifacts, TMS–EEG is often difficult to apply to the study of the initial reactivity and excitability of frontal and temporal cortical areas; our methods, however, can make it possible. The combination of our technique and DTI can further elucidate the relation of effective and structural connectivity, which can be crucial when treating patients with rTMS or other types of neuromodulation. nTMS–EEG may be used to select stimulation sites on the cortex when specific neuronal connections in the speech network should be modulated by TMS, for example, in stroke patients.
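The GMFA statistic used above to find the first response peak is commonly computed as the spatial standard deviation across channels at each time point; a minimal sketch on simulated evoked data follows, where array shapes, the time axis and the artifact window are all hypothetical:

    import numpy as np

    def gmfa(evoked):
        # Global mean-field amplitude: spatial standard deviation across
        # channels at each sample (evoked is a channels x times array).
        return evoked.std(axis=0)

    rng = np.random.default_rng(4)
    evoked = rng.normal(size=(64, 400))     # 64 channels, 400 samples (toy data)
    times = np.linspace(-0.1, 0.3, 400)     # seconds relative to the TMS pulse

    g = gmfa(evoked)
    post = times > 0.01                     # skip the immediate pulse artifact
    first_peak_latency = times[post][np.argmax(g[post])]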
Language Therapy

B13 The neural process of becoming a word as revealed by triple-echo fMRI Katherine Gore1, Ajay Halai2, Anna Woollams1, Matt Lambon Ralph2; 1University of Manchester, 2University of Cambridge

Neuroimaging studies of language rehabilitation-induced neural changes in post-stroke aphasia have yielded inconsistent results. These heterogeneous results are difficult to interpret, as there is no ‘typical’ baseline of neural change in response to speech and language therapy (SLT) based learning in normal controls for comparison. This study sought to provide this information by conducting computer-based language training with semantic and phonological cues. Critically, newly learned items were compared to both previously known and unknown/untrained items, providing a continuum of lexicalisation. Participants learned name–picture pairs of previously unknown, low-frequency nouns and item descriptions over three weeks. Training was successful, with a mean gain of 91% after an average of 4.12 hours of training. Imaging data were acquired post-therapy with a triple-echo planar imaging paradigm. There were four blocked conditions: previously known items (100% accuracy pre-training), newly trained items (100% accuracy post-training), unknown and untrained items (0% accuracy pre- and post-training), and a baseline of phase-scrambled images. Items were specific per participant, and participants responded out loud to all trials. Images were preprocessed, including ME-ICA (Kundu et al., 2017) in FSL and AFNI, and analysed using the general linear model in SPM. In an a priori region-of-interest (ROI) analysis, spherical ROIs with an 8-mm radius were defined from peak coordinates in a meta-analysis of fMRI episodic memory tasks (bilateral hippocampi) and semantic/language tasks (left inferior frontal gyrus (IFG), angular gyrus (AG), and bilateral anterior temporal lobes (ATLs)). For the known>unknown contrast, there was a significant positive correlation between in-scanner normalised median reaction times (RT) for known items and activation in the left IFG, AG and bilateral ATLs. There was no significant correlation for the hippocampi. Conversely, for the trained>unknown contrast, the results were reversed: there was a significant negative correlation between trained-item RT and activation in the IFG, AG and ATLs, but a significant positive correlation between trained-item RT and hippocampal activation. All correlations were significantly different between conditions within ROI. All significant differences are reported at p < .05. Consistent with the complementary learning systems (CLS; McClelland, McNaughton, & O’Reilly, 1995) model, these results indicate that 1) for established vocabulary (known items), less neural effort is required for more quickly accessible lexical items, but with no CLS-type component, 2) retrieval of newly learned items is initially supported by regions associated with episodic memory, and 3) consolidation of learning (and the resultant decreased RT) leads to learned-word retrieval being supported by regions associated with language/semantic memory, and less so by the episodic memory regions. When these results are compared with the same paradigm in people with aphasia, we can explore to what extent therapy-induced changes parallel those seen in novel vocabulary acquisition in the healthy brain.
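The spherical-ROI step lends itself to a compact illustration. The sketch below builds a boolean sphere mask around a peak voxel coordinate; the image shape, coordinates and radius are hypothetical, and in practice an MNI peak would first be converted to voxel indices through the image affine:

    import numpy as np

    def sphere_roi(shape, center_vox, radius_vox):
        # Boolean mask of a spherical ROI around a peak voxel coordinate.
        grid = np.indices(shape)
        dist2 = sum((g - c) ** 2 for g, c in zip(grid, center_vox))
        return dist2 <= radius_vox ** 2

    # An 8-mm radius at 2-mm isotropic resolution corresponds to 4 voxels.
    mask = sphere_roi((91, 109, 91), center_vox=(30, 40, 50), radius_vox=4)

    # The ROI summary per contrast would then be, e.g., contrast_map[mask].mean()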
These results have implications for optimising future word-finding therapies and improving treatment outcomes for patients with aphasia.

Development

B14 The influence of the prenatal linguistic environment on newborn cerebral language processing: an fNIRS study Laura Caron-Desrochers1,2,3, Phetsamone Vanassing1,2, Julie Tremblay1,2, Sarah Kraimeche1,2,3, Ioana Medeleine Constantin1,2,3, Kassandra Roger1,2,3, Sarah Provost1,2,3, Catherine Taillefer2,3, Isabelle Boucoiran2,3, Anne Gallagher1,2,3; 1Neurodevelopmental Optimal Imaging Laboratory (LIONLAB), 2Sainte-Justine University Hospital Center, 3University of Montreal

From birth, the newborn brain reacts differently to familiar sounds, such as the mother’s voice or the native language. For instance, hearing the native language triggers a left-hemisphere-dominant neuronal response in temporal and frontal regions that differs from the right-dominant response to a foreign language. While this suggests that brain development can already be modulated by the fetus’s surrounding environment, current knowledge is mainly based on postnatal correlational studies. The present study aimed to investigate the impact of repeated prenatal exposure to a foreign language on brain activation patterns at birth. METHODS: The preliminary sample included 31 healthy pregnant women recruited during their last trimester of pregnancy. They were randomly assigned to either the control group (n=11, no prenatal exposure) or one of two experimental groups with prenatal exposure to a foreign language, either German (n=10) or Hebrew (n=10). Between 35 weeks of gestation and birth, fetuses in the experimental groups were exposed daily to a children’s story. It was repeated twice in both their native language (French) and a foreign language (German or Hebrew) using headphones placed on the mother’s abdomen. Thus, they heard each version 52 ± 14 times on average. In the first days after birth (on average 26 ± 11 hours after birth), the newborns underwent brain imaging using functional near-infrared spectroscopy (fNIRS) while listening to the very same story. The story was presented in the three languages (French, German and Hebrew). fNIRS is a non-invasive technique that allows cortical activity to be measured indirectly, based on neurovascular coupling and hemoglobin concentration changes. The newborns’ fNIRS data were preprocessed offline, including normalization, segmentation, artifact correction, and estimation of concentration changes in oxy- and deoxyhemoglobin based on the modified Beer-Lambert law. The signal was averaged across participants, groups and language conditions. RESULTS: Results from the preliminary sample revealed a significant increase in hemodynamic concentration in bilateral temporal lobes for all conditions (p<.05). More specifically, the native language elicited a left-hemisphere-dominant activation, while both foreign languages elicited a right-dominant activation. We then performed comparisons between brain responses to familiar (prenatally exposed) and unfamiliar (unexposed) foreign languages in the experimental groups. Interestingly, hemispheric dominance patterns differed across the two foreign languages when taking into account the newborns’ familiarity. The cerebral response to the familiar language showed significant dominance in the left posterior temporal region, whereas the unfamiliar language elicited a right-dominant activation (p=.043).
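The modified Beer-Lambert step in the preprocessing converts optical-density changes at two wavelengths into oxy- and deoxyhemoglobin concentration changes by solving a small linear system. The sketch below uses made-up extinction coefficients and probe parameters purely for illustration; real analyses take them from published spectra and the montage geometry:

    import numpy as np

    # Rows: two wavelengths (e.g., ~760 nm, ~850 nm); columns: [HbO, HbR]
    # extinction coefficients. These values are placeholders, not real constants.
    eps = np.array([[1.4, 3.8],
                    [2.5, 1.8]])
    d, dpf = 3.0, 5.0          # source-detector distance (cm), differential pathlength factor

    delta_od = np.array([0.012, 0.020])   # measured optical-density changes per wavelength

    # Modified Beer-Lambert law: delta_OD = (eps @ delta_conc) * d * dpf,
    # so the concentration changes are the solution of a 2x2 system.
    delta_hbo, delta_hbr = np.linalg.solve(eps * d * dpf, delta_od)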
CONCLUSION: Our results suggest that newborns as young as one day old process their native syllable-timed language differently from various stress-timed foreign languages. The brain shows left-dominant processing of the native language, in contrast to both foreign languages. Moreover, our results imply that repeated prenatal exposure to new speech stimuli during the last month of pregnancy modulates the brain’s functional organization in a way that resembles native-language processing. This provides preliminary evidence of prenatal experience-dependent auditory and linguistic learning.

B15 How early language development of international adoptees stands the test of time Gunnar Norrman1; 1Stockholm University

Language experience during early childhood shapes linguistic behavior in fundamental ways, but the nature of this interaction and its long-term consequences still elude researchers in the field. Individuals who were adopted in early childhood and never exposed to their native language again offer a unique opportunity to study the long-term effects of early language exposure. Here, we investigated the neural processing of phonological contrasts in adults who were adopted from China to Sweden before the age of 48 months (mean age of adoption 18 +/- 11 months) and who had completely lost any ability to produce or comprehend Chinese. Although data from international adoptees have been used as evidence for neural resetting and re-specialization for the language of adoption (Pallier et al., 2003), recent studies indicate that early experience may have more long-lasting effects under changing circumstances than previously thought (Pierce et al., 2014, 2015). However, although it seems critical, no previous study has examined the processing of phonological contrasts characteristic of the native and the adopted language in the same individuals, and no previous study has addressed this question benefiting from the high temporal resolution of event-related brain potentials. We tested the perception of phonological contrasts unique to either Chinese or Swedish, and compared the adoptees with Chinese and Swedish native speakers with no previous experience of the other language. Stimuli consisted of the combination of a Chinese lexical tone contrast (high-flat and high-rising) and a Swedish vowel contrast (/ʉ/ and /y/). Neural responses were elicited during four blocks of a double-deviant oddball task, where each of the four stimuli was presented once in each condition (standard, tone deviant, vowel deviant). Responses were then averaged across conditions, and the standard condition was subtracted from each deviant condition to obtain the event-related difference wave reflecting the cortical response to the unique phonological contrast. Results show significant mismatch negativity (MMN) responses for both conditions (Tone, Vowel), as well as a significant interaction between condition and group (adoptees, Chinese natives, Swedish natives) (F(2,59) = 6.42, p = 0.003, η² = 0.137). Further analyses reveal significant interactions between group and condition for the Swedish and Chinese natives (F(1,41) = 12.29, p = 0.001, η² = 0.190) and for the Swedish natives and adoptees (F(1,39) = 5.26, p = 0.027, η² = 0.137), but not for the Chinese natives and the adoptees (F(1,38) = 0.62, p = 0.433, η² = 0.01).
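The average-and-subtract computation behind the difference wave described above is simple enough to sketch directly; the single-trial arrays, sampling assumptions and MMN window below are all illustrative:

    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical single-trial epochs at one electrode: trials x time samples
    standard_trials = rng.normal(size=(400, 300))
    deviant_trials = rng.normal(size=(25, 300))

    # Difference wave: deviant average minus standard average isolates the
    # cortical response to the phonological change (e.g., the MMN)
    difference_wave = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)

    # MMN amplitude as the mean over a latency window, e.g. samples 150-250
    mmn_amplitude = difference_wave[150:250].mean()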
These interactions indicate that while Chinese natives and adoptees differed from Swedish natives in their relative sensitivity to the Chinese and Swedish contrasts, they did not differ from each other. These results show that despite having lost their native language, Chinese, and despite being dominant speakers of their second language, Swedish, adoptees retain a pattern of early cortical responses to speech similar to that of their native Chinese peers, while differing from native speakers of Swedish. Thus, early language experience leaves long-term traces in the neural specialization for speech that persist into adulthood even in the absence of continued exposure to the language of origin.

B16 Congenital blindness causes functional connectivity plasticity of the right inferior parietal lobe in number processing Runhua Guo1, Suting Feng1, Mingyang Li1, Linjun Zhang2, Ke Wang1, Zaizhu Han1; 1Beijing Normal University, 2Beijing Language and Culture University

Number processing relies heavily on the inferior parietal lobe (IPL) in healthy people. In blind individuals, this processing has also been found to depend on connections between the IPL and primary sensory cortices (i.e., the occipital lobe) (Kanjlia et al., 2016). It remains unclear whether congenital blindness also increases the connectivity of other high-level cerebral regions in addition to the primary perception cortices. To address this issue, we recruited 22 congenitally blind adults (CB) and 21 sighted controls (SC). We collected their resting-state functional magnetic resonance imaging (rs-fMRI) data and their performance on number (e.g., numerical associative matching), language (e.g., word associative matching, lexical decision) and primary perception tasks (e.g., primary figure discrimination). The CBs were behaviorally tested in the auditory and tactile input modalities, whereas the SCs were tested in the visual and auditory modalities. We first constructed the functional networks of each subject group separately from the rs-fMRI data. We then compared between the two groups the degree value of each voxel in the network (i.e., the sum of the functional connection strengths of that voxel with all other voxels in the whole brain). We observed three regions whose degree values were higher in the blind than in the sighted: the bilateral IPLs and the left superior parietal lobe. More importantly, the activation intensity of the right IPL (rIPL) (measured by the region’s amplitude of spontaneous low-frequency fluctuations, ALFF) was significantly correlated with number-processing performance in the blind in both the auditory and tactile modalities, and in the sighted in the visual but not the auditory modality. We then compared between the two groups the connectivity strength of each functional connection linking the rIPL with the rest of the network. Blindness led to stronger connectivity of the rIPL with two primary perception areas (the right lingual gyrus and the right middle occipital gyrus) and three high-level processing areas (the bilateral middle temporal poles and the left middle cingulate cortex). Of these regions, the ALFF values of the right lingual gyrus and the bilateral middle temporal poles were significantly correlated with number-processing performance. This demonstrates that blindness results in enhanced connection strength of the rIPL with both primary perception and high-level processing regions for number processing.
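The voxel-wise degree measure used in this analysis, each voxel's summed connection strength with the rest of the brain, can be sketched on toy data; the array sizes, the use of plain correlation and the restriction to positive weights are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(6)
    ts = rng.normal(size=(200, 500))       # time points x voxels (toy rs-fMRI data)

    # Functional network: correlations between all voxel time courses
    conn = np.corrcoef(ts, rowvar=False)   # voxels x voxels
    np.fill_diagonal(conn, 0)

    # Degree of each voxel: sum of its (here, positive) connection strengths
    degree = np.clip(conn, 0, None).sum(axis=1)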
These findings refine our understanding of how the absence of visual experience modulates the cortical network for number processing, in which the rIPL, as a key node, strengthens its connectivity with both primary and high-level cortices.

B17 Newborn infants show predictive inference of syllables in word-like items Emma Suppanen1, István Winkler2, Teija Kujala1, Sari Ylinen1,3; 1Cognitive Brain Research Unit, Institute of Psychology and Logopedics, University of Helsinki, 2Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, 3CICERO Learning, Faculty of Educational Sciences, University of Helsinki

The brain aims to predict future sensory events to facilitate adaptive behavior. A recent study by Ylinen et al. (2017, Developmental Science) showed that such predictive inference is linked with word recognition and learning in 12- and 24-month-old children. Negative-polarity auditory event-related potentials (ERPs) were elicited when the preceding word context predicted familiar word endings, whereas word-expectancy violations generated prediction error (PE) responses of positive polarity. PE strength correlated with vocabulary scores at 12 months. Here we aimed to investigate whether the facilitative effect of predictive inference on learning from auditory input can be observed already in newborn infants. We exposed newborn infants (N=75, mean age 9.3 days) to the bisyllabic pseudowords AB and CD (p=0.5 for each) during ERP measurement. We then used an oddball paradigm to probe prediction and learning effects. The standard stimulus was the familiarized pseudoword AB (p=0.8), where A was expected to create predictions of B if learned. Occasional deviant pseudowords were CD, CB, AD and AX (p=0.05 for each). We expected correct predictions to elicit suppressed responses, and prediction errors to elicit larger responses. We repeated the measurement when the infants were 12 months old (N=65, mean age 369 days). The deviant pseudoword CB was left out of the stimuli presented to the 12-month-olds in order to shorten the measurement (p=0.79 for the standard, and p=0.07 for each deviant word-like item). In newborns, AD and AX, which violated the predictions, elicited significant PE responses of positive polarity. In contrast, the familiarized CD elicited a negative response, resembling the word familiarity effect observed at 12 months by Ylinen et al. (2017). The same pattern was repeated in the 12-month-olds, with faster responses to AD than AX. The findings suggest that newborns can learn to recognize potential words. Importantly, their brain automatically creates predictions about word endings after hearing a familiarized word beginning. Predictive inference may thus facilitate even the earliest language development.

B18 Aging patterns of Japanese auditory semantic processing: an fMRI study Hengshuang Liu1, Makoto Miyakoshi2, Toshiharu Nakai3, SH Annabel Chen4; 1Guangdong University of Foreign Studies, 2University of California, San Diego, 3National Center for Geriatrics and Gerontology, 4Nanyang Technological University

Introduction: The comprehension of auditory semantics could be viewed as the ultimate goal of verbal communication. Its efficacy is relatively sustained over the lifespan, contradicting the negative stereotype that older adults always encounter cognitive decline. Methods: The current study adopted the Japanese language to help pinpoint a generic pattern of auditory semantic aging.
Twenty-two younger and 21 older Japanese participants performed an auditory semantic-tone task in a 3 Tesla magnetic resonance imaging (MRI) scanner. Results: Results showed that (1) the spared accuracy and slowed responses of older adults were accompanied by intensified long-range inter-hemispheric connectivity and largely unchanged activation and laterality; (2) higher accuracy among the older cohort manifested as a more economical neural pattern, with stronger functional connectivity and decreased brain activation; (3) neural dedifferentiation from semantic-specific to domain-general networks was the aging pattern most saliently seen in spoken Japanese comprehension, resembling past findings on visually presented alphabetic languages. This implies that neural dedifferentiation is powerful and irresistible in aging, such that it may occur irrespective of language type (alphabetic vs. character) and modality (visual vs. auditory). With age, neural compensation appeared to develop in the connectivity system, given the stronger functional connectivity seen in the better-performing older adults. The HAROLD model was weakly supported by the connectivity data but not by the laterality results, as reduced hemispheric asymmetry was inferable from older adults’ intensified cross-hemisphere connectivity but not from their increased regional lateralization. Summary: In short, auditory semantic function, as a preserved cognitive ability, seemed to experience greater age-related changes in interregional connectivity than in activation or laterality, underpinned by a relatively intact activation pattern alongside a rewired connectivity network in aging. In addition, the connectivity reorganization was largely predictable from existing neurocognitive aging models such as neural dedifferentiation, compensation, and the HAROLD model. Functional connectivity evidently revealed some unique neural mechanisms not observable through activation analysis, underscoring the need to examine functional connectivity data in relation to brain activity within the networks. Taken together, the present study advances our knowledge of the aging effect on auditory semantic processing by examining the relationships among behavioral performance, neural activation, activity laterality and functional connectivity of the cortical network.

Disorders: Developmental

B19 Aberrant Pre-Stimulus Alpha-Band Phase-Locking Predicts Decreased Auditory MMN in Developmental Dyslexia Lars Meyer1, Gesa Schaadt2,3; 1Research Group “Language Cycles”, Max Planck Institute for Human Cognitive and Brain Sciences, 2Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 3Clinic of Cognitive Neurology, Medical Faculty, University of Leipzig

Developmental dyslexia (DD) has repeatedly been shown to be associated with reduced phonological skills, evident, for instance, from a reduction in the amplitude of the auditory mismatch negativity (MMN) in the event-related brain potential. Yet it has been suggested that phonological deficits in DD are in fact secondary to an underlying attention deficit. Under this view, dyslexics cannot disengage their attention quickly enough from one stimulus to attend to a subsequent stimulus, producing a reduced MMN response as an epiphenomenon.
We investigated here whether dyslexics’ reduced MMN responses are preceded by markers of aberrant attention in the electroencephalogram (EEG), analyzing the EEG of 28 dyslexic schoolchildren (mean age = 9.69 years; SD = 0.50 years) and 25 control schoolchildren (mean age = 9.85 years; SD = 0.56 years). Participants were presented with an audio-visual oddball paradigm involving visually presented mouth movements forming the syllable /pa/, while either hearing the congruently produced syllable /pa/ as a standard or the mismatching syllable /ga/ as a deviant stimulus, and vice versa in the second experimental block. In addition to the auditory MMN as a measure of phonological processing, we assessed pre-stimulus alpha-band inter-trial phase coherence (ITPC) as a measure of automatic attention; ITPC was averaged across both standard and deviant trials, hence any potential ITPC group difference could not have been a result of MMN group differences. In line with our hypothesis, dyslexic children showed a highly significant pre-stimulus ITPC increase relative to control children. ITPC was a strong predictor of MMN amplitude, such that aberrantly high ITPC predicted an aberrantly reduced MMN amplitude. Moreover, the ITPC × MMN interaction predicted reading abilities, such that poor readers showed both high ITPC and a reduced MMN, the reverse being true of good readers. Increased pre-stimulus alpha-band phase-locking may thus be an overlooked EEG marker of reduced auditory attention switching in DD, consistent with the hypothesis that phonological deficits in DD are in fact secondary to an underlying automatic attention-shifting deficit.

B20 Neural and perceptual phoneme discrimination in an acoustically variable context is compromised in dyslexia Paula Virtala1, Sanna Talola1, Eino Partanen1, Teija Kujala1; 1Cognitive Brain Research Unit, University of Helsinki

Natural speech includes acoustic variation between and within speakers. Still, healthy adults and even newborns process and categorise speech sounds effortlessly. The case may be different in individuals suffering from the reading deficit developmental dyslexia, which is known to degrade auditory and phonological processing. The present study investigated neural and perceptual processing of phonemes embedded in an acoustically variable or constant context in developmental dyslexia. Dyslexic (n=18) and typically reading (n=20) adults heard acoustically distinct /æ/ and /i/ phonemes belonging to their native language, Finnish. Their electroencephalogram (EEG) was recorded to study the change-related mismatch negativity (MMN) and P3a responses to the phoneme changes. Response amplitudes were compared between groups, between constant and variable auditory contexts, and between ignore and attentive listening conditions. The participants also reacted to phoneme changes by button presses in a separate condition, in order to compare the groups on perceptual discrimination accuracy (hit ratio) and speed (reaction time). The MMN amplitude was diminished in dyslexic participants in the variable but not the constant auditory context. The hit ratio was also smaller in dyslexics than in typical readers in the variable context. According to our results, even very distinct native-language phonemes are challenging for dyslexics to discriminate when the context resembles the natural variability of speech.
This poor tolerance of acoustic variation in speech could underlie dyslexics’ difficulties in forming strong neural representations of native-language phoneme categories during development. Furthermore, future studies should take into account that simple, repetitive stimuli may be insensitive to the speech processing deficits in dyslexia.

B21 Neural mechanisms of generalization for language learning in autism spectrum disorder Brian Castelluccio1, Allison Canfield2, Inge-Marie Eigsti3; 1Alpert Medical School of Brown University, 2University of Rochester Medical Center, 3University of Connecticut

Introduction: The structural linguistic abilities of people with autism spectrum disorder (ASD) vary substantially, ranging from the minimally verbal to those who show no impairments on standardized clinical assessments. Behavioral evidence indicates that this latter group demonstrates subtle syntactic deficits that are not necessarily apparent on clinical assessment. Language acquisition is a complex generalization task that requires the extension of learned form-to-meaning mappings to novel stimuli. Generalization itself, the process by which abstracted features of past experiences are extended to new instances, is an area of relative weakness in ASD. The current study aimed to determine whether linguistic generalization is an area of weakness in ASD, to compare the neural resources engaged for linguistic generalization in people with and without ASD, and to determine the degree to which linguistic generalization is a domain-general process. Methods: Seventeen young adults with ASD and 17 well-matched typically developing (TD) peers completed two experiments that tested the ability to abstract a principle and generalize it to new stimuli while undergoing functional magnetic resonance imaging. Experiment 1 tapped linguistic generalization, and Experiment 2 used a nonlinguistic visuospatial generalization task for comparison. Each experiment involved a manipulation of generalization distance, that is, the degree of difference between the exposure and test stimuli. Results: In the linguistic experiment, participants were more accurate and responded more rapidly on trials involving a small generalization distance compared to a large distance. ASD participants were marginally less accurate than TD participants, p=.06. Task performance in the ASD group only was driven by nonverbal IQ and deductive reasoning. Across groups, the linguistic generalization task engaged a robust network of task-correlated bilateral cortical and subcortical structures. Although neither the main effect of group nor that of generalization distance survived statistical correction in the whole-brain analysis, trends toward group differences in both directions were observed in frontal, temporal, and occipital regions. In the visuospatial experiment, there was no group difference in accuracy. Surprisingly, participants were more accurate on trials involving a larger generalization distance, though, in the ASD group, reaction times were longer for those trials. As with the linguistic task, the visuospatial task elicited broad activation across the cortex and in subcortical regions, but group differences did not survive statistical correction. Trends toward group differences in both directions were observed. Conclusion: The results provide novel insights into the overlap between generalization and language.
They expose clinically relevant differences in generalization in a group with language vulnerability. Clinical implications include the consideration of specific supports for generalization in interventions targeted to high-functioning, highly verbal adults with ASD. These findings provide a basis for understanding the neural circuitry supporting generalization, and they highlight the interplay of language and other cognitive faculties.

B22 Lexical factors in confrontation naming among multilinguals with dementia: the effects of frequency, age of acquisition, phonological neighbourhood density, word class, imageability and cognate status Pernille Hansen1, Hanne Gram Simonsen1; 1MultiLing – Center for Multilingualism in Society across the Lifespan, University of Oslo

All words have inherent properties linked to their form, meaning and usage patterns that affect how easily they are activated. The more often we encounter a word, and the more connections it has to other units in our mental lexicon, the more easily it is produced. This paper studies lexical retrieval in seven multilinguals with dementia, investigating the effects of six properties known to influence processing: frequency, subjective age of acquisition, phonological neighbourhood density, word class, imageability and cognate status. Based on previous findings, we may assume that words are easier to retrieve from the mental lexicon the more frequently they are encountered, the earlier they were acquired, and the denser their phonological neighbourhoods, that is, the more words there are that differ from them by one sound only (Luce & Pisoni, 1998). Furthermore, nouns are typically retrieved faster than verbs, and words are retrieved more easily the more imageable they are, that is, the more easily they give rise to a mental image (Paivio et al., 1968). Finally, among multilinguals, similarities in form and meaning across languages also facilitate activation; thus, cognates should be easier to retrieve than non-cognates (Gollan et al., 2007). The participants were seven elderly multilinguals with dementia who had acquired Norwegian as a second language. All seven spoke English, but otherwise their language backgrounds differed: two grew up as English monolinguals, whereas the rest acquired multiple languages (including Japanese, Tamil, Urdu, Finnish and an African creole) from birth or early school years. Data collection was carried out in the participants’ homes, at the university or in a day care center. For each language in active use, the participants completed a picture-based naming task consisting of 31 nouns from the Psycholinguistic Assessments of Language Processing in Aphasia (Kay, Lesser & Coltheart, 1996) and 31 verbs from the Verb and Sentence Test (Bastiaanse et al., 2003). The current paper focuses on the results from English and Norwegian, the only two languages the participants had in common. Lexical properties were partly extracted from two psycholinguistic databases, the MRC Psycholinguistic Database (Coltheart, 1981) and Norwegian Words (Lind et al., 2015), and partly established for the current project. For cognate status, a scalar approach was preferred over a dichotomous distinction, following Friel & Kennison (2001). The effects of these six variables were examined through correlation analyses and regression models, using R.
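Although the study ran its models in R, the structure of such an item-level analysis can be sketched in Python with simulated predictors; the variable names and the logistic link are illustrative assumptions, not the authors' specification:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_items = 62
    freq = rng.normal(size=n_items)        # z-scored log word frequency (simulated)
    aoa = rng.normal(size=n_items)         # z-scored subjective age of acquisition
    correct = rng.integers(0, 2, n_items)  # 1 = item named correctly

    # Logistic regression of naming accuracy on frequency and AoA together;
    # because such predictors typically correlate, only one may remain
    # significant when both enter the model.
    X = sm.add_constant(np.column_stack([freq, aoa]))
    fit = sm.Logit(correct, X).fit(disp=0)
    print(fit.summary())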
Preliminary analyses indicate that while accuracy on the naming tasks correlated significantly with both frequency and age of acquisition, only frequency was a significant predictor in a regression model including both factors. Imageability was a better predictor than word class, accounting for variability both within and across word classes. The cognate effect appeared to be more complex, sometimes aiding the retrieval of the target words, but at other times leading to the activation of words that do not match the picture in the given language. There were individual differences in the participants’ response patterns, and these will be discussed in light of diagnostic information as well as language history and current use.

B23 Differences in the latent structure of language abilities in 8-year-old children with or without a familial risk of dyslexia Soila Kuuluvainen1, Julia Varjola1, Piia Turunen1, Teija Kujala1; 1Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki

Developmental dyslexia is a difficulty in learning to read in children with otherwise typical cognitive development who have received normal reading instruction. It is estimated to be highly hereditary, with 35–50% of children of dyslexic parents also developing dyslexia. These children, because of their family background, are often referred to as children at risk for dyslexia. One of the main theories attributes dyslexia to deficits in phonological processing, in encoding or in manipulation, or both. However, dyslexics also often have problems with rapid alternating naming tasks and verbal working memory, and some also have poorer verbal reasoning compared to their nonverbal reasoning abilities. These tasks, though, share a considerable proportion of variance. First, sufficient language-specific phonological representations lay the foundation for all language skills. Second, the ability to efficiently retrieve, maintain and manipulate verbal material in working memory is essential to any task requiring more than simple repetition of short utterances. Against this background, Ramus et al. (2013), in their exploratory factor analysis, succeeded in separating the verbal skills of 8-12-year-old English-speaking children with specific language impairment (SLI), dyslexia and their typically developing peers into three factors: phonological representations, phonological skills, and non-phonological skills. The latter two have higher cognitive requirements than the first, which only requires short-term maintenance and identification of phonological material. The current study aimed to confirm the results of Ramus et al. (2013) in a sample of 152 Finnish 8-year-old children and to further explore differences in the latent structure of language abilities between children at risk for dyslexia (N=84) and their controls (N=68). The language tests used consisted of NEPSY-II Phonological Processing, Word Lists, and Comprehension of Instructions, NEPSY-I Repetition of Nonwords, Rapid Alternating Naming of colours and objects, WISC-III Numbers Forward and Backward, and the Boston Naming Test. The confirmatory factor analysis revealed that the current data did not fit the factor structure proposed by Ramus et al. (2013). However, exploratory factor analyses showed a good fit of a three-factor structure in both children at risk for dyslexia and their controls.
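The exploratory step of such an analysis can be sketched in Python on simulated placeholder data; the confirmatory models reported in the abstract would normally be fitted with dedicated SEM software, and the varimax rotation here is an illustrative choice:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(8)
    # Hypothetical battery: participants x 8 language test scores (z-scored)
    scores = rng.normal(size=(84, 8))

    # Three-factor exploratory solution with varimax rotation
    fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
    fa.fit(scores)
    loadings = fa.components_.T   # tests x factors: inspect which tests load
                                  # on which latent ability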
The factor structures differed between the two groups, suggesting differences in the latent structure of language abilities. The typically developing children’s factor structure suggested that their phonological abilities are at a level where they do not appear as an independent factor but are instead combined with more difficult linguistic tasks, namely the Comprehension of Instructions. The at-risk children, however, showed a phonological representations/abilities factor, with high loadings from the Phonological Processing and Repetition of Nonwords tasks. In addition, both groups showed a second factor reflecting verbal short-term memory and a third reflecting rapid naming ability. The differences from the results of Ramus et al. (2013) are likely to arise from the different age and language of the tested children, from the current study lacking an SLI group, and from differences in the tests employed.

Language Genetics

B24 Dyslexia risk children with different forms of the ROBO1 gene differ in their cortical representation of new phonological word forms Anni Nora1, Hanna Renvall1, Miia Ronimus2, Heikki Lyytinen3, Juha Kere4, Riitta Salmelin1; 1Department of Neuroscience and Biomedical Engineering, and Aalto Neuroimaging, Aalto University, 2Niilo Mäki Institute, Jyväskylä, Finland, 3Jyväskylä University, Finland, 4Karolinska Institutet, Stockholm

Dyslexia, or reading disorder, is the most common learning disability worldwide, affecting 5–10% of school-age children and ranging from mild to severe. The underlying problems lie in phonological processing and learning. Several dyslexia candidate genes have been identified, with effects on brain structure. However, few studies have examined the effects of these genes on brain function in phonological processing. One of these genes, ROBO1, is an axon guidance gene that controls the development of the corpus callosum and has been linked to the white and grey matter volume of the posterior corpus callosum and parietal cortices. ROBO1 is also implicated in pseudoword repetition performance, auditory response strength, and the interaction of the left and right cortices in auditory processing. It is therefore a good candidate gene for identifying subgroups of dyslexics who might differ in phonological processing and learning, and in the interplay of the left and right auditory cortices that underlies the development of phonological processing. To tap the underlying phonological problems, we utilized pseudoword repetition, which has been shown to be an efficient paradigm for dissociating between dyslexic and non-reading-impaired individuals. Children in the first grade of elementary school with a high risk for dyslexia were identified based on their performance in learning letter–sound correspondences in a version of GraphoGame and invited to further cognitive and reading tests, genetic mapping and brain imaging sessions. The control group consisted of children from the same classrooms. The participants were 28 dyslexia-risk and 20 control children, with equal sex and age distributions in the two groups. Based on identification of the ROBO1 (3p12) gene allele at SNP rs6770755 (GG vs. AA/GA), the dyslexia-risk group was further divided in half into two haplotype subgroups. We measured cortical responses with magnetoencephalography (MEG) while the children performed delayed overt repetition of four-syllable pseudowords.
Half of the word forms were repeated four times during the first session and again on the following day, along with new words. Learning of the repeated pseudowords, without meanings attached, was measured with repetition and recognition accuracy. The dyslexia-risk children improved in the overt repetition but were poorer at explicit recognition of the recurring word forms, despite showing normal-range nonverbal intelligence and associative learning task performance similar to the control group. In the auditory cortices, only the control group showed reduced left temporal activation for recurring pseudowords 400-800 ms after word onset, possibly related to more efficient left-hemispheric phonological processing and better explicit memory for the newly learned word forms. Both groups showed a similar effect in the right temporal areas. In the comparison of the haplotype groups, dyslexics with the risk haplotype (AA/GA) did not show systematic cortical effects. In contrast, the GG-type children showed a right-hemispheric reduction of activation for the recurring word forms. Dyslexia-risk children with this genotype may thus be able to compensate for a deficient left-hemispheric phonological memory with right-hemispheric processes. The results are unique in highlighting the interplay of the left and right auditory cortices in phonological development and dyslexia.

History of the Neurobiology of Language

B25 Brain mapping during awake surgeries: 10 years of studies on the neurobiology of language Sylvie Moritz-Gasser1,2,3; 1CHU Montpellier, Department of Neurosurgery, 2Montpellier University, Department of Speech-Language Therapy, 3Institute of Neuroscience of Montpellier, Inserm U1051

Brain mapping performed during awake surgery for diffuse low-grade glioma allows the neurosurgeon to address the twin challenges of maximizing tumor resection while preserving functional networks, following a well-described procedure (Duffau, 2009). Briefly, a standard battery of language evaluations is administered to the patient by a speech-language therapist in the operating theatre (as well as just before and a few days after the surgery) while the neurosurgeon applies direct electrical stimulation to the exposed cortical and subcortical surface of the brain using a bipolar electrode. When a disorder is elicited reproducibly during direct electrical stimulation (e.g., anomia, paraphasia, a reading disorder), the stimulated site is considered functional and is preserved. All positive stimulation sites are marked with a tag number, and tumor removal is thus achieved according to individual functional boundaries. Beyond its evident clinical relevance, this technique makes it possible to establish very accurate anatomo-functional correlations and thus sheds precious light on the neurobiology of language. We propose to present here the major findings that have emerged from our team’s study of these anatomo-functional correlations over the last ten years, based on more than 500 surgeries. We will address the neurobiology of multimodal semantic processing, lexical retrieval, reading, stuttering and mentalizing, and subsequently explore the neural foundations of the dual-stream model of speech-language processing at the cortical and subcortical levels.
We will conclude with the perspectives these findings open for understanding aphasic clinical presentations and, consequently, for language rehabilitation.

Meaning: Lexical Semantics

B26 The role of the motor system in action-related language comprehension in L1 and L2: an fMRI study Lili Tian1,2,3, Hongjun Chen1, Wei Zhao4, Jianlin Wu5, Qing Zhang5, Ailing De5, Paavo Leppänen2, Fengyu Cong4, Tiina Parviainen2,6; 1School of Foreign Languages, Dalian University of Technology, 2Department of Psychology, University of Jyväskylä, Finland, 3Language and Brain Research Center, Sichuan International Studies University, 4Department of Biomedical Engineering, Dalian University of Technology, 5Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, 6Centre for Interdisciplinary Brain Research, University of Jyväskylä, Finland

How language is coded and decoded in our brain has long been an intriguing topic. For a long time, language was studied from a modular perspective, which claims that language processing is not associated with other cognitive modules, such as the sensory-motor system. However, the framework of embodied cognition has challenged the modular view by suggesting that meaning retrieval involves the motor system. To date, despite extensive research exploring language-motor coupling, the question of the extent to which the motor system is engaged in different linguistic circumstances has rarely been addressed. To explore the graded nature of motor engagement in language processing, the present study used fMRI to investigate the neural activation of motor and language ROIs, and the functional connectivity between them, when processing language at different degrees of abstraction (literal, metaphorical, abstract) in both L1 (the native language) and L2 (the second language). Twenty-nine Chinese–English speakers participated in the experiment, with Chinese as their native language and English as their second language. The study consisted of an L1 experiment and an L2 experiment, each using a one-factor within-subject design. Phrase type was manipulated in the two experiments, comprising literal, metaphorical and abstract conditions. Action-related (hand- or arm-related) verbs were embedded in both literal phrases (抓住皮球, zhuā zhù pí qiú, “catch the ball”) and metaphorical phrases (抓住意思, zhuā zhù yì sī, “catch the meaning”). The same meaning conveyed by the metaphorical phrase is connoted by the abstract one (理解意思, lǐ jiě yì sī, “understand the meaning”). Stimuli in L1 were virtually semantically equivalent to those in L2, with some exceptions due to the non-existence in Chinese of some English metaphorical expressions. Results showed attenuated motor activation from literal to metaphorical to abstract language in both L1 and L2. In addition, contrast analyses between L1 and L2 showed overall greater activation of motor ROIs in L2. By investigating the neural activation and functional connectivity of the motor and language systems, the present study sheds light on the graded engagement of the motor system in language processing, bringing novel insight to our understanding of the neurobiological basis of language processing.
B27 How cognitive abilities modulate the brain network for auditory lexical access in healthy seniors Stefan Heim1,2, Barbara Wellner1,2, Bruno Fimm2, Christiane Jockwitz1,3, Svenja Caspers1,3, Katrin Amunts1,3; 1Institute for Neuroscience and Medicine (INM-1), Research Centre Jülich, Germany, 2RWTH Aachen University, 3Heinrich Heine University Düsseldorf

Word-finding difficulties are a common phenomenon in healthy ageing. The causes seem to be deficits in lexical access rather than deficient representations of words and their meanings. Since ageing persons may also show decline in other cognitive functions, such as executive control or attention, the question is to what extent cognitive factors contribute to deficits in lexical access. The present study investigated the neural mechanisms of auditory lexical access in healthy elderly subjects, with a particular focus on the modulatory effects of general intelligence, selective attention, verbal fluency, and spoken language comprehension. Thirty-three older adults (mean age 62.9 years) performed an auditory lexical decision task during an event-related fMRI session. In addition, an extensive battery of neuropsychological tests was conducted. Performance in cognitive tests of auditory selective attention, verbal fluency, auditory comprehension, and general intelligence served as regressors for the BOLD contrast of auditory lexical access. The cortical network of auditory lexical access comprised the bilateral inferior frontal sulcus and inferior frontal gyrus (in particular the left area 44), the bilateral Heschl’s gyrus (areas Te1.0 and Te1.2), the bilateral superior temporal gyrus (partially overlapping with the right area Te3), the bilateral superior temporal sulcus, the left middle temporal gyrus, the bilateral parietal operculum (overlapping with the right area OP4), parts of the bilateral precentral (areas 4a and 4p) and postcentral gyri (partially overlapping with the left area 3b and the right area 3a), the left inferior parietal lobule (area PFcm), and parts of the bilateral superior frontal gyrus. In addition, subcortical activation occurred in the right anterior cingulate gyrus and the right insula. This cortical network was differentially modulated by the cognitive functions. Positive modulatory effects were found for auditory selective attention, verbal fluency, and general intelligence. In contrast, auditory language comprehension exerted only negative modulation. Among these factors, only the effect of verbal fluency was reduced over a period of 12 months in a reduced sample of 28 participants, with all other effects remaining stable. In conclusion, in correspondence with data from healthy young adults, the general network of auditory lexical access seems to be preserved until at least the early 70s. The present results link cognitive performance to a functional cortical network of speech recognition and thus complement the current literature on structural cortical changes at the interface of language, cognition, and the brain during healthy ageing.

B28 The angular gyrus is activated by episodic retrieval but deactivated by semantic retrieval Gina Humphreys1, Matthew Lambon Ralph1; 1The University of Cambridge

Introduction: The inferior parietal lobe (IPL), particularly the angular gyrus (AG), forms a key component of the default mode network (DMN), i.e., a network that shows task-related deactivation relative to rest.
B28 The angular gyrus is activated by episodic retrieval but deactivated by semantic retrieval
Gina Humphreys1, Matthew Lambon Ralph1; 1The University of Cambridge

Introduction: The inferior parietal lobe (IPL), particularly the angular gyrus (AG), forms a key component of the default mode network (DMN), i.e., a network that shows task-related deactivation relative to rest. A large meta-analysis of fMRI studies implicates overlapping IPL regions in numerous cognitive domains [1]. There are several alternative hypotheses regarding the IPL's underlying cognitive function: a) the IPL is a semantic hub storing multi-modal semantic information [2]; b) the IPL stores/buffers episodic memory [3,4]; c) the IPL, as part of the wider DMN, is involved when attention is internally directed, such as when recalling semantic/episodic memories [5,6]. Hence, when attention is externally directed (e.g., during the performance of most experimental tasks), internally directed thoughts are suspended, leading to deactivation of the DMN. Methods: Twenty-two participants completed an fMRI study in which we manipulated internally vs. externally directed attention, and episodic vs. semantic retrieval. There were four conditions: two involving internally directed attention (semantic retrieval or episodic retrieval) and two involving externally directed visual attention (real-world object decision, or scrambled pattern decision). Additionally, for the episodic retrieval task, participants reported the vividness of their episodic memory on each trial. Results: The AG, as well as the majority of the DMN, showed a clear preference for episodic retrieval over all other tasks: during episodic retrieval the AG showed significantly positive activation relative to rest, whereas during semantic retrieval and real-world object decision the AG was significantly deactivated (the scrambled pattern decision did not differ from rest). Furthermore, AG activation correlated positively with the self-reported vividness of the retrieved episodic memory, whereas for all other conditions activation was negatively correlated with task difficulty (i.e., greater deactivation for greater difficulty). Conclusions: The AG, and indeed the DMN more generally, is actively engaged during episodic retrieval, and the level of engagement relates positively to the vividness of the memory. Contradicting the semantic hypothesis, the semantic retrieval task deactivated the AG, with the level of deactivation related to task difficulty, as shown elsewhere [7]. There was also little evidence to implicate the AG in internally vs. externally directed attention. We theorise that in this study the IPL acts as an online buffer of episodic information that is stored elsewhere in the brain. References: [1] Humphreys & Lambon Ralph (2015) Cereb Cortex 25:3547-3560. [2] Binder et al. (2009) Cereb Cortex 19:2767-2796. [3] Vilberg & Rugg (2008) Neuropsychologia 46:1787-1799. [4] Wagner et al. (2005) Trends Cogn Sci 9:445-453. [5] Andrews-Hanna (2012) Neuroscientist 18:251-270. [6] Buckner et al. (2008) Ann N Y Acad Sci 1124:1-38. [7] Humphreys et al. (2015) PNAS 112:7857-7862.

B29 A cross-method, cross-language comparison of semantic feature norms
Sasa Kivisaari1, Annika Hultén1, Riitta Salmelin1; 1Department of Neuroscience and Biomedical Engineering, Aalto University

Introduction: We perceive the physical world around us as rich with meaning. Models of semantics often assume that these meanings are composed of features such as the taste, feel or function of an object. Presently, multiple approaches are used to quantify this semantic feature space, but it is not known how the different approaches relate to one another.
For this study, we collected a set of Finnish behavioral production norms for 300 (99 abstract + 201 concrete) words using an online questionnaire. We compared these semantic feature norms with an existing behavioral production norm set in English (the Centre for Speech, Language and the Brain (CSLB) concept property norms). In addition, we compared word embeddings derived from large-scale Finnish and English text corpora using the Word2vec algorithm. This allowed us to compare two different acquisition methods (behavioral production norms vs. Word2vec) and two genetically dissimilar languages (Finnish and English), and to evaluate the extent to which they produce similar information. Method: 273 respondents filled in an online questionnaire. Each target word (e.g., apple, Finnish: omena) was presented with 15 open text fields in which respondents filled in the attributes they considered relevant for that item. The open-field responses were first automatically lemmatized using the Omorfi parser. Synonyms or similar words were collapsed into one feature (e.g., small, smallish, little and miniature → small) and the production frequency was normalized by the number of respondents. Only concrete words were examined in this study. The CSLB property norms were extracted from www.csl.psychol.cam.ac.uk/propertynorms. Word embeddings were based on a 6-billion-token internet-derived text corpus for English and a 1.5-billion-token internet corpus for Finnish (lemmatized). In both cases, the semantic space was built using a Word2vec skip-gram model with a maximum context of 5 + 5 words (5 words before and after the word of interest). The norm sets were compared using second-order correlations of dissimilarity matrices based on the cosine distance between feature vectors. Results: All norm sets demonstrated a clear and comparable taxonomic category structure, in that words from the same semantic category clustered together. The norm sets also correlated significantly with one another. The highest correlation was between the two Word2vec-based word embeddings in Finnish and English (Spearman rho = 0.56, p < 0.001). The second highest correlation was between the Finnish Word2vec model and the CSLB production norms (Spearman rho = 0.36, p < 0.001), on par with the English Word2vec model and the CSLB production norms (Spearman rho = 0.33, p < 0.001). The Spearman rho between the Finnish production norms and the English and Finnish word embeddings was 0.25 (p < 0.001) and 0.23 (p < 0.001), respectively. Conclusions: Semantic features extracted using different methods provide comparable information. The highest similarity was observed between the two word embeddings based on different languages, which suggests that the norm collection method may matter more than cultural variation in semantics. Differences between production norms and word embeddings may in part reflect the fact that the latter are more heavily influenced by associative relationships.
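The second-order comparison reported in B29 can be sketched as follows: build a cosine-distance dissimilarity matrix per feature space over the same word set, then Spearman-correlate the condensed matrices. The vector dimensionalities and the random placeholder data are assumptions for illustration only, not the study's actual norms or embeddings.

```python
# Minimal sketch of a second-order (RDM-level) norm comparison.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_words = 201                            # concrete words, as in the study
norms = rng.random((n_words, 500))       # production-norm feature vectors (assumed)
embeddings = rng.random((n_words, 300))  # word2vec vectors (assumed)

# pdist returns the condensed upper triangle; rows must index the same words
rdm_norms = pdist(norms, metric="cosine")
rdm_w2v = pdist(embeddings, metric="cosine")

rho, p = spearmanr(rdm_norms, rdm_w2v)
print(f"second-order Spearman rho = {rho:.2f} (p = {p:.3g})")
```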
B30 The robustness of prediction effects based on article-elicited negativity in spoken language comprehension depends on the communicative intentions of listeners and speakers
Angèle Brunellière1, Laurence Delrue2; 1University of Lille, CNRS, UMR 9193 - Sciences Cognitives et Sciences Affectives, 2University of Lille, CNRS, UMR 8163 - Savoirs Textes Langage

Natural speech flow is very fast in everyday communication, yet listeners seem to understand the message conveyed by speakers effortlessly and efficiently. Neuronal mechanisms underlying top-down predictions at different linguistic levels (phonological, lexical, syntactic, semantic) about the upcoming input (Pickering & Garrod, 2007) may account for this paradox. Although the rapidity of natural speech in everyday communication could cause listeners to generate top-down predictions, the role of predictive mechanisms in spoken-language comprehension is not clearly established (Huettig & Mani, 2016). The goal of the present study was to explore how robust prediction effects based on article-elicited negativity are in spoken language comprehension, and whether their robustness depends on the communicative intentions of listeners and speakers. We conducted two event-related potential (ERP) experiments in which French-speaking participants listened to semantically constraining sentences predicting a target word that was not presented. In both experiments, we focused on a negativity time-locked to an article whose gender either agreed or disagreed with the gender of the expected, yet not presented, word. In a passive listening task, the amplitude of the negativity elicited by the recognition of the article was not affected by its gender agreement with the expected noun. In contrast, when listeners were asked to judge the speaker's communicative intention, prediction effects were found on the negativity, with a stronger amplitude for the prediction-inconsistent article. However, this pattern was only observed when the speaker's communicative intention was strong. It therefore appears that prediction effects based on article-elicited negativity in spoken language comprehension are flexible and depend on the communicative intentions of listeners and speakers.

B31 Say What You Mean: Contrasting BOLD responses of the ventral anterior temporal lobe during visual recognition of iconic and arbitrary word forms
Richard J. Binney1, Baihan Lu2, Mairead Healy1, Gabriella Vigliocco2; 1School of Psychology, Bangor University, 2Division of Psychology and Language Sciences, University College London

It is a traditionally held view that the relationship between a word's form (i.e., its phonology and orthography) and its meaning (i.e., semantics) is arbitrary. Indeed, there is evidence from both the classical and contemporary literatures to suggest that arbitrariness creates learning advantages and greater flexibility in language use, including the acquisition of larger and more abstract vocabularies. Conversely, there is a growing body of evidence for an important function of iconicity in language, that is, of a direct mapping from word form onto aspects of its meaning. This includes a role in early language learning, where direct mappings facilitate a child's appreciation of the relationship between words and their sensorimotor experience. Further, there is evidence that iconicity can afford greater resilience of word knowledge to neurological damage. The present study contrasted, for the first time, neural responses to iconic and arbitrary words during a visual lexical decision task in English. The aim was to determine whether there may be alternative neural mechanisms for achieving recognition of these two types of words. We hypothesized that this would be realized in differential engagement of semantic and phonological cortical systems.
The study focused on a familiar form of iconicity in English, onomatopoeia, in which the phonology of a word imitates the sound to which it refers (e.g., quack, boom and sizzle). Iconic and arbitrary words were identified via ratings performed by an independent participant sample, and then matched a priori on a number of other psycholinguistic variables including, but not limited to, lexical frequency, imageability, age of acquisition, and number of phonemes. An additional set of words that were similar to the iconic words by way of strong associations with auditory events (e.g., choir, radio, drums) was also used to ascertain whether any differences could be explained by semantic properties of the words rather than iconicity per se. Thirty participants at Bangor University underwent fMRI, acquired in a rapid event-related design using a dual-echo EPI sequence optimized to reduce magnetic-susceptibility-associated signal loss in key parts of the ventral language pathway (e.g., the visual word form area and the basal language area). No differences were observed in behavioural responses (as measured by accuracy and decision times) to the word categories. A univariate analysis of the neuroimaging data revealed that during recognition of iconic words, as contrasted with each of the two sets of non-iconic words, there was decreased activation in the ventral anterior temporal lobe, an area associated with the integration of multimodal semantic features. This suggests that recognition of these iconic words can be achieved without accessing such higher-level semantic information. Increased activation for iconic words was also observed in the precuneus, an area associated with mental imagery. We discuss the results of an ongoing multivariate pattern-based analysis, and the implications of these novel data for models of word recognition, comprehension and language development.

B32 Online build-up of cortical representations for novel morphemes and words: multi-modal evidence of ultra-rapid functional and structural plasticity in language acquisition
Yury Shtyrov1,2; 1Aarhus University, 2St. Petersburg University

Humans learn new language elements rapidly, an essential skill which ensures the high efficiency of our communication system. However, the neural bases of this important function are poorly understood. How exactly are words, morphemes and their combinations acquired by our brain, and can we track this process neurophysiologically? To this end, we ran a series of multimodal neuroimaging studies using electrophysiological, structural and neurostimulation techniques to address the online acquisition process. We employed electro- and magnetoencephalography (EEG, MEG) to register ERP/ERF indices of (1) long-term memory trace activation, visible in the form of enhanced responses to familiar (i.e., successfully acquired) morphemes, and (2) connections between morphemic representations, manifest as priming effects leading to response reduction. Using this approach, we addressed the brain mechanisms of online learning of new language representations for monomorphemic meaningless wordforms, new meaningful words, as well as novel affixes in the native or second language.
Furthermore, to probe the causal role of the neurophysiological processes at hand, we used brain stimulation methods – transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) – to interfere with cortical function during learning, both at the level of the core language systems (Broca's and Wernicke's areas) and in the domain of referential semantics (modality-specific areas). Finally, we used novel structural MRI methods (diffusion kurtosis imaging, DKI) to understand the microstructural changes underpinning language learning in the brain. We find that the temporal and inferior-frontal areas of the neocortex exhibit complex changes in activation patterns in the process of acquiring novel linguistic representations, manifested both as an increase in activation for novel representations and as a decrease in response amplitude for morphologically primed elements. These effects are (1) manifest almost immediately, within mere minutes of exposure, (2) to a substantial degree independent of attention, reflecting the largely automatic nature of the initial stages of word acquisition, (3) most efficient for the native language, (4) present both immediately and after overnight consolidation, and (5) operative in both visual and auditory modalities. By using neurostimulation techniques to interfere with modality-specific systems during learning, we could show their role in the acquisition of semantics (e.g., motor cortex for action-related words). Modulating activity in the core language cortices with tDCS changes the balance between the acquisition of concrete and abstract language and may help promote more successful learning. Finally, our experiments with the microimaging DKI methodology suggest that rapid plastic changes in a range of cortical areas (including the ATL and hippocampus) take place within even a short naturalistic language-learning session. These experiments show that our brain is capable of rapidly forming new cortical circuits online, as it is exposed to novel linguistic patterns in the input. They demonstrate that a comprehensive combination of neuroimaging tools addressing function, structure, dynamics and causal relationships may provide the best window on the dynamic processes of neural memory-trace build-up and activation.

Meaning: Combinatorial Semantics

B33 Sociality effects along linguistic hierarchies
Nan Lin1,2, Xiaohong Yang1,2, Meimei Zhang1, Guangyao Zhang1,2, Yangwen Xu3,4, Huichao Yang5; 1CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, 2Department of Psychology, University of Chinese Academy of Sciences, Beijing, 3Center for Mind/Brain Sciences (CIMeC), University of Trento, 4International School for Advanced Studies (SISSA), Trieste, 5National Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University

Introduction: A growing number of studies have found that linguistic stimuli conveying social semantics evoke activation in a specific brain network. Although this network has been associated with the representation of the social concepts underlying word meanings, it may, according to the language-processing literature, overlap considerably with the brain areas supporting sentence and discourse comprehension.
Does the sociality effect in language comprehension arise from the activation of word meanings or from higher-level semantic processes required in sentence and discourse comprehension? We investigated this question by manipulating the sociality (high/low) and linguistic hierarchy (word/sentence/discourse) of stimuli in this fMRI study. Methods: Thirty-six healthy undergraduate and graduate students participated in the fMRI experiment, which employed a silent reading task and a block design, with three runs of 10 minutes and 26 seconds each. The experiment contained six conditions: high- and low-sociality discourse, high- and low-sociality sentence, and high- and low-sociality word conditions. We carefully matched several semantic and linguistic variables between the high- and low-sociality stimuli. The stimuli of the sentence and word conditions were constructed by pseudo-randomizing the sentences and words used in the discourse conditions. The data analysis was mainly conducted at the region-of-interest (ROI) level. The ROIs were defined based on the result of an ALE meta-analysis of the activity peaks from nine previous fMRI studies that had reported a sociality effect in language comprehension. The meta-analysis revealed six significant clusters, located at the bilateral anterior temporal lobes (ATLs), the temporal-parietal junctions (TPJs), the posterior cingulate (PC)/precuneus, and the left superior frontal gyrus (SFG). Results: The discourse- and sentence-level sociality effects (high-sociality > low-sociality) were significant in all ROIs, while the word-level sociality effect was significant only in the bilateral ATLs and the left SFG. All ROIs except the PC/precuneus showed a significant sentential effect (sentence/discourse > word). All ROIs showed significant interactions between linguistic hierarchy and sociality, with the sociality effect significantly enhanced along the word-sentence and word-discourse hierarchies. Conclusion: Our results indicate that the sociality effect in language comprehension reflects not only the activation of the social concepts underlying word meanings but also higher levels of social semantic processing required in sentence and discourse comprehension, such as social semantic composition, social working memory, or theory of mind.

B34 Neural Correlates of Semantic Role Processing in Naturalistic Language Comprehension: an fMRI study
Shulin Zhang1, Jixing Li2, Wen-Ming Luh3, John Hale1; 1University of Georgia, 2New York University Abu Dhabi, 3National Institute on Aging

Introduction: A central goal of human language comprehension is determining who did what to whom in the event described by a sentence. The assignment of semantic roles during comprehension has been extensively studied in both clinical and experimental settings. It has often been suggested that the Agent role is easier to assign than the Patient role (see Bornkessel-Schlesewsky & Schlesewsky, 2016, section 30.4.3, among others). The current study examines this suggestion using neuroimaging during ecologically natural comprehension of a literary audiobook. Methods: 35 Chinese participants (30 female, mean age = 21.3) listened to a Chinese audiobook version of "The Little Prince" for about 100 minutes.
BOLD functional scans were acquired using a multi-echo planar imaging (ME-EPI) sequence with online reconstruction (TR = 2000 ms; TEs = 12.8, 27.5, 43 ms; FA = 77°; matrix size = 72×72; FOV = 240.0×240.0 mm; 2× image acceleration; 33 axial slices; voxel size = 3.75×3.75×3.8 mm). Preprocessing was carried out with AFNI and ME-ICA (Kundu et al., 2012). The text of the book has been annotated with Abstract Meaning Representations by Li et al. (2016). We considered 1876 words that were marked as Agents and 1401 marked as Patients. Aligning these role-bearing words in time with their utterance in the audiobook, we defined binary regressors in a GLM analysis that compares brain activity across roles. Following Brennan et al. (2012), we also included a phrase-structural node-count regressor to model syntactic processing difficulty. This co-regressor helps rule out a syntactic interpretation of any effects we might observe in response to semantic roles. Other control regressors included the word rate at the offset of each word, the unigram frequency of each word based on the Google Books ngrams, and the RMS intensity of the audio every 10 ms. A whole-brain GLM analysis was carried out with SPM12, with predictors convolved using SPM's canonical HRF. Results: Agent words gave rise to four clusters of activation: the bilateral STGs, the right precuneus and the right frontal lobe. Patient words were correlated with a left-lateralized network including the left MTG, ITG, MFG, IFG, SFG and parietal lobe, and the right cerebellum. The Patient-Agent contrast yielded bilateral activation in the cingulate gyrus, precuneus and insula, and right-hemisphere activation in the AG, MFG and SFG. Conclusion: We find that the neural signature for Agents is more focused within the superior temporal lobe than that for Patients. This asymmetry in activation could offer a neural basis for the suggestion that Agenthood is easier to recognize than Patienthood. References and a figure are available at https://bit.ly/2GvkMmk
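A minimal sketch of the regressor construction described in B34 is given below: binary impulses at Agent- and Patient-word onsets are convolved with an SPM-style double-gamma HRF and entered into an ordinary least-squares GLM. The onset times, placeholder data, and the simplified HRF are assumptions; the published analysis used SPM12's canonical HRF and whole-brain estimation.

```python
# Sketch: word-onset regressors convolved with a double-gamma HRF (assumed data).
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 300
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # SPM-style double gamma
hrf /= hrf.sum()

def regressor(onsets_sec: np.ndarray) -> np.ndarray:
    """Binary impulse train at word onsets, sampled at TR and HRF-convolved."""
    impulses = np.zeros(n_scans)
    impulses[(onsets_sec / TR).astype(int)] = 1.0
    return np.convolve(impulses, hrf)[:n_scans]

rng = np.random.default_rng(3)
agent_onsets = np.sort(rng.uniform(0, n_scans * TR - 32, 80))
patient_onsets = np.sort(rng.uniform(0, n_scans * TR - 32, 60))

X = np.column_stack([np.ones(n_scans),
                     regressor(agent_onsets),
                     regressor(patient_onsets)])
y = rng.standard_normal(n_scans)                  # placeholder voxel time series
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Patient - Agent contrast estimate:", beta[2] - beta[1])
```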
B35 Dividing Attention During Language Comprehension Affects ERP Components of Contextual Integration and Revision
Ryan Hubbard1, Kara Federmeier1; 1University of Illinois, Urbana-Champaign

Prior behavioral and electrophysiological work has demonstrated that when individuals process language input, such as reading a sentence, they can use the contextual information provided by the sentence to facilitate the processing of upcoming information, likely through predictive processing. For instance, electrophysiological research has shown that the N400 of the ERP is reduced for expected words in strongly constraining sentences compared to weakly constraining sentences, while unexpected words in either context produce large N400s (Federmeier et al., 2007); additionally, disconfirmed predictions elicit a later frontal positivity, which likely reflects revision of the previously built contextual representation in order to integrate the unexpected but plausible information. However, these anticipatory comprehension mechanisms may not always be engaged; for instance, when the presentation rate is speeded (Wlotko & Federmeier, 2015), the typically observed ERP responses suggestive of prediction are diminished. These results suggest that anticipatory mechanisms are flexibly engaged and require top-down allocation of resources, and thus may rely on attentional processes. Here, we tested this hypothesis with a divided-attention language comprehension paradigm. We first developed a novel dot-tracking task that allowed participants to keep their eyes focused centrally, so they could still read words, but required constant attention in order to track moving dots. In a separate group of 8 participants, we verified the attentional demands of the task in a dual-task oddball paradigm designed to elicit P300 ERP responses (Isreal et al., 1980): the dot-tracking task reliably reduced P300 amplitude and increased oddball reaction times. We then tested a new group of participants in an experiment in which they first read strongly or weakly constraining sentences with expected or unexpected endings (Federmeier et al., 2007), and then read a new set of sentences while continuously tracking dot movements. Accuracy on comprehension questions about the sentences was reduced in the dual-task condition compared to the standard reading condition. Additionally, a frontal positivity for prediction violations was observed during standard reading, but was abolished when attention was divided. Finally, divided attention also affected contextual integration, as indexed in the P2/N400 time window: the difference in ERP amplitude between expected and unexpected endings in strong-constraint sentences was larger in standard reading than when attention was divided, and this difference was driven by amplitude differences for expected endings. These results demonstrate that multiple aspects of language comprehension draw on attentional resources, and that when attention is divided, comprehension may be negatively impacted and contextual revision may be completely disengaged. These results also potentially give insight into which mechanisms are affected in populations with language comprehension difficulties, such as older adults (Wlotko, Lee, & Federmeier, 2010); namely, deficits in attentional processing may lead to deficits in language processing.

B36 Semantic Integration During Language Comprehension in Natural Contexts
Mengxing Liu1,2, Xiaojuan Wang1, Xiangyang Zhang1, Rui Zhang1, Pedro M. Paz-Alonso2, Jianfeng Yang1; 1Shaanxi Normal University, 2Basque Center on Cognition, Brain and Language

Successful comprehension of language relies on effectively integrating incoming information with our semantic knowledge. Cognitive neuroscience has recently investigated the neural mechanisms underlying semantic integration, highlighting the role of left-lateralized regions along the language network, including the inferior frontal gyrus (IFG), the posterior superior and middle temporal gyri (STG/MTG), the angular gyrus (AG) and the anterior temporal lobe (ATL). Most of the previous neuroimaging evidence on the role of these regions comes from studies contrasting congruent versus incongruent sentences, or ambiguous versus unambiguous sentences. In the present fMRI study, we sought to better understand the mechanisms supporting semantic integration using naturalistic materials and an inter-subject correlation analytical approach, without explicitly modeling the integration process. To this end, 30 right-handed adult participants read or listened to natural stories silently while undergoing MRI scanning. More specifically, participants performed a comprehension task involving reading or listening to five stories (i.e., fairy tales) in Chinese, with one full story presented per run.
The main experimental conditions were sentences from a different, unrelated story that were inserted at six points of the ongoing main story: coherent sentences (CS), i.e., sentences from another story presented in their natural order; unconnected sentences (US), i.e., sentences from another story presented in an order altered relative to their natural order; and scrambled word lists (SW), i.e., sentences from another story in which the word order was scrambled within each sentence. We thereby manipulated the unit at which semantic integration was possible: the paragraph level for CS, the sentence level for US, and the single-word level for SW. To identify the involvement of brain regions without modeling the specific stimulus time course, we measured across individuals the correlation of the BOLD signal evoked by the same stimuli in each condition, within and between the reading and listening tasks. First, we mapped the inter-subject correlation for each condition (CS/US/SW) within both the reading and listening tasks, by correlating the fMRI time courses across subjects. The results indicated that BOLD-response reliability (i.e., the observed correlations) varied across associative regions as a function of semantic integration level. Second, to further identify areas that reliably respond to integrating the same semantic content, we mapped the areas that shared a similar response between the listening and reading tasks. A clear gradient along the temporal lobe was observed as the semantic integration level varied, with more extensive and stronger correlations observed from anterior to posterior MTG/STG and AG as a function of the units that could be semantically integrated. Moreover, the posterior IFG showed BOLD-response reliability between modalities only for the CS condition. In sum, using naturalistic materials and an inter-subject correlation analytical approach, the present study highlights the stronger involvement of perisylvian areas as a function of the units that can be semantically integrated across language comprehension modalities.
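The inter-subject correlation logic used in B36 can be illustrated with a leave-one-out scheme: each subject's regional time course is correlated with the average time course of all remaining subjects. The data shapes below are assumed placeholders; the actual analysis was computed region-wise for each condition and modality.

```python
# Minimal sketch of leave-one-out inter-subject correlation (ISC).
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_timepoints = 30, 400
data = rng.standard_normal((n_subjects, n_timepoints))  # one region, one condition

def leave_one_out_isc(ts: np.ndarray) -> np.ndarray:
    """Correlation of each subject with the mean of the remaining subjects."""
    iscs = []
    for s in range(ts.shape[0]):
        others = np.delete(ts, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(ts[s], others)[0, 1])
    return np.array(iscs)

print("mean ISC:", leave_one_out_isc(data).mean())
```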
Meaning: Discourse and Pragmatics

B37 Decoding linguistic communications in human premotor cortex
Wenshuo Chang1, Lihui Wang2, Xiaolin Zhou1; 1Peking University, 2Otto-von-Guericke University, Magdeburg, Germany

Language use has been proposed to be a type of communicative action (i.e., speech acts; Austin, 1955; Searle, 1969) that serves the function of achieving the speaker's goal, rather than merely making a statement that can be proved true or false. In daily conversations, for example, we ask and answer questions, and make requests and promises. Here we test whether linguistic communications are represented as actions by decoding speech acts in human premotor cortex, a brain area known for action preparation and simulation. In an fMRI experiment, participants were presented with scripts describing a daily-life scenario, each of which consisted of an event concerning two interlocutors (a speaker and an addressee) and a critical sentence said by the speaker. This critical sentence was either a promise, a request, or an answer to a confirmative question, the latter included as the control for the promise (Answer 1) or the request (Answer 2). After fMRI scanning, participants rated the degree to which the event would be favored by the speaker (speaker-favor) and by the addressee (addressee-favor). Multivariate pattern analysis (MVPA) revealed that the speech acts (Promise vs. Answer 1, Request vs. Answer 1, Promise vs. Request) could be discriminated not only by the voxel patterns in the left inferior frontal gyrus (LIFG) and left middle temporal gyrus (LMTG), brain areas known for semantic processing, but also by voxel patterns in dorsal and ventral premotor cortex. The discrimination, in terms of the classification accuracies of the MVPA classifier, was specific to speech acts (arising only in pairwise classifications involving Promise or Request) and was not observed for the confirmative answers (Answer 1 vs. Answer 2). Moreover, discriminative performance was better in premotor cortex than in LIFG and LMTG. Further representational similarity analysis (RSA) showed that, for voxels in LIFG and LMTG, the pattern similarity between Promise and Answer 1 could be accounted for only by ratings on addressee-favor but not by ratings on speaker-favor, whereas the pattern similarity between Request and Answer 2 could be accounted for only by ratings on speaker-favor but not by ratings on addressee-favor. The same pattern of results was observed for voxels in premotor cortex, except that the pattern similarity between Request and Answer 2 could be accounted for by both ratings on speaker-favor and ratings on addressee-favor. The RSA results suggested that the voxel patterns in these areas can be predicted by the degree to which the action is in favor of the interlocutors, and premotor cortex tended to represent more action-related information than LIFG and LMTG. Taken together, the current study demonstrates the role of premotor cortex in representing and understanding speech acts.
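As a hedged illustration of the pairwise MVPA in B37, the sketch below cross-validates a linear classifier on trial-wise voxel patterns for two speech-act conditions. The trial counts, voxel counts, and random data are assumptions; the authors' actual classifier, cross-validation scheme, and preprocessing are not specified here.

```python
# Minimal sketch of pairwise ROI decoding (assumed data and classifier).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_trials, n_voxels = 80, 300
patterns = rng.standard_normal((n_trials, n_voxels))  # premotor ROI patterns
labels = np.repeat([0, 1], n_trials // 2)             # e.g., Promise vs. Request

acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5)
print(f"mean decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```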
B38 Dissociating activation and integration of discourse referents: Evidence from ERPs and oscillations
Cas Coopmans1,2, Mante Nieuwland1,3; 1Max Planck Institute for Psycholinguistics, 2Centre for Language Studies, Nijmegen, 3Donders Institute for Brain, Cognition and Behaviour

A key challenge in understanding stories and conversations is the comprehension of 'anaphora', words that refer back to previously mentioned words or concepts ('antecedents'). In psycholinguistic theories, anaphor comprehension involves the initial activation of the antecedent from a representation of the discourse, and the subsequent integration of this antecedent into the unfolding representation of the narrated event (e.g., Garrod & Terras, 2000). The neural implementation of these two processes is unknown, but a recent proposal suggests that they draw upon the brain's recognition memory and language networks, respectively, and may be dissociable in patterns of neural oscillatory synchronization (Nieuwland & Martin, 2017). Following up on this proposal, the current EEG study with pre-registered analyses examined whether referent activation and integration can be disentangled through event-related potentials (ERPs) and through activity in the theta and gamma frequency bands. Forty Dutch participants read two-sentence mini-stories containing proper names. We investigated referent activation and integration in a two-by-two factorial design, separately manipulating whether the proper name in the second sentence was repeated or new (ease of activation) and whether it rendered the target sentence coherent or incoherent with the preceding discourse (ease of integration). Analyses of ERPs (N400, Late Positive Component) and oscillations (theta, gamma) were time-locked to the onset of the proper name in the second sentence. We found that repeated names elicited lower N400 and LPC amplitudes than new names, and that they elicited an increase in theta-band (4-7 Hz) synchronization compared to new names, which was largest around 300-500 ms after name onset. Discourse coherence was not accompanied by modulations of either the N400 or the LPC, but discourse-coherent proper names elicited an increase in gamma-band (60-80 Hz) synchronization around 500-1000 ms compared to discourse-incoherent ones. Beamformer source analysis localized this gamma-band effect to the left frontal cortex. In line with the predictions of Nieuwland and Martin (2017), our findings show that activation and integration of discourse referents can indeed be dissociated in event-related EEG activity. Processes related to referent activation are reflected in N400, LPC and theta activity, while integration processes modulate gamma activity. We suggest that the N400 effect reflects facilitated activation of the repeated name. In line with the recognition memory literature (Jacobs et al., 2006; Chen & Caplan, 2017), we take the increase in theta-band synchronization to reflect a successful match between anaphor and antecedent. The gamma-band effect, which originated from left frontal areas, likely reflects integration of this antecedent into a meaningful discourse representation (Bastiaansen & Hagoort, 2006, 2015). In all, our results show that analyses of ERPs and oscillations provide complementary views on the processes underlying anaphor comprehension. In addition, they further emphasize the importance of memory processes, possibly mediated by theta oscillations, in online language comprehension.
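The oscillatory measures in B38 rest on time-frequency decomposition of epoched EEG. The sketch below shows one common approach, convolution with a complex Morlet wavelet at a theta-band frequency, yielding trial-averaged power and inter-trial phase synchronization. The sampling rate, epoch structure, and simulated data are assumptions, not the authors' pipeline.

```python
# Sketch: single-frequency Morlet decomposition of epoched EEG (assumed data).
import numpy as np

rng = np.random.default_rng(6)
sfreq, n_trials, n_times = 500, 40, 1000           # 2-s epochs at 500 Hz (assumed)
epochs = rng.standard_normal((n_trials, n_times))  # one channel

def morlet(freq: float, sfreq: float, n_cycles: float = 5.0) -> np.ndarray:
    """Complex Morlet wavelet at a given frequency."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / sfreq)
    wav = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    return wav / np.sqrt(np.sum(np.abs(wav) ** 2))

w = morlet(5.0, sfreq)                             # 5 Hz, inside the theta band
analytic = np.array([np.convolve(tr, w, mode="same") for tr in epochs])
power = (np.abs(analytic) ** 2).mean(axis=0)       # total (evoked+induced) power
itc = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))  # inter-trial coherence
print("peak theta inter-trial coherence:", itc.max())
```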
B39 Modulating expectations with connectives in discourse comprehension
Linmin Zhang1,2, Linyang He3,4, Xing Tian1,2; 1NYU Shanghai, 2NYU-ECNU Institute of Brain and Cognitive Science, 3Fudan University, 4Toyota Technological Institute at Chicago

Introduction: Natural language connectives (e.g., but, even so, so, because) enable humans to go beyond describing single events and to reason about the relations among them. Most typically, connectives address the conditional probability of, or causal dependence among, events, modulating our expectations in processing discourse-level meanings. For example, in "Mary failed in the exam, (even so,) she celebrated", the conditional probability P(Mary celebrated | Mary failed in the exam) should be low according to our world knowledge. However, the use of the concessive connective "even so" here reverses our expectation, improving the coherence of the whole discourse and leading to an attenuated N400 during the processing of the target word (here "celebrated") (Xiang & Kuperberg, 2015). Moreover, the use of "even so" also enhances predictive processing, leading to larger N400 attenuation between conditions with this connective (e.g., Mary failed/aced in the exam, even so, she celebrated) than between those without it (e.g., Mary failed/aced in the exam; she celebrated) (Xiang & Kuperberg, 2015). How do other types of connectives modulate our expectations and affect predictive processing? Here we adopted a 3-by-2 design, with the connective (no connective (null) vs. "but" vs. "so") and the conditional probability between the two events in a discourse (high vs. low) as the two factors (sample stimuli: Mary lived far away, (null / but / so) her commuting time was (long / short)). We aimed to test whether (i) both "but" and "so" enhance predictive processing, such that larger N400 attenuation is observed for conditions with (vs. without) these connectives, and (ii) compared to conditions without connectives, only "but", but not "so", reverses expectations and leads to larger N400 effects for conditions with high P(event 2 | event 1). Methods: We collected 32-channel EEG data from 16 participants (native speakers of Chinese). Each participant read 180 trials in Chinese, with word-by-word presentation, and answered comprehension questions after one third of the trials. The manipulation of the conditional probability between the two events fell on the last word of the discourse. N400 effects were examined between 300 and 500 ms after the onset of the last word in each trial. Results: Within this time window, we found that compared to conditions without connectives, conditions with "but" or "so" elicited larger EEG differences between discourses with high vs. low P(event 2 | event 1), suggesting that the use of connectives enhanced predictive processing. With "but", discourses with high P(event 2 | event 1) elicited larger EEG responses than those with low P(event 2 | event 1), suggesting that "but" reversed participants' expectations in discourse processing. Intriguingly, "so" showed the same pattern as "but". Presumably, these results suggest that different types of connectives share the same mechanism for enhancing general predictive processing, but differ in how they modulate specific expectations in context. Conclusion: In discourse comprehension, the use of connectives robustly enhances predictive processing. Some connectives reverse expectations, while the modulating effects of others do not depend solely on conditional probabilities.

B40 Event-related potentials to visual processing of incongruities in negated and affirmative sentences
Sara Farshchi1, Annika Andersson2, Joost van de Weijer1, Carita Paradis1; 1Lund University, 2Linnaeus University

Although negation has been the focus of many studies, the way it is processed in human communication still eludes us. Previous studies of negation using event-related potentials (ERPs) have reported inconclusive results as to whether or not negation poses difficulties for processing. While some have found that negation is initially ignored and that incongruities in negated sentences do not modulate the N400 effect (Fischler et al., 1983; Lüdtke et al., 2008), others have reported that the N400 is modulated in incongruent negated sentences just as in affirmative sentences (Nieuwland & Kuperberg, 2008). This research, however, has been limited to negators such as "not" and "no", while prefixally negated forms with "un" remain largely unexplored despite their frequency of use (Tottie, 1980). To address this gap, we pose two questions: 1) Is there a difficulty in the processing of negation as measured by ERPs? and 2) Are prefixally negated forms processed similarly to sententially negated forms or to affirmative forms?
To answer these questions, the processing of affirmative (e.g., authorized), prefixally negated (e.g., unauthorized) and sententially negated (e.g., not authorized) adjectives was investigated in sentences such as "The details in the new Obama biography were correct/wrong because the book was authorized/unauthorized/not authorized by the White House". A member of an opposite pair (correct/wrong) in the first part of the sentence was combined with a negated or affirmative adjective (the critical word) in the second part, creating a semantically congruent or incongruent context. The amplitudes of the N400 (300-500 ms) and the P600 (500-700 ms) to the critical words, as well as accuracy rates and response times to the sentences, were recorded and analyzed using mixed-effects modelling. The analyses of accuracy and response times suggested that sentential negation was more difficult to process than prefixal negation and affirmative forms. The ERP analyses were consistent with these results in that the most effortless processing was observed for affirmatives, where incongruities elicited a larger N400, indicating successful detection of the incongruities. Prefixal negation was more difficult than affirmative forms, resulting in an N400 combined with a P600 that indicated a re-evaluation of the sentence. Sentential negation appeared to be the most difficult form to process, as the ERP effects of congruency were restricted to a P600, suggesting that incongruities in these sentences were processed differently from the other two conditions. In line with previous research, we conclude that sentential negation is more difficult to process than affirmatives and prefixal negation. We present two novel findings: 1) different mechanisms are involved in processing incongruities in negated sentences (P600) than in affirmative sentences (N400); 2) participants judge prefixally negated sentences as quickly and accurately as affirmative sentences, but the neurocognitive processing patterns for prefixally negated forms differ, suggesting more demanding processing for these forms than for affirmative forms.
Syntax

B41 Cortical tracking of Mandarin structures
Chia-Wen Lo1, Tzu-Yun Tung1, Alan Ke1, Jonathan R. Brennan1; 1University of Michigan

Neural responses can be entrained to linguistic structures (Ding et al., 2016, 2017). Ding and colleagues (2016) observed cortical tracking of linguistic structure at evoked frequencies corresponding to the phrasal (2 Hz) and sentential (1 Hz) levels of Mandarin in continuous isochronous speech. Non-Mandarin speakers show only syllable-level effects when processing the same stimuli. However, Frank & Yang (2018) suggest that these results may follow from the tracking of lexical and/or part-of-speech sequence information, not phrasal structure. For example, verbs occur at a frequency of 1 Hz while nouns occur at a 2 Hz rate. We aim to replicate Ding et al.'s results using Mandarin stimuli with EEG, and also to test whether the results reflect lexical-sequence or hierarchical information. N = 31 native speakers of Mandarin Chinese listened to trials consisting of ten 4-syllable sentences. Each sentence consisted of 4 monosyllabic Mandarin words, generated individually using a computer speech program. Each syllable was presented in 250 ms, so each sentence was 1 s in duration. Four experimental conditions were included: (1) normal four-syllable sentences, adopted from Ding et al. (2016) ([[Adj N] [V N]] or [[N N] [V N]]; e.g., English gloss: "Old cattle plow ground"); (2) normal two-syllable phrases extracted from (1) (e.g., "Old cattle tree wood"); (3) semantically mismatched sentences, made by switching words between items in (1) while keeping syntactic position constant (e.g., "Soldier child run grass"); and (4) scrambled syllable sequences, in which phrases were presented with reversed word order (e.g., "Cattle old land plow"). Crucially, condition (4) maintains word-sequence patterns at 2 Hz and 1 Hz, but the words do not form grammatical phrases; if neural tracking reflects sequence information, then oscillations at those bands should be equivalent between condition (4) and the regular sentences in condition (1). The first sentence from each trial was excluded to avoid potential EEG responses to sound onset. Data were manually cleaned of artifacts, filtered from 0.1-25 Hz, and re-referenced offline to the common average. For each condition, we computed evoked power (EP), induced power (IP) and inter-trial phase coherence (ITPC) from 0.5 to 10 Hz in increments of 0.111 Hz. Conditions were compared via one-way ANOVA for each measure at each frequency of interest. The results replicate Ding et al.'s finding: a sentence-level peak was found at 1 Hz in condition (1) only, a phrase-level peak was found at 2 Hz in conditions (1) and (3), and syllable-level peaks were found in all conditions. Consistent with prior work, these patterns were observed for EP and ITPC but not for IP. The current study confirms that oscillatory synchronization can be modulated by hierarchical linguistic structure, not just word sequences. Supporting plots are available at https://www.scribd.com/document/406740993/SNL19-Plots
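The frequency-tagging measures in B41 (EP and ITPC) can be sketched directly from per-trial spectra: evoked power is the power of the across-trial average, and ITPC is the resultant length of the per-trial phase angles at each frequency bin. The trial counts, sampling rate, and data below are assumptions; 9-s epochs are used only because they yield the 0.111 Hz bin spacing mentioned in the abstract.

```python
# Sketch: evoked power and ITPC from per-trial FFTs (assumed data).
import numpy as np

rng = np.random.default_rng(7)
sfreq = 250
n_trials, n_times = 48, int(9 * sfreq)            # 9-s epochs -> 0.111 Hz bins
trials = rng.standard_normal((n_trials, n_times))

freqs = np.fft.rfftfreq(n_times, 1 / sfreq)
spectra = np.fft.rfft(trials, axis=1)

evoked_power = np.abs(spectra.mean(axis=0)) ** 2   # phase-locked activity only
itpc = np.abs(np.exp(1j * np.angle(spectra)).mean(axis=0))

for target in (1.0, 2.0, 4.0):                     # sentence, phrase, syllable rates
    i = np.argmin(np.abs(freqs - target))
    print(f"{target:.1f} Hz: EP={evoked_power[i]:.3f}, ITPC={itpc[i]:.3f}")
```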
B42 Reevaluating the dynamics of auditory sentence processing: an ERP study in French
Lauren A. Fromont1,2, Phaedra Royle1,2, Alexandre Herbay2,3, Clara Misirliyan1,3, Karsten Steinhauer2,3; 1École d'orthophonie et d'audiologie, Université de Montréal, 2Centre for Research on Brain, Language, and Music (CRBLM), Montréal, 3School of Communication Sciences and Disorders, McGill University

Syntactic categories (SC; noun, verb, etc.) are used to build larger linguistic structures for sentence comprehension, and encounters with unexpected categories create processing difficulties. Serial approaches to sentence processing hold that SC identification is "ultra-rapid" (reflected by an ELAN; Friederici, 2012) and can block semantic (i.e., meaning) processing (no N400). However, the EEG studies supporting this model have serious design issues (Steinhauer & Drury, 2012). While a few studies in the visual modality have provided empirical evidence against the serial approach (e.g., Fromont, Steinhauer, & Royle, submitted), no auditory data have clarified this issue. This experiment is the first auditory study to create pure anomaly conditions, using a balanced design in French to probe the mechanisms underlying SC identification and semantic processing. Thirty-four participants listened to French sentences and judged their acceptability while their EEG was recorded. In correct sentences 1-2, the contexts select for a specific SC: the control verb (+pronoun) in 1 selects a verb ('to tackle'), and the transitive verb (+determiner) in 2 selects a noun ('toad'). Since pronouns and determiners are homophonous in French, the context preceding the target is kept constant. We created a 2×2 design that systematically introduced SC anomalies by swapping the two target words, and semantic anomalies by swapping introductory sentences (which contained a prime, here 'hockey' and 'swamp'), thus creating "pure" SC violations and semantic anomalies, as well as combined SC+semantic errors. //1. Jeanne joue au hockey avec son copain. Elle ose le plaquer… 'Jeanne plays hockey with her friend. She dares to-tackle him…'// 2. Jeanne va au marais avec son copain. Elle ôte le crapaud… 'Jeanne goes to the swamp with her friend. She removes the toad…'// ERP data in the N400 time window revealed main effects of semantic anomaly (p < .001) and SC error (p = .02). A three-way interaction with anteriority (p = .02) revealed that while both effects were large at posterior sites (p < .001), semantic anomalies elicited larger frontal negativities (p < .001) than SC errors (p = .03). After 800 ms, we observed a small P600 in response to SC errors only (p = .003), but sustained frontal negativities in response to semantic anomalies (p = .006). Our results confirm findings obtained in a previous visual study (Fromont et al., submitted): no ELAN, but an N400 effect for SC violations. Furthermore, additive N400 effects in both studies show that semantic and SC processing can operate in parallel and may be subserved by distinct processing streams. Later effects in the P600 time window differ from the visual experiment, in which processes linked to both anomalies competed for the same resources; the present auditory study revealed a P600 only in response to SC errors. This difference could be due to participants being worse at categorizing anomalous sentences as unacceptable in auditory tasks (p = .004), consistent with the P600 being a task-related component. The sustained negativity for semantic anomalies, regardless of syntax, could reflect an increase in working-memory load associated with target-word processing difficulties within spoken discourse (Van Berkum et al., 2003).
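B42's statistics rely on mixed-effects modelling of ERP measures. The sketch below fits a linear mixed model with a by-subject random intercept to simulated single-trial amplitudes; the data frame, effect sizes, and the simplified random-effects structure are assumptions, not the authors' model specification.

```python
# Sketch: mixed-effects model of N400-window amplitudes (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n_subj, n_trials = 34, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "semantic_anomaly": np.tile(np.repeat([0, 1], n_trials // 2), n_subj),
    "category_error": np.tile([0, 1] * (n_trials // 2), n_subj),
})
# Simulated amplitudes: additive effects of the two anomaly types plus noise
df["amplitude"] = (-1.5 * df.semantic_anomaly - 0.6 * df.category_error
                   + rng.standard_normal(len(df)))

model = smf.mixedlm("amplitude ~ semantic_anomaly * category_error",
                    df, groups=df["subject"]).fit()
print(model.summary())
```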
B43 Lists with and without syntax: MEG effects of syntactic composition when local conceptual combination is controlled
Ryan Law1, Liina Pylkkänen1,2,3; 1NYUAD Institute, New York University Abu Dhabi, 2Department of Psychology, New York University, 3Department of Linguistics, New York University

INTRODUCTION: Constructing complex meanings requires the language system to syntactically and semantically combine smaller pieces into larger structures. The neurobiology of syntactic composition remains challenging to study, largely because of its tight link with semantic composition, and the specific contributions of different subregions within the combinatorial language network remain unclear. Some argue that lexical and syntactic representations are not dissociable, at least in hemodynamic responses (Fedorenko et al., 2012). Recent MEG findings implicate the posterior temporal lobe in syntactic composition, as evidenced by cases in which local conceptual combination occurs in both conditions but syntactic merge in only one (Flick & Pylkkänen, 2018). Here we employed the reverse approach to controlling semantics: our stimuli were void of local conceptual combination, but in only one condition did the stimuli participate in a syntactic tree. To achieve this, we measured the MEG activity elicited by (i) lists of nouns that occur inside a sentence, and thus participate in the syntax of that sentence, vs. (ii) the same lists embedded inside longer lists. Crucially, the members of the critical lists do not conceptually compose with each other in either case, allowing for better than usual control of semantics. DESIGN: Stimuli were presented visually via RSVP. We embedded lists of three nouns within sentences or within longer lists. Additionally, we varied the level of semantic association (calculated as Latent Semantic Analysis scores) among the members of the lists, to contrast effects of syntax with effects of semantic association: SENTENCE-HIGHASSOC: The music store sells pianos violins guitars drums and clarinets. LIST-HIGHASSOC: theater graves drums mulch pianos violins guitars crates knuckles cocoas. SENTENCE-LOWASSOC: The eccentric collector hoarded lamps dolls guitars watches and shoes. LIST-LOWASSOC: forks pen toilet rodeo lamps dolls guitars wood symbols straps. Each trial contained a memory probe task to encourage participants to pay attention. RESULTS: Participants took longer to recall words drawn from a list-inside-list than from a list-inside-sentence. Despite recall being more effortful in the list-inside-list conditions, we observed consistent increases in neural activity for the list-inside-sentence conditions in several regions, often with sustained timing. Here we report effects observed on the third member of the list, i.e., "guitars" above. On this item, cluster-based permutation tests on left-hemisphere ROIs showed greater activity at (i) ~360-385 ms in the anterior temporal lobe, (ii) ~400-440 ms in the middle temporal lobe, and (iii) ~35-60 ms in the temporo-parietal junction/angular gyrus. Trending effects emerged at ~50-70 ms in the posterior temporal lobe. Semantic association did not affect behavioral responses and elicited only trends in ROI activity. CONCLUSION: Our results show that, for lexically identical three-member strings, syntactic composition in the absence of local conceptual combination results in activity increases in a distributed network of temporo-parietal regions, including the anterior and middle temporal lobe as well as the temporo-parietal junction/angular gyrus, of which the middle temporal lobe showed the most reliable effect. While an explanation in terms of the global semantics of the sentences cannot be ruled out, explanations in terms of local semantic composition can.
In the Hebrew version of the MIE (Yablonski & Ben-Shachar, 2016), participants perform a lexical decision task on pseudowords that contain either real or invented root morphemes. Participants with enhanced sensitivity to morphological information may slow down or respond erroneously in response to real-root pseudowords. Morphological sensitivity is therefore defined for each participant as the difference in performance on real root vs. invented root pseudowords, divided by their overall performance. Based on our prior findings in English, we hypothesized that morphological sensitivity in Hebrew relies primarily on the ventral reading pathways, bilaterally. Accordingly, we targeted the inferior fronto-occipital fasciculus (IFOF), inferior longitudinal fasciculus (ILF) and uncinate fasciculus (UF). As a control, we further analyzed two dorsal pathways: the long and anterior segments of the arcuate fasciculus. Forty-five adult native Hebrew-speakers (29 females, 2035y) completed the morpheme interference paradigm as part of an extensive behavioral battery, and underwent a diffusion MRI scan (Siemens 3T scanner, 64 diffusion directions at b=1000 s/mm2 and 3 volumes at b=0; isotropic voxel size: 1.7*1.7*1.7mm3). Tracts of interest were identified bilaterally in each participant’s native space, using deterministic tractography and automatic tract segmentation (Yeatman et al., 2012). Fractional anisotropy (FA) and mean diffusivity (MD) profiles were calculated along each tract, and Spearman’s correlations were calculated between these profiles and morphological sensitivity. Our results show significant correlations between morphological sensitivity (accuracy) and FA of the left IFOF, as well as MD of the left ILF (r = -0.376 and r = 0.382, respectively). In addition, morphological sensitivity (RT) correlated with FA of the right UF (r = 0.463). These correlations are significant after correcting for multiple comparisons within a tract by controlling the familywise error at p<0.05, and controlling the false discovery rate (FDR) across tracts at a level of q< 0.05. We followed-up on these effects by calculating partial correlations that controlled for timed measures of word and nonword reading. These partial correlations remained significant, suggesting some level of cognitive specificity. In sum, implicit morphological sensitivity for Hebrew written words is associated with structural properties of the bilateral ventral tracts, in striking similarity to our findings in English. Our results support the view that morphological information contributes to lexical access along the bilateral ventral pathways, across orthographies and morphological systems. B45 Sensitivity to syntactic affixation restrictions in L1 vs. L2 processing as revealed by ERPs Linnaea Stockall1, Phaedra Royle2,5, Christina Manouilidou3, Sam Steddy1, Carsten Steinhauer4,5; 1Queen Mary University of London, 2Université de Montréal, 3Univerza v Ljubljani, 4McGill University, 5Centre for Research on Brain, Language and Music Recent MEG experiments in English and Greek identify spatially and temporally distinct evoked processing responses associated with grammatical category vs. verb argument structure violations in complex word- The Society for the Neurobiology of Language Poster Session B formation (Neophytou et al, 2018). In both languages, pseudowords composed of a verbal-stem selecting affix plus a nominal stem (CATegory errors e.g. 
EN: rehat, GK: varelimos ‘barrelable’) evoke increased left posterior temporal activity between 200-300ms post-stimulus onset in a visual lexical decision paradigm. Pseudowords that respect categorial restrictions but violate ARGument structure requirements of the affix (the verb must take a direct object, e.g. EN: relaugh, GK: gelasimos ‘laughable’) evoke increased orbitofrontal activity from 300-500ms p.s.o. Left PTL sensitivity to syntactic computation has been found in a number of studies (see Flick & Pylkkänen, 2018), while OF activity has been associated with semantic ill-formedness and coercion (Fruchter & Marantz, 2015). Thus, the spatial and temporal CAT vs. ARG dissociation is consistent with ‘syntax-first’ models of language processing. We adapt the CAT vs. ARG violation paradigm in a pair of ERP experiments to address the following questions: (a) can we find evidence for early syntactic processing – as manifest through CATegory errors (rehat) – using EEG? We anticipated LANs and late P600s as reflexes of syntactic word-structure error processing; (b) do L2 speakers show the same neural and behavioural responses for early syntactic processing and for word-formation violations in general as native speakers? We compared results from 26 native Greek speakers who had immigrated to the UK as adults within five years of the experiment, in their two languages (L1: Greek, L2: English) when processing well-formed affixed words (e.g. refill), CAT-violations, and ARG-violations. RESULTS: Behavioural: We replicated the pattern of significantly more robust rejection of CAT than ARG violations, in both languages (L1: 94% vs. 78%, L2: 89% vs. 81%). ERPs: In L1, participants exhibited larger N400-like negativities to both violations than to real words, with a slightly earlier onset for the CAT violation. In L2, group data showed a biphasic N400-P600 pattern which differed for CAT versus ARG conditions: the N400 was stronger in the CAT condition, and the P600 was earlier in the ARG condition, possibly because of the smaller N400 preceding it. This pattern also differed depending on participants’ ability to categorise real vs. pseudo-words in their L2; low performers showed mostly reduced P600s to both types of errors while high performers exhibited a substantial negativity followed by a late positivity after 700 ms. SUMMARY: Participants exhibit differing patterns of sensitivity to pseudoword violations based on (1) the error type, (2) their L1 or L2 status and (3) their proficiency in L2 as measured by their behavioural results. L1 ERP patterns are consistent with syntactic violations being parsed more rapidly than semantic violations. L2 ERPs appear to converge on L1 patterns with increased proficiency in these adult L2 learners, and apparently more so for the more salient, earlier CAT word-structure violations. B46 Neither ipsilateral nor contralateral pathway advantage for early compound decomposition: Evidence from anaglyphs Roberto G. de Almeida1, Shirley Dumassais1; 1Concordia University Most studies on the role of morphological analysis in word recognition have suggested that compounds such as FOOTBALL are decomposed into their constituent morphemes FOOT and BALL. In fact, most models of word recognition incorporate either a single morphological decomposition route or allow for morphological decomposition to occur after the word has been initially recognized in full (e.g., Libben & de Almeida, 2002).
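The linear mixed-effects comparison reported in the Results below (full interaction model vs. a random-effects-only null model) can be sketched as follows. A minimal sketch with statsmodels, in which the data file, the column names (rt, pathway, split, subject), and the simplified by-subject random-intercept structure are all our assumptions:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# One row per trial; file and column names are hypothetical.
df = pd.read_csv("anaglyph_lexical_decision.csv")

# Full model with the pathway x split interaction vs. a null model with
# random (by-subject) structure only; ML fits for the likelihood-ratio test.
full = smf.mixedlm("rt ~ pathway * split", df, groups=df["subject"]).fit(reml=False)
null = smf.mixedlm("rt ~ 1", df, groups=df["subject"]).fit(reml=False)

lr = 2 * (full.llf - null.llf)                   # likelihood-ratio statistic
dof = len(full.fe_params) - len(null.fe_params)  # fixed-effect df difference
print("chi2(%d) = %.2f, p = %.4f" % (dof, lr, stats.chi2.sf(lr, dof)))
```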
We investigated the earliest moments of visual word recognition, when the retinal information (by hypothesis, split vertically along the fovea) is divided into two visual pathways, projecting the right visual field into the left hemisphere (LH), and the left visual field into the right hemisphere (RH) (e.g., Brysbaert et al., 2012). While contralateral (nasal) projections are taken to be stronger than ipsilateral (temporal) ones for word recognition (Obregon & Shillcock, 2012), compounds provide an opportunity to understand how these early retinal projections interact with posterior representations involved in word identification (e.g., the visual word form area; Cohen et al., 2000). Thus, binocular fixation on the morpheme boundary of a bimorphemic compound should yield the following projections: the compound head (e.g., BALL), carrying most syntactic and semantic information, is projected to the language-dominant LH, while the modifier (e.g., FOOT) is projected to the RH, allowing for a “head-start” in constituent access and semantic composition. We manipulated compound recognition using a novel masked presentation technique involving words colored in blue and red (or in black, as baseline) while subjects wore red/blue anaglyph glasses. We aimed to understand (a) the potential differences between ipsilateral and contralateral projections, (b) the sensitivity of these projections to different types of linguistic stimuli, and (c) how early visual word forms can be detected as potential morphemes of natural language via legal and illegal splits of the stimuli. Forty-six native English speakers participated in a masked lexical decision task. The stimuli were 168 words of the following types: (1) compounds (e.g., BLACKBIRD, FOOTBALL), (2) pseudo-compounds (e.g., CARPET, HAMMOCK) and (3) monomorphemic words (VACCINE, JINGLE). The stimuli were presented in three color combinations: all black, red/blue, and blue/red. For the red/blue and blue/red conditions, the colors were split either at the morpheme boundary (legal split) or at a character to the left or to the right of the split (illegal split). Trials consisted of a fixation (1000 ms), a forward mask (500 ms), and the word/nonword (40 ms). Data were analyzed with a linear mixed-effects (LME) model. The model with a fixed effect of the interaction between pathway and split type was compared to a null model consisting of only random predictors and was found to provide a significantly better fit to the data, χ2(5) = 33.24, p < 0.001. Multiple comparisons with Tukey’s correction revealed a legality effect (constituent morphemes faster than non-constituents; p = 0.0152), but no differences between ipsilateral and contralateral projections. While ipsilateral and contralateral pathways do not differ, our results suggest that at its earliest processing stages the visual word recognition system is sensitive to morphological properties of words. Multilingualism B47 Positive L1 to L2 transfer allows for multifaceted processing in the initial stages of adult second language tone and grammar acquisition Sabine Gosselke Berthelsen1, Merle Horne1, Yury Shtyrov2,3, Mikael Roll1; 1Lund University, 2Aarhus University, 3University of St. Petersburg How learners process linguistic input in the early stages of second language acquisition (SLA) has recently attracted a great deal of research interest.
In this context, it has been shown that, in particular, semantic properties of novel words can be processed like real language within minutes of contact or learning. We set out to investigate whether this type of fast acquisition also applies to tonal and grammatical properties of novel words, and what role native language (L1) transfer plays in this context. To this effect, we taught a set of pseudowords with morphosyntactic tone to two learner groups: a group without L1 tone (Non-Tonal L1s) and a group with grammar-related L1 tone (Tonal L1s). They were exposed to the novel words in a sound-picture matching paradigm for two and a half hours on each of two consecutive days, while neural responses were recorded with continuous EEG and behavioural responses (response times to random question trials probing sound-picture validity) were collected. We found that both groups performed identically on the behavioural level but differed significantly from each other neurophysiologically. Behaviourally, the biggest improvements in both accuracy and response times took place about 40 minutes into the acquisition. Neurophysiologically, the morphosyntactic tonal words elicited typical ERP components for grammar processing in the Tonal L1s within just 20 minutes: early left anterior negativity (ELAN), left anterior negativity (LAN) and P600. In addition, the Tonal L1s exhibited very early automatic processing (~50ms) when singling out the relevant words session-initially. In the Non-Tonal L1s, the novel words with morphosyntactic tone elicited neither a word recognition component nor an ELAN – although we could detect ELAN tendencies in those Non-Tonal L1s who scored best on a non-linguistic pitch detection test. Furthermore, the Non-Tonal L1s as a whole produced a LAN only after an overnight consolidation period – while LAN tendencies were visible for the best learners in this group already on day 1. The P600, finally, was indistinguishable from that in the Tonal L1 group. We interpret these results as showing that it is possible to rapidly acquire not just semantic properties of novel words but also complex tonal and morphosyntactic content. However, it seems as though a native-language-based neural network for the processing of morphosyntax-related tone allows for almost instantaneous native-language-like processing of the morphosyntactic content in the tonal learners. The non-tonal learners rely on less automatic processing but can still successfully process and acquire the complex tonal words. The learner groups’ different processing strategies can be explained against the backdrop of dual-route processing: combinatorial, rule-based processing in the tonal learners and whole-word processing in the non-tonal learners. B48 Are neural effects of composition sensitive to code-switching? Sarah F. Phillips1, Liina Pylkkänen1,2; 1New York University, 2New York University Abu Dhabi INTRODUCTION Recent evidence has suggested that language switching incurs little cognitive effort outside artificial switching paradigms. If switching is easy, then how do our brains compute structure across language switches? We addressed this by studying two-word sentences consisting of a subject and an intransitive verb, both of which varied in the language of presentation (Korean vs. English).
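The ROI analyses described in the Methods below rest on cluster-based permutation tests across subjects. A minimal sketch with MNE-Python, assuming per-subject ROI source time courses saved as a sentence-minus-list contrast array (the file name, array layout, and permutation settings are our assumptions):

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

# X: (n_subjects, n_times) array of per-subject ROI time courses,
# expressed as a contrast (e.g., sentence minus verb-list) -- hypothetical file.
X = np.load("latl_sentence_minus_list.npy")

t_obs, clusters, cluster_pv, h0 = permutation_cluster_1samp_test(
    X, n_permutations=10000, tail=0, seed=0
)
for cl, p in zip(clusters, cluster_pv):
    if p < 0.05:
        # For 1D data each cluster is a tuple holding one slice over time.
        print("reliable cluster: samples", cl[0].start, "to", cl[0].stop, "p =", p)
```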
Our aim was to replicate a well-documented effect of composition in English, localized in the left anterior temporal lobe (LATL) at ~250ms, and then to test whether this effect is observed even when the subject and verb are presented in different languages. Observing this effect in a code-switching context would suggest that the brain has combinatory mechanisms that can operate across languages, a finding intuitively consistent with the bilingual experience. METHODS Korean-English bilinguals (n=19) read two different types of stimuli, sentences and verb-lists, during an MEG measurement. Two words that combine to form a sentence (e.g. “icicles melt”) were presented successively to elicit a composition effect, whereas two verbs that do not compose (e.g. “jump melt”) were presented as controls. Orthography also alternated for the Korean items since they can be expressed in either the Roman alphabet or Hangul. Doing so allowed us to additionally test for interactions between script, script change and composition. Source-localized MEG data were analyzed with cluster-based permutation tests in four ROIs: the ATLs bilaterally, to identify composition effects, and the anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (dlPFC), as motivated by prior research on effects of language switching in the executive control network. RESULTS A cluster-based permutation ANOVA across the full design showed a reliable main effect of composition in the left (but not right) temporal pole (BA 38) at 242-292 ms. This effect did not significantly interact with any of our other factors, but pairwise comparisons revealed that it was not uniformly driven by each of the composition vs. list contrasts. Specifically, the LATL combinatory increase was only clearly observed for the stimuli in which the second element was in English. That is, English verbs showed larger LATL activity whether following an English or a Korean subject. When the verb was in Korean, this pattern was not observed. We speculate that this may be due to the possibility of serial verb constructions (e.g. “try speak” to mean “try to speak”) in Korean, i.e., the verb-list stimuli may have included combinatory processing in Korean but not in English. Executive control regions showed no increases for language switching, suggesting that the switches were not effortful. Instead, dlPFC showed an increase for verb-lists over sentences, likely reflecting more effortful processing for the less natural list stimuli. CONCLUSION We provide evidence that (i) combinatory operations reflected by the LATL can operate across languages and that (ii) executive control regions do not engage for language switches between a subject and its verb. For future studies on combinatory processing across code-switches, our study also highlights the challenge of designing parallel non-combinatory control conditions within different syntactic systems. B49 The grammaticalization of different relations during adult second language (L2) acquisition Nicoletta Biondo1,2,3, Simona Mancini1; 1Basque Center on Cognition, Brain and Language (BCBL), 2Marica De Vincenzi Foundation, 3University of Trento L2 grammaticalization represents the process of instantiating the L2 rules into the language system of the speaker.
A rule error (compared to its correct counterpart) can elicit different event-related potentials (ERPs) at different stages of grammaticalization (Steinhauer et al., 2009; McLaughlin et al., 2010): a subject-verb violation may elicit a (delayed) N400 (rather than a LAN-P600) at low L2 proficiency, and a smaller/delayed P600 at intermediate/high L2 proficiency. Whether this grammaticalization pattern extends to different linguistic rules is unknown. Interestingly, studies show that different relations such as subject-verb and adverb-verb agreement are processed differently in native language processing (Biondo, 2017; Biondo et al., 2018), are differently impaired in agrammatic aphasia (Wenzlaff & Clahsen, 2004; Clahsen & Ali, 2009), and are differently acquired in child language acquisition (Belletti & Guasti, 2015; Weist, 2014). In this cross-sectional ERP study, we extend the investigation of subject-verb (number) and adverb-verb (tense) agreement to adult L2 acquisition. Although both relations entail the concord between the verb morphology and another constituent, the constituents and features involved differ in many aspects. The adverb is optional while the subject is a primary/core constituent. Tense needs a reference to the discourse (i.e. the speech time) to be interpreted while Number does not. Based on these differences, we expect the adverb-verb relation to be grammaticalized later than the subject-verb relation. We asked Spanish native speakers (L1) and native English speakers of Spanish with low (L2L) or intermediate/high (L2H) proficiency to judge sentences as in (1-3), presented word by word while (32-channel) EEG was recorded. Here only the past singular conditions are displayed, but future/plural items were counterbalanced across conditions, i.e. 40 past (20sg/20pl) and 40 future (20sg/20pl) items were created for each condition. The relative position of the subject and the adverb with respect to the verb was also counterbalanced, as was the number of correct and incorrect items (80 correct filler items were added). Each participant read 160 correct and 160 incorrect sentences. (1) Control “Ese novelista ayer temprano presentó su nuevo libro” (This novelist yesterday early presented his new book), (2) Number m. “*Esos novelistas ayer temprano presentó su nuevo libro” (These novelists yesterday early presented his new book) (3) Tense m. “*Ese novelista mañana temprano presentó su nuevo libro” (This novelist tomorrow early presented his new book). Visual inspection of the ERPs (time-locked to the verb onset) from the sample tested so far (L1, N=14; L2L, N=8; L2H, N=8) seems to support our hypothesis. As in previous studies, L2L show a delayed N400 response for both violations. Crucially, L2H seem to show a delayed N400-like response for tense violations and a small P600 for number violations. L1 show a P600 with smaller amplitude for tense than for number violations. These data, albeit preliminary, suggest that the two violations are not treated the same way by the parser, both in native processing (P600 modulation) and in L2 processing (N400-like response at high proficiency). The data will be discussed with reference to current models of L2 acquisition, and to more general accounts of adult sentence processing. B50 Prior knowledge predicts early consolidation in learning a novel language Dafna Ben-Zion1, Micha Nevat1, Anat Prior1, Tali Bitan1; 1University of Haifa Language learning occurs in multiple phases.
Whereas some improvement is already evident during training, offline consolidation processes that take place after the end of training play an important role in the learning of linguistic information (Davis & Gaskell, 2009). Studies on individual differences in second language learning showed that linguistic abilities in L1 predict the attained proficiency in L2 (Melby‐Lervåg & Lervåg, 2011), but these studies do not typically differentiate between online and offline processes. The timing of offline consolidation is thought to depend on the type of task (Tamminen et al., Cog. Psych. 2015), with the consolidation of generalizable implicit knowledge, or integration with existing knowledge, suggested to require more time and sleep. The current study aims to investigate individual differences in the timing of consolidation following learning of morphological inflections in a novel language. Eighteen adults learned to make plural inflections in an artificial language, where inflection was based on morpho-phonological regularities. Participants were trained in the evening, and consolidation was measured after two intervals: 12 hrs (one night) and 36 hrs (two nights) post-training. We measured both inflection of trained items, which may rely on item-specific learning, and generalization to untrained items, which requires extraction of morpho-phonological regularities. The results for both trained and untrained items showed two patterns of consolidation: while some participants improved during the first night, others, who deteriorated in performance during the first night, improved in the later consolidation interval. However, there was no correspondence between consolidation patterns of trained and untrained items. Importantly, only early consolidation gains for trained items were correlated with phonological awareness in L1, measured prior to training. Our results suggest that consolidation timing depends on the interaction between task characteristics and individual abilities. Importantly, the results show that prior meta-linguistic knowledge predicts the quality of early consolidation processes. These results are consistent with studies in rodents (Tse et al., Science 2007) and humans (Hennies et al., JNS 2016) showing that prior knowledge accelerates consolidation of newly learnt episodic memories. Because the rate of consolidation processes can determine the accumulation of knowledge across separate exposures to the language, we suggest that these individual differences in prior linguistic knowledge and in the timing of consolidation can explain some of the variability found in the attained level of second language proficiency. B51 Speech production in second language learners of Russian Mara Flynn1, Yanyu Xiong1, Sharlene Newman1; 1Indiana University Bloomington This study investigated fMRI-measured brain activation during reading/speech planning and discourse production in participants’ L1 (English) and L2 (Russian). During the study participants watched short, silent animated videos, after which they read questions about the videos (e.g., “Why does the cat want the man to wake up?”) and provided verbal responses. A total of 4 experimental runs (2 for each language) were presented. Participants were instructed as to which language the questions would be written in and the language they were to produce prior to each run. The questions were designed to require responses longer than a few words, and subjects were instructed to respond in full sentences.
The two phases of interest were planning (which includes reading the question and formulating a response) and production. The results revealed greater activation for L2 than L1 for both the planning/reading and production phases. During the planning/reading phase, increased activation for Russian was observed in regions linked to orthographic and phonological processing as well as the orbitofrontal cortex (OFC). Russian also elicited increased activation in right inferior occipital-temporal regions, which is consistent with more effort required to read the less familiar Russian orthography. Russian production elicited increased activation in the OFC, right inferior occipital-temporal region and medial occipital/parietal cortex. The increased activation in inferior occipital-temporal regions suggests that subjects may visualize Russian words to aid in production. The results obtained also speak to the engagement of cognitive control regions. In previous studies of bilingualism, regions such as the dorsal anterior cingulate, supplementary motor area, or basal ganglia have been found to be involved (Rossi et al., 2018). However, those regions were not observed here. This may be because the current study had the languages blocked and required discourse or sentence production; therefore, the type of control processes required in the current study may differ from those in previous studies. The OFC has long been associated with using information about the task/situation to exert inhibitory control to reduce impulsive responses (Rudebeck & Rich, 2018). A more recent theory is that the OFC represents a cognitive map of the task space which is used to, in the case of speech production, make decisions about how to respond to questions, including which words to use (Schuck et al., 2016). The findings presented demonstrate the need to examine sentence and discourse production in L2 learners. Rossi, Eleonora, Sharlene Newman, Judith F. Kroll, and Michele T. Diaz. “Neural signatures of inhibitory control in bilingual spoken production.” Cortex 108 (2018): 50-66. Rudebeck, P. H., & Rich, E. L. (2018). Orbitofrontal cortex. Current Biology, 28(18), R1083-R1088. Schuck, N. W., Cai, M. B., Wilson, R. C., & Niv, Y. (2016). Human orbitofrontal cortex represents a cognitive map of state space. Neuron, 91(6), 1402-1412. B52 Effects of orthographic depth on functional connectivity between the posterior visual word form area and dorsal and ventral reading networks in proficient Welsh-English bilinguals Pauliina Sorvisto1, Paul Mullins1, Marie-Josèphe Tainturier1; 1Bangor University Background: It has been suggested that word reading occurs along two main anatomical pathways: the dorsal stream implicated in phonology and the ventral stream engaged in lexico-semantic processing. There is cross-linguistic evidence that the recruitment of the ventral stream is stronger in languages with a deeper orthography such as English. In contrast, the recruitment of the dorsal pathway is more prominent in more transparent languages with more consistent letter-to-sound correspondence (e.g. Das et al., 2011; Richlan, 2014). An interesting question is what happens in bilingual readers of languages with different orthographic depth. Would the cross-linguistic patterns also apply, or would reading bilingually lead to a more unified system for the two languages?
Oliver, Carreiras and Paz-Alonso (2017) compared fMRI activation and functional connectivity while reading for meaning in Spanish speakers who had either English or transparent Basque as a second language. They observed that Basque L2 readers showed greater coactivation within the dorsal network while English L2 readers showed greater coactivation within the ventral network. This is consistent with cross-linguistic results. However, reading bilingually from an early age may lead to a higher convergence in written word comprehension processes. Aims: The goal of the current study was to examine the relative involvement of the ventral vs dorsal pathways in proficient early bilingual readers of two languages with highly contrasted orthographic depth: English vs Welsh. In previous GLM analyses, we showed comparable patterns of activation in the two languages. However, MVPA analyses revealed language sensitivity in several language regions. In this study, we examined whether English and Welsh are associated with different patterns of functional connectivity in the ventral/dorsal streams. Methods: While being scanned for fMRI, 20 proficient Welsh-English early bilingual adults performed a semantic categorisation task (natural vs man-made) on 192 Welsh and 192 English written, non-cognate, translation-equivalent nouns. We performed psycho-physiological interaction (PPI) analyses for each language using the posterior visual word form area (pVWFA) as the seed region, as it is engaged in the earliest stages of written word processing in alphabetic systems (Bouhali et al., 2014). Results: We observed greater connectivity for English compared to Welsh words between the pVWFA and regions along the ventral pathway, mainly the inferior and middle temporal gyri (BA20; BA21) and the fusiform gyrus (BA37), as well as some regions outside this pathway, bilaterally. However, the connectivity observed for Welsh compared to English words was only higher in LH frontal regions outside the dorsal pathway, mainly the orbitofrontal area (BA11). Conclusion: Our results suggest greater temporal coherence of the ventral lexical-semantic pathway in the deeper language of early proficient bilingual readers. However, Welsh reading does not show greater connectivity than English between pVWFA and dorsal regions. This contrasts with earlier studies and may reflect a greater convergence of reading processes across languages differing in orthographic depth in early bilinguals. However, it remains possible that stronger dorsal PPI patterns may emerge in reading tasks less explicitly reliant on processing meaning and making conscious meaning-based decisions. Language Production B53 Linking production and comprehension – Investigating the lexical interface Arushi Garg1, Vitória Piai1,2, Atsuko Takashima1,3, James M. McQueen1,3, Ardi Roelofs1; 1Donders Institute for Brain, Cognition and Behaviour, Radboud University, 2Radboudumc, 3Max Planck Institute for Psycholinguistics In a typical conversation, listening and speaking go hand in hand. However, we do not yet know what cognitive machinery and brain areas are shared between production and comprehension. It is believed that the lexical (lemma) and conceptual levels are shared between listening and speaking (Levelt et al., 1999). In this project, we test these claims using behavioural and neuroimaging methods.
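The behavioural signature at stake in the continuous naming paradigm described below is a roughly linear increase in naming latency with each additional within-category exemplar. A minimal sketch of how that slope can be estimated, with illustrative made-up latencies and hypothetical variable names; in practice one would fit per-participant slopes or a mixed model:

```python
import numpy as np
from scipy import stats

# Illustrative per-trial data: naming latencies (ms) and within-category
# ordinal positions (1st, 2nd, ... exemplar of a semantic category).
naming_rt = np.array([620, 655, 690, 702, 741, 598, 634, 661, 699, 725])
ordinal_pos = np.array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5])

# Slope > 0 indicates cumulative interference (slower with each exemplar).
res = stats.linregress(ordinal_pos, naming_rt)
print(f"cumulative effect: {res.slope:.1f} ms per exemplar, p = {res.pvalue:.4f}")
```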
Cumulative semantic interference effects have been shown in picture naming using a continuous naming paradigm (Howard et al., 2006). A linear increase in picture naming latencies has been found for subsequent exemplars of a semantic category (e.g., fork, spoon, knife, etc.) when pictures from different categories are mixed together (e.g., fork, horse, desk, spoon, rocket, pig, chair, knife, etc.). Some models claim that the effect originates in conceptual-to-lexical mappings (Howard et al., 2006; Oppenheim et al., 2010) while WEAVER++ claims the effect lies at the conceptual level (Roelofs, 2018). As these levels are claimed to be shared between production and comprehension, mechanisms similar to those underlying cumulative semantic effects in picture naming should also be present during speech comprehension and, thus, we should observe these effects in a listening task involving conceptual and lexical levels. We used four tasks – bare picture naming and gender-marked picture naming (production), and semantic classification and gender classification on spoken words (comprehension) – in a within-subject (N=32) experiment, conducted in Dutch, using a continuous naming paradigm (Howard et al., 2006). Based on the WEAVER++ model, we expected to replicate the cumulative semantic interference effect found in the bare picture naming task in previous studies and predicted cumulative semantic facilitation in the semantic classification task. We included the gender-marked picture naming and gender classification tasks because gender, a syntactic property, is believed to be associated with the lemma in gender-marked languages. We expected a cumulative semantic interference effect on the gender-marked picture naming task and no effect on the gender classification task. We replicated the cumulative semantic interference effect on the bare picture naming task. A cumulative semantic facilitation effect was found in the semantic classification task. We found no cumulative semantic effects in either the gender classification task or the gender-marked picture-naming task. The findings on semantic classification support the WEAVER++ model but challenge other models of cumulative semantic effects (Howard et al., 2006; Oppenheim et al., 2010), because they currently do not include mechanisms to account for such effects in comprehension. From a neural perspective, there is evidence suggesting that lemmas are likely localised in the middle portion of the left middle temporal gyrus (left mMTG) (Dronkers et al., 2004; Indefrey & Levelt, 2004; Indefrey, 2011; Piai et al., 2014; Schwartz et al., 2009). To establish this empirically, we conducted an fMRI study with the four tasks that were used in the behavioural study. We are acquiring data from the last two (of 32) participants. Results of the fMRI data will be discussed. We predict that the conjunction of activation across the four tasks will be localized in the left mMTG. B54 Neural architecture of discourse-derived fluency in post-stroke chronic aphasia Brielle Stark1, Barbara Marebwa2, Alexandra Basilakos3, Lorelei Philip3, Helga Thors3, Grigori Yourganov3, Chris Rorden3, Leonardo Bonilha2, Julius Fridriksson3; 1Indiana University, 2Medical University of South Carolina, 3University of South Carolina Fluency, as a construct, involves both speed and quality of verbal output. Fluency has often been quantified by verbal fluency tasks (e.g. phonological fluency, “list words that start with /s/”).
Of particular interest is connected speech fluency; for example, mean length of utterance in words (MLU). In addition to enhanced ecological validity, measuring mean length of utterance in words has the added benefit of accounting for word-finding difficulty and pauses, in addition to efficiency of verbal output. Research evaluating MLU performance in primary progressive aphasia (PPA) suggests that reduction in cortical thickness in dorsal stream structures, including bilateral posterior middle frontal gyrus and left inferior frontal sulcus, impairs fluency (Rogalski et al., 2011). Complementary work evaluating MLU in PPA suggests the importance of the left frontal aslant tract for fluency (Catani et al., 2013). A recent study evaluated fluency derived from a phonological verbal fluency task in stroke, where fluency was negatively associated with lesion damage to the left frontal aslant tract and positively associated with fractional anisotropy of the tract (Li et al., 2017). Therefore, the purpose of the current study was to evaluate the neural architecture associated with discourse-extracted fluency in post-stroke aphasia, effectively elaborating on prior research in stroke using non-discourse fluency and prior research conducted in PPA using discourse-derived fluency. In 49 people with chronic aphasia as a result of left hemisphere stroke (14 F; moderate aphasia [WAB-R aphasia quotient M=64.99±17.71]; age at assessment, M=60.71±10.15 years; months post-stroke at assessment, M=40.06±37.39), we acquired structural and diffusion-weighted MRI scans and a sample of spoken discourse (the Cinderella story). Discourse was transcribed and analyzed with CHAT/CLAN software (MacWhinney et al., 2000), allowing us to extract MLU, which we z-scored. To analyze lesion damage associated with fluency, we performed voxel-wise lesion-symptom mapping (5000 permutations; p<.05) with cluster correction (z>2.33), regressing out lesion volume. To analyze structural disconnection associated with fluency, we performed connectome lesion-symptom mapping between gray matter parcels for left language (Lang) and domain-general (DG) networks. These analyses were done using NiiStat. Finally, to analyze network organization associated with fluency, we computed the modularity statistic (Q) for the left hemisphere Lang and DG networks, from which partial correlations with fluency, controlling for lesion volume, were computed. Fluency was associated with a cluster of lesion damage involving left anterior dorsal stream structures, including inferior frontal gyrus (IFG) (peak z: opercularis, z=-2.33; triangularis, z=-2.33; orbitalis, z=-2.51). Reduced fluency was also associated with damage to connections between left middle frontal gyrus (MFG) and precentral gyrus (z=2.91) (DG), superior frontal gyrus posterior segment and supramarginal gyrus (z=3.09) (DG), and MFG posterior segment and IFG pars opercularis (z=3.15) (Lang), suggesting disruption to the frontal aslant tract and anterior superior longitudinal fasciculus / arcuate fasciculus. Finally, Lang network modularity was positively correlated with fluency when controlling for total lesion volume (r=.45, p=.0001). Our results extend prior work, suggesting that fluency, derived from discourse, is modulated by left anterior dorsal stream structures and connections between structures, as well as preserved modularity of the left language network.
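The final analysis above is a partial correlation between network modularity and discourse fluency with lesion volume held constant. A minimal residual-based sketch, with synthetic data standing in for the per-participant values (variable names are ours; the study's exact partial-correlation implementation is not specified):

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covar):
    """Correlate x and y after regressing the covariate out of both."""
    design = np.column_stack([np.ones_like(covar), covar])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Synthetic stand-ins: one value per participant (n = 49 in the study).
rng = np.random.default_rng(0)
lesion_vol = rng.normal(size=49)
modularity_q = 0.5 * lesion_vol + rng.normal(size=49)
fluency_z = 0.4 * lesion_vol + rng.normal(size=49)

r, p = partial_corr(modularity_q, fluency_z, lesion_vol)
print(f"partial r = {r:.2f}, p = {p:.4f}")
```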
B55 The multiple-demand network in language processing of the aging brain Sandra Martin1,2,3, Dorothee Saur3, Gesa Hartwigsen1,2; 1Lise Meitner Research Group “Cognition and Plasticity”, MPI for Human Cognitive and Brain Sciences, Leipzig, 2Research Group “Modulation of Language Networks”, Department of Neuropsychology, MPI for Human Cognitive and Brain Sciences, Leipzig, 3Language & Aphasia Laboratory, Department of Neurology, University of Leipzig Normal aging leads to changes in the neural networks of speech and language perception and production. Specifically, older adults commonly display less activation of domain-specific areas but a stronger involvement of the domain-general “multiple-demand” network in different tasks (Grady et al., 2010). This change in activation patterns appears to be especially robust when task demands increase and older adults show poorer behavioural performance than young adults (Hoffman & Morcom, 2018). A central question is to what extent language processing in the aging brain relies on domain-general versus domain-specific networks. So far, relatively few studies have investigated the interplay of these networks in language production in normal aging. In the current study, we used functional magnetic resonance imaging (fMRI) to characterize the role of the multiple-demand network in an overt speech production task with varying task demands in healthy older adults. Data from twenty-eight older adults (mean age: 65.18 years, age range: 60-69 years) were included in the analyses. Three additional participants had to be excluded from analyses due to heavy motion during scanning, i.e. motion parameters showed movement of more than one voxel size (2.5 mm) in any of the six directions. Participants performed two language production tasks during a continuous-sampling fMRI block design: semantic word generation (easy vs. difficult categories) and counting (forward and backward, control task). The semantic word generation task consisted of 20 categories that were assessed for difficulty during a pilot study in a separate sample of 24 older adults. The semantic word generation task was employed to engage both language and executive networks while the counting task served as a high-level baseline of a non-propositional, overlearned speech task. Behavioural results showed an effect of condition (semantic word generation vs. counting, Wilcoxon signed-rank test: Z = -12.65, p < .001) and difficulty (easy vs. difficult categories, Z = -9.76, p < .001), and an interaction between both factors. Whole-brain fMRI analyses revealed different activity patterns for the semantic word generation task and the counting task (thresholded at FWE-corrected p < .05). Semantic word generation engaged distributed brain regions comprising frontal areas and the cerebellum when directly compared with the counting task. Notably, areas of the frontal network included domain-specific regions such as left inferior frontal gyrus as well as regions that have been associated with the multiple-demand network (Fedorenko et al., 2013) such as left supplementary motor area, bilateral middle frontal gyrus, left superior frontal gyrus, adjacent dorsal anterior cingulate cortex, and bilateral insulae.
Areas that showed stronger activation for counting compared to semantic word generation consisted of bilateral precuneus and right anterior superior temporal gyrus, which confirms previous results on automatic speech tasks (e.g. Birn et al., 2010). The current study thus provides evidence for an involvement of the multiple-demand network in a language production task with increased cognitive demand compared to an overlearned speech task. In the next step, we will use effective connectivity analyses to characterize functional interactions between domain-specific and domain-general regions. Perception: Auditory B56 Classification of consonants and vowels with fast oscillation-based fMRI Mikkel Wallentin1,2,3, Torben Ellegaard Lund3, Camilla M. Andersen1, Roberta Rocca1,2; 1Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, 2Interacting Minds Centre, Aarhus University, 3Centre of Functionally Integrative Neuroscience, Aarhus University Hospital INTRODUCTION The auditory cortices contain tonotopic representations of sound (e.g. Saenz and Langers, 2014), but what about the functional organization of speech sounds? Evidence for “phonotopic” representations in auditory cortices has been reported (e.g. Formisano et al., 2008), but the temporal and spatial resolution of neuroimaging has made it difficult to study phonemes in the brain; consonants are especially difficult. Here, we test an oscillation-based method (Wallentin et al., 2018), using a fast fMRI protocol with syllable-rate temporal resolution. METHODS Stimuli consisted of syllables (pairs of vowels (v) and consonants (c)). Two conditions combined 9 Danish vowels/consonants with 5 consonants/vowels to create 45 unique syllables per condition (i.e. 9vx5c and 9cx5v). In each condition, consonants and vowels were repeated in a fixed order, i.e. in condition 9v5c, a vowel would be repeated on every 9th trial, and the consonant would be repeated every 5th trial, making every combination new for the 45 trials, but at the same time creating two independent rhythms for vowels and consonants. Sessions consisted of 6x4 blocks: [6x9v5c, 6x9c5v, 6x9v5c, 6x9c5v], lasting 18 minutes. Three sessions were acquired in 23 participants. fMRI data were acquired at 3T using a whole-brain fast acquisition sequence (TR = 371ms, multi-band EPI acquisition) to capture signal changes at syllable resolution. Data were modelled using sine and cosine waves at the presentation rate for vowels and consonants, i.e. either 1/9 Hz or 1/5 Hz. Fitted sine and cosine waves were used to generate amplitude maps for each participant and condition. These were submitted to a 2nd-level repeated-measures ANOVA in SPM. Single-participant classification of vowels/consonants was conducted by creating a phase map for each 45s block. Phase is indicative of the delay in a voxel’s responsiveness to a repetitive stimulus, thus indicating differences in phoneme sensitivity. Phase maps from each block were used to perform a multivariate classification test. The phase maps for the 72 blocks were divided into two parts. The first half was used to conduct a search-light analysis (using Nilearn) in order to select the 1000 most predictive voxels. These voxels were used in a subsequent pattern classification analysis on the 2nd half of the phase maps. Both steps involved a Gaussian Naïve Bayes classifier. Cross-validation and permutation tests were used to determine significance. RESULTS The univariate amplitude analysis showed a main effect of phoneme type (vowel vs. consonant) in the Planum Temporale, bilaterally (P<0.05, FWE-corrected).
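The amplitude and phase maps described in the Methods follow from the sine/cosine fits by a standard trigonometric identity. A minimal sketch, assuming per-voxel regression weights for the sine and cosine regressors at the stimulation frequency (argument names are ours):

```python
import numpy as np

def amplitude_and_phase(beta_sin, beta_cos):
    """Convert per-voxel sine/cosine regression weights at the
    stimulation frequency into amplitude and phase maps.

    If the fitted model is y(t) ~ bs*sin(2*pi*f*t) + bc*cos(2*pi*f*t),
    then amplitude = sqrt(bs**2 + bc**2) and phase = arctan2(bc, bs),
    i.e. the delay of each voxel's response within the stimulus cycle.
    """
    amplitude = np.sqrt(beta_sin**2 + beta_cos**2)
    phase = np.arctan2(beta_cos, beta_sin)
    return amplitude, phase
```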
The same areas also differentiated between phonemes oscillating at 1/5 Hz and 1/9 Hz, regardless of phoneme type. On individual participants, classification tests were able to classify 45-second phase maps into consonants and vowels with an average accuracy of 69% (SD 10%) in the 9s condition and 64% (SD 15%) in the 5s condition. CONCLUSIONS This protocol provides the first step towards mapping a phonotopic “fingerprint” at the individual participant level. This map may potentially predict native language or foreign language exposure. It is also a step towards making use of the fMRI signal for decoding events at near speech-rate temporal resolution. B57 Vasopressin modulates the neural response to infant cries in fathers Jurriaan Witteman1, Anna van ‘t Veer1, Sandra Thijssen1, Niels Schiller1, Marinus Van IJzendoorn1, Marian Bakermans-Kranenburg2; 1Leiden University, 2Vrije Universiteit Amsterdam Introduction: Human infant cries communicate adverse psychophysiological states. An adequate response of parents is crucial for survival, making it likely that a neural network has evolved to ‘automatically’ detect infant cries.1 The present study presented task-irrelevant infant cries to fathers during a verbal working memory task to test to what extent neural processing of infant cries persists despite low attentional resources. In nonhuman mammals, the hormone arginine vasopressin (AVP) has been shown to influence paternal caregiving2, so AVP may also modulate automatic cry perception in fathers. Therefore, we tested whether AVP administration modulates automatic processing of infant cries among human fathers. Methods: Task-irrelevant infant cries and acoustically matched control sounds were presented in an event-related paradigm while fathers (N = 23) memorized 1 letter (low load) or 5 letters (high load) and decided whether a target letter was present among the remembered letter(s), for a total of 96 trials. All fathers took part in two fMRI sessions with exactly the same protocol but with either intranasal AVP (20 IU) or placebo spray administered approximately 55 minutes before the working memory task. The EPI scans were analysed in FSL with EVs CryLo, CryHi, ControlLo, ControlHi at the first level. At the second level, the main effects of Sound (Cry vs. Control) and Load (Low vs. High) were modelled, and the interaction with Hormone (AVP, Placebo). Clusters were determined by a Z > 2.3 cluster-forming threshold and p < 0.05 cluster correction. Results: RM-ANOVA of the behavioral data revealed a main effect of Load (F(1,22) = 100.3, p < .001) and of Hormone (F(1,22) = 3.43, p = .05), showing that performance degraded under high load and AVP administration. On the neural level, infant cries activated the bilateral superior temporal gyri (STG), and high load (vs. low load) was associated with activation of frontoparietal areas previously implicated in working memory. Crucially, there was a Sound × Hormone interaction in the left inferior parietal cortex, continuing medially into the precuneus. Extraction of activation in this area for every condition against baseline revealed that AVP decreased deactivation for cry sounds, but increased deactivation for control sounds.
Conclusion: Infant cries activated the ‘cry network’ known from the previous literature3, even though attention was diverted away from the auditory channel, suggesting that infant cries are processed relatively automatically1. AVP may have reduced working memory performance by interfering with default mode network (DMN) deactivation, hampering attention allocation to the working memory task. Similar effects have been found previously for Δ9-THC administration4. References: 1. Pessoa, L., et al. (2002). Neural processing of emotional faces requires attention. PNAS, 99, 11458-11463. 2. Rilling, J. (2013). The neural and hormonal bases of human parental care. Neuropsychologia, 51, 731-747. 3. Witteman, J., Van IJzendoorn, M.H., Rilling, J.K., Bos, P.A., Schiller, N.O., & Bakermans-Kranenburg, M.J. (2019). Towards a neural model of infant cry perception. Neuroscience & Biobehavioral Reviews, 99, 23. 4. Bossong, M.G., et al. (2013). Default mode network in the effects of Δ9-Tetrahydrocannabinol (THC) on human executive function. PLoS One, 8, e7074. Perception: Speech Perception and Audiovisual Integration B58 Neurophysiological evidence of the transformation of phonological into phonographic representations Chotiga Pattamadilok1, Deirdre Bolger2, Anne-Sophie Dubarry1; 1Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, 2Aix Marseille Univ, Institute of Language, Communication and the Brain (ILCB), Aix-en-Provence Studies on the influence of reading acquisition on speech processing have shown that acquiring an orthographic code changes the nature of speech representations. One important change that has been discussed in the literature is that the phonological representations are contaminated by spelling knowledge and transformed into “phonographic” representations. Nevertheless, so far, no empirical evidence of such a transformation has been reported. The present study addressed this issue using a combination of a new-word learning protocol and EEG (electroencephalography) acquisition during an unattended oddball paradigm which elicited the Mismatch Negativity (MMN) component. Two groups of 16 native French speakers participated in the study, which was conducted in two experimental sessions. During the first session, all participants learned the spoken and written forms of three new words, which all ended with the same phonological rime, i.e., /izo/, /ivo/, /iʒo/. For one group of participants, /izo/ and /ivo/ shared the same rime spelling, i.e., “izôt” and “ivôt”, while /iʒo/ had a different spelling, “ijaux”. For the other group of participants, the rime spellings of /ivo/ and /iʒo/ were reversed, yielding “ivaux” and “ijôt”. A letter detection task and a dictation task were conducted to ensure that the participants had correctly learned the spellings of the new words. Immediately after the learning phase, the auditory versions of the new words were presented in an unattended oddball paradigm in which /izo/ represented the standard stimulus, and /ivo/ and /iʒo/ the rare deviant stimuli, which were either orthographically congruent or incongruent with the rime spelling of /izo/. During the oddball paradigm, participants watched a silent movie while they were passively exposed to the spoken inputs. Their EEG activity was continuously recorded. The second session took place one week later. Participants only performed the oddball paradigm part of the study with EEG acquisition, without being exposed to the written forms of the new words.
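At its core, the MMN analysis used in this oddball design reduces to a deviant-minus-standard difference wave per condition. A minimal sketch with MNE-Python, in which the epochs file and the event labels are hypothetical:

```python
import mne

# epochs: an mne.Epochs object with event labels as below (labels are ours).
epochs = mne.read_epochs("oddball-epo.fif")
standard = epochs["standard"].average()
deviant = epochs["deviant_congruent"].average()

# MMN as the deviant-minus-standard difference wave.
mmn = mne.combine_evoked([deviant, standard], weights=[1, -1])
mmn.plot(picks="FCz")  # the MMN is typically maximal at fronto-central sites
```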
A dictation task was conducted again at the end of each EEG session. The data obtained in the two groups of participants were combined in order to eliminate any acoustic differences between the two deviant conditions. MMN responses generated by the deviant stimuli were calculated separately for the orthographically congruent and incongruent conditions and for the two sessions. In both sessions, typical MMNs were observed over fronto-central and central regions, reflecting automatic discrimination of standard and deviant stimuli. In the first EEG session, we found no evidence that the characteristics of the MMNs were modulated by the orthographic congruency between the standard and the deviant stimuli, thus suggesting that only the phonological representations played a role immediately after the learning phase. Most interestingly, the impact of orthography emerged in the second EEG session, with orthographic incongruity leading to an increase in MMN amplitude. The delayed emergence of the impact of orthographic knowledge reported here provides new evidence of an integration of phonological and orthographic information and, thus, of the transformation of “phonological” into “phonographic” representations that occurs once newly acquired knowledge has been consolidated. B59 Bilateral Opercular Syndrome and Speech Perception Grant Walker1, Ryan Stokes1, Patrick Rollo2, Nitin Tandon2, Gregory Hickok1; 1University of California, Irvine, 2University of Texas at Houston Different theories have posited a range of responsibilities for the motor system in the perception of speech: from motor representations being the critical loci for perception, to the motor system merely providing support for disambiguation in noisy situations. Proponents of the motor system’s prominent contributions point to studies showing that TMS of lip and tongue primary motor areas induces effector-specific perceptual discrimination errors and response delays. Proponents of a limited, supplementary role for the motor system point to instances of humans and non-humans performing speech perception tasks normally despite disruption or lack of speech motor representations. Preserved perceptual abilities in cases of motor disruption caused by unilateral, left-hemisphere stroke leave open the possibility that contralateral homologous systems might be compensating for the deficit. We had the rare opportunity to analyze structural MRI and a comprehensive speech perception task battery in two cases with a bilateral opercular syndrome with sparing of temporal cortex. Although the opercular syndromes occurred in the context of different etiologies (Case 1: epilepsy, Case 2: brain hemorrhages from multiple cavernous malformations), both participants underwent resection of the left frontal operculum (by another neurosurgeon, not in the authorship), and after a period during which speech production recovered, a right-sided resection resulted in the loss of voluntary control of speech effectors and permanent mutism. Lesions were segmented from T1-weighted MRI. Participants’ receptive language abilities were assessed at the phoneme, word, and sentence levels. Verbal short-term memory was also evaluated. Lesion reconstruction revealed damage in bilateral opercula in both cases.
Case 1 had larger resections in homologous frontal regions, whereas Case 2’s right-sided lesions extended posteriorly from the inferior precentral region, destroying frontal-parietal white matter. Both participants performed within or near normal limits on the word recognition, word-to-picture matching, and word-pair discrimination tasks, even when resolution of subtle phonemic cues was required. Case 1 had difficulty with discrimination of nonwords and synthesized speech in noise, likely stemming from a phonological short-term/working memory deficit. Both participants showed sensitivity to mismatched audiovisual speech signals and were able to comprehend simple sentences. Case 2 was also able to comprehend syntactically complex sentences, showing largely preserved receptive language abilities. Case 1 had difficulty with complex syntax and limited immediate recall span, consistent with a working memory deficit. Bilateral damage to the motor speech system had little effect on the ability to recognize speech. Such damage can impair phonological short-term memory and produce an agrammatic-type comprehension pattern. Audiovisual speech integration is not necessarily impaired, nor is speech-in-noise perception necessarily profoundly affected. These results contribute to the evidence against a critical role of the motor system for the perception of speech. Special consideration of task demands on verbal short-term/working memory is recommended to disambiguate performance declines that are unrelated to residual speech perception abilities. B60 Lexical information guides retuning of neural patterns in perceptual learning of speech Sahil Luthra1, Joao M. Correia2, Dave F. Kleinschmidt3, Laura Mesite4, Emily B. Myers1,5; 1University of Connecticut, 2Basque Center on Cognition, Brain and Language, 3Rutgers University, 4Harvard Graduate School of Education, 5Haskins Laboratories Listeners make perceptual adjustments in how acoustic information maps onto internal phonetic categories. This process of phonetic recalibration can be guided by context such as lexical knowledge (Norris, McQueen, & Cutler, 2003). Myers and Mesite (2014) examined the neural basis of phonetic recalibration using fMRI. During exposure blocks, participants heard speech sounds that were ambiguous between ‘s’ and ‘sh,’ with some hearing these sounds in lexical contexts that biased them towards ‘s’ and another group towards ‘sh.’ The size of the biasing effect was subsequently measured with a categorization task on an ‘asi’-‘ashi’ continuum. As predicted, lexical context affected the subsequent placement of the category boundary, although there was considerable trial-to-trial variability in the categorization of ambiguous tokens. Myers and Mesite analyzed how region-specific activity changed as a function of the biasing context, but such an analysis cannot provide insight into how the specific pattern of activation might change as a result of phonetic recalibration. In the current study, we re-analyzed archival data from Myers & Mesite (2014), leveraging a machine learning algorithm (a support vector machine with recursive feature elimination) to examine changes in functional activation during phonetic recalibration.
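A minimal sketch of the endpoint-train/ambiguous-test scheme described next, using scikit-learn's linear SVM with recursive feature elimination; the arrays are synthetic stand-ins and the feature counts are illustrative, not the study's actual parameters:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# X_train: voxel patterns for unambiguous 'asi'/'ashi' endpoints;
# X_test: patterns for ambiguous mid-continuum trials. y_train codes the
# endpoint identity, y_test the listener's percept on each trial.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(80, 5000)), rng.integers(0, 2, 80)
X_test, y_test = rng.normal(size=(40, 5000)), rng.integers(0, 2, 40)

svm = SVC(kernel="linear")
rfe = RFE(estimator=svm, n_features_to_select=500, step=0.1).fit(X_train, y_train)

# Does endpoint-trained information generalize to predict the percept?
print("accuracy on percept-labeled ambiguous trials:", rfe.score(X_test, y_test))
```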
The classifier was trained on the multi-voxel patterns from the unambiguous endpoints of the ‘asi’-‘ashi’ continuum and then tested on patterns from ambiguous tokens taken from the middle of the continuum; in this way, we asked whether the information that was useful for discriminating between continuum endpoints was generalizable to classify the ambiguous tokens. Critically, the classifier successfully discriminated between ambiguous trials on the basis of subjects’ behavioral responses (i.e., whether the subject perceived that stimulus as a ‘s’ or a ‘sh’ on that particular trial). However, it did not achieve above-chance accuracy when these same tokens were labeled with respect to the underlying acoustics. We take these findings as evidence that phonetic recalibration involves neural recalibration. That is, the activation pattern on a given trial predicts the participant’s ultimate decision of how they heard that ambiguous stimulus. For instance, if a participant perceived a given stimulus as ‘sh’ on a particular trial, the pattern more closely resembled the patterns for unambiguous versions of ‘sh’ than those of ‘s.’ Targeted ROI analyses showed that left parietal regions (supramarginal and angular gyri) were the most informative for categorization. This finding is consistent with research suggesting a role for left parietal regions in lexical influences on phonetic processing (e.g., Gow, Segawa, Ahlfors, & Lin, 2008). Overall, the patterns of neural activity across a variety of regions, but especially left parietal areas, reflect listeners’ ultimate perception of ambiguous sounds rather than the bottom-up acoustics. B61 Hang on the lips: The listeners’ brain entrains to the theta rhythms conveyed by lip and auditory streams during naturalistic multimodal speech perception Emmanuel Biau1, Danying Wang1, Hyojin Park1, Ole Jensen1, Simon Hanslmayr1; 1School of Psychology, Centre for Human Brain Health (CHBH), University of Birmingham In this electroencephalogram (EEG) study, we investigated whether neural oscillatory responses induced in the sensory areas during natural audiovisual speech perception reflect the predominant theta activity (4-8Hz) aligning lip and auditory streams. Using short segments taken from real interviews, we calculated mutual information (MI) between speakers’ lip movements and the speech envelope to assess instantaneous phase dependence and establish their alignment at theta rhythms. We selected audiovisual clips for which the peaks of MI were dominant in the theta band of interest, and created two conditions: a synchronous condition for which the natural alignment between video and audio onsets was intact, and an asynchronous condition for which video and auditory onsets were shifted by 180 degrees in theta phase. Participants were presented with the audiovisual clips in an asynchrony detection task while their EEG was recorded. After each clip, participants had to indicate whether video and sound were synchronous or asynchronous, based on the speech information. They were also presented with two additional unimodal conditions (silent videos and sounds only) in order to establish the locations of neural responses induced by lip movements and the auditory signal. The quality of lip-speech multimodal integration was assessed by the asynchrony detection performance (d-prime).
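The MI computation used above to align the two streams can be sketched as a plug-in histogram estimator over instantaneous theta phases, assuming already band-pass-filtered signals; the authors' exact estimator and parameters are not specified, so this is only one standard choice:

```python
import numpy as np
from scipy.signal import hilbert

def phase_mi(x, y, n_bins=18):
    """Mutual information (bits) between the instantaneous phases of two
    band-limited (e.g., theta-filtered) signals, via histogram binning."""
    px = np.angle(hilbert(x))
    py = np.angle(hilbert(y))
    joint, _, _ = np.histogram2d(px, py, bins=n_bins)
    joint = joint / joint.sum()
    mx, my = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0  # avoid log(0) for empty bins
    return np.sum(joint[nz] * np.log2(joint[nz] / np.outer(mx, my)[nz]))
```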
At the scalp level, we calculated the MI between EEG single trials and their corresponding lip movements or auditory information, to quantify the neural entrainment induced by dominant theta activity in the separate modalities. We expected an increase of MI in the auditory cortex to reflect neural entrainment to the auditory signal, and in the occipital cortex to reflect lip-based visual entrainment. Behavioral results revealed d-prime scores significantly greater than chance level, showing that participants were able to determine whether lips and utterance were synchronous or asynchronous in the clips (although they were significantly more accurate in the synchronous condition). EEG analysis of the sound-only trials revealed a clear increase of power in the theta band with a central topography, in line with the auditory speech processing literature. In contrast, lip movements in silent videos induced an increase of theta power in the occipital areas, although less salient. Further, greater MI between EEG epochs and auditory trials was observed in central regions compared to MI with mismatched data, confirming neural entrainment to dominant theta activity in the auditory speech signal. Greater MI between EEG epochs and movie trials was observed in the occipital areas compared to mismatched data, suggesting that the visual cortex entrained to the theta activity conveyed by lip movements. Interestingly, we also found localized MI in the central area, overlapping with the topography seen in the auditory modality, suggesting that lip movements may also entrain auditory cortex even when participants watched silent videos. Both behavioral and neurophysiological results suggest, first, that listeners match visual and auditory streams together based on speech features conveying information at theta rates. Second, the alignment between lip movements and voice modulations onto dominant theta rhythms may tune specialized sensory areas together, as hypothesized in previous audiovisual studies, and facilitate later multimodal binding during natural speech processing. B62 Text-induced shifts in speech perception in children with and without developmental dyslexia: Distinct cortical activation patterns underlie similar behavioural findings Linda Romanovska1, Roef Janssen1, Milene Bonte1; 1Dept. Cognitive Neuroscience, Maastricht University While most children successfully learn to read within the first few years of schooling, around 10% of children struggle with reading acquisition due to developmental dyslexia. A proposed core deficit underlying the reading problems observed in adults and children with dyslexia is altered letter-speech sound coupling. In our study we investigate the associations between speech and text in 8-10 year-old children using MRI and text-based recalibration. In this paradigm, an ambiguous speech sound midway between /aba/ and /ada/ is combined with disambiguating 'aba' or 'ada' text to bias the perception of the ambiguous speech in the direction of the text. While this shift (recalibration) has been shown not to be significant in adult dyslexic readers, we have recently demonstrated that, behaviourally, children with and without dyslexia show comparable recalibration effects.
Preliminary fMRI findings employing this paradigm in the same groups of children reveal different activation patterns between highly fluent typical readers and dyslexic readers in key reading- and language-related brain areas including the auditory cortex, prefrontal areas and visual fusiform areas. Activation in these areas also appears to be linked to reading skills, with higher reading fluency and better letter-speech sound substitution associated with higher activation levels. These findings suggest fine-tuned differences in the neurocognitive mechanisms underlying recalibration in children with and without dyslexia despite the comparable observed behavioural effect. Phonology and Phonological Working Memory B63 Inner speech in silent reading evokes theta phase-locked responses in the auditory cortex Bo Yao1, Jason Taylor1, Briony Banks2, Sonja Kotz1,3; 1Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, 2Department of Psychology, Lancaster University, 3Departments of Neuropsychology & Psychopharmacology, Maastricht University A growing body of research shows that theta-band (4-7 Hz) neural oscillations in the auditory cortex track (or are phase-locked to) the rhythm of natural speech (Giraud & Poeppel, 2012; Luo & Poeppel, 2007). This tracking seems to reflect a neural mechanism for segmenting and coding continuous speech signals into hierarchical linguistic units for comprehension (Ding et al., 2015; Peelle et al., 2013). However, the role of neural oscillations remains unknown for inner speech processing - a pervasive mental phenomenon observed in a range of cognitive tasks including thinking, problem solving, working memory, reading and writing. Inner speech is an inwardly audible speech experience without outward articulation. Like overt speech, inner speech can be characterised by acoustic features such as tempo (Alexander & Nygaard, 2008; Yao & Scheepers, 2011) and activates the same cortical areas as overt speech (Brück et al., 2014; Yao et al., 2011). The shared acoustic features and neural substrates between inner and overt speech suggest that they may be governed by a common oscillation-based neural mechanism. The present EEG study explored for the first time whether inner speech may be associated with theta oscillations in the auditory cortex. Thirty-two native speakers of English silently read 120 short stories (+60 fillers) that contained either a direct speech quotation (e.g., Mary said: "This dress is lovely") or a linguistically-matched indirect speech quotation (e.g., Mary said that the dress was lovely). The stories were presented sentence-by-sentence for self-paced reading, with the critical speech quotations (e.g., "This dress is lovely") presented last. EEG data were recorded throughout the experiment using a 64-channel Biosemi ActiveTwo system. The EEG data were pre-processed and epoched to the presentation onsets of the critical speech quotations. We decomposed the EEG data in time-frequency space using a 7-cycle Morlet wavelet in SPM12 (www.fil.ion.ucl.ac.uk/spm/software/spm12/). We calculated evoked power and phase-locking values (PLVs; i.e. inter-trial phase coherence) in theta frequencies (4-7 Hz), which were converted into 3D parametric maps (2D scalp map × time) by condition for second-level group analysis.
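A minimal NumPy sketch of the two steps named above (a 7-cycle complex Morlet decomposition followed by inter-trial phase coherence) might look like this; the sampling rate and data are placeholders:

```python
# Sketch: complex Morlet decomposition per trial, then phase-locking value
# (PLV) across trials. The 7-cycle wavelet matches the description above.
import numpy as np

def morlet_tf(trials, fs, freq, n_cycles=7):
    # trials: (n_trials, n_times) -> complex analytic signal at one frequency
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
    return np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])

def plv(analytic):
    # Inter-trial phase coherence: magnitude of the mean unit phasor.
    return np.abs(np.mean(analytic / np.abs(analytic), axis=0))

fs, n_trials, n_times = 250, 60, 500
trials = np.random.randn(n_trials, n_times)
itc = {f: plv(morlet_tf(trials, fs, f)) for f in range(4, 8)}  # 4-7 Hz PLV
```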
Group analyses revealed significantly higher phase-locked activity (evoked power and phase-locking) at ~250-500 ms when reading direct (relative to indirect) speech quotations, implying a stronger theta phase reset at the beginning of inner speech processing. This phase-locked activity was left-lateralised (fronto-temporo-parietal) and was source-localised to the left occipito-temporal and fusiform area (roughly BA37), bilateral temporal areas (mostly BA20/21) and the left inferior and middle frontal area (BA45/46), part of the phonological processing network in reading. Our results demonstrate the importance of theta oscillations in silent reading, and suggest for the first time that inner speech may result from more synchronous oscillatory sampling of phonological representations that are decoded during silent reading. Prosody B64 Neural correlates of lexical tone and intonation in tonal and non-tonal language speakers Pei-Ju Chien1, Angela D. Friederici1, Gesa Hartwigsen1, Daniela Sammler1; 1Max Planck Institute for Human Cognitive and Brain Sciences It has been shown that pitch processing can be tuned differently according to the linguistic function carried by pitch information. However, it is unclear how different types of linguistic pitch are organized in neural networks for speech and language, and how speakers' language backgrounds influence these representations. Here, we investigated the neural correlates underlying the perception of lexical tone and intonation in tonal and non-tonal language speakers. To this end, we adopted Mandarin Chinese (hereafter Mandarin) as materials and compared Mandarin and German speakers' processing. We conducted an fMRI experiment and tested the categorization of tone and intonation, and a control baseline (voice gender categorization). Monosyllabic stimuli 'bi' spoken with two lexical tones (tone 2, 'nose'; tone 4, 'arm') and two intonation types (statement, question) by four Mandarin speakers (2 males) were audio-morphed in steps of 12.5% to create pitch-varied continua of tone, intonation and gender, respectively. During scanning, participants categorized the audio-morphed stimuli for either tone, intonation or gender in separate blocks. Comparisons of task-related activity indicated that between-task differences were found only in Mandarin speakers. Between-group differences were found only in the tone task. First, neural activity for tone and intonation in Mandarin speakers segregated into the bilateral parietal cortices for tone against intonation and into the right frontal cortex for the reverse contrast (intonation vs. tone). The parietal activity for tone vs. intonation included bilateral angular gyrus and superior parietal lobule, likely reflecting semantic processing. Frontal areas for intonation vs. tone included right middle frontal gyrus (MFG), inferior frontal gyrus (IFG), pre-supplementary motor area (preSMA) and insula, possibly reflecting stronger recruitment of pitch contour evaluation in intonation and a difference in cognitive control between tasks. Both tasks overlapped (against gender) in the left supramarginal gyrus (SMG), possibly indicating common involvement of phonological processes.
In German speakers, task-related activity for tone and intonation (against gender) largely overlapped in the left MFG, IFG, premotor cortex (PMC), preSMA, SMG, insula and cerebellum, suggesting that tone and intonation might not be discriminated by their linguistic functions but similarly related to phonological and pitch contour processing, with comparable demands on cognitive control. When comparing between groups, Mandarin speakers showed significantly stronger activity in the tone task in the bilateral insula, posterior cingulate gyrus, putamen and cerebellum, suggestive of stronger involvement of semantic processing and tonal articulation in native speakers. Both groups overlapped in the intonation task in the left IFG, insula, PMC and bilateral preSMA, possibly indicating similar processes across native and non-native speakers. Together, these findings demonstrate the interplay between linguistic pitch processing and speakers' language backgrounds. Multisensory or Sensorimotor Integration B65 Sensorimotor processing in language production: Evidence from audiovisual speech entrainment in people with aphasia Marja-Liisa Mailend1, Gabriella Vigliocco2, Myrna Schwartz1, Laurel J. Buxbaum1; 1Moss Rehabilitation Research Institute, 2University College London Research on speech entrainment demonstrates that some people with chronic non-fluent aphasia show dramatic improvement in fluency when shadowing the speech and mouth movements of another speaker in real time (Fridriksson et al., 2012; 2015). The theoretical and clinical importance of this effect calls for further research to replicate and extend the basic findings. We report early results from an ongoing study of audiovisual speech entrainment that uses improved methods and a case series design with the potential to elucidate which patients benefit from entrainment support, and why. Data are currently available from four participants with aphasia resulting from a left hemisphere stroke. Two participants (NF1 and NF2) presented with a non-fluent profile consistent with Broca's aphasia (WAB AQ 68.6 and 48.4, respectively); the other two participants had fluent aphasia of the conduction (F1; WAB AQ=71.9) and anomic (F2; WAB AQ=89.7) types. Research-quality structural MRI scans are currently available for three participants. The lesion of NF1 localized primarily to premotor cortex and the insula, NF2 had a large perisylvian lesion, and F2 had lesions in the primary motor and somatosensory cortices. All lesions extended deep into the white matter, with significant lesion overlap in premotor cortex. The within-subjects design required participants to produce speech in three conditions. In all conditions, participants watched a Tweety and Sylvester cartoon and then listened to a narrative that captured the events of the cartoon. Next, in the two entrainment conditions, participants imitated the speech of a recorded model in real time. In the Audiovisual condition, participants heard the model and saw her mouth movements, while in the Auditory-Only condition only auditory information was available. Finally, in the Spontaneous Speech condition, participants described the events of the cartoon in their own words. The participants' speech was recorded, transcribed, and coded for three outcome measures: different words per minute, % intelligible script words, and the time lag between the model's and the participants' speech. Findings replicated previous results in terms of fluency.
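One simple way to obtain the time-lag measure mentioned above is to cross-correlate the amplitude envelopes of the model's and the participant's recordings. The sketch below is an assumption about how such a lag could be computed, not the study's actual coding procedure.

```python
# Illustrative lag estimate via envelope cross-correlation; the data here are
# synthetic stand-ins for the two speech recordings.
import numpy as np
from scipy.signal import hilbert, correlate

def envelope(x, fs, cutoff=10.0):
    env = np.abs(hilbert(x))
    win = max(1, int(fs / cutoff))            # crude low-pass (moving average)
    return np.convolve(env, np.ones(win) / win, mode="same")

def lag_ms(model_env, participant_env, fs):
    xc = correlate(participant_env - participant_env.mean(),
                   model_env - model_env.mean(), mode="full")
    lag_samples = np.argmax(xc) - (len(model_env) - 1)
    return 1000.0 * lag_samples / fs

rng = np.random.default_rng(1)
fs = 1000
slow = np.convolve(rng.standard_normal(2 * fs), np.ones(200) / 200, mode="same")
model = (1 + np.tanh(slow)) * rng.standard_normal(2 * fs)        # modulated noise
participant = np.concatenate([np.zeros(150), model])[: model.size]  # trails ~150 ms
print(lag_ms(envelope(model, fs), envelope(participant, fs), fs))   # ~150
```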
The greatest number of different words was produced in the Audiovisual condition (mean=30; SD=2) and the fewest in the Spontaneous Speech condition (mean=22, SD=11), with the Auditory-Only condition in the middle (mean=28, SD=5). The effect was driven by the non-fluent group. A more consistent pattern across participants was observed for the % of intelligible script words, with an average of 82% in the Audiovisual condition and 65% in the Auditory-Only condition. Furthermore, the average lag between the model's and participants' speech was 158 ms and 151 ms, respectively, indicating that speakers were truly entraining rather than merely repeating the words after the model. In summary, this study replicates previous findings of improved fluency under audiovisual speech entrainment for people with non-fluent aphasia, and also shows that people with fluent aphasia may benefit from audiovisual information in producing specifically targeted words. Results from this preliminary sample indicate that people with very different aphasia profiles and overall impairment levels are able to successfully entrain to another speaker in real time. Speech Motor Control B66 Population dynamics in Broca's area during overt and covert speech Philémon Roussel1,2, Florent Bocquelet1,2, Stéphan Chabardès3, Blaise Yvert1,2; 1INSERM, 2Grenoble-Alpes University, 3Grenoble-Alpes University Hospital While overt and covert speech have been shown to share overlapping neural substrates, the detailed ensemble dynamics of either form of speech production remain largely unknown. Indeed, the activity of single cortical neurons in speech-related structures has only been described in a few particular situations. In particular, the activity of Broca's area, a region of the inferior frontal gyrus of the dominant hemisphere playing a major role in speech production, has not yet been investigated at the ensemble level. Here, we investigated the intracortical population dynamics within Broca's area of a patient undergoing awake surgery and temporarily implanted with a Utah array in the pars triangularis of the left hemisphere. Neural activity was recorded while the subject performed a task driven by cues on a screen. Each trial of the task consisted of reading aloud a short sentence, repeating it aloud and finally repeating it covertly. Across the 96 microelectrodes of the array, 38 putative single units showing stable activity over 33 consecutive trials could be isolated using spike sorting. Their smoothed firing rates were then computed by applying Gaussian kernel convolution to the binned spike times. Trials were divided into annotated intervals belonging to 3 conditions: overt speech, covert speech and silence. Ten cells were found to be significantly modulated across these 3 conditions, two of them showing similarly increased activity in the overt and covert conditions compared to silence periods, while some others were specific to overt speech. A linear discriminant model was then trained to classify all time samples among the 3 conditions using a relevant set of time-shifted firing rates, selected by a greedy approach. We found that it was possible to classify overt and covert speech time samples with more than 80% and 70% accuracy, respectively. These results suggest the existence of different global ensemble dynamics specific to either form of speech in Broca's area, with a subset of units showing similar activity in both cases.
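The two analysis steps just described (Gaussian-kernel smoothing of binned spike counts, then linear discriminant classification of time samples among the three conditions) can be sketched as follows; shapes, bin width and kernel width are illustrative assumptions, and the greedy time-shift selection is omitted.

```python
# Sketch of firing-rate smoothing and LDA classification; all data here are
# random placeholders, not the recorded Utah-array activity.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_units, n_samples = 38, 3000
binned = rng.poisson(0.5, size=(n_units, n_samples)).astype(float)  # 10 ms bins
rates = gaussian_filter1d(binned, sigma=5, axis=1)   # ~50 ms Gaussian kernel

labels = rng.integers(0, 3, n_samples)   # 0 = overt, 1 = covert, 2 = silence

X = rates.T                              # samples x units
lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, labels, cv=5).mean())  # ~0.33 on random labels
```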
B67 Distinct auditory neural populations track vocalization feedback delays Muge Ozker Sertel1, Qingyang Zhu1, Zhuoran Huang1, Beenish Mahmood1, Daniel Maksumov1, Werner Doyle2, Orrin Devinsky1, Adeen Flinker1; 1New York University School of Medicine, Department of Neurology, 2New York University School of Medicine, Department of Neurosurgery Neural responses in the auditory cortex are suppressed during self-generated vocal sounds. This auditory suppression mechanism has been hypothesized to facilitate the detection of vocalization errors. Nevertheless, while such a link has been demonstrated in the primate vocalization literature, it remains unclear in humans. To test this hypothesis, we conducted an auditory repetition task and a delayed auditory feedback task with neurosurgical human subjects using electrocorticography (ECoG) recordings. Neural responses in the high-gamma broadband frequencies (70-150 Hz) were used as the primary measure of neural activity. In the auditory repetition task, subjects listened to a word and repeated it afterwards. In line with previous findings, auditory responses were suppressed during speaking compared to listening, but to a different extent in different electrodes. In the altered feedback task, subjects read aloud visually presented words while their voice was recorded by a microphone and played back to them through earphones in real time with 0, 50, 100 or 200 millisecond delays. Behaviorally, articulation duration increased with increasing delays. Recordings from auditory cortex exhibited different types of responses to delayed feedback. Some electrodes showed a response profile sensitive to articulation duration, exhibiting longer response durations for increasing delays, while other electrodes showed sensitivity to the amount of feedback perturbation, exhibiting larger response amplitudes for increasing delays. Moreover, the degree to which electrodes were sensitive to feedback delays was significantly correlated with the amount of suppression in each electrode. These results suggest that distinct neural populations of the auditory cortex are sensitive to articulation duration and to alterations in auditory feedback during speech production, and that these responses are linked to vocalization-induced suppression in the auditory cortex. These findings constitute one of the first reports from neural recordings in humans providing direct evidence that auditory suppression is linked to a mechanism for vocalization error detection. Computational Approaches B68 Beyond Phrase Structure: An Alternative Analysis of Brennan and Hale (2019) Using a Dependency Parser Phillip M. Alday1, Ingmar Brilmayer2; 1Max Planck Institute for Psycholinguistics, 2University of Cologne Recently, Brennan & Hale (2019) published an analysis of EEG data recorded while participants listened passively to an audio recording of Alice in Wonderland. They found that part-of-speech predictions based on a hierarchical phrase structure model correlate with human EEG activity more strongly than sequential models of syntactic structure in everyday language (cf. Brennan et al., 2016; Hale et al., 2018). In particular, B&H compared the model fit of a sequential trigram model, a simple three-level recurrent neural network (SRN; Frank et al., 2015) and a context-free grammar (CFG) that explicitly assumes hierarchical syntactic structure, and found that the CFG model performs best in modeling the EEG data.
Here, we present an alternative analysis of the data of Brennan & Hale (2019) using a dependency grammar for hierarchical structure. While phrase structure grammars use structural categories and phrases to model dependencies between words, dependency models use words and functional relationships (e.g. subject or object). One advantage of dependency models lies in their ability to model cross-linguistic variability in word category membership: a word is interpreted as a verb when it has a "subject" and an "object" parameter, while it is interpreted as a noun when it serves as a parameter (e.g. object or subject) to a verb. Moreover, transition-based or shift-reduce dependency parsers provide a convenient model of incremental sentence processing, with every shift operation correlating with an increase in memory load, and every reduce operation with the construction of a single complex entity (comparable to "unification" or "composition" in some accounts), a corresponding decrease in memory load, and an overall different feature profile (cf. Lewis, Vasishth, and Van Dyke, 2006). In addition to online measures of whether each new token results in a shift or a reduce, there are also offline measures such as the number of arcs connecting a particular token to preceding elements (nleft) or succeeding elements (nright) and the overall depth of a token in the final parse tree. EEG data were preprocessed similarly to Brennan & Hale; likewise, regression timecourses were computed samplewise at each electrode with covariates for word frequency of the current, previous and next word; position of a word within the current sentence; and position of the sentence within the overall story. In contrast to B&H, we performed all modelling hierarchically, i.e. by using mixed-effects models on the combined data from all subjects at each electrode, and entered all predictors simultaneously. We found effects for nleft, which we interpret as reflecting the dynamics of memory load: the positive effect of nleft prestimulus corresponds to the increasing memory load over time, while the large negativity peak between 400 and 600 ms corresponds to the processing associated with changing representations in memory. We also found a weaker effect for nright at roughly 400 ms, which may represent predictive processes. Thus, our analysis demonstrates the feasibility of dependency models of natural language syntax as models of syntactic structure building during the comprehension of naturalistic, everyday language stimuli, without the strong (empirically difficult to test) assumptions of phrase structure grammar. B69 Neurobiological modeling of the Mental Lexicon Hartmut Fitz1,2, Dick van den Broek2; 1Donders Centre for Cognitive Neuroimaging, Radboud University, 2Neurobiology of Language Department, Max Planck Institute for Psycholinguistics Language processing requires long-term memory where linguistic units are stored and maintained (the Mental Lexicon). Theories of the Mental Lexicon differ on what these units are, how much internal structure they have, and how they are retrieved. Here, we investigate the neurobiological basis of the Mental Lexicon. That is, we ask how engrams evolve in neurobiological infrastructure, how they remain stable despite ongoing plasticity, and how they can be reactivated from partial retrieval cues.
We addressed these questions through simulation of sparsely-connected recurrent networks of spiking neurons (5,000) with conductance-based synapses (20% connectivity), similar to Litwin-Kumar & Doiron (2014). Neurons were either excitatory (E, adaptive exponential) or inhibitory (I, leaky integrate-and-fire). Networks were driven by sequences of words (15,000) corresponding to English sentences, followed by an idle period without language input (12 min). There was noisy, random background activity across both phases. Words were represented as Poisson spike trains (8 kHz) that targeted subpopulations of excitatory neurons. Each word consisted of a lexeme and phonological, syntactic and semantic features, and was presented for a duration proportional to phonemic length (e.g., cat: k|æ|t = 300 ms). There were 93 word features in total. Inhibitory plasticity (iSTDP) was used on I->E synapses to balance the network and achieve asynchronous, irregular firing (Vogels et al., 2011). Voltage-based spike timing-dependent plasticity (STDP) was used on E->E synapses to model learning in terms of long-term potentiation and depression (Clopath et al., 2010). To counteract instability due to Hebbian plasticity, synaptic normalization was used on E->E synapses (Turrigiano et al., 2008). These different plasticity mechanisms were switched on throughout the simulations. During learning, word features were rapidly encoded into strongly connected cell assemblies while connectivity between assemblies was depressed. These engrams emerged from naive, random networks through STDP. The functional role of iSTDP was to prevent runaway processes and winner-take-all synaptic dynamics. These findings confirm previous results with similar networks that used a different stimulation protocol and inputs without internal structure (Litwin-Kumar & Doiron, 2014). To test word retrieval, networks received phonological input and had to activate lexemes and syntactic and semantic features from these partial cues. Lexemes were activated with high accuracy (84%), but syntax and semantics remained below 50%. Although fine-grained features were encoded into the neurobiological substrate, it was difficult to retrieve them from phonological cues alone. During the idle period with unspecific background activity, engrams spontaneously reactivated. Strongly inhibitory synapses depotentiated while excitatory ones remained stable. This homeostatic balancing consolidated memories and facilitated retrieval. We also found that engram formation required a pool of excitatory neurons that were not stimulated by language input. This motivates a layered network structure where extrinsic input targets some layers but not others. This work provides a first step towards explaining how the Mental Lexicon could be implemented at the neurobiological level. Storage and maintenance were robust, but the retrieval of features over time lacked precision. Future work needs to address how contextual information from sentence-level processing can further constrain lexical selection.
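A drastically simplified toy version of these ingredients (leaky integrate-and-fire units, pair-based Hebbian STDP, synaptic normalization) is sketched below in NumPy. It is far simpler than the conductance-based model described above (no adaptive exponential neurons, no iSTDP, arbitrary parameters) and is meant only to make the moving parts concrete.

```python
# Toy sketch: LIF neurons with pair-based STDP and synaptic normalization.
# All parameters are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 200, 1e-3                          # neurons, time step (s)
tau_m, tau_stdp = 20e-3, 20e-3             # membrane and STDP time constants
v_thr, a_plus, a_minus = 1.0, 0.01, 0.012  # threshold, LTP/LTD step sizes
w = rng.uniform(0, 0.05, size=(n, n))      # w[i, j]: weight from j to i
np.fill_diagonal(w, 0.0)
w_target = w.sum(axis=1, keepdims=True)    # row sums preserved by normalization

v = np.zeros(n)
trace = np.zeros(n)                        # one spike trace per neuron
spiked = np.zeros(n)

for step in range(5000):
    i_ext = 1.2 * (rng.random(n) < 0.02)   # sparse external drive (~20 Hz)
    v = v + dt / tau_m * (-v) + w @ spiked + i_ext
    trace = trace * np.exp(-dt / tau_stdp) + spiked
    spiked = (v >= v_thr).astype(float)
    v[spiked > 0] = 0.0                    # reset after a spike
    # pair-based STDP: pre-before-post potentiates, post-before-pre depresses
    w += a_plus * np.outer(spiked, trace)
    w -= a_minus * np.outer(trace, spiked)
    np.fill_diagonal(w, 0.0)
    w = np.clip(w, 0.0, 0.1)
    # synaptic normalization: rescale each neuron's incoming weights
    w *= w_target / np.maximum(w.sum(axis=1, keepdims=True), 1e-9)
```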
B70 EARSHOT: Emulating Auditory Recognition of Speech by Humans Over Time James Magnuson1, Heejo You1, Monica Li1, Jay Rueckl1, Monty Escabi1, Kevin Brown2, Hosung Nam3, Paul Allopenna1, Rachel Theodore1, Nicholas Monto1; 1University of Connecticut, 2Oregon State University, 3Korea University INTRODUCTION. A critical gap in human speech recognition research is that no comprehensive cognitive models operate on real speech. We present EARSHOT, based on the DeepListener model we presented last year. EARSHOT is a neural network that incrementally maps spectrographic speech inputs to pseudo-semantics via a single recurrent layer of long short-term memory (LSTM) nodes. Compared to our previous report, we have achieved significant gains in accuracy, variability (17 talkers instead of 10) and potential ecological validity (with 2 human talkers and 15 high-quality speech-synthesized talkers), and we have conducted more sophisticated analyses that reveal emergent representations in hidden-unit responses. METHODS. We transformed speech files to 512-channel spectrograms (22050 Hz sampling rate, approximately 11000 Hz frequency range). Spectrograms were presented as model inputs as 512-channel vectors (~21.5 Hz per channel) in 10 ms steps. For each talker, we recorded 240 words. The 512 inputs map to a recurrent layer of 512 LSTM nodes, which map to 300 pseudo-semantic sparse random vector outputs (30 of 300 elements "on"; a common simplification, given the largely arbitrary mapping from form to meaning). We trained 17 separate models. In each, a different talker was completely excluded from training. In addition, a different 15 words were excluded from training for each trained-on talker. In each epoch, 3600 items (225 words x 16 talkers) were presented in random order. A simulation was counted as correct if the output of the model was closer (as indexed by vector cosine similarity) to the target word than to any other. Initially, we trained the model for 2000 epochs. We used two methods to assess emergent representations in the model's hidden units: sensitivity analyses based on methods from human electrocorticography, and representational similarity analysis (RSA) based on methods from human neuroimaging. RESULTS. Mean accuracy was 91% for trained items, 52% for words excluded for each trained talker, and 29% for talkers excluded from training. When training resumed with excluded items and talkers, accuracy increased to ~75% for excluded words and ~55% for excluded talkers within 50 epochs, and to ~90% and 88% within 500 epochs. However, generalization was worse for excluded human than synthetic talkers; hence, future work will focus on real speech produced by humans. The timecourse of lexical competition closely resembled that observed in humans (Allopenna, Magnuson, & Tanenhaus, 1998) and in the "gold standard" TRACE model (McClelland & Elman, 1986), although rhyme competition was somewhat diminished compared to our previous simulations with fewer talkers (10) but more words (1000). Sensitivity analyses and RSA revealed emergent distributed phonological codes in hidden-unit activation patterns even though the model was not trained to produce phonetic labels. The sensitivity results closely resemble results from direct recordings from human superior temporal gyrus reported by Mesgarani et al. (2014). CONCLUSIONS. EARSHOT provides proof-of-concept that a shallow model can operate on real speech while producing behavior similar to previous models that worked on abstract inputs. Emergent internal representations closely resemble human cortical responses. This framework opens the way to ongoing work aimed at increasing ecological and developmental validity.
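The architecture as described (512 spectrogram channels feeding one 512-unit LSTM layer, mapped to 300-dimensional sparse pseudo-semantic targets and scored by cosine similarity) can be expressed schematically as follows. PyTorch is an assumption here, and all data are random placeholders.

```python
# Schematic re-implementation of the described EARSHOT architecture; training
# details and data are placeholders, not the authors' code.
import torch
import torch.nn as nn

class Earshot(nn.Module):
    def __init__(self, n_freq=512, n_hidden=512, n_sem=300):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_sem)

    def forward(self, spec):                 # spec: (batch, time, 512), 10 ms steps
        h, _ = self.lstm(spec)
        return torch.sigmoid(self.out(h))    # semantic activation at every step

model = Earshot()
spec = torch.randn(8, 60, 512)                   # 8 items, 600 ms of pseudo-input
targets = (torch.rand(8, 300) < 0.1).float()     # sparse targets (~30 of 300 "on")
sem = model(spec)[:, -1, :]                      # read out at word offset

# An item is "correct" if its output is closest (cosine) to its own target.
cos = nn.functional.cosine_similarity(sem.unsqueeze(1), targets.unsqueeze(0), dim=-1)
accuracy = (cos.argmax(dim=1) == torch.arange(8)).float().mean()
```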
Speech Perception B71 Constrained Asyntactic Structure of Ancient Chinese Poetry Facilitates Content Parsing: A study combining MEG, RNN, and crowdsourcing Xiangbin Teng1, Min Ma2, Jinbiao Yang3,4,5, Stefan Blohm1, Qing Cai4,5, Xing Tian3,4,5; 1Max Planck Institute for Empirical Aesthetics, Frankfurt, 2Google, Inc, 3Division of Arts and Sciences, New York University Shanghai, 4East China Normal University, Shanghai, 5NYU-ECNU Institute of Brain and Cognitive Science at New York University Shanghai Poetic forms and genres are uniquely structured and only allow limited variations within a genre. This strict formal and thematic structure distinguishes poetry from everyday speech and raises an interesting question: how does this structural constraint affect poem appreciation? To comprehend continuous speech streams, listeners need to establish hierarchies of perceptual, linguistic and conceptual chunks online (Ding et al., 2015). Hence, the appreciation of poetry presupposes that listeners correctly segregate the poetic stream and construct linguistic and conceptual structures of increasing complexity. Apart from prosodic segmentation cues imposed by human speakers, parsing poems may be guided by the regularities of poetic structure - listeners' prior knowledge of the formal and thematic constraints may function as a cognitive 'template' to actively group words and lines and to predict the unfolding of poems. Testing this hypothesis can reveal how listeners appreciate poetry and illuminate why poetic genres are stringently structured. More broadly, it can deepen our understanding of speech perception by testing whether listeners can capitalize on asyntactic structures to parse speech streams. Methods: We chose Jueju, a genre of ancient Chinese poems with the most stringent form, and used a recurrent neural network to create thousands of artificial poems. This procedure controlled the language complexity of the poems and listeners' familiarity with the materials. We selected 150 artificial poems and collected 55,982 behavioral ratings on those poems regarding linguistic and aesthetic aspects of poem appreciation through online crowdsourcing. Next, we presented those poems to 13 native Chinese speakers during magnetoencephalography (MEG) recording. Each poem was presented twice as a sequence of isomorphous syllables, with each syllable individually synthesized so that acoustic indicators of poetic structure, such as prosodic cues and pauses, were excluded from the auditory stimuli. The syllabic rate was 3.33 Hz. During MEG data analysis, we conducted both frequency-tagging spectral analyses and temporal analyses of phase precession in both the sensor and source spaces of the MEG recordings. Results: We found that the participants could rely on their knowledge of Jueju to correctly segment each sentence in the novel artificial poems, reflected in a salient MEG frequency component around 0.64 Hz corresponding to the sentential rate of the poems. The 0.64 Hz component was localised primarily to the auditory and motor areas of the left hemisphere and correlated positively with language complexity. When the participants heard the same poem a second time, this auditory-motor component decreased in magnitude, but a sustained temporal response emerged in the left pars triangularis.
The phase series of MEG signals during the second presentation of the poems advanced faster than during the first – a phenomenon of phase precession (Lisman, 2005). Moreover, the behavioural ratings correlated negatively with language complexity but positively with MEG signals between 4 and 5 Hz. Conclusion: Listeners employ an auditory-motor circuit to actively parse incoming unfamiliar poem/speech streams based on their knowledge of the poetic structure, followed by the engagement of language areas that establish linguistic hierarchies to better predict the incoming speech content. B72 Neural Computations of Prediction Error Can Explain MEG Responses during Recognition of Spoken Words and Pseudowords Yingcan (Carol) Wang1, Ediz Sohoglu1, Rebecca Gilbert1, Richard N. Henson1, Matthew H. Davis1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge INTRODUCTION. Listeners recognise familiar words spoken in their native language with a speed and accuracy that is unmatched by artificial systems. One impressive aspect of human perception is the ability to identify words while detecting and encoding unfamiliar pseudowords. One putative computational mechanism is Predictive Coding (PC) (Davis & Sohoglu, in press), by which computations of prediction error (the difference between heard and expected speech segments) support both the recognition of familiar words (captain) and the detection of novel words (captick). METHODS. Twenty-four participants made lexical decisions on 160 sets of spoken words (e.g. captain, /kæptɪn/) and pseudowords (e.g. captick, /kæptɪk/), either with or without prior auditory presentation of another item sharing the same initial syllable (e.g. hearing captive, /kæptɪv/, or captiss, /kæptɪs/, before captain or captick). While performing this task, participants' brain activity was recorded using concurrent electro-/magnetoencephalography (EEG/MEG). Neural responses were time-locked to the deviation point (DP) where stimuli begin to acoustically diverge from each other (e.g. immediately after /ɪ/ for /kæptɪ.../). RESULTS. Behavioural results showed lexical competition effects (Monsell & Hirsh, 1998). Specifically, word recognition was significantly delayed by prior presentation (priming) of another word sharing the same initial sounds. However, behavioural effects of competitor priming were absent for trials with pseudoword primes or pseudoword targets. MEG results indicated that from around 400 to 700 ms post-DP, pseudowords elicited significantly stronger neural responses in the superior temporal gyrus (STG) than words. In the same time window and sensor locations, neural signals evoked by words and pseudowords showed an interaction between lexicality and priming in line with the behavioural effects. Specifically, words (captain) primed by another similar-sounding word (captive) evoked stronger neural responses in the STG than unprimed words, whereas neural signals evoked by primed and unprimed pseudowords (captick) did not differ, regardless of the lexical status of the priming item (captain vs captiss). CONCLUSION. A number of different computational accounts can explain why STG responses elicited by pseudowords compared to words might differ (e.g. due to additional processing difficulty for segments in pseudowords, or greater segment prediction error). However, our observation that changes to the ease of word identification modulate the same STG responses appears more consistent with the PC model than with other accounts (Davis & Sohoglu, in press).
In the PC account, competitor priming is due to word identification strengthening predictions between pre- and post-DP segments of primed words (captain). Correspondingly, larger prediction errors (PEs) will be elicited by subsequent presentation of a lexical competitor (captive), time-locked to the divergent post-DP segments. In contrast, pseudoword prime presentations do not affect word identification or PE computations in the same way. Instead, pseudoword presentations elicit maximal PE signals that promote memory encoding of pseudowords but do not modify segment predictions. Overall, our study links neural computations of prediction error to lexical competition and pseudoword encoding processes, in line with PC principles. B73 Cultural background shapes brain activity and associations elicited during listening to a narrative Maria Hakonen1, Arsi Ikäheimonen1, Annika Hultén1, Janne Kauttonen1, Miika Koskinen2, Fa-Hsuan Lin3, Anastasia Lowe1, Mikko Sams1, Iiro Jääskeläinen1; 1Aalto University School of Science, 2University of Helsinki, 3University of Toronto Introduction: The interpretation of a narrative can vary between individuals based on differences in listeners' previous experiences. People from the same culture tend to share more similar experiences, values, beliefs and attitudes. We hypothesized that cultural differences in familial background shape how the brain processes a narrative, as well as how the narrative is interpreted. Methods: We recruited 48 healthy volunteers who were fluent in Finnish. Half of the subjects had parents with a Finnish cultural background, whereas the other half had one or both parents from a Russian cultural background. The subjects listened to a 71-min narrative during ultra-fast fMRI. The narrative told a story of two protagonists, one with a Finnish and the other with a Russian background. Afterward, the narrative was replayed in 101 segments, and the subjects were asked to produce associations related to the previous segment for 20-30 sec, to describe what had been on their minds while they had heard the story in the fMRI scanner. Between-subject similarities of brain hemodynamic activity were estimated using inter-subject correlation analysis. The similarity in how the narrative was interpreted was estimated by comparing the semantic relatedness of the associated words across the two groups in a semantic space (Word2Vec) generated from a large internet text corpus. Results: The cosine similarity of the associated words in the semantic space differed between the two groups. Further, there were between-group differences in the inter-subject correlation of brain hemodynamic activity in the left superior temporal gyrus and Heschl's gyrus, as well as in the bilateral middle temporal gyrus, lateral occipital cortex, and precuneus. Conclusions: Our results suggest that even subtle cultural differences in familial background shape how a person interprets a narrative and how their brain processes it. The loci of the differences in inter-subject correlation suggest that cultural familial background modulates the processing of words and sentences, the processing of information accumulated across the narrative, as well as mental imagery elicited by the narrative.
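The semantic-relatedness comparison can be sketched as follows. The toy vectors stand in for Word2Vec embeddings trained on a large corpus, and aggregating each subject's associations into a mean vector is an illustrative assumption.

```python
# Placeholder sketch of cosine similarity between subjects' association words
# in an embedding space; the random vectors stand in for trained Word2Vec
# embeddings.
import numpy as np

rng = np.random.default_rng(3)
vocab = ["home", "family", "travel", "language", "school", "music"]
emb = {w: rng.standard_normal(300) for w in vocab}   # stand-in Word2Vec vectors

def mean_vector(words):
    v = np.mean([emb[w] for w in words], axis=0)
    return v / np.linalg.norm(v)

def cosine(u, v):
    return float(u @ v)

subj_a = mean_vector(["home", "family", "music"])      # associations, subject A
subj_b = mean_vector(["travel", "language", "school"]) # associations, subject B
print(cosine(subj_a, subj_b))  # pairwise similarities feed the group comparison
```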
B74 Pitch, Formants, and Formant Differences are Decisive Factors in Vowel Processing – Electrophysiological Evidence from N1 Amplitude and Latency Analyses Marina Frank1, Beeke Muhlack1, Franka Zebe1, Mathias Scharinger1; 1Philipps University Marburg Introduction: Vowels in spoken language combine pitch and spectral properties, both of which seem to determine neural processing, as evidenced by the N1. The N1 is a negative evoked potential measured by electroencephalography (EEG) that peaks at around 100 ms after the onset of an auditory stimulus. The purpose of this study is to investigate the role of the N1 in vowel processing; specifically, we were interested in the factors influencing the N1 component, namely perceived pitch and spectral features. In order to analyze the contribution of pitch, the stimuli used in this study were manipulated in fundamental frequency. Thus, the factors under investigation were f0, F1, F2, and the distance between F1 and F2. The impact of these factors on the amplitude and latency of the N1 peak was analyzed. Methods: We conducted an EEG experiment with 20 native speakers of German. The stimuli consisted of six different vowels of the German vowel inventory which were manipulated in fundamental frequency. We additionally used three sinusoidal tones as a control condition. Subjects were asked to listen to the vowels and to press a button on a response box when hearing a (pseudo-randomly inserted) friction noise. This task ensured the attention of the subjects. The EEG data from 32 electrodes were processed in MATLAB. Results: Overall, the pattern of results demonstrated that f0 influenced only the latency of open vowels (e.g. [a]), but not of closed vowels (e.g. [i]), while F1 influenced both the amplitude and latency of all vowel categories. F2, as well as the distance between F1 and F2, correlated significantly with the amplitude but not with the latency. The amplitude and latency of the N1 to the three pure tones were not significantly affected by differing pitches. Conclusion: This study supports the hypothesis that f0 is not as relevant as other factors, i.e. formant values, in vowel processing; this was tested using three pure tones as a control condition. Pitch only showed effects on the latency of the N1, while F2 only correlated with its amplitude. F1, in contrast, was relevant for both the amplitude and latency of the N1 peak. However, the strongest correlation with the amplitude was for the distance between F1 and F2. The results are in line with well-established research in phonetics, namely the importance of F1 and F2 for vowel discrimination. Within the epoch of the N1 component, vowels are processed on the basis of their spectral features. We found evidence that the (intrinsic) vowel pitch is not as relevant for the N1 as the distance between the first two formants. The results from the current study form an important contribution to research on pitch vs. spectral processing in spoken language.
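A peak-based analysis of this kind (find the most negative deflection in an N1 window, then correlate its amplitude and latency with f0 and formant predictors) might be sketched as follows, with simulated ERPs and predictor values as placeholders.

```python
# Placeholder sketch of N1 peak extraction and correlation with formant
# predictors; the ERPs and predictor values are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fs, n_cond = 500, 24                              # Hz; vowel x f0 conditions
times = np.arange(-0.1, 0.4, 1 / fs)
erps = rng.standard_normal((n_cond, times.size))  # averaged ERPs per condition

win = (times >= 0.07) & (times <= 0.15)                  # assumed N1 window
n1_amp = erps[:, win].min(axis=1)                        # peak amplitude
n1_lat = times[win][erps[:, win].argmin(axis=1)] * 1e3   # peak latency (ms)

f1 = rng.uniform(250, 800, n_cond)                # assumed F1 values (Hz)
f2 = rng.uniform(900, 2500, n_cond)               # assumed F2 values (Hz)
for name, x in [("F1", f1), ("F2", f2), ("F2-F1", f2 - f1)]:
    r, p = pearsonr(x, n1_amp)
    print(f"{name}: r={r:.2f}, p={p:.3f}")
```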
B75 Native-language experience reflected in low gamma, theta and delta frequency bands Monica Wagner1, Silvia Ortiz-Mantilla2, Valerie Shafer3, April Benasich2; 1St. John's University, 2Rutgers University, 3The Graduate Center of the City University of New York It has been proposed that discontinuities within the temporal structure of speech are tracked at multiple time scales in auditory cortex, with syllables of approximately 200 ms duration tracked at ~5 Hz and phonemes of approximately 25 ms duration tracked at faster rates of ~40 Hz. This processing is hierarchically nested, with delta oscillations modulating theta, which, in turn, regulates gamma activity. Phase locking in the theta band to discretized units associated with syllables has been demonstrated; however, phase locking to phoneme-level segmentations has been elusive. It is possible that entrainment to phonological sequences is obscured by lexical-level processing of meaningful spoken words. Thus, the aim of the study was to determine whether increased inter-trial phase locking (ITPL) within the electroencephalogram (EEG) would be found in the low gamma (LG) frequency band ~40 Hz for native-language phonological sequences within nonwords, but not for non-native sequences. EEGs were obtained from 24 native-English monolingual and 24 native-Polish bilingual adults while they listened to same and different nonword pairs during two counterbalanced conditions, an attend and a passive listening condition. Nonwords within the pairs contained the phonological onset sequences /sət/, /st/, /pət/ and /pt/, which occur in both the Polish and English languages with the exception of /pt/, which never occurs in word onset in English. A three-dipole source model, fit at the time intervals of the N1 and P2 components, was created for the English and Polish groups separately. The models, which identified sources in left auditory cortex (LAC), right auditory cortex (RAC) and an anterior-central (AC) source, explained 90% of the data variance for each group. Single-trial data segmented between -100 and 900 ms were transformed into the time-frequency domain between 2-90 Hz, using a 1 Hz wide frequency bin and 50 ms time resolution. Cluster analysis, in combination with permutation testing, was used to compare the language groups' measures of temporal spectral evolution (TSE) and ITPL from the three sources. These measures of spectral power and phase synchrony were analyzed between 50 and 450 ms in LG (30-58 Hz) and the low-frequency range (2-30 Hz) for the attend and passive conditions, for each word type (e.g., /sət/) separately. Increased ITPL values, without increases in power, were found for the Polish compared to the English group for the /pt/ onsets in nonwords in the LG subband ~30-40 Hz. These effects were found bilaterally in LAC and RAC. Language group differences in LG for the /sət/, /st/ and /pət/ onsets that occur in both languages were not found. Thus, phase resetting to discontinuities within the acoustic structures that correspond to a phonological sequence may be a feasible mechanism for enhancing linguistic information in auditory cortical pathways. Also, increased ITPL values for the Polish listeners were observed in the delta-theta frequency band for all sequences in RAC and LAC, with one notable exception: no differences were found for /pt/ in LAC. These results suggest that experience with phonological sequences modulates phase resetting in the LG, delta and theta frequency bands.
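The group-comparison logic (ITPL per group in a frequency band, tested by permuting group labels) can be sketched as follows. Band edges, the sensor-level data and the simple max-statistic correction are placeholder assumptions; the study itself used source-level data and cluster statistics.

```python
# Placeholder sketch of an ITPL group comparison with a permutation test.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpl(trials, fs, band=(30, 40)):
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))   # ITPL time course

rng = np.random.default_rng(0)
fs = 500
polish = rng.standard_normal((24, 60, fs))    # subjects x trials x samples
english = rng.standard_normal((24, 60, fs))

obs = (np.array([itpl(s, fs) for s in polish]).mean(0)
       - np.array([itpl(s, fs) for s in english]).mean(0))

# Permutation test: shuffle group labels over subjects, rebuild the difference.
pooled = np.concatenate([polish, english])
null_max = []
for _ in range(200):
    idx = rng.permutation(len(pooled))
    g1, g2 = pooled[idx[:24]], pooled[idx[24:]]
    diff = (np.array([itpl(s, fs) for s in g1]).mean(0)
            - np.array([itpl(s, fs) for s in g2]).mean(0))
    null_max.append(np.abs(diff).max())
p = (np.sum(np.array(null_max) >= np.abs(obs).max()) + 1) / 201
```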
B76 The role of attention in visual language information processing Iuliia Lamekina1, Yury Shtyrov1,2, Andriy Myachykov1,3, Sara Liljander4; 1National Research University Higher School of Economics, 2Aarhus University, 3Northumbria University, 4Aalto University Automaticity in language processing has been a debated issue in recent decades. Previous data from auditory experiments are consistent with the biased competition model of attention: strong connections within lexical circuits produce activation that is largely independent of the level of attention/inhibition. In contrast, pseudowords activate several circuits partially, and this activity is dependent on the inhibition level. However, no experiments controlling for attention (two conditions) in the visual modality, or investigating early semantic processing under such control, had been conducted before. Moreover, there is a persistent controversy as to the timing and localisation of automated responses. We conducted an experiment with 128-channel EEG recording. The stimuli were in Russian and included matched words vs. pseudowords vs. non-words as controls. The word sample was also designed to probe embodied semantics: 100 action vs. 100 non-action words. The task was to find on the screen either a specific letter combination (attend condition) or a colour pattern (non-attend). We predicted stable activation for words regardless of attention, activation for pseudowords dependent on attention, and differences in response localisation: action words would induce additional activation in sensorimotor cortex. The results of our experiment indicate that visual symbol processing comprises three stages: 140 ms (pre-attentive/automatic), 240 ms (attentional) and 300 ms (reprocessing). The visual modality differs from the auditory one: due to the strong effect of attention, differences between attend and non-attend conditions were larger than those between lexical and non-lexical units. However, our first hypothesis was still supported: at 140 and 300 ms, the responses for words had a smaller range than those for pseudowords, and this lexicality effect was statistically significant. There was also evidence for early semantic processing at 140 ms and reprocessing at 300 ms. As to the topographic results, we found that attended stimuli exhibited strong central negativity and occipital positivity at 240 and 300 ms. Both words and pseudowords elicited right- and left-biased fronto-temporal negative responses in the non-attend condition and more central responses in the attend condition; for non-words, the pattern was reversed. Occipital positivity was characteristic of the visual modality (but not of the non-attend condition at 140 ms). In addition, action words elicited additional activation in motor cortex. Our results support the hypothesis of more stable activation for words at 140 and 300 ms. In general, our study confirms the multi-stage processing model, with intervals in line with previous studies. Nevertheless, we did not find any significant traces of "ultra-rapid" lexical activation (around 30-70 ms), due either to the vulnerability of these earlier peaks or to the lack of a response. The topographic results, while consistent with the traditional account of linguistic processing in left frontal and temporal regions, also provide evidence for activation in the right hemisphere, and suggest a distributed network for language processing. The additional activation in motor cortex for action words is in line with the embodied semantics hypothesis. B77 Distinct Speech Production and Speech Perception Regions in the Human Cerebellum: A Neuroimaging Meta-Analysis Daniel R. Lametti1, Sarah Bobbit1, Jeremy I. Skipper2; 1Acadia University, 2University College London The cerebellum plays a known role in human speech motor control, but its role in speech perception and language comprehension remains somewhat of a mystery.
Recent neuroimaging meta-analyses report a functional topology of activation in the cerebellum related to language use. However, these meta-analyses are based on a small number of studies, and they fail to illuminate the roles of the cerebellum in both language use and the subprocesses that define it. Here we conduct a large-scale and specific meta-analysis to test the prediction that the cerebellum contains distinct circuits for action and perception in speech processing. To do this, we collated all available neuroimaging studies reporting cerebellar activity in the "neurosynth" database prior to July 2018 that overlapped with studies in the PubMed database related to 20 speech- and language-related terms and their Medical Subject Headings (MeSH) equivalents. We then went through the resulting N=2168 studies by hand to find those specifically associated with speech production/articulation (N=189), active speech perception (e.g. perception tasks with a motor response; N=283), and passive speech perception (e.g. perception tasks without a motor response; N=74). These were used to conduct ALE meta-analyses. Cerebellar clusters from the meta-analyses were also used in a cortical co-activation analysis with the full set of N=14,371 studies in the neurosynth database. The resulting networks were functionally labelled using a text-mining procedure on the articles associated with the networks. Whereas production and active speech perception studies strongly overlapped in the cerebellum, speech production was nearly completely dissociable from passive speech perception, even at lower thresholds. Clusters in the cerebellum active during passive speech perception co-activated with a number of distributed and dissociable cortical networks, ranging from those labelled as sensorimotor (including, e.g., primary motor and somatosensory cortex and the SMA) to those associated with semantic processing (e.g., posterior MTG and IFG). The results suggest that the cerebellum contains distinct circuits for the production and perception of speech, operating at various levels of the perceptual system. The cerebellum has a simple circuitry that has previously been causally linked with prediction and timing. Separable circuits for action and perception in speech may work to maintain the predictive timing of language production, reception, and the sub-processes that constitute these components of language. B78 Patterns of structural covariation with left precentral gyrus predict words in noise performance Alexis Hervais-Adelman1, Robert Becker1; 1Neurolinguistics, Department of Psychology, University of Zurich Structural covariation of the brain refers to the fact that, across a population, some brain areas show correlated inter-individual differences. Patterns of structural covariation have been assessed using a technique named MACACC ("Mapping anatomical correlations across cerebral cortex") and have been shown to reflect brain connectivity. Furthermore, they have been shown to differ consistently between groups of differing IQ. The technique therefore provides a method to generate insights into inter-areal relationships that might be related to individual performance differences. Here we apply it to the neural bases of speech-in-noise perception. We chose to examine the left precentral gyrus, as it is a region whose implication in speech comprehension remains controversial, although there is compelling evidence that it may be implicated in phonological processing and degraded speech perception.
We investigated structural covariation in a large sample (N=1110) of individuals for whom structural brain imaging had been carried out as part of the Human Connectome Project (HCP). These individuals underwent evaluation of their word-in-noise recognition ability using the words in noise test (WIN). The WIN requires individuals to listen to monosyllabic words embedded in multi-talker babble. Participants repeat the words, and the SNR is decreased stepwise until no more targets can be correctly reported. Participants were divided into three groups of N=370 based upon their performance, designated as High, Mid and Low. In order to ensure that effects were specific to WIN, we deconfounded scores for the possible influence of a large number of other behavioural and demographic variables made available as part of the HCP. Structural data were preprocessed as per the standard HCP protocol. Cortical thickness values were used for a set of 68 parcels based on the Desikan-Killiany atlas. In order to determine whether structural covariance is related to WIN scores, we tested whether the slope of the relationship between the thickness of the left precentral gyrus and each of the other 67 parcels differed significantly as a function of group. This was achieved by testing the interaction between group and target thickness in a linear model. Significant interactions were found for a number of targets, principal among which (in terms of statistical significance) were: right cuneus (F(1,1106)=9.05, p=.003), left pars triangularis (F(1,1106)=7.30, p=.007) and left entorhinal cortex (F(1,1106)=6.55, p=.011). The nature of the covariation pattern in these regions differed as a function of WIN performance group, such that the slope of the relationship between the left precentral gyrus and the right cuneus increased with performance, while the inverse relationship was apparent between the seed and both the left entorhinal cortex and the left pars triangularis. We acknowledge that these results are derived from a crude parcellation scheme, and that they target only one, relatively controversial, component of the speech comprehension system. Nonetheless, applying the MACACC method reveals some structure in the cortical relationships that might be related to speech-in-noise comprehension. Thus, evaluating covariation patterns derived from structural measures may provide intriguing and worthwhile insights into inter-areal cortical relationships that contribute to behaviour.
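The slope test described above amounts to a group x seed-thickness interaction in an ordinary least-squares model. The sketch below uses statsmodels on a simulated stand-in data frame.

```python
# Sketch of the covariance-slope interaction test: does the slope relating
# seed (left precentral) thickness to a target parcel's thickness differ by
# WIN performance group? The data frame is a simulated stand-in.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1110
df = pd.DataFrame({
    "seed": rng.normal(2.5, 0.15, n),              # left precentral thickness
    "group": rng.choice(["Low", "Mid", "High"], n),
})
slopes = {"Low": 0.2, "Mid": 0.4, "High": 0.6}     # built-in interaction
df["target"] = [slopes[g] * s + rng.normal(0, 0.1)
                for g, s in zip(df["group"], df["seed"])]

model = smf.ols("target ~ seed * C(group)", data=df).fit()
# The seed:group interaction terms test whether the covariation slope differs
# across performance groups (cf. the F-tests reported above).
print(model.f_test("seed:C(group)[T.Low] = 0, seed:C(group)[T.Mid] = 0"))
```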
Reading B79 Associations between cortical surface structure and reading-related skills Meaghan Perdue1,2, Joshua Mednick1, Kenneth Pugh1,2,3, Nicole Landi1,2,3; 1University of Connecticut, 2Haskins Laboratories, 3Yale University Research using functional and structural magnetic resonance imaging (MRI) has identified areas of reduced activation and gray matter volume in children and adults with reading disability [1,2,3], but associations between cortical structure and individual differences in reading skills in typically developing children remain underexplored. Furthermore, the majority of research linking gray matter structure to reading ability quantifies gray matter in terms of volume, and cannot specify the unique contributions of cortical surface area and thickness to these relationships. The present study applied a continuous analytic approach to investigate associations between distinct surface-based properties of cortical structure and individual differences in reading-related skills. Structural MRI scans and standardized measures of reading and language skills were acquired from a sample of typically developing children (N=76; ages 4.67–9.5 years; 42 females, 34 males). Correlations between cortical structure (thickness and surface area) and reading-related skills (word identification, pseudoword decoding, phonological awareness, and rapid automatized naming) were conducted using a surface-based vertex-wise approach in Freesurfer [4]. Separate models were created to assess associations with each behavioral measure independently for cortical thickness and surface area in each hemisphere. Results were evaluated at a cluster-corrected threshold of p = .05. Cortical thickness in the left superior temporal cortex, including Heschl’s gyrus, was positively correlated with word and pseudoword reading performance. No significant associations between cortical surface structure and phonological awareness or rapid naming were identified. No significant correlations with cortical surface area were identified. The observed positive correlation between cortical thickness in the left superior temporal cortex and word/pseudoword reading ability is consistent with previous reports of reduced gray matter volume in the left superior temporal cortex in reading disability [1,3]. Reduced cortical thickness in the left superior temporal cortex may reflect reduced neuroanatomical resources for spoken language processing that support the development of skilled reading [5]. References: 1. Eckert, M. A., Berninger, V. W., Vaden, K. I. J., Gebregziabher, M., & Tsu, L. (2016). Gray matter features of reading disability: A combined meta-analytic and direct analysis approach. ENeuro, 3(1), 1–15. 2. Maisog, J. M., Einbinder, E. R., Flowers, D. L., Turkeltaub, P. E., & Eden, G. F. (2008). A meta-analysis of functional neuroimaging studies of dyslexia. Annals of the New York Academy of Sciences, 1145(1), 237–259. 3. Richlan, F., Kronbichler, M., & Wimmer, H. (2013). Structural abnormalities in the dyslexic brain: A meta-analysis of voxel-based morphometry studies. Human Brain Mapping, 34(11), 3055–3065. 4. Dale, A. M., Fischl, B., & Sereno, M. I. (1999). Cortical surface-based analysis I. Segmentation and surface reconstruction. NeuroImage, 9, 179–194. 5. Clark, K. A., Helland, T., Specht, K., Narr, K. L., Manis, F. R., Toga, A. W., & Hugdahl, K. (2014). Neuroanatomical precursors of dyslexia identified from pre-reading through to age 11. Brain, 137, 3136–3141.
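A simplified stand-in for the correlational analysis just described, computed at the parcel level with FDR correction in place of the surface-based cluster correction used in the study (all inputs hypothetical):

```python
# Correlate cortical thickness with a reading score across children,
# parcel by parcel, then correct across parcels. This is a simplified
# sketch, not the vertex-wise, cluster-corrected pipeline in the abstract.
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

def thickness_behavior_correlation(thickness, scores):
    """thickness: (n_subjects, n_parcels); scores: (n_subjects,)."""
    results = [pearsonr(thickness[:, j], scores) for j in range(thickness.shape[1])]
    r = np.array([res[0] for res in results])
    p = np.array([res[1] for res in results])
    reject, p_fdr, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
    return r, p_fdr, reject
```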
B80 Characterizing the spatiotemporal pattern of neural activity and word representation during visual word recognition Laura Long1,2, Minda Yang1, Michael Sperling3, Ashwini Sharan3, Bradley Lega5, Alexis Burks5, Greg Worrell6, Robert Gross7, Barbara Jobst8, Kathryn Davis9, Kareem A. Zaghloul10, Sameer Sheth1, Joel Stein9, Sandhitsu Das9, Richard Gorniak3, Paul Wanda4, Michael Kahana4, Joshua Jacobs1, Nima Mesgarani1,2; 1Columbia University, 2Mortimer B. Zuckerman Mind, Brain, & Behavior Institute, 3Thomas Jefferson University Hospital, 4University of Pennsylvania, 5University of Texas Southwestern, 6Mayo Clinic, 7Emory School of Medicine, 8Dartmouth Medical Center, 9Hospital of the University of Pennsylvania, 10National Institute of Neurological Disorders & Stroke Visual word recognition (VWR) is the process of mapping the written form of a word to its underlying linguistic item, and is crucial for successful written communication. Characterization of healthy VWR neural mechanisms holds promise for improving treatment of reading deficits and deepening our understanding of literacy. While noninvasive neuroimaging studies have identified putative brain regions (fMRI, PET) and event-related potentials (EEG, MEG) involved in VWR, the spatiotemporal flow of information through the brain remains unclear. In this study, we analyzed high gamma neural activity from more than 13,000 electrodes in more than 140 intracranial neurophysiology patients as they read visually-presented words. We find that over 3000 electrodes show a task-sensitive response, with the most task-sensitive electrodes in the occipital lobe, followed by the frontal lobe, then the parietal and temporal lobes. We observe a difference in the excitatory/inhibitory balance between task-sensitive electrodes in different lobes: the occipital lobe has the highest proportion of excitatory responses, followed by the temporal lobe, the parietal lobe, and finally the frontal lobe with an almost even excitatory/suppressive balance. Latency analyses reveal that, on average, the occipital lobe responds most quickly, followed by the temporal lobe, with the slowest responses from the frontal and parietal lobes. The middle occipital gyrus, cuneus, and fusiform gyrus (which contains the visual word form area) are among the fastest areas on average. By clustering the responses of all task-sensitive electrodes, we identified a variety of response types including excitatory and inhibitory responses, onset and offset responses, and responses sustained for the duration of the word presentation. Furthermore, we investigated the neural representation of the stimuli’s visual, phonemic, lexical, and semantic features. From each feature set, we predicted each electrode’s response and investigated the properties of the best-predicted electrodes. Visually-predicted and phoneme-predicted electrode groups had mostly low latencies and excitatory peaks. Visually-predicted electrodes were spread between occipital and frontal lobes, while more phoneme-predicted electrodes were in the temporal lobe. Lexically-predicted and semantically-predicted electrode groups included more suppressive responses, with lexically-predicted electrodes distributed across frontal, temporal, and occipital lobes, and a plurality of semantically-predicted electrodes in the frontal lobe. We further investigated the encoding of the feature sets in these electrode groups over time using representational similarity analysis, which revealed that visual features peak quickly after word onset, followed by phonemic, then lexical, then semantic features, with lexical features having a later onset. Further investigation into individual word features showed that low-level features like word length and bitmap were represented early, with other features such as frequency represented later. Together, these results provide a high-resolution look at the spatiotemporal pattern and representation of neural activity during visual word recognition in the human brain.
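The time-resolved representational similarity analysis mentioned above can be sketched as follows; this is a minimal illustration on hypothetical neural and feature arrays, not a reconstruction of the study’s actual pipeline:

```python
# Time-resolved RSA: at each time point, correlate the pairwise
# dissimilarity of neural responses across words with the dissimilarity
# predicted by a stimulus feature set. All inputs are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(neural, features):
    """neural: (n_words, n_electrodes, n_times); features: (n_words, n_dims)."""
    model_rdm = pdist(features, metric="correlation")  # condensed model RDM
    n_times = neural.shape[2]
    rho = np.empty(n_times)
    for t in range(n_times):
        neural_rdm = pdist(neural[:, :, t], metric="correlation")
        rho[t] = spearmanr(neural_rdm, model_rdm).correlation
    return rho  # peaks indicate when that feature set is encoded
```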
B81 Brain activation related to individual differences in natural reading speed: A fixation-related fMRI study Fabio Richlan1, Sarah Schuster1, Stefan Hawelka1, Martin Kronbichler1, Florian Hutzler1; 1Paris-Lodron-University of Salzburg Introduction: Learning to read requires the development of brain systems capable of integrating orthographic, phonological, and lexico-semantic features of written words. In the neuroimaging literature to date, artificial reading tasks and unnatural presentation modes prevail, limiting the validity of the findings of these studies. Therefore, a new technique - fixation-related fMRI - has been developed, allowing the investigation of natural reading via a combined analysis of eye movement and brain activation data. Methods: The present study used fixation-related fMRI in 56 healthy adults during self-paced silent sentence reading. Individual differences in reading speed were defined as words read per minute during fMRI scanning. Brain activation related to individual reading speed was identified by using it as a predictor for the fMRI data. Results: Sentence reading compared with a fixation baseline resulted in activation of the typical reading network, including bilateral occipital, parietal, temporal, and frontal language regions. Faster individual reading speed was associated with higher activation in the bilateral occipitoparietal cortex, associated with visual-attentional processing; in the bilateral middle and inferior temporal cortex, associated with lexico-semantic processing; and in the right temporoparietal cortex, associated with phonological processing. Conclusion: This study is a first step in the identification of the brain systems related to natural reading speed. It extends the knowledge gained from previous studies presenting isolated reading material. Furthermore, it opens new possibilities for studying individual differences in natural reading speed in impaired readers, such as children with developmental dyslexia or neurological patients with acquired reading problems. B82 Morphological decomposition in the ventral stream Clare Lally1, Simon Fischer-Baum2, Tibor Auer1, Kathleen Rastle1; 1Royal Holloway University of London, 2Rice University, Houston Texas Morphology reflects one of the few systematic relationships between orthography and semantics, as letter combinations known as affixes systematically modify word meaning (e.g. un-: unlock, unable, unsure). Morphological information is accessed rapidly during skilled reading, and research has shown that morphological analysis arises at different levels, based on orthographic and semantic overlap (Gold & Rastle, 2007). Previous fMRI research has indicated that morphological analysis shows progressive abstraction in the ventral stream, an area extending from the visual cortex, through the left occipitotemporal cortex, to the superior temporal gyrus (Gold & Rastle, 2007). We used representational similarity analysis (RSA) to localise and characterise neural representations of morphologically complex words within the ventral stream. Participants silently read words which varied in orthographic, morphological and semantic overlap while we recorded neural responses using fMRI.
We constructed a priori prediction matrices based on the morphological properties of words, which were expected to elicit different patterns of activation at different levels of the hierarchically organized pathway. The stimuli consisted of words which allowed between-stem (late, plate, relate, lately, lateness) and between-affix (regret, refuse, refund, redial) comparisons. Words with high orthographic overlap were expected to show similar patterns of activation in posterior regions, whereas words that were close in meaning were expected to show similar patterns of activation in anterior regions as representations became more abstract. Most importantly, we expected to observe an intervening shift in the representations of words based on their morphological properties; for example, whether a word contained not only a plausible stem but also a viable affix, and further, whether this affix provided a semantic connection to the stem. Our analyses included region of interest (ROI) analyses, using orthographic ROIs based on Vinckier et al. (2007) and morphological/semantic ROIs based on Gold & Rastle (2007). We also conducted whole-brain searchlight analyses to explore morphological representations throughout the ventral stream. ROI analyses identified representations based on orthographic overlap in the most posterior ROI within the ventral occipital cortex (-18, -96, -10), whereas representations based on orthographic segmentation of morpheme units were found further anteriorly in the left ventral stream (-36, -80, -12). These results are consistent with findings from Vinckier et al. (2007), who found graded activation based on increasingly large orthographic units, such as frequent bigrams and quadrigrams, and extend them to indicate that this shallow orthographic processing also applies to morphemes, regardless of whether the unit appears in a morphologically viable word. Representations based on shared semantics were found in the middle temporal gyrus (-55, -42, -5), replicating the findings of Gold & Rastle (2007). Overall, RSA informs our understanding of the transformation of form to meaning in visual word recognition, and provides evidence of a graded hierarchy of abstraction within the ventral stream.
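The a priori prediction matrices described above can be illustrated schematically: under a given hypothesis, two words are predicted to be similar if they share the relevant property. A minimal sketch with an illustrative word list (the study’s actual stimulus codings are not reproduced here):

```python
# Build binary model RDMs: 0 = predicted similar, 1 = predicted dissimilar.
# Word list and property codings are illustrative only.
import numpy as np

words = ["late", "lately", "lateness", "regret", "refund"]
stem = ["late", "late", "late", "gret", "fund"]      # hypothetical stem coding
has_affix = [False, True, True, True, True]          # plausible affix present?

def model_rdm(labels):
    """Pairwise prediction matrix from a property labelling."""
    n = len(labels)
    rdm = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                rdm[i, j] = 0.0
    return rdm

stem_rdm = model_rdm(stem)        # posterior ROIs: orthographic-stem overlap
affix_rdm = model_rdm(has_affix)  # intermediate ROIs: morphemic segmentation
```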
B83 The role of outline-shape during orthographic processing in deaf readers: Evidence from ERPs Eva Gutierrez-Sigut1,2,3,4, Marta Vergara-Martínez2, Manuel Perea2; 1University of Essex, 2University of Valencia, 3DCAL, University College London, 4ICN, University College London Research on orthographic processing has shown that skilled hearing readers can quickly access the abstract representations of the letters within a word. Indeed, word recognition times are not affected by peripheral visual features such as the word’s outline-shape (Lavidor, 2011; Perea & Panadero, 2013). Deaf readers can also process orthographic representations quickly during word recognition (see Gutiérrez-Sigut, Vergara-Martínez, & Perea, 2019, for electrophysiological evidence with the masked priming technique). However, previous behavioural experiments suggest that deaf readers rely to a greater degree on visual information during word recognition and spelling than their hearing counterparts. For instance, in an analysis of spelling errors in deaf readers, Padden (1993) found a high rate of confusions among letters of the same height (t, d, and b) or among letters with descenders (p, q, and g), reflecting attempts to reproduce the outline shape of words (see also Perea, Marcet, & Vergara-Martínez, 2015, for perceptual-visual effects in deaf readers). In addition, Barca et al. (2013) found a larger lexicality effect in adult deaf signers than in hearing non-signers, which they interpreted as reflecting enhanced reliance on whole-word visual processing. In the present experiment, we used ERPs to investigate the time course of the influence of the outline-shape of words during word recognition in a group of congenitally deaf readers and in a group of hearing readers. Participants made lexical decisions to words and pseudowords created by replacing one consonant of a high-frequency word (e.g., violin). For half of the pseudoword targets, an ascending consonant (e.g., l) was replaced by another ascending consonant (consistent-shape pseudoword: e.g., viotin). For the other half, the replacement resulted in an inconsistent-shape pseudoword (e.g., viocin). Behavioural data showed a different pattern of results for deaf and hearing readers. Deaf readers were less accurate responding to consistent-shape pseudowords (viotin) than to real words (violin). However, there were no significant differences between inconsistent-shape pseudowords (viocin) and words. Hearing readers, however, were less accurate responding to either type of pseudoword than to real words. The ERP results also showed different effects for deaf and hearing readers. For deaf readers, inconsistent-shape pseudowords (viocin) elicited larger negativities than words between 290 and 600 ms post target onset. In contrast, ERPs to consistent-shape pseudowords (viotin) did not differ from words. This suggests that the overlap in perceptual-visual features between words and pseudowords (outline-shape) overrides lexical status. In hearing readers, both types of pseudowords elicited larger negativities than words, but each with a different time course: the lexicality effect for the inconsistent-shape pseudowords (viocin) peaked earlier and lasted longer (340–600 ms) than for the consistent-shape pseudowords (viotin) (450–600 ms). Thus, the present ERP findings strongly suggest that deaf readers rely to a greater degree on visual characteristics during word recognition than their hearing counterparts. B84 Reading acquisition of Chinese as a second language using pinyin and writing strategies Jieyin Feng1, Hoiyan Mak1, Qing Cai1; 1Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai Does reading in different writing systems involve the same neural mechanisms? While most recent studies have supported the universality hypothesis, we believe that each language and writing system has its own specific characteristics, which should be reflected in the brain. Chinese, compared to alphabetic writing systems, is unique in its phonology-orthography mapping, its character structure, and the mediation of pinyin spelling and character writing during reading acquisition, even when we only consider single-word reading. In this study, we investigated the mechanisms underlying reading acquisition of Chinese as a second language (L2), using functional magnetic resonance imaging and behavioral assessments.
We aimed to understand (1) the effects of pinyin and writing strategies during reading acquisition; (2) whether and how knowledge about semantic radicals (character components carrying semantic information) can be generalized; and (3) the process of visual-auditory integration in word recognition, and its differences between the two strategies. Our results showed that (1) after a seven-day training period, the recognition rate of trained one-character words increased significantly across modalities (visual, auditory, visual-auditory), no matter which strategy (pinyin only or joint pinyin-writing) was applied. Functional MRI data showed increased activations in the dorsal pathway in the visual and visual-auditory modalities after training in both groups. (2) The recognition rate and reaction times for untrained one-character words also improved significantly. While the pinyin-only group showed increased activations in right occipital and parietal areas, this generalization involved bilateral parietal regions in the joint pinyin-writing group. (3) The conjunction analysis across modalities showed visual-auditory integration of trained one-character words in both groups, but with different lateralization: the pinyin-only group involved bilateral parietal regions, while the joint pinyin-writing group activated the left parietal cortex and middle frontal gyrus. Therefore, we suggest that while both pinyin and writing strategies can facilitate word recognition, different neural bases are involved: learning with pinyin shows a pattern similar to that of alphabetic systems, which mainly relies on top-down phonological information retrieval, whereas the writing strategy instead involves regions underlying visuospatial processing, motor perception and integration. At the same time, readers can benefit from and generalize their knowledge about semantic radicals, specifically when they are trained with the writing strategy. Finally, pinyin and writing strategies contribute differently to visual-auditory integration, a process that plays an important role in word recognition. B85 Word-chunk or Words-chunk? Two-stage Chunking Operation in Reading of Complex Text Jinbiao Yang1,2, Qing Cai3,5, Xing Tian3,4; 1Max Planck Institute for Psycholinguistics, 2Centre for Language Studies Nijmegen, Radboud University, 3NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, 4Division of Arts and Sciences, New York University Shanghai, 5Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University A sentence usually consists of many letters that are organized in a hierarchical structure of text chunks: as short as a morpheme, as long as an entire sentence, and, in the middle, words and phrases. Readers incrementally segment a sentence into smaller chunks and analyze these building blocks directly, efficiently and separately. This process is termed chunking. Effective chunking during reading facilitates disambiguation and enhances efficiency for comprehension. However, the mechanisms of chunking remain elusive, especially in reading, given that information arrives simultaneously yet the writing system may lack explicit cues for labeling boundaries, as in Chinese.
In this study, we investigated the mechanism of the chunking operation that mediates reading of text containing multiple levels of hierarchical information, and focused on the following questions: Which level(s) of chunks in the text hierarchy serve as building blocks during chunking? If multiple levels of chunks are potential building blocks, which level has priority during the reading process? We used Chinese four-character strings to investigate the chunking operation in reading. The Chinese writing system is a good model for investigating multilevel chunking because each Chinese character is a basic lexical unit of similar length, and four characters can form two levels of chunks: chunks of two characters (local-level chunks) and a chunk of four characters (global-level chunk). Lexicality was manipulated at both levels, so that four types of stimuli were included (phrase, idiom, random words, and random characters). A line under two or four characters indicated the target chunk. We conducted a behavioral experiment (lexical decision task) to investigate the interaction between global and local chunk processes. Moreover, an EEG experiment (passive reading) was carried out to test the dynamics of the multi-level chunking operation. The behavioral results showed that the lexical decision on lexicalized two-character local chunks was influenced by the lexical status of the four-character global chunk, but not vice versa, suggesting that the judgement of global chunks takes priority over the local chunks. EEG results further revealed that nested chunks at both levels were detected simultaneously but further processed in different temporal orders: the onset of lexical access for the global chunks was earlier than that for local chunks, followed by parallel processing of chunks at both levels. These consistent behavioral and EEG results suggest a workflow for processing multi-level information in reading: segmentation occurs in an early and short time window within which possible chunks at all levels are detected based on the familiarity of lexical-orthographic features (detection stage). The chunks at each level are then further processed with distinct temporal characteristics (processing stage). Specifically, the processing of global chunks takes priority over the local chunks, while the processing of local chunks can launch before global chunk processing has finished. This partial temporal overlap enables interaction across levels before final integration. B86 An analysis of the dual route theory of reading using neural decoding Aidan Curtis1,2, Oscar Woolnough2, Cristian Donos2, Patrick Rollo2, Nitin Tandon2,3; 1Rice University, 2University of Texas Health Science Center at Houston, 3Memorial Hermann Hospital, Texas Medical Center The dual route theory of reading continues to be the most credible description of the neural organization that enables the mapping of word orthography to its pronunciation. The dual route model originates from data on patients who, due to brain lesions, rely exclusively on one of two routes: a phonological route for grapheme-to-phoneme conversion or a lexico-semantic route for direct lexical access. However, in healthy subjects both routes are engaged simultaneously; this implies that network-level representation, rather than focal neural activation in any individual region, would be more informative of overall processing.
To evaluate the validity of the dual route model, we combined the high spatiotemporal resolution of intracranial recordings with a logistic regression model to examine the dynamic nature of the phonological and lexical streams of reading. Thirty-five patients with intractable epilepsy, implanted with depth or subdural electrodes to localize the epileptic focus, were asked to read word stimuli comprising (i) regular words, (ii) exception words (orthographically irregular), (iii) pseudo-homophones (orthographically novel but phonologically familiar) and (iv) pseudowords (orthographically and phonologically novel). A logistic regression temporal neural decoding model was used to track the spatial distribution of task-relevant information over time. Features including filtered time courses, analytic envelopes of activation, and pairwise connectivity were used as input to a linear neural decoding model designed to maximize classification accuracy of stimuli from the time-localized brain state. Model hyperparameters and input feature mappings were selected via a Bayesian hyperparameter optimization algorithm to maximize stimulus distinguishability. We found high word-type classification accuracy between words and pseudowords in the first 250 ms following stimulus onset. Analysis of the decoding model’s coefficients at this time showed a high contribution to the decoding accuracy from electrodes in the anterior superior temporal gyrus and mid-fusiform cortex. Around the time of articulation, we saw higher word vs. pseudoword accuracy (both pre- and post-articulation) than when aligned to stimulus onset. Decoding performance around the time of articulation was mainly influenced by the inferior frontal gyrus, and a high gamma differential was found between words and pseudowords in this region. This word-pseudoword distinguishability lends credence to the dual route theory, which proposes that these two classes of words utilize separate mechanisms for word identification and verbalization. Temporal neural decoding enables the tracking of task-relevant information without imposing prior assumptions about anatomical or oscillatory features in the data, and can give insight into which cortical regions and nonlinear feature mappings are useful for identifying differences between trial classes.
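The temporal decoding approach just described can be sketched with a regularized logistic regression trained per time window; this minimal illustration uses fixed hyperparameters and hypothetical inputs rather than the Bayesian hyperparameter optimization reported in the abstract:

```python
# Time-resolved decoding: train a classifier in each time window to
# discriminate word from pseudoword trials; cross-validated accuracy
# traces when task-relevant information is present. Inputs hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decoding_timecourse(X, y, win=25):
    """X: (n_trials, n_electrodes, n_times); y: 0 = word, 1 = pseudoword."""
    n_times = X.shape[2]
    acc = []
    for start in range(0, n_times - win, win):
        feats = X[:, :, start:start + win].reshape(len(y), -1)
        clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
        acc.append(cross_val_score(clf, feats, y, cv=5).mean())
    return np.array(acc)  # one accuracy value per time window
```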
B87 The nature of the contextual diversity effect: Evidence from an incidental vocabulary learning task Eva Rosa1,2, José Luis Tapia1, Francisco Rocabado3, Marta Vergara-Martínez2, Manuel Perea2; 1Universidad Católica de Valencia San Vicente Mártir, 2ERI-Lectura, Universitat de València, 3Universidad Nebrija One of the more robust and replicated effects in psycholinguistics is the word-frequency effect: high-frequency words are more easily accessible than low-frequency words; this has been obtained using a variety of methods, from behavioural to electrophysiological. However, recent research has shown that contextual diversity (the proportion of contexts in which a word occurs) is a better predictor than word frequency in word recognition tasks (e.g., Adelman, Brown, & Quesada, 2006; Perea, Soares, & Comesaña, 2013) and during sentence reading (e.g., Plummer, Perea, & Rayner, 2014; see also Chen, Huang, Bai, Xu, Yang, & Tanenhaus, 2017). While contextual diversity and word frequency are highly correlated (i.e., high-frequency words tend to be words that occur in many contexts), their actual influence on word processing points towards different mechanisms. Vergara-Martínez, Comesaña, and Perea (2017) replicated the N400 word-frequency effect (high-frequency < low-frequency) in an ERP lexical decision experiment. Critically, the N400 contextual diversity effect went in the opposite direction (high contextual diversity > low contextual diversity). Vergara-Martínez et al. (2017) concluded that higher contextual diversity leads to semantically richer representations. Importantly, contextual diversity also plays a role when learning new words. Rosa, Tapia, and Perea (2017) manipulated the number of semantically distinctive contexts in which a new word appeared during the incidental acquisition of vocabulary (many contexts [fables, math texts, and science texts] vs. one context [fables, math texts, or science texts]). Results showed a facilitative effect of contextual diversity in four tasks: free recall, recognition, a multiple-choice test, and pictogram matching. This finding suggests that the strength of the memory trace of a newly learned word increases when there is a change between the different contexts in which it is experienced (Semantic Distinctiveness model: Jones, Johns, & Recchia, 2012) (see Zhang, Ding, Li, & Yang, 2018, for electrophysiological correlates of contextual diversity with word learning in the same direction as in the Vergara-Martínez et al., 2017, experiment). Here, we focused on the nature of the contextual diversity effect by manipulating a purely perceptual element: whether newly learned words benefit from being spoken by different voices. Grade 3 children had to incidentally learn 12 very low-frequency words while listening to three short fables (155 words). Each fable contained 4 experimental words. In the low-diversity condition, the fables were read by the same voice. In the high-diversity condition, each fable was read by a different voice. The number of times the children listened to each word was kept constant across both conditions. The experiment consisted of three training days (3 texts per day) and a final assessment. Results showed a facilitative contextual diversity effect in two tasks that evaluated memory and orthographic and semantic integration: a multiple-choice test and a pictogram matching task. These findings suggest that the SD model needs to be expanded by considering not only the semantic characteristics of each context, but also their perceptual cues (e.g., voice). Poster Session C Wednesday, August 21, 2019, 10:45 am – 12:30 pm, Restaurant Hall Control, Selection, and Executive Processes C1 Individual alpha frequency has greater impact on the interaction of prediction and interference in language comprehension than age Pia Schoknecht1, Dietmar Roehm1, Matthias Schlesewsky2, Ina Bornkessel-Schlesewsky2; 1University of Salzburg, 2University of South Australia Interference and prediction have independently been identified as crucial influencing factors during language processing. However, their interaction remains severely underinvestigated. As both might be influenced by inter-individual differences and cognitive changes across the lifespan, we conducted an ERP experiment that examined the interaction of interference and prediction during language processing and the influence of age and individual alpha frequency (IAF) on the underlying processes. IAF has been found to be a more crucial factor than age for performance during sentence comprehension within an older adult group [1].
We used the N400-as-model-updating account [2], which is based on the neurobiologically well-established predictive coding framework [3,4], for the theoretical framing of our study. This account proposes that negativities like the N400 are related to the degree to which prediction errors lead to internal model updating [2]. Eighty-three healthy, right-handed, native speakers of German (10-13 years (N=28), 18-35 years (N=31), 61-72 years (N=24)) read sentence pairs word by word. A context sentence introduced two noun phrases (NPs). A target sentence referred back to one of the NPs; the article was our critical word. Interference was manipulated via a gender match/mismatch between the NPs in the context and the article. In high-interference conditions, the article is ambiguous; in low-interference conditions, there is only one compatible NP in the context. Prediction was measured via offline cloze probability. Sentences were truncated before the article itself to obtain its cloze values. We further obtained cloze values for the following noun (with sentences then truncated after the article), because we hypothesized that article processing would also be influenced by prediction generation/confirmation/falsification related to the noun. We analyzed mean single-trial EEG activity in the N400 time window with LMMs and found that interference, predictability of article and noun, IAF and age interact. During processing of a low-cloze article which was followed by a highly predictable noun (i.e., the noun became highly predictable at the position of the article), the following effects were observed: at anterior electrode sites, the article elicited a negativity in low-IAF subjects (of all ages) for high-interference versus low-interference conditions, while high-IAF subjects (only young/older adults) showed a negativity for the low-interference versus high-interference conditions. We assume that the low-IAF subjects were not aware that the article under high interference was ambiguous and treated it as a confirmation of their noun prediction (probably shaped by verb bias). In contrast, the high-IAF subjects only engaged in model updating when the article was unambiguous (low-interference conditions). This suggests that IAF modulates the strategy that individuals use to update their internal predictive models during language processing. We conclude that interference should be included in predictive coding-based accounts of language and, in addition, that IAF may have a stronger influence than age on inter-individual differences in how prediction and interference interact in language. [1] Bornkessel-Schlesewsky et al. (2015). Front. Aging Neurosci. [2] Bornkessel-Schlesewsky and Schlesewsky (2019). Front. Psychol. [3] Friston (2005). Philos. Trans. R. Soc. Lond. B Biol. Sci. [4] Bastos et al. (2012). Neuron.
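The single-trial mixed-model analysis described above might, in simplified form, look like the following; the column names are hypothetical and the random-effects structure is reduced to by-subject intercepts for illustration:

```python
# Sketch of a single-trial LMM: N400-window amplitude as a function of
# interference, cloze probability, IAF and age group, with a random
# intercept per subject. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def fit_n400_lmm(trials: pd.DataFrame):
    model = smf.mixedlm(
        "n400_amplitude ~ interference * cloze_article * iaf + age_group",
        data=trials,
        groups=trials["subject"],  # random intercept per participant
    )
    return model.fit()
```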
C2 Verbal executive function tests capture language and control processes in chronic post-stroke aphasia Rahel Schumacher1, Ajay D. Halai1, Matthew A. Lambon Ralph1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge There is growing awareness that aphasia following a stroke can include deficits in other cognitive functions, for instance in executive functions, and that these are predictive of certain aspects of language function, recovery and rehabilitation. A variety of tests measuring different aspects of executive functions have been developed, but many commonly used tests contain linguistic stimuli and require speech output. Therefore, they are usually not administered to patients with aphasia. However, given that the interrelations between language and executive functions remain a matter of debate, and that non-aphasic dysexecutive patients’ performance on semantic tests parallels that of patients with semantic aphasia, aphasic patients’ performance on verbal executive tests might provide valuable insights, both from a theoretical and a clinical perspective. We administered three commonly used verbal executive tests (Letter & Category Fluency, Hayling, Stroop) to a sample of up to 32 patients with varying aphasia severity. Their performance on these tests and on a broad range of background tests of language and nonverbal cognitive functions, as well as information about their lesion volume and location, were taken into account to elucidate: 1) whether verbal executive tests measure anything beyond the language impairment in this patient group; 2) how performance on such tests relates to performance on language tests and nonverbal cognitive functions; and 3) the neural correlates associated with performance on verbal executive tests. Separate principal component analyses of performance on the three verbal executive tests showed a remarkably stable pattern of dissociation between core language demands and more general control abilities for the Hayling and Stroop tasks. This dissociation was further corroborated by significant correlations between factor scores on the language components and patients’ overall verbal impairment, as well as significant correlations of the factor scores on the control components with patients’ nonverbal impairment. The components underlying the Fluency tasks showed a less clear-cut separation of a language component but also yielded two main components related to word generation and switching abilities. Importantly, the generation component was associated with patients’ overall verbal as well as nonverbal impairment. Furthermore, lesion analyses revealed distinguishable clusters for the two components of each test. Our findings thus extend our clinical and theoretical understanding of dysfunctions beyond language in patients with aphasia.
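The principal component analysis step described above can be sketched as follows; this unrotated PCA on standardized scores is only an illustration of the general approach, not the authors’ exact procedure:

```python
# Decompose patients' subtest scores into orthogonal components
# (e.g., a language component and a control component). The score
# matrix is hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def component_scores(scores: np.ndarray, n_components: int = 2):
    """scores: (n_patients, n_measures) -> per-patient factor scores."""
    z = StandardScaler().fit_transform(scores)  # standardize each measure
    pca = PCA(n_components=n_components)
    factors = pca.fit_transform(z)
    return factors, pca.explained_variance_ratio_
```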
Development C3 Gesture-Speech Integration in the Adolescent Brain Salomi Asaridou1, Özlem Ece Demir-Lira2, Susan Goldin-Meadow3, Steven L. Small4; 1University of California, Irvine, 2The University of Iowa, 3The University of Chicago, 4The University of Texas at Dallas Speakers frequently use gesture to disambiguate speech in everyday communication. In adult listeners, the left inferior frontal gyrus (IFG) and the left posterior middle temporal gyrus (pMTG) respond more strongly when co-speech gesture provides information missing from speech as opposed to redundant information already present in speech. We have shown in previous work that preadolescent children activate a wider network of areas than adults for the same contrast, including the bilateral IFG, the right MTG and the left superior temporal gyrus (Demir-Lira et al., 2018). Based on these findings, we hypothesized that developmental changes in frontotemporal white matter networks lead to increasingly adult-like specialized and lateralized activation during gesture-speech integration. In the present longitudinal study we set out to test this hypothesis by examining gesture-speech integration in adolescents using the same paradigm used in adults (Dick et al., 2014) and preadolescent children. We collected fMRI data from 29 adolescents (mean age = 13y;6m ± 10m) while they watched videos of an actor narrating short stories. The stories varied as a function of semantic ambiguity in speech (AMBIGUOUS, UNAMBIGUOUS) and use of gesture (GESTURE, NoGESTURE), resulting in four conditions: 1) UNAMBIGUOUS+GESTURE, in which both speech and gesture contained specific information relevant to the story (e.g., a story about a pet bird accompanied by a flapping gesture); 2) UNAMBIGUOUS+NoGESTURE, in which the speech contained specific information (e.g., a story about a pet bird); 3) AMBIGUOUS+GESTURE, in which the gesture contained specific information that was missing in speech (e.g., a story about a pet accompanied by a flapping gesture); 4) AMBIGUOUS+NoGESTURE, in which speech was non-specific with no disambiguating gesture present (e.g., a story about a pet). We assessed participants’ attention to the task with a post-scan recognition test. We also collected diffusion MRI data and performed tractography to identify white matter fronto-temporal connections. For the crucial comparison, we found that in adolescents, the left IFG (pars triangularis) was significantly more active when gesture added information missing from speech than when it provided redundant information (AMBIGUOUS+GESTURE > UNAMBIGUOUS+GESTURE). Activity in this area significantly correlated with post-scan recognition performance. Processing co-speech gestures (AMBIGUOUS+GESTURE > UNAMBIGUOUS+NoGESTURE and AMBIGUOUS+GESTURE > AMBIGUOUS+NoGESTURE) elicited significantly higher activity in the pMTG bilaterally. Interestingly, fractional anisotropy along the tract connecting the left IFG with the left pMTG positively correlated with post-scan accuracy. Our results are in line with our developmental predictions: gesture-speech integration in adolescents engages a tighter network of areas, which includes the left IFG and the bilateral pMTG. With respect to frontal contributions to co-speech gesture processing, adolescents show increased activity in the left IFG when gesture disambiguates speech, resembling the pattern seen in adults. With respect to temporal contributions, unlike the left-lateralized activity in adults and right-lateralized activity in children, adolescents show higher bilateral pMTG activation when processing gesture that disambiguates speech compared to speech without gesture. We posit that the strengthening of white matter connectivity between the left IFG and pMTG with development is partly responsible for the more adult-like activation patterns in adolescents. C4 Cortical thickness lateralization and its relation to language abilities in children Ting Qi1, Gesa Schaadt1,2, Angela D. Friederici1; 1Max Planck Institute for Human Cognitive and Brain Sciences, 2Universität Leipzig The two hemispheres of the human brain differ in their anatomy and function. This asymmetry in anatomy and function can already be observed in the early stages of life and has been shown to change further with age. The development of the brain’s asymmetry has also been observed with respect to language, one of the most lateralized cognitive functions, favoring the left hemisphere in the healthy adult brain. However, it still remains unclear how language-related structural asymmetry and its developmental changes are related to language abilities.
We collected longitudinal structural magnetic resonance imaging data in children when they were 5 and 6 years old and assessed their language abilities when they were 5, 6, and 7 years old. Specifically, language abilities were assessed by a standardized language test measuring children’s general sentence comprehension abilities. Children were presented with a picture-matching task, in which the child is auditorily presented with a sentence and three pictures, from which the child has to choose the picture matching the presented sentence. Structural asymmetry was quantified by the laterality index of the cortical thickness of language-related regions frequently reported in previous studies (i.e., frontal and temporoparietal regions). Our results showed substantial changes in language performance across development (i.e., between ages 5 and 7), with only subtle changes in the brain’s cortical thickness asymmetry (i.e., between ages 5 and 6). Crucially, however, changes in cortical thickness asymmetry in the inferior frontal cortex between the ages of 5 and 6 were associated with changes in sentence comprehension abilities, suggesting that improvement in language abilities is influenced by greater cortical thinning in the left compared to the right inferior frontal gyrus. Further, the asymmetry of the inferior frontal cortex of the younger brain (i.e., at ages 5 and 6) was associated with the children’s language performance at age 7. To our knowledge, this is the first longitudinal study to demonstrate that children’s improvement in sentence comprehension seems to depend on cortical thickness asymmetry changes in the inferior frontal cortex, further highlighting the crucial role of this region in language acquisition.
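The abstract does not spell out the laterality index formula, but the conventional definition is LI = (L - R) / (L + R), positive for leftward asymmetry; a minimal sketch:

```python
# Conventional laterality index for a paired left/right measure,
# here cortical thickness of a region. Inputs are hypothetical.
def laterality_index(left_thickness: float, right_thickness: float) -> float:
    """Positive values indicate left > right cortical thickness."""
    return (left_thickness - right_thickness) / (left_thickness + right_thickness)

# Example: greater thinning on the left lowers the LI over development.
# laterality_index(2.9, 2.8)  ->  ~0.018
```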
C5 Many ways to read your vowels: The development of a Hebrew reading brain Upasana Nathaniel1, Yael Weiss2, Bechor Barouch1, Tami Katzir3, Tali Bitan1; 1Psychology Department, IIPDM, University of Haifa, 2Psychology Department, University of Texas at Austin, 3Department of Learning Disabilities and The Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa Brain plasticity implies that readers of different languages can have different reading networks. Theoretical models suggest that reading acquisition in transparent orthographies relies on mapping smaller orthographic units to phonology than reading in opaque orthographies. Hebrew has a transparent version of its orthography (pointed) used by beginners, and an opaque version used by skilled readers. Our previous study in adult readers showed that reading pointed words increased processing demands on regions associated with mapping orthography to phonology (Weiss, Katzir & Bitan, 2015). In the current study, we examined how children’s brains shift from greater reliance on reading pointed words in early stages to reading un-pointed words later on, and the neurolinguistic processes that underlie this shift from reliance on decoding small orthographic units to greater reliance on familiarity. 14 2nd graders and 9 5th graders read aloud Hebrew nouns during fMRI. Words were presented in a transparent or opaque script (pointed or un-pointed), and differed in length (3 or 4 consonants) and in the presence of a vowel letter (with or without). ROI analysis revealed in both groups an interaction of pointing by length in the Visual Word Form Area: only in the un-pointed condition, short words elicited greater activation than long words, with no difference in the pointed condition. Consistent with our behavioral results, these findings suggest that un-pointed short words face more competition from orthographically similar words, and that even young children rely on visual word patterns to read the opaque orthography. Similar effects were found in both groups when negative correlation with word frequency was used as a measure of processing difficulty. For the opaque orthography, short words were more difficult than long words, whereas the opposite effect was found for the transparent orthography in the bilateral inferior frontal gyri (IFG) and superior temporal gyri. This suggests that while pointed words are processed by piecemeal decoding of small units, recognition of un-pointed words relies on processing whole words, in terms of both phonological and lexical retrieval. However, orthographic transparency also had a differential effect on the groups. Only the younger children showed an increase in processing load for un-pointed over pointed words in the bilateral supramarginal and middle temporal gyri, associated with mapping orthography to phonology and with access to semantics, respectively. Furthermore, while older children showed transparency effects in the left IFG pars opercularis, involved in phonological segmentation, younger children showed this effect in the right hemisphere. This is also consistent with our finding of a developmental increase in left lateralization in the superior temporal gyrus, associated with phonological processing. This is the first fMRI study to examine the developmental processes associated with reading acquisition in Hebrew. Some of the identified neural changes are universal, i.e., a developmental increase in asymmetry for phonological processing, while other changes are unique to Hebrew, i.e., shared and differential effects of transparency in the two groups. Overall, these results elucidate the developmental changes associated with the shift from piecemeal decoding of small units to greater reliance on word familiarity associated with more proficient reading. C6 Naturalistic auditory narratives drive shared neural responses in visual cortices across congenitally blind individuals Marina Bedny1, Elizabeth Musz1, Rita Loiotile1, Rhodri Cusack2, Janice Chen1; 1Johns Hopkins University, 2Trinity College Dublin, the University of Dublin Studies of blindness provide insights into the contributions of nature and nurture to cortical function. Blindness enhances responses in ‘visual’ cortices to auditory and tactile stimuli (Sadato et al., 1996, Nature; Weeks et al., 2000, JoN; Roder et al., 2002, Eur J Neurosci). Whether visual cortices are reorganized systematically across different blind individuals, and if so, for what cognitive functions, remains debated. We used naturalistic stimuli to test the hypothesis that in blindness ‘visual’ cortices participate in higher-cognitive processes, including plot-level semantic interpretation. Congenitally blind (CB; n = 18) and blindfolded sighted (S; n = 17) participants listened to three auditory narrative soundtracks excerpted from movies (6 minutes each) and one comedic routine (‘Pie-man’) while undergoing fMRI. Participants also heard versions of the ‘Pie-man’ story with parametrically degraded narrative and linguistic content: Pie-man with sentences presented in a scrambled order (intelligible language but no coherent storyline) and played backwards (i.e., uninterpretable).
First, we correlated voxelwise timecourses across participants and stimulus types to test for timecourse synchrony across different cortical networks. Second, we examined the spatial organization of information across the cortex by measuring the distinctive spatial patterns of activity elicited by each 10-second segment of the auditory stimuli (movies, narrative, sentence shuffle and backwards) (Chen et al., 2017, Nat Neuro). Replicating prior work in sighted participants, across blind and sighted individuals, movie soundtracks and the naturalistic narrative (‘Pie-man’) synchronized neural timecourses and elicited distinctive spatial patterns of activity for different time segments, both in higher-cognitive fronto-parietal and temporal areas and in low-level auditory networks (Lerner et al., 2011, JoN; Chen et al., 2017, Nat Neuro). By contrast, meaningless time-varying stimuli (e.g., backwards speech) elicited synchrony and event-specific spatial patterns only in low-level auditory cortices. An intermediate degree of synchrony was observed for shuffled sentences in higher-cognitive networks. Crucially, only among blind individuals, movie soundtracks and narratives elicited temporal synchrony and time-segment-specific patterns of activity in ‘visual’ cortices. These effects were observed only for the meaningful and cognitively complex stimuli (i.e., movie soundtracks and narrative) but not for backwards speech. These findings suggest that in blindness 1) ‘visual’ cortices respond to semantic/higher-cognitive information contained in naturalistic auditory narratives and movies; 2) ‘visual’ cortex representations are spatially organized in a similar fashion across blind individuals; and 3) spatial patterns of activity within ‘visual’ cortices are sensitive to cognitive content. These results support the hypothesis that human cortices are highly cognitively flexible at birth.
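The intersubject timecourse correlation underlying the first analysis can be sketched with a leave-one-out scheme; a minimal illustration on hypothetical data for a single voxel or parcel:

```python
# Leave-one-out intersubject correlation: correlate each subject's
# timecourse with the mean of all other subjects' timecourses, then
# average. High values indicate synchronized responses to the stimulus.
import numpy as np

def intersubject_correlation(data):
    """data: (n_subjects, n_timepoints) for one voxel or parcel."""
    n = data.shape[0]
    r = np.empty(n)
    for s in range(n):
        others = np.delete(data, s, axis=0).mean(axis=0)
        r[s] = np.corrcoef(data[s], others)[0, 1]
    return r.mean()
```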
Disorders: Developmental C7 Heterogeneous neural deficits in dyslexia during speech sound processing tasks Tracy Centanni1,2, Sara Beach2,3, Ola Ozernov-Palchik2, Sidney May2,4, John Gabrieli2; 1Texas Christian University, 2Massachusetts Institute of Technology, 3Harvard University, 4Boston College In dyslexia, the double-deficit hypothesis suggests that children who struggle to acquire reading exhibit deficits in phoneme processing, rapid automatized naming, or both. Auditory processing is a critical skill for letter-sound matching, with speed on these tasks directly related to fluency. Recent work suggests that these two deficits may be caused by different dyslexia susceptibility genes and may manifest in unique patterns of brain activation. In the current study, we designed three tasks to differentiate between individuals with dyslexia who have increased neural variability to sound and those who have difficulty processing rapid auditory stimuli. Two tasks consisted of a ten-step consonant continuum between the sounds /b/ and /d/. In the passive condition, participants listened to the randomized presentation of these sounds while pressing a button in response to nature scenes. In the attention condition, participants pressed one of two buttons to categorize each stimulus as /b/ or /d/. The third task presented three sets of auditory stimuli at 4 different speeds, with participants making sense/nonsense judgements: sentences with semantic content, strings of consonant-vowel-consonant sounds with no semantic content, and amplitude-modulated white noise. Adults with (N = 25) and without dyslexia (N = 22) were assessed and then tested on these tasks while magnetoencephalography (MEG) data were acquired. Data were analyzed for consistency across individual trials and for the average signal in the low gamma range, corresponding to phoneme-level processing. When listening to speech sounds passively, adults with dyslexia exhibited significantly poorer trial-by-trial consistency in the superior temporal gyrus (STG) compared to typically-reading controls (unpaired, two-tailed t-test, p = 0.0017). As has been reported previously during passive listening tasks, approximately half of the dyslexia sample drove this group difference. During the identification task, which required attention and access to phonological representations, there was no group difference in neural response consistency (p = 0.23). In the rapid speech task, we observed a behavioral effect whereby approximately 50% of adults with dyslexia fell to chance performance at rapid speeds when the sentences contained no semantic context, while only 20% of controls exhibited this decline in performance. There were no group differences when semantic information was available or during the noise condition. We also observed group differences in low gamma asymmetry in the STG, especially during the stimuli without semantic context. These findings further support the double-deficit hypothesis and demonstrate that within a population of individuals with dyslexia, the brain and behavioral deficits observed vary widely. Future research should investigate the link between these biological mechanisms and reading outcomes. C8 Atypical neural sources to speech-sound changes in dyslexia Anja Thiede1, Lauri Parkkonen2, Paula Virtala1, Marja Laasonen3,4,5, Jyrki Mäkelä6, Teija Kujala1; 1Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, 2Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 3Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, 4Department of Phoniatrics, Helsinki University Hospital, 5Department of Psychology and Speech-Language Pathology, University of Turku, 6BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Central Hospital Developmental dyslexia has been associated with structural and functional brain abnormalities that may explain the reading impairment. One underlying deficit in dyslexia has been suggested to be of a phonological nature, supported by mismatch negativity (MMN) studies showing neural speech processing impairments in readers with dyslexia. However, there is a lack of studies assessing the MMN sources in more detail in adult readers with dyslexia. So far, weakened mismatch fields (MMFs) to sound-frequency changes have been found in the left hemisphere of participants with dyslexia. We expected to find diminished MMFs in source space to speech-sound changes of a pseudoword in readers with dyslexia. We recorded magnetoencephalography (MEG) from 21 dyslexic and 22 non-dyslexic adult readers to a repeating pseudoword /tata/, the last syllable of which was occasionally replaced by a duration, frequency, or vowel change. During the MEG recordings, participants watched a silenced movie.
Additionally, phonological processing, reading skills, intelligence quotient, and working memory were assessed with a neuropsychological test battery, and structural magnetic resonance images (MRI) of the brain were obtained to improve source localization. Data analysis was performed with the MNE-Python software package. First, deviant-standard difference curves were calculated for each change type and participant, and averaged over groups. Then, source modeling was performed using a cortically-constrained source space based on individual MRIs. Weighted minimum-norm source estimates were obtained, morphed to an average brain template, and averaged for each group. To obtain MMF contrasts between groups, a spatio-temporal cluster permutation test was applied. Time courses extracted from the resulting clusters were compared post hoc using one-way ANOVAs with group (control, dyslexia) as a between-subjects factor. Our results show that MMFs were elicited by all speech-sound changes in both groups. The groups differed in MMF source strengths. Preliminary results from the cluster permutation analysis suggest that readers with dyslexia had diminished MMF amplitudes to frequency changes in the right auditory cortex (all clusters p < 0.05). For duration and vowel changes, clusters were found in right fronto-parietal and frontal-pole areas. MMF amplitudes to duration changes were larger in participants with dyslexia compared to controls in the right fronto-parietal cortex (p < 0.05). MMF amplitudes to vowel changes were increased in participants with dyslexia in the frontal pole and diminished in a parietal area of the right hemisphere (both p < 0.05). These preliminary findings are consistent with previous studies showing sound-feature-specific decreases or increases of MMN responses in dyslexia. Furthermore, they support the hypothesis of diminished MMF strengths in dyslexia for frequency changes, adding to previous EEG and MEG studies and reflecting impairments in processing speech-relevant auditory information in dyslexia.
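A schematic of the MNE-Python steps named above (inverse modelling of the deviant-standard difference and morphing to a template) might look as follows; object names are placeholders, and preprocessing, forward modelling, and noise covariance estimation are assumed to have been done already:

```python
# Sketch of a weighted minimum-norm source estimate for a
# deviant-standard difference waveform, morphed to fsaverage.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

def mmf_source_estimate(evoked_diff, fwd, noise_cov, subject, subjects_dir):
    """Minimum-norm estimate of a deviant-standard difference (MMF)."""
    inv = make_inverse_operator(evoked_diff.info, fwd, noise_cov, depth=0.8)
    stc = apply_inverse(evoked_diff, inv, method="MNE")
    # Morph the individual estimate to the fsaverage template for group stats
    morph = mne.compute_source_morph(stc, subject_from=subject,
                                     subject_to="fsaverage",
                                     subjects_dir=subjects_dir)
    return morph.apply(stc)
```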
C9 Coarse print tuning in Chinese children with and without dyslexia: preliminary results from an EEG study Urs Maurer1, Fang Wang1, Ka Chun Wu1, Jianhong Mo1, Xiao Chang Zheng1, Jie Wang2, Chin Lung Yang1, Catherine McBride1, Carrey T. S. Siu2, Kevin K. H. Chung2, Patrick C. M. Wong1; 1The Chinese University of Hong Kong, 2Education University of Hong Kong

Coarse print tuning occurs within the first 200 ms after stimulus presentation, as measured by a difference in the N1 component of the ERP between responses to familiar words and unfamiliar visual control stimuli, such as symbols or false fonts. Coarse print tuning has been suggested to reflect visual expertise for words or letter strings. In alphabetic languages, N1 print tuning has been shown to develop with learning to read and to be reduced in children with dyslexia. The questions for the present study were whether print tuning would occur for Chinese in young children and whether it would be reduced in Chinese dyslexia. Here we report data from a preliminary sample of Chinese 2nd and 3rd graders who were either dyslexic (N=53) or not (N=29). They were presented with familiar Chinese and unfamiliar Korean characters while the EEG was recorded. ERP analyses focused on the N1 component at left and right occipito-temporal (OT) electrodes. We additionally computed a topographic analysis of variance (TANOVA) at each time point to test for effects that were not restricted to the N1 component and OT electrodes. Even though the N1 itself was rather small, the repeated-measures ANOVA on N1 amplitudes at OT electrodes revealed a robust main effect of stimulus condition (p<0.001), reflecting a larger N1 in response to Chinese compared to Korean. No other main effects or interactions were significant. The point-to-point TANOVA revealed significant ERP map differences between Chinese and Korean characters starting from 150 ms on, but no interaction between stimulus condition and group in any time window long enough to survive a duration correction for multiple comparisons. The robust print tuning effects in the N1 component are in agreement with many previous studies in alphabetic languages and a few studies in Chinese, suggesting that print tuning also occurs in Chinese, presumably reflecting visual expertise. The lack of group differences in print tuning between dyslexic children and controls suggests that print tuning may play a different role in the development of dyslexia in Chinese than in alphabetic languages.

C10 Fluency Effects of Novel Acoustic Vocal Transformations in People Who Stutter: An Exploratory Behavioral Study Rebecca Kleinberger1, George Stefanakis1, Satrajit Ghosh2,3, Tod Machover1, Michael Erkkinen4; 1MIT Media Lab, 2Massachusetts Institute of Technology, 3Department of Otolaryngology, Harvard Medical School, 4Brigham and Women's Hospital, Department of Neurology

INTRODUCTION: Persistent developmental stuttering is a complex motor-speech disturbance characterized by dysfluent speech and other secondary behaviors. Stuttering phenotypes and severity vary across and within individuals over time and are affected by the speaking context. In people who stutter, speech fluency can be improved by altering the auditory feedback associated with overt self-generated speech. This is accomplished by modulating the vocal acoustic signal and playing it back to the speaker in real time. Most research to date has focused on simple delays and pitch shifts. New embedded systems, technologies, and software enable a re-evaluation and augmentation of shifted-feedback ideas. The current study explores alternative, novel modulations of the acoustic signal with the goal of improving fluency. METHODS: Fourteen adults who stutter were evaluated. Throughout the study, subjects wore a microphone and earphones connected to a digital signal processing system. Subjects underwent a baseline fluency evaluation consisting of several speech tasks, including spontaneous speech and oral reading. No alterations in auditory feedback were provided during baseline testing. Following this, subjects heard their speech played through the headphones in real time while they completed a series of shorter speech tasks that included spontaneous speech and oral reading. A series of acoustic manipulations ("modes") was applied to the speech signal in real time, changing the vocal feedback the subjects received. This included modulations of the fundamental frequency, vocal timbre, and attack/decay characteristics. These modes produced changes in vocal self-perception, including whispering ("whisper"), choral effects with harmonies based on Western scales ("harmony"), and changes in reverberation ("reverb"), among others.
Stuttering rates (i.e., stutter-like dysfluencies per total spoken syllables) were calculated for each feedback mode and compared across modes. Several control modes were also tested: the pre-testing baseline, a "raw voice" mode (i.e., digital auditory feedback without signal transformation), and a post-testing conversation. Statistical comparisons were made using Wilcoxon signed-rank tests. RESULTS: Ten of eleven experimental modes (including "raw voice") yielded improvements in fluency compared with the pre-testing baseline, suggesting that most types of headphone-based auditory feedback are fluency-inducing. Compared with the "raw voice" mode, three feedback modes were associated with statistically significant fluency benefits ("whisper", "harmony", and "reverb"), suggesting a fluency benefit of these acoustic transformations beyond that of merely providing feedback. These modes were more fluency-evoking than simple pitch shifts or delays. Post-testing speech was associated with significant improvements in fluency compared to the pre-testing baseline, suggesting that the procedure itself yields persistent short-term fluency benefits even in the absence of ongoing acoustic feedback. This post-testing fluency benefit was robust but significantly smaller than for the "harmony", "reverb", and "whisper" modes. CONCLUSION: The study re-demonstrates the well-described fluency benefits of altered acoustic feedback and extends this finding to novel acoustic transformations with stronger effect sizes. While the temporal persistence of fluency in these modes remains uncertain (and requires longitudinal study), the identification of multiple fluency-evoking feedback modes may offer the potential to overcome the habituation effects and intolerable listening experiences that limit the effectiveness of existing feedback technologies.
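The core statistical step, comparing per-participant stuttering rates between a feedback mode and the "raw voice" control with a Wilcoxon signed-rank test, is straightforward to reproduce. The rates below are simulated for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical stuttering rates (dysfluencies per spoken syllable) for the
# 14 participants, in the "raw voice" control and one experimental mode.
rates_raw = rng.uniform(0.05, 0.15, size=14)
rates_whisper = rates_raw - rng.uniform(0.00, 0.05, size=14)

stat, p = wilcoxon(rates_whisper, rates_raw)  # paired signed-rank test
print(f"W = {stat:.1f}, p = {p:.4f}")
```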
C11 Chameleon effects in autism: Social interactions are influenced by bodily mimicry Bahar Tunçgenç1,2, Carolyn Koch1, Inge-Marie Eigsti3, Stewart Mostofsky1,2; 1Kennedy Krieger Institute, 2Johns Hopkins University School of Medicine, 3University of Connecticut

The nonconscious mimicry of the movements, postures and facial expressions of an interlocutor has important effects on social communication (Chartrand & Bargh, 1999). Mimicry facilitates interpersonal liking and prosocial behaviours, and may serve an evolved function of coordinating group actions (Schurmann et al., 2004), including during discourse (Eigsti, 2013). Individuals with autism spectrum disorders (ASD) are less likely to mimic emotional facial expressions, potentially reflecting broader attention and emotion recognition abilities. There is also growing evidence of autism-associated impairments in spontaneous mimicry of bodily actions; this work typically ascribes such impairments to broader impairments in social communication. Examining different action types can elucidate the domain-general underpinnings of mimicry; including participants with ASD provides a broader range of individual differences in these functions. This research examines the role of general cognitive and motor skills, and broader ASD symptomatology, in mimicry. Methods. Participants were forty children (n=22 ASD, n=18 typical development, TD), ages 8-12 years, matched on age and verbal and nonverbal IQ. Children watched a video of an actor narrating an age-appropriate story. At intervals, the narrator paused so participants could retell the story. This provided two baseline blocks (where the narrator did not make any story-irrelevant gestures) and three test blocks (where the narrator performed four gestures per block: yawning, scratching her arm, rubbing her face, and drinking from a cup). Mimicry actions during "listen" and "retell" blocks were coded and collapsed across blocks. Data were recorded for the Developmental Coordination Disorder Questionnaire (DCDQ; motor coordination) and the Social Responsiveness Scale (SRS-2; ASD symptomatology). Visual fixation on the video was recorded as an index of attentional engagement. Results. A repeated-measures ANOVA comparing the frequency of object-centered mimicry (drinking), person-centered mimicry (arm-scratching, face-rubbing) and highly person-centered mimicry (yawning) revealed a main effect of mimicry type, p<.001, and a significant type by diagnostic group interaction, p=.03. Yawning occurred infrequently across groups [M(SD)=.45(.93)], with no group difference and no difference from baseline to test blocks. In contrast, across groups, children were significantly more likely to perform arm-scratching and face-rubbing actions in test vs. baseline blocks, indicating a mimicry effect, p's=.03. These effects were driven primarily by the TD group; the ASD group's actions did not differ significantly by block. Person-centered mimicry was weakly correlated with ASD symptomatology across the sample, r(37)=.29, p<.10. There were no significant correlations with motor skills (DCDQ), IQ, or attentional engagement. Discussion. This study introduces a naturalistic method for assessing spontaneous mimicry in social interaction. Findings suggest reduced mimicry of person-centered actions in ASD, weakly associated with broader social communication skills. Results did not indicate a significant role of motor skills, assessed via questionnaire; data were also collected for objective motor tasks, currently being coded. These findings might reflect the reduced salience of specific actions, reduced attention to interpersonal coordination in the ASD group, or a broader reduction in links between perception and action in ASD. They are consistent with the hypothesis that perception-action coupling plays a significant role in social coordination, both in this diagnosis and more broadly.

Language Genetics

C12 Candidate genes for phonological processing disorders: A systematic review and expression analysis in Broca's and Wernicke's region Nina Unger1,2,3, Stefan Heim2,4,5, Dominique Hilger2, Sebastian Bludau2, Peter Pieperhoff2, Sven Cichon2,6,7, Katrin Amunts1,2,5, Thomas W. Mühleisen2,1,6; 1Cécile and Oskar Vogt Institute for Brain Research, Medical Faculty, Heinrich-Heine-University Düsseldorf, 2Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, 3Department of Neurology, Medical Faculty, Uniklinik RWTH Aachen, 4Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, Uniklinik RWTH Aachen, 5JARA-Brain, Jülich-Aachen Research Alliance, 6Department of Biomedicine, University of Basel, 7Institute of Medical Genetics and Pathology, University Hospital Basel

Introduction: Disorders of phonological processing occur in dyslexia (DysL), dyscalculia (DysC), specific language impairment (SLI), and the logopenic variant of primary progressive aphasia (lvPPA). There is evidence that these disorders may have a common biological basis. However, it is not understood which genetic factors are involved.
To identify such factors, we performed a literature screening for the four disorders and searched for potentially overlapping candidate genes for phonological processing ("phonology genes"). Next, we statistically investigated expression profiles of these genes in the cortical areas of Broca's and Wernicke's region, the major language-related brain regions, using an in-house software tool. We hypothesized that genes differentially expressed between Broca's and Wernicke's region are promising functional candidate genes for phonology. Methods: The PubMed screening included genetic linkage and association studies published between 2010 and 2018. A phonology gene had to be associated with at least two of the four disorders. Expression differences were tested in six left brain hemispheres using JuGEx, which allows analysis of gene expression data from the Allen Human Brain Atlas in relation to cytoarchitectonic maps from the JuBrain Atlas (Bludau et al., 2018). Results: 43 studies were identified, from which 13 candidate genes were selected for DysL/SLI, six for DysL/DysC, and two for DysL/DysC/SLI. The comparison between Broca's region (Amunts et al., 2004) and Wernicke's region (Morosan et al., 2005) showed a significantly higher expression in Broca's region for ATP2C2 (p=0.0451) and a significantly higher expression in Wernicke's region for DNAAF4 (p=0.0104); the remaining genes showed similar expression levels. Both genes showed significantly higher expression in areal gray matter compared to brain white matter (p=0.0001). Discussion: Our literature analysis revealed 21 phonology genes for three of the investigated disorders. One reason that no gene overlapped with lvPPA could be that DysL/DysC/SLI are etiologically linked through impairments of phonological awareness and phonological working memory. lvPPA also shows an impaired phonological working memory; however, spontaneous speech and naming errors are primarily characterized by phonemic paraphasias (an expressive symptom). Another reason could be that DysL/DysC/SLI show a strong neurodevelopmental component, while lvPPA is characterized by neurodegenerative processes in adulthood. Together with our expression analysis, two genes can be highlighted. ATP2C2 encodes a magnesium-dependent calcium transporter; interestingly, low magnesium levels have been reported for DysL and other communication disorders. DNAAF4 is involved in nerve cell migration during neocortex development, supporting the long-standing hypothesis of disturbed neuronal migration in DysL. Conclusions: The present study proposes new candidate genes for phonology. We assume that these genes play a role not only in impaired phonological processing but also in phonology in healthy individuals. The identified expression patterns may indicate functional specializations of the genes in Broca's and Wernicke's region. Evidence for their involvement in impaired phonological processing, or in phonology in the general population, needs to be derived from large genetic association studies. To shed further light on the biological function of these genes, future studies should also examine whether they play a role in the development of Broca's and Wernicke's region in humans.
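JuGEx itself runs a permutation-based analysis of Allen Human Brain Atlas (AHBA) samples; purely as a schematic of the regional contrast, one can compare a gene's mean expression between two cytoarchitectonic masks across the six donors. The values below are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy.stats import ttest_rel

# One gene's mean AHBA expression per donor within each region mask
# (hypothetical numbers; JuGEx uses a permutation test instead).
broca = np.array([6.1, 5.8, 6.4, 6.0, 5.9, 6.2])
wernicke = np.array([5.6, 5.5, 6.0, 5.7, 5.4, 5.8])

t, p = ttest_rel(broca, wernicke)  # paired across the six donors
print(f"t = {t:.2f}, p = {p:.4f}")
```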
Disorders: Acquired

C13 Language outcome after anterior temporal lobectomy: insights from tractography Ekaterina Stupina1, Anna Artemova1, Oleg Bronov2, Elisaveta Gordeyeva1, Dmitry Kopachev3, Elena Kremneva3, Marina Krotenkova3, Nikita Pedyash2, Andrey Zuev2, Andrey Zyryanov1, Olga Dragoy1,4; 1National Research University Higher School of Economics, 2National Medical and Surgical Center named after N.I. Pirogov, 3Research Center of Neurology, 4Federal Center for Cerebrovascular Pathology and Stroke

A growing literature shows a critical role of white matter tracts in the neural mechanisms of language processing (Catani, Jones, & ffytche, 2005). This study investigates the relation between tract damage and language outcome after anterior temporal lobe resection (ATLR). This surgery involves removal of the anterior part of the temporal lobe and mesial structures, the amygdala and the hippocampus (Clusmann et al., 2004), and, when performed in the language-dominant hemisphere, is associated with postoperative language deterioration (Sherman et al., 2011). The structural consequences of ATLR for the adjacent white-matter tracts, and their relation to postoperative language deficits, are not fully understood. In this study, we use a comprehensive language battery to analyze pre- and postoperative language status in patients undergoing ATLR. We also investigate whether changes in the adjacent associative white-matter tracts and the resection volume predict postoperative language outcome. 18 patients (age range 20-47, M=34.8 y.o.) who underwent left ATLR due to intractable left temporal lobe epilepsy participated in the study. Before and 2-8 days after (M=4.7) the surgery, all patients were tested with the Russian Aphasia Test (RAT; Ivanova et al., 2016), which assesses both comprehension and production at all levels of linguistic processing. Before and 1-11 days after (M=4.7) the surgery, diffusion-tensor imaging (DTI) sequences were acquired for all patients using a 3T scanner (64 directions, 2 mm isovoxel, b=1500 s/mm²). After preprocessing, three ventral tracts in the left and right hemispheres were manually reconstructed based on the deterministic tensor-based model: the inferior longitudinal (ILF), inferior fronto-occipital (IFOF), and uncinate (UF) fasciculi. Moreover, resection volumes were calculated based on normalized postoperative structural MRI images. Presurgically, all patients performed relatively well on language tests and showed close-to-normal language abilities; some patients had problems with naming and verbal memory, as reflected in object naming and sentence repetition tasks. Postsurgically, production but not comprehension was significantly affected (t=6.4, df=17, p<0.001): half of the patients decreased their overall production score by 13% or more. Object naming and sentence repetition tasks were affected the most. In 5 patients a different surgical approach was implemented, resulting in a smaller resection of the anterior temporal lobe and better preservation of the ventral tracts: only in these patients could the UF and ILF be reconstructed after the surgery using the same regions of interest as before the surgery. Taken as a binary predictor (all tracts preserved or not), ventral tract preservation was associated with significantly less worsening in language production scores (by 9.6%, p=0.008). At the same time, resection volume and changes in the volume of each tract separately did not correlate with the extent of language worsening.
Overall, our results suggest that preservation of the ventral tracts is an important factor in language outcome in patients undergoing ATLR. However, larger samples are needed to separate the contribution of each ventral associative tract to postoperative language worsening. The study was supported by RFFI (grant no. 18-012-00829).

C14 Comparing multi-dimensionality in post-stroke aphasia and primary progressive aphasia using principal component analysis Ruth Ingram1, Gorana Pobric1, Ajay Halai2, Seyed Sajjadi3, Karalyn Patterson2, Matthew Lambon Ralph2; 1Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, 2Department of Clinical Neurosciences, University of Cambridge & MRC Cognition & Brain Sciences Unit, 3Department of Neurology, University of California, Irvine

Language impairments caused by stroke (post-stroke aphasia) and neurodegeneration (primary progressive aphasia) have overlapping symptomatology and nomenclature, and both are divided into categorical subtypes. Despite the apparent similarities, the few direct comparisons between primary progressive aphasia and post-stroke aphasia to date have often been limited either in the subtypes included (e.g., only fluent subtypes) or in the cognitive/linguistic domains tested (e.g., only grammatical comprehension). The aim of this study was to compare a full range of linguistic and cognitive impairments in cohorts of post-stroke aphasia and primary progressive aphasia recruited without selection criteria for specific subtypes. We applied principal component analysis to explore the underlying structure of the variance in behavioural scores. Varimax rotation was applied to aid cognitive interpretation of the extracted components. Similar phonological, semantic and fluency-related components were found for post-stroke aphasia and primary progressive aphasia. A combined principal component analysis across post-stroke aphasia and primary progressive aphasia highlighted varying degrees of overlap within and between these groups on all extracted components. Classification analysis was employed to quantify the ability to separate subtypes based on a potential categorical boundary within the behavioural 'space' extracted by the principal component analysis, for primary progressive aphasia and post-stroke aphasia respectively. Semantic dementia patients were most similar to the traditional idea of a diagnostic category (i.e., within-group homogeneity and distinct between-group differences), whereas the considerable overlap within and between other subtypes of primary progressive aphasia and post-stroke aphasia indicates a lack of categorical boundaries. Less overlap on the fluency component could suggest differences in the application of the fluent/non-fluent scale in PSA and PPA. The finding of graded variations between and within categories of both post-stroke aphasia and primary progressive aphasia suggests that a multidimensional rather than categorical classification system may be a better conceptualisation of both disorders. This may prove to be a productive alternative approach for relating behaviour to lesion/atrophy correlates and the underlying pathology, as has been demonstrated for post-stroke aphasia. The similar, graded language dimensions observed for primary progressive aphasia and post-stroke aphasia indicate comparability of the disorders despite very different types of damage.
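The decomposition reported above, a PCA with varimax rotation over a patients-by-tests score matrix, can be prototyped in a few lines. scikit-learn's FactorAnalysis with rotation="varimax" is used here as a close stand-in, since its PCA implementation has no rotation option; the data are simulated and the dimensions are ours, not the study's.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
scores = rng.normal(size=(70, 12))        # simulated: 70 patients x 12 tests

z = StandardScaler().fit_transform(scores)
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(z)

loadings = fa.components_                 # 3 components x 12 tests
patient_space = fa.transform(z)           # each patient's position in the
                                          # phonology/semantics/fluency space
```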
C15 A prospective morphometric study on the protection of bilingualism against dementia Lidon Marin2, Victor Costumero1,2, César Ávila2, Maria Antonia Parcet2; 1Center for Brain and Cognition, University Pompeu Fabra, Barcelona, 2Neuropsychology and Functional Neuroimaging Group, Universitat Jaume I, Castelló de la Plana, Spain

INTRODUCTION: Evidence from previous studies suggests that bilingualism contributes to cognitive reserve, because bilinguals manifest the first symptoms of Alzheimer's disease up to five years later than monolinguals. Other cross-sectional studies demonstrate that bilinguals show greater amounts of brain atrophy and hypometabolism than monolinguals, despite sharing the same diagnosis and suffering from the same symptoms. However, these studies may be biased by possible preexisting between-group differences. The aim of this study was to investigate the protective effect of bilingualism against dementia both cross-sectionally and prospectively. METHODS: We compared global parenchymal measures of atrophy and cognitive tests in a sample of bilinguals and monolinguals in the same clinical stage and matched on sociodemographic variables. RESULTS: Our results suggest that the two groups did not differ in their cognitive status at baseline, but bilinguals had less parenchymal volume than monolinguals, especially in areas related to brain atrophy in dementia. In addition, a longitudinal prospective analysis revealed that monolinguals lost more parenchyma and showed more cognitive decline than bilinguals over a mean follow-up period of 7 months. CONCLUSION: Together, our results suggest that bilingualism promotes both cognitive reserve and brain reserve. The combination of these two factors provides a neural framework to explain the nature and origin of the bilingual advantage in the delay of dementia. These results provide the first prospective evidence that bilingualism may act as a neuroprotective factor against dementia and could be considered a factor in cognitive reserve.

C16 Prediction of early post-stroke aphasia recovery from initial language severity Alberto Osa Garcia1,2, Simona Maria Brambati2,3, Amélie Brisebois1, Marianne Désilets-Barnabé1,2, Elizabeth Rochon4,5,6, Alex Desautels1,2,7, Karine Marcotte1,2; 1Centre de recherche du Centre intégré universitaire de santé et de services sociaux du Nord-de-l'Île-de-Montréal, Hôpital du Sacré-Coeur de Montréal, 2Université de Montréal, 3Centre de recherche de l'Institut Universitaire de Gériatrie de Montréal, 4University of Toronto, 5Toronto Rehabilitation Institute, 6Heart and Stroke Foundation, Canadian Partnership for Stroke Recovery, Ontario, 7Centre d'études Avancées en médecine du sommeil, Hôpital du Sacré-Cœur de Montréal

Introduction: Post-stroke aphasia (PSA) is an acquired impairment in producing or understanding language, generally caused by a perturbation of cerebral blood flow in the brain's language network, most often an ischemic stroke in the left middle cerebral artery (MCA) territory. Several factors related to the patient, such as initial language impairment severity, and to the lesion, such as the integrity of white matter, have been shown to predict long-term recovery (at 3 to 6 months) well. However, the greatest degree of recovery takes place in the first weeks following a stroke, and few studies have investigated behavioral changes during the acute phase.
Also, the influence of white-matter structures related to language (e.g., the arcuate fasciculus) on early recovery is still largely unknown. This study aims to identify the best predictors of early recovery in post-stroke aphasia. Methods: Twenty patients who presented with an ischemic stroke of the left MCA were recruited. An MRI and a language assessment were performed 3 days and 10 days (on average) after stroke onset for each patient. Mean values of fractional anisotropy (FA) and mean diffusivity (MD) were extracted from the arcuate fasciculus (AF) in each hemisphere, as was lesion volume in the acute phase. Aphasia severity was measured using a Composite Score (CS) at both time points. The CS was designed using several subtests from different well-validated language tests for patients with aphasia, including tasks of comprehension, naming and repetition. A stepwise regression analysis using age, education, lesion size, FA and MD of the AF in each hemisphere, and initial CS was performed with achieved recovery as the outcome. Results: A significant improvement in the CS was observed between the initial assessment and the follow-up. A regression model including initial CS, lesion size, and FA of the AF in both hemispheres accounted for 53% of the variance in achieved recovery. Initial aphasia severity emerged as the best single predictor (R² = 36%) in the univariate analyses. Interestingly, albeit not significantly, the FA of the AF in the right hemisphere accounted for more variance than the FA of the AF in the left hemisphere. Conclusion: This study demonstrates that early recovery in PSA has a stronger relationship with initial language severity than with any other factor examined. The present results highlight the importance of language assessments in the early stages of post-stroke aphasia. Future studies including other biomarkers and long-term follow-ups will provide a more comprehensive understanding of aphasia recovery. A better understanding of early spontaneous recovery is an essential benchmark for the development of new early treatments.
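The final model reported above corresponds to a multiple regression of achieved recovery on initial severity, lesion size, and bilateral AF fractional anisotropy. The sketch below uses simulated data and our own column names, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20  # one row per patient (all values simulated)
df = pd.DataFrame({
    "initial_cs": rng.normal(0.5, 0.2, n),
    "lesion_vol": rng.normal(50, 20, n),
    "fa_af_left": rng.normal(0.45, 0.05, n),
    "fa_af_right": rng.normal(0.48, 0.05, n),
})
df["recovery"] = (0.6 * df["initial_cs"] - 0.002 * df["lesion_vol"]
                  + rng.normal(0, 0.1, n))

model = smf.ols("recovery ~ initial_cs + lesion_vol + fa_af_left + fa_af_right",
                data=df).fit()
print(model.rsquared_adj)   # proportion of variance explained, adjusted
```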
C17 Eye Gaze as a Predictor of Visual Confrontation Naming Impairment in Progressive Disorders of Semantic Memory Maurice Flurie1, Molly Ungrady2, Bonnie Zuckerman1, Daniel Mirman3, Jamie Reilly1; 1Temple University, 2University of Pennsylvania, 3University of Edinburgh

Introduction: Progressive naming impairment (i.e., anomia) is among the most debilitating and prominent symptoms of Alzheimer's Disease (AD) and Primary Progressive Aphasia (PPA). Patterns of naming impairment and analyses of naming errors in these clinical populations have provided integral converging evidence for the contribution of anterior and inferolateral temporal lobe regions to the neurobiology of confrontation naming. AD and PPA patients typically experience temporal lobe pathologies that differ from those of stroke aphasia. In turn, it is believed that patterns of naming in AD and PPA reflect a stronger correlation between 'naming and knowing' than is evident in the classical cortical aphasia taxonomy. That is, anomia in PPA tends to have a semantic basis rooted in a fundamental lack of 'knowing'. As such, anomia in these populations tends to be a powerful proxy measure for semantic memory and its systematic deterioration. Here we examined gaze patterns during visual confrontation naming in patients with PPA or AD over the span of two years. We hypothesized that semantic impairment associated with temporal lobe pathology would compromise core elements of visual feature integration; thus, evidence of semantic impairment would emerge almost instantaneously, prior to lexical access. Methods: We tracked eye movements using a portable table-mounted infrared eye-tracker (RED-M 120 Hz, SMI, Inc.) as patients (N=10) named familiar people and objects over the span of two years. Gaze analyses were windowed to 2500-3000 ms. Responses were classified as either correct or incorrect. We eliminated trials characterized by excessive blinks, off-screen looking, or unscorable responses. These data cleaning procedures yielded 4,110 discrete naming trials. We conducted a logistic mixed-effects analysis predicting item accuracy (1 or 0), with diagnosis (AD, svPPA, lvPPA, or bvPPA) and MoCA score included as control variables and eye-tracking measures as fixed effects of interest. The unique effect of each eye-tracking measure was evaluated based on the improvement in model fit (change in log-likelihood, i.e., a likelihood-ratio test) when the measure was added to a model that already included the two control variables and the other three eye-tracking measures. The model also included crossed random effects of participants and items. Results: In accordance with our hypothesis, picture naming accuracy was significantly associated with eye movement behavior: fixation count (χ2(1)=10.7, p < 0.01), fixation dispersion (χ2(1)=4.8, p < 0.05), saccade count (χ2(1)=21.2, p < 0.001), and saccade velocity (χ2(1)=26.5, p < 0.001). Specifically, compared to correct responses, incorrect responses were associated with fewer fixations, more fixation dispersion, more saccades, and greater saccade velocity. Conclusion: Abnormal gaze patterns index PPA patients' inability to rely on their semantic store of concepts for confrontation naming. With a reduced number of fixations and an increased number of saccades in incorrect responses, individuals were not able to rely on their extant knowledge to focus on salient information for automatized naming responses, but rather produced labored responses reflecting bottom-up processing. Taken together, these findings provide empirical support for a reduction of semantic representation, specifically in the early stages of processing external information.
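The likelihood-ratio logic used to evaluate each eye-tracking measure can be illustrated with plain logistic regression; note that the reported model additionally included crossed random effects of participants and items, which this simplified sketch (on simulated data) omits.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 4110  # number of naming trials (simulated)
saccade_count = rng.poisson(8, n)
moca = rng.normal(24, 3, n)
logit = 0.8 - 0.15 * (saccade_count - 8) + 0.05 * (moca - 24)
acc = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = correct, 0 = incorrect

X_base = sm.add_constant(np.column_stack([moca]))                 # controls only
X_full = sm.add_constant(np.column_stack([moca, saccade_count]))  # + saccade count
base = sm.Logit(acc, X_base).fit(disp=0)
full = sm.Logit(acc, X_full).fit(disp=0)

lr = 2 * (full.llf - base.llf)            # change in log-likelihood
print("chi2(1) =", round(lr, 1), "p =", chi2.sf(lr, df=1))
```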
C18 Sparse canonical correlation analysis reveals anatomical correlates of naming errors in Primary Progressive Aphasia (PPA) Rose Bruffaerts1,2, Jolien Schaeverbeke1, An-Sofie De Weer1, Natalie Nelissen1, Eva Dries2, Karen Van Bouwel2, Mathieu Vandenbulcke3,4, Rik Vandenberghe1,2; 1Laboratory for Cognitive Neurology, Department Neurosciences, KU Leuven, 2Neurology Department, University Hospitals Leuven, 3Laboratory for Translational Neuropsychiatry, Department Neurosciences, KU Leuven, 4Psychiatry Department, University Hospitals Leuven

Primary Progressive Aphasia can manifest as a nonfluent variant (NF), in which speech production is impaired, a semantic variant (SV), with impaired comprehension, and a logopenic variant (LV), with word-finding difficulties. The majority of PPA patients demonstrate impaired picture naming, albeit for different reasons: picture naming entails correct object identification, access to semantics, phonological retrieval and speech production. Identification of the regions implicated in generating different error types can elucidate pathological changes. Whereas univariate analysis can detect voxels that contribute significantly, multivariate analysis can be used to determine the maximal extent of the regions that predict errors. Sparse canonical correlation analysis (sCCA; Avants et al., 2010) is an extension of principal component analysis aimed at finding linear relationships in a low-dimensional space derived from, e.g., neuroimaging data. Here, we use sCCA to identify regions in which grey matter atrophy correlates with different types of naming errors. Audio recordings of the 60-item Boston Naming Test were scored in 41 PPA patients (19 NF, 13 SV, 9 LV; age range 49-79 y.o., 23 female). Errors were classified using the scheme developed for SV by Woollams et al. (2008), including omissions, semantic errors, superordinate errors and circumlocutions. We added speech production errors (distorted speech) as an additional error class. Segmentation was performed using SPM12 and CAT12 (voxel size 1.5x1.5x1.5 mm³), and images were smoothed (8x8x8 mm³ Gaussian kernel). The dataset was split into a training set (n=21) and a testing set (n=20). In the training dataset, sCCA (http://github.com/stnava/sccan) determined a subset of grey matter voxels in which atrophy linearly correlated with error-type frequency (cluster threshold: 200 voxels). This subset of voxels from the training dataset was then validated using the independent test dataset, and the maximal sparseness at which a correlation between atrophy and errors was found (P<0.05) was determined. Scanner type, total intracranial volume and age were used as nuisance variables. Correct responses were given in 47% of trials for SV, 59% for NF and 69% for LV, with omissions in 19%, 10% and 4%, respectively. Speech production errors occurred in 1% of trials for SV, 14% for NF and 4% for LV. Semantic errors were produced in 9% of trials for SV, 8% for NF and 8% for LV. Superordinate errors were generated in 5% of trials for SV, 1% for NF and 3% for LV. Circumlocutions were found in 11% of trials for SV, 2% for NF and 6% for LV. Using sCCA, speech production errors correlated with bilateral atrophy of premotor regions (sparseness: 0.001, i.e., 0.1% of grey matter; 443 voxels), semantic errors with atrophy in the left ventral stream (sparseness: 0.008, 0.8% of grey matter; 2152 voxels), and superordinate errors with bilateral fronto-occipito-temporal atrophy (sparseness: 0.059, 5.9% of grey matter; 14483 voxels). In sum, speech production errors in PPA reflected atrophy in premotor regions, semantic errors reflected atrophy in the left ventral stream, and superordinate errors reflected widespread atrophy; the anatomical correlates of naming errors thus reflect impaired language and speech processing at different scales.

C19 Oral narrative productions in Alzheimer's Disease related to memory systems and brain volume measures Lilian Cristine Hubner1,2, Alexandre Nikolaev3, Anderson Smidarle1, Gislaine M. Jerônimo1, Yawu Liu4, Lucas P. Schilling5,6, Fernanda Loureiro7, Alexandre R. Franco8,9, Ricardo B. Soder5,10, Ellen C. G. Siqueira1, Ariane Gonçalves1, Bárbara L. C. Malcorra1, Antônia A. Gazola10;
1School of Humanities (Linguistics Department), Pontifical Catholic University of Rio Grande do Sul (PUCRS), Brazil, 2National Council for Scientific and Technological Development (CNPq), Brazil, 3University of Helsinki, 4Departments of Neurology and Clinical Radiology, University of Eastern Finland, 5Brain Institute (InsCer), Brazil, 6São Lucas Hospital (PUCRS) (Neurology Service), 7São Lucas Hospital (PUCRS) (Speech and Language Service), 8Center for Biomedical Imaging and Neuromodulation, The Nathan S. Kline Institute for Psychiatric Research, New York, 9Center for the Developing Brain, Child Mind Institute, New York, 10School of Medicine, Pontifical Catholic University of Rio Grande do Sul (PUCRS), Brazil

Performance in narrative production has been explored as a complementary tool in the diagnosis of cognitive decline and dementia, as in Alzheimer's Disease (AD). Narrative production, an important ability in daily life, relies not only on linguistic aspects but also on other cognitive elements, such as memory systems. The aim of this research was to investigate the interdependence between the oral production of two types of narratives (retelling a short story and narrating a story based on scenes) and two types of memory (working and episodic), in association with brain volume measures, in an AD group compared to a healthy aging group. Twelve healthy elderly controls (HC) (age mean=72.9, SD=5.6, range=63-82; schooling mean=5.4 years, SD=1.7, range=3-8) and thirteen AD participants (age mean=75.9, SD=4.8, range=68-84; schooling mean=6 years, SD=2, range=3-8) participated in the study. They were asked to produce two short stories (retelling a narrative they had heard and telling a story based on a sequence of scenes), both followed by comprehension questions. The stories produced orally were transcribed and analyzed for their informative content and for micro (local) and macro (global) text structure. Two memory tasks were administered: the Digit Span (DS) to test working memory and a verbal learning task to assess episodic memory (EM). Voxel-based morphometry was used to obtain brain data. A mixed-effects analysis included five fixed-effect predictors (years of education, total EM score, total DS score, group type (HC vs AD), and brain volume from different regions), as well as possible interactions between predictors. Data analyses showed significant effects of group type (b=3.1, t=3.6, p=0.002) and the EM task (b=0.13, t=3.7, p=0.001) on the oral narratives. Regarding the brain volume analyses, group type (AD vs HC) was the predictor that interacted most strongly with left hippocampal volume (b=-7.1, t=-2.1, p=0.049), total gray matter volume (b=-14, t=-2.4, p=0.025), and subcortical gray matter volume (b=-13.4, t=-2.2, p=0.038). Working memory interacted with the 4th ventricle (b=-0.78, t=2.4, p=0.025) and the left ventral diencephalon (b=-0.78, t=-2.4, p=0.025), while the posterior cingulate cortex (b=-5.59, t=2.6, p=0.017) significantly explained narrative production. The results reflect the intrinsic articulation between memory systems, oral narrative production, and brain regions, factors to be considered in a comprehensive neurolinguistic analysis, especially when investigating clinical groups such as those with AD.
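The group-by-volume interactions reported above amount to a regression with interaction terms. The sketch below shows the shape of such a model with ordinary least squares on simulated data; the abstract's exact mixed-effects structure and column names are not specified, so everything here is illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 25  # participants (simulated)
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),        # 0 = HC, 1 = AD
    "education": rng.integers(3, 9, n),    # years of schooling
    "em": rng.normal(10, 3, n),            # episodic memory score
    "hippo_vol": rng.normal(3.5, 0.4, n),  # left hippocampal volume
})
df["narrative"] = (0.1 * df["em"] - 3 * df["group"]
                   + 2 * df["group"] * df["hippo_vol"] + rng.normal(0, 1, n))

# group * hippo_vol expands to main effects plus the interaction term.
m = smf.ols("narrative ~ education + em + group * hippo_vol", data=df).fit()
print(m.summary().tables[1])
```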
C20 Types of naming errors in aphasia: The effect of phonemic and semantic cueing Georgios Papageorgiou1, Dimitrios Kasselimis1,2, Georgia Angelopoulou1, Dimitrios Tsolakopoulos1, Eleftherios Kavroulakis3, Argyro Tountopoulou4, Eleni Korompoki4, Ioannis Zalonis5, Sofia Vassilopoulou4, Constantin Potagas1; 1Neuropsychology and Language Disorders Unit, 1st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens, 2Division of Psychiatry and Behavioral Sciences, School of Medicine, University of Crete, 3Department of Radiology, School of Medicine, University of Crete, 4Stroke Unit, 1st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens, 51st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens

Introduction: Even though the available aphasia literature remains inconclusive with regard to the underlying mechanisms of anomic disorders, specific aspects of anomic behavior have been extensively studied (Schwartz et al., 2009; Fridriksson et al., 2010). Two of these aspects are the types of errors produced and the effect of cueing. Several studies suggest that semantic cues may facilitate naming by adding semantic information to the target word, whereas phonemic cues may aid in accessing phonological representations (Howard & Gatehouse, 2006; Jefferies & Ralph, 2006). However, there are research findings indicating that this dichotomy may reflect an arguable, oversimplified correspondence between type of cueing and component of the naming process, which is hypothesized to involve, among others, access to phonological and semantic representations (Li & Williams, 1991). Since the available data on the efficacy of phonemic and semantic cueing in conjunction with the type of naming errors are sparse, we argue that studies in this direction would contribute to the elucidation of this complex issue. The objective of this study is to assess the types of naming errors, and successive post-cueing attempts, in relation to type of cue in aphasia, regardless of overall performance on a given task. Methods: Fifteen patients with acquired aphasia due to a single left-hemisphere stroke participated in this study. Language assessment included the Boston Diagnostic Aphasia Examination and the Boston Naming Test. Numbers and types of errors were annotated and coded into three distinct categories: semantic errors, phonemic errors, and neologisms. Numbers of correct answers following phonemic or semantic cueing, as well as self-corrections, were also annotated and coded into corresponding categories. MRI and/or CT scans were acquired for all patients, and lesion loci were specified by a neuroradiologist. Results: Our results indicated a clear-cut positive linear association between correct answers following phonemic cueing and the frequency of semantic errors: patients who predominantly produced semantic errors showed significant improvement with phonemic cueing, but not with semantic cueing. No significant correlations were found between the frequency of phonemic errors and correct answers following either semantic or phonemic cueing. With regard to the relationship between lesion topology and naming error types, posterior lesions were associated with an elevated frequency of semantic errors, while lesions extending more anteriorly were associated with a higher frequency of phonemic errors.
Conclusion: Our study provides insight into the efficacy of two different cues widely implemented in clinical practice during naming assessment, in conjunction with the predominant error types of aphasic patients, independently of overall performance. Our results indicate that phonemic cueing is more beneficial for patients who tend to produce more frequent semantic errors. We therefore suggest that aphasic patients who produce semantic paraphasias suffer from a word-finding deficit associated with access rather than storage. Lesion correlates of naming difficulties, as well as possible relationships between sites of brain damage and post-cueing correct responses, are also discussed.

Language Therapy

C21 Phonologically related EEG as a more reliable diagnostic tool in the recovery of aphasia in the different stages after stroke Miet De Letter1, Elissa-Marie Cocquyt1, Nils Knockaert1, Pieter van Mierlo2, Arnaud Szmalec3,4, Wouter Duyck3, Patrick Santens5; 1Department of Rehabilitation Sciences, Ghent University, 2Medical Image and Signal Processing Group, Department of Electronics and Information Systems, Ghent University, 3Faculty of Psychology and Educational Sciences, Department of Experimental Psychology, Ghent University, 4Psychological Sciences Research Institute, Université catholique de Louvain, 5Department of Neurology, Ghent University Hospital

Introduction: Phonologically evoked potentials seem to provide more information about neural plasticity than behavioral investigation in people with aphasia after stroke1. Ongoing research on semantic and grammatical evaluation yields similar results. In order to validate the role of EEG in clinical aphasia diagnosis and rehabilitation, the current study investigates whether phonologically evoked potentials (1) are more sensitive than behavioral data for detecting impairments in phonological input and (2) can be considered a prognostic and therapeutic indicator for people with aphasia in the different stages after stroke. Methods: Eight right-handed, Dutch-speaking individuals with aphasia after stroke (6 men, 2 women, ages 46;6-71;4) were included in this study. Four patients were evaluated in the subacute stage and 4 in the recovery stage. Both a behavioral and an electrophysiological registration of phonological input processing were acquired at two evaluation moments (T1 and T2). Between T1 and T2, all patients received phonologically based therapy for three weeks, with a maximum of two 30-minute therapy sessions a week. Neurophysiological findings in all patients were compared to previously developed2 normative data: peak amplitudes and latencies, both at T1 and T2, were compared to the normative values for their age groups. Effect sizes were calculated to estimate the difference between patient values and normative values. Results: The results of the current study confirm that neurophysiological evaluation (EEG) is more sensitive than behavioral evaluation for detecting phonological input impairments, especially when ceiling effects are reached in behavioral testing. Neurophysiological and/or behavioral indicators predicting the recovery of aphasia could not be determined from our data, but the amplitudes and latencies of the EPs can be helpful in guiding and monitoring therapeutic interventions. Conclusion: The current results confirm previous results1,3 on the added value of EEG in the follow-up of patients with aphasia.
We advocate the use of EEG as a standard tool in aphasia diagnosis and follow-up; to this end, the development of a user-friendly EEG device would be welcome. References: 1. Aerts, A., Batens, K., Santens, P., van Mierlo, P., Hartsuiker, R., Hemelsoet, D., Duyck, W., Raedt, R., Van Roost, D., & De Letter, M. (2015). Aphasia therapy early after stroke: behavioural and neurophysiological changes in the acute and postacute phases. Aphasiology, 29(7), 845-871. 2. Aerts, A., van Mierlo, P., Hartsuiker, R., Hallez, H., Santens, P., & De Letter, M. (2013). Neurophysiological investigation of phonological input: aging effects and development of normative data. Brain and Language, 125(3), 253-263. 3. Khachatryan, E., De Letter, M., Vanhoof, G., Goeleven, A., & Van Hulle (2017). Sentence context prevails over word association in aphasia patients with spared comprehension: evidence from the N400 event-related potential. Frontiers in Human Neuroscience, 10.

Meaning: Combinatorial Semantics

C22 Alpha Oscillations mark the Interaction between Language Processing and Cognitive Control Operations during Sentence Reading René Terporten1, Anne Kösem3, Jan-Mathijs Schoffelen2, Eleanor Callaghan1, Karin Heidlmayr1, Bohan Dai2, Peter Hagoort1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Centre for Cognitive Neuroimaging, 3Lyon Neuroscience Research Center

A sentence's context dynamically and flexibly constrains the way in which semantics are processed by the reader. This process is thought to engage an interaction between brain mechanisms that predict upcoming linguistic input and mechanisms that control information flow based on the amount of information provided by past sentential context. In a series of experiments, we put this to the test by focusing on the functional role of alpha oscillations as a marker of the effects of sentence context constraints on brain dynamics. In a magnetoencephalography study, participants read a word-by-word presentation of sentences. These sentences belonged to linguistically matched lists at three levels of context constraint (high, medium and low), defined by the sentences' cloze probability. Alpha oscillatory dynamics were investigated prior to a sentence's congruent target word. The results indicated that alpha power was non-monotonically related to the degree of context constraint. Source reconstruction localized this effect to parietal areas, which connected differently to left and right frontal and left temporal areas depending on the degree of constraint. Because of the non-monotonic alpha power modulation and its connectivity profile, these results were interpreted in light of cognitive control operations during language processing. To further investigate this interpretation, a subsequent electroencephalography experiment used the same paradigm but introduced the probability of target word congruency as an additional factor. Groups of participants were compared who read blocks of sentences with mainly (80%) congruent or mainly (80%) incongruent sentence endings. We predicted that the congruency probabilities would have an effect on alpha band dynamics during sentence processing.
Specifically, we expected that cognitive control operations (involved in target word prediction) would suppress linguistic predictions more strongly in the 80% incongruent sentence-ending condition, because the predictions that result from past sentential context are rendered mostly irrelevant in this task setting. This should be reflected in an effect on alpha power, as we hypothesized that alpha oscillations reflect cognitive control operations within this task setting. In sum, this line of research highlights alpha oscillations as a functional marker of context constraints during language processing. In addition, it emphasizes the dynamic interaction between what is commonly defined as a core language network and the networks that support cognitive control operations.

C23 Early Neural Signals of Prediction During Language Comprehension Are Modulated by Constraint and Expectancy Ryan Hubbard1, Kara Federmeier1; 1University of Illinois, Urbana-Champaign

Numerous studies of language comprehension have shown that individuals can utilize the contextual information of a sentence to predict or preactivate upcoming features of words, with both behavioral and electrophysiological measurements (Federmeier et al., 2007). While this research has been fruitful and has led to a greater understanding of prediction, the measures utilized are relatively coarse: behavioral responses give little insight into the specific neural processes occurring, and event-related potentials (ERPs) may primarily index more global low-frequency neural changes while missing subtle changes in patterns of activity. Neural representations of linguistic and semantic information are likely distributed across networks in the brain (Maess et al., 2006), and thus multivariate analysis techniques that can detect subtle changes in patterns of activity may be effective for understanding the specifics of which features are preactivated during comprehension, and when they are activated. To this end, we implemented Representational Similarity Analysis (RSA), a multivariate analysis technique that calculates the similarity between neural representations over time or space. We used RSA to examine the similarity of the neural representations of expected or unexpected sentence-final words and of the immediately preceding words in high and low constraint sentences (data from Federmeier et al., 2007). If the preceding word cues pre-activation of features of the final word, then representational similarity should be increased in more predictive contexts but decreased when the final word is unexpected. Time-based RSA revealed a peak of similarity between pre-final and final words 100-250 ms post-onset. The magnitude of this peak was graded by constraint and expectancy, with strongly constraining, expected endings showing the greatest similarity. Space-based RSA of this time window showed that representational similarity was greatest over occipital sites, suggesting that the pre-final word cued pre-activation of visual features of the sentence-final word. We extended the time-based RSA analysis with a generalization approach to examine the similarity between early activity following the pre-final word and later activity (in the N400 time window) following the final word.
This analysis revealed a striking pattern: similarity between early pre-final word activity and late final word activity was lowest for strongly expected endings, and was in fact negative, suggesting dissimilarity or repulsion (Chanales et al., 2017). These findings suggest that prediction during language comprehension is multi-dimensional, resulting in pre-activation of not just semantic but also visual features, and can occur rapidly, allowing for fast matching of incoming information to pre-activated representations. However, once representations are verified, they may be inhibited to allow focus on other processing, leading to downstream consequences of impoverished representations for predicted information (Rommers & Federmeier, 2018).
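At its core, time-resolved RSA of the kind described above reduces to correlating scalp patterns across channels at matched time points. The following is a minimal sketch on simulated data, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated single-trial EEG patterns: (n_trials, n_channels, n_times).
prefinal = rng.standard_normal((100, 32, 200))
final = 0.5 * prefinal + rng.standard_normal((100, 32, 200))

def pattern_similarity(a, b):
    # Pearson correlation across channels, per trial and time point.
    az = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    bz = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    return (az * bz).mean(axis=1)          # shape: (n_trials, n_times)

similarity = pattern_similarity(prefinal, final).mean(axis=0)
print(similarity.shape)                    # (200,) similarity time course
```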
C24 Goal Bias in the Visual World Kyra Krass1, Gerry Altmann1; 1University of Connecticut

Throughout our lives, we are taught that setting goals is vital to success; they help us plan our next steps. However, this bias towards goals is more pervasive and ingrained than one might think. Previous literature has shown that individuals have a goal bias, demonstrated by better recall of a goal than of a source, more descriptive language about a goal than about a source, and a focus on the goal when planning an action (e.g., Cohen & Rosenbaum, 2004; Lakusta & Landau, 2012; Papafragou, 2010). We asked whether individuals also show a goal bias in the Visual World Paradigm. Our first experiment measured anticipatory eye movements toward initial and end states upon hearing sentences containing a change-of-state action; we used reversible verbs (e.g., open/close), verbs of destruction (e.g., eat), and verbs of creation (e.g., bake). For reversible items such as "The pedestrian will open the umbrella", individuals looked more to an open umbrella (i.e., the goal) than to a closed one. This pattern was replicated in the creation items. For example, when hearing "The friend will knit the sweater", individuals looked more to the end state of a sweater than to the initial state of a ball of yarn. However, no differences were found for destruction items. Despite our robust effect for reversible items, one possible confound was that the results could be interpreted either as a bias towards the goal (an open umbrella is the goal of "opening") or as a bias to interpret the verb as an adjective (looking to an open object when hearing "open"). Therefore, we ran a second study where half of the items used a verb that could be interpreted as an adjective (e.g., open the umbrella) and half could not (e.g., land the plane). The results again showed more looks to the end state than to the initial state. In fact, there were numerically more looks to the end state in the non-adjective condition ("land") than in the condition where the verb can be interpreted as an adjective ("open"), confirming that the goal of the action was driving looks. This second study also included destruction and creation verbs with more items. The creation items (e.g., knit) still showed a larger proportion of trials with looks to the end state than to the initial state. No differences were found for the destruction items. These results suggest that destruction verbs show a different pattern of eye movements, possibly because of the lack of a depicted goal (i.e., an empty plate is not the goal of eating). Together, these studies demonstrate that adults exhibit an anticipatory goal bias in the visual world paradigm when given reversible and creation verbs. Whereas previous studies (e.g., Altmann & Kamide, 1999) have been interpreted as showing eye movements that reflect object affordances, we believe that the present results require a modification of this account, with anticipatory eye movements reflecting, at least in these cases, not what actions objects afford, but rather what goals those actions afford.

C25 Distinct neural mechanisms underlying structure building in different syntactic environments Isabella Fritz1, Anne Marte Haug Olstad1, Giosuè Baggio1; 1Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim

Structure building operations enable us to combine words into more complex structures. In the current study, we investigated these operations using event-related potentials. We designed an experiment with Norwegian (Bokmål) stimuli in which the same sequence of two words either forms part of a noun or verb phrase requiring composition (syntactic and semantic), or the phrasal/clausal boundary falls between these two words (see the examples below). Moreover, we manipulated whether structure building occurs in noun phrases (Adjective-Noun) or verb phrases (Verb-Noun), to determine whether the neural signatures are similar or different across syntactic environments. Examples: Adjective-Noun combination, Cut condition: "Noen bukser er blå] [jakker kan være svarte." (literal translation: "Some trousers are blue] [jackets can be black."); Adjective-Noun combination, Compose condition: "Svarte bukser og [blå jakker] kan brukes sammen." (literal translation: "Black trousers and [blue jackets] can be used together."); Verb-Noun combination, Cut condition: "Musikk er ofte det du trenger] [barna kan slappe av med det." (literal translation: "Music is often what you need] [the children can relax with it."); Verb-Noun combination, Compose condition: "Musikk er morsomt men da [trenger barna] mye trening for å lykkes." (literal translation: "Music is fun but then [need the children] much training to succeed."). ERPs were time-locked to the target nouns (jakker/barna), up to which the sentences were syntactically unambiguous: composition was either required (Compose condition) or a new phrase/clause had to be introduced (Cut condition). We then compared the ERPs of the Cut and Compose conditions in both syntactic environments (Adjective-Noun and Verb-Noun). For the Adjective-Noun combinations, we found an N400 effect, with a more negative deflection in the Cut condition that was strongest over centro-parietal sites. For the Verb-Noun combinations, we found a more negative deflection in the Compose condition, starting at around 200 ms post-noun onset and peaking at around 300 ms; this effect is strongest over anterior-left recording sites, and its temporal and spatial profiles are compatible with a left anterior negativity (LAN). In contrast to previous M/EEG and neuroimaging work, these distinct ERP signatures point to different composition processes depending on the syntactic environment in which the target noun is embedded. Studies so far have focused on the comparison between structure building and word lists, often assuming that structure building operations are the same across syntactic environments.
Meaning: Discourse and Pragmatics C26 Neural correlates of social communication Diola Elizabeth Valles Capetillo1, Magdalena Giordano1; 1Departamento de Neurobiología Conductual y Cognitiva, Instituto de Neurobiología UNAM Campus Juriquilla, Querétaro, México Social communication is fundamental for successful social interactions and involves the use of pragmatic language. The most difficult pragmatic form to communicate is verbal irony (Angeleri et al., 2008). Verbal irony is a statement that transmits the opposite meaning to its literal counterpart (Grice, 1975); it is relevant but contextually inappropriate, and the speaker intends that the listener recognize her true intention (Attardo, 2000). The statement can be interpreted as ironical, literal, or as a white lie (Pexman, 2008); if the listener does not detect the relation of the statement to the context, the statement can be interpreted as absurd. The brain areas that have been associated with verbal irony comprehension comprise the dmPFC and rmPFC (BA 9), IFG (BA 44), and STG (BA 22; Reyes-Aguilar et al., 2018); these results are based on 12 studies, many of them in Japanese. The purpose of the present study was to expand these results and evaluate the neural and cognitive correlates of irony comprehension in a Spanish-speaking sample. We created 14 contexts per condition (i.e., ironic, deceitful, literal, and absurd), and 14 statements that could be associated with all conditions. The linguistic properties were evaluated by 120 young adults (mean age = 22.91). Then, 30 young adults (15 M and 15 F, mean age = 22.73) classified the intention of the statements according to the context. To control for individual differences in reading speed, the stimuli were also recorded as audio. For an initial fMRI study, the task was presented in both the text and the auditory modalities. We selected 8 participants (4 M), half of whom performed the task in the text modality. A 3 Tesla GE MR750 scanner with a 32-channel head coil was used, and the BOLD signal was examined during the statement. Preprocessing was performed with the fMRIPrep pipeline (Esteban et al., 2018). The first-, second- and third-level data analyses were performed using FSL 5.0.9 (Jenkinson et al., 2012). We computed the mean for each condition and contrasts between conditions. The analyses were corrected at the cluster level at p < 0.05. For the text modality, the mean activation for irony and deceit was found in the left supramarginal gyrus with extension to the postcentral gyrus. For the audio modality, both irony and deceit were associated with bilateral activation in Heschl’s gyrus, planum temporale, and posterior superior temporal gyrus. The same results were found when calculating a joint analysis, text plus audio. Differences by contrasts were not found. In conclusion, the contexts and statements had the desired qualities and were well understood by the participants. The initial fMRI pilot study showed that irony and deceit, in both modalities, recruited areas involved in the semantic integration of social contexts. We did not find differences between these two conditions, likely due to the small sample used. A larger sample would allow us to determine if there are differences in the areas involved in the comprehension of irony versus deceit.
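To make the cluster-level correction step concrete, here is a minimal sketch using nilearn. The abstract's own analyses used FSL 5.0.9, so this is an analogous workflow rather than the authors' pipeline, and the file name and cluster extent are assumptions.

```python
# Illustrative only: the study used FSL; this shows the same idea (a
# cluster-thresholded contrast map) with nilearn. File name and the
# 50-voxel cluster extent are assumptions.
from nilearn.glm import threshold_stats_img
from nilearn import image

zmap = image.load_img("irony_vs_deceit_zmap.nii.gz")  # hypothetical group z-map

# Keep voxels above an uncorrected height threshold, then discard clusters
# smaller than 50 voxels
thresholded, threshold = threshold_stats_img(
    zmap, alpha=0.001, height_control="fpr", cluster_threshold=50
)
print(f"voxel-level z threshold: {threshold:.2f}")
```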
C27 Shared situation models between production and comprehension: fMRI evidence on the neurocognitive processes underlying the construction and sharing of representations in discourse Karin Heidlmayr1,2, Kirsten Weber1,2, Atsuko Takashima1,2, Peter Hagoort1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour, Radboud University To communicate a complex issue or situation, speakers and listeners usually exchange information in the form of larger discourse, such as narratives or expository texts. Importantly, to understand the meaning of a larger discourse, information needs to be integrated over an extended period of time to build a coherent situation model, i.e., a mental representation of the situation denoted in a text (Zwaan & Radvansky, 1998; Zwaan & Singer, 2003). To process a situation model, links between successive utterances within the text, but also with world-knowledge beyond the content of the discourse, have to be made. It is not yet clear to what degree the different neurocognitive processes are shared between the speaker and the listener to assure the listener’s comprehension of the situation model, in other words the parity of representations between production and comprehension (Pickering & Garrod, 2004, 2006). The goal of the present study was to identify the neural bases reflecting how situation models are constructed and shared between speakers and listeners. We designed an fMRI pseudo-hyperscanning study using a variant of the ambiguous text paradigm (Dooling & Lachman, 1971; Bransford & Johnson, 1972), i.e., conceptually ambiguous expository texts were presented with preceding contextual information that in some cases did and in others did not facilitate the extraction of a coherent situation model. Fifteen speakers produced ambiguous texts preceded by a highly informative title in the scanner, and 27 listeners subsequently listened to these texts, preceded by either a highly informative, an intermediately informative (highly/intermediately informative title conditions) or no title at all (absent title condition). The interlocutors’ brain activity related to situation model processing was expected to vary with comprehension-relevant contextual information, even though linguistic and sensory information was equal across conditions. Conventional BOLD activation analyses in listeners, as well as inter-subject correlation (ISC) analyses (Hasson et al., 2004) between the speakers’ and the listeners’ hemodynamic time courses, were carried out. Independent of the information provided by the title, discourse processing was associated with activation in the left-dominant fronto-temporal language network (Hagoort, 2017). However, only the processing of coherent discourse involved (shared) activation in bilateral lateral parietal and in medial prefrontal (mPFC) regions. More specifically, in listeners, the bilateral lateral parietal cortex (supramarginal gyrus, angular gyrus) showed higher BOLD activation in the highly informative than in the intermediately informative and absent title conditions. Moreover, the listeners’ rating of text comprehension was positively correlated with speaker-listener ISC in the mPFC. Hence, the constitution of large-scale conceptual representations, i.e., situation models, and their sharing between interlocutors seems to draw on activation in high-level convergence zones in the lateral parietal cortex and on inferencing and episodic memory retrieval in the medial prefrontal regions. This shared pattern of brain activation suggests that memory retrieval and the binding of retrieved information in overlapping areas between speakers and listeners enable the communication of complex conceptual representations.
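A minimal sketch of the speaker-listener ISC computation described above follows, assuming the hemodynamic time courses have already been extracted and temporally aligned; the file names and array shapes are assumptions.

```python
# Voxel-wise inter-subject correlation (ISC) between one speaker-listener
# pair; the .npy file names and (n_voxels, n_timepoints) shapes are assumed.
import numpy as np

speaker = np.load("speaker_timecourses.npy")    # hypothetical file
listener = np.load("listener_timecourses.npy")  # hypothetical file

def isc(a, b):
    """Voxel-wise Pearson correlation between two sets of time courses."""
    a = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    b = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    return (a * b).mean(axis=1)  # mean of z-score products = Pearson r

isc_map = isc(speaker, listener)  # one correlation value per voxel
print(isc_map.shape, isc_map.max())
```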
C28 Electrophysiology of inferential processing in visual narratives Neil Cohn1; 1Tilburg University Inference has been a primary focus of studies of discourse in both verbal and visual narratives, like comics or picture stories. Most of this work has emphasized backward-looking processes, where a reader must infer absent information from the subsequent information that is explicitly provided. However, comics have conventional forms where inference is demanded but a narrative unit is still provided. “Action stars” depict a star-shaped “flash” the size of a full panel that suggests a climactic event while leaving that event unspecified, meaning that a reader knows that the events are not depicted at that moment, rather than events being omitted entirely (Cohn, 2013). Action stars therefore allow us to ask: what are the neurocognitive correlates of inference generation when a narrative unit explicitly signals omitted information? We thus measured event-related brain potentials (ERPs) to 60 visual narratives, presented one image at a time, whose sequences either depicted an explicit event or replaced it with an action star or a “noise” panel of non-representational scrambled lines (20 in each condition). A prior study of 101 participants showed that the climactic event was recognized as missing when it was omitted (mean rate of recognition: .71), thus confirming the inferential quality of the sequences. EEG was recorded from 24 participants using a 32-channel BrainVision ActiCHAMP. At the critical panel, in the 300-500 ms epoch, action stars and noise panels both generated large N400s compared to explicit events, but did not differ from each other. These N400s reflect the cost of accessing semantic memory given the relative lack of bottom-up cues, but show that participants indeed attempted to find meaning in these images, despite their impoverished representations. These panels then diverged in their processing between 500 and 900 ms. Action stars evoked sustained anterior negativities indicative of further interpretive processes (Baggio, 2018), while also eliciting a posterior P600, reflecting the update of this information into a growing mental model. Meanwhile, noise panels, which are not conventional representations, evoked a late frontal positivity (LFP) indicative of their low probability as substitutions for a visual event. This divergence at the critical panel suggests that participants attempted to integrate the symbolic meaning from action stars into a mental model of the visual discourse, while noise panels were simply recognized as incongruous. Nevertheless, as both action stars and noise panels omit event information, the subsequent panels evoked sustained negativities from 500-1100 ms to both types of panels relative to those after explicit events. Such negativities suggest working memory processes further interpreting the missing event information (Baggio, 2018). These findings indicate that inferential processing in visual narratives evokes cascading mechanisms that differ depending on the (un)conventionality of the incoming information, and these mechanisms appear to overlap with those in language processing. Baggio, G. (2018).
Meaning in the Brain. Cambridge, MA: MIT Press. Cohn, N. (2013). The visual language of comics: Introduction to the structure and cognition of sequential images. London, UK: Bloomsbury. C29 Brain Processing of Socio-Pragmatic Conventions in a Second Language: Cross-Linguistic Perspectives Haining Cui1, Hyeonjeong Jeong1, Kiyo Okamoto1, Daiko Takahashi1, Ryuta Kawashima1, Motoaki Sugiura1; 1Tohoku University Although previous neuroimaging studies have investigated the cross-linguistic influence on second language (L2) processing (Jeong et al., 2007; Kotz, 2009), it remains unknown whether linguistic differences have an impact on L2 socio-pragmatic convention processing. Thus, this study investigated how Chinese learners of Japanese process a socio-pragmatic convention of Japanese honorific expressions that is linguistically different from Chinese. Japanese honorific expressions are called grammaticalized honorifics since they require that speakers of lower social status apply different inflected verb forms (i.e., honorific or humble) toward interlocutors with higher social status. In comparison, Chinese has neither a verb inflection system nor the humble/honorific form distinction. By manipulating conventional and unconventional expressions by both lower and higher social status speakers, we examined common and different brain mechanisms underlying the processing of socio-pragmatic conventional expressions between Chinese learners of Japanese (L2) and native speakers of Japanese (L1). We hypothesized that social cognition areas would be involved in the processing of conventional expressions in both groups, but that the degree of involvement of the language-related regions might differ between L2 and L1 speakers due to the cross-linguistic differences. The participants were 31 healthy right-handed Chinese learners of Japanese and 33 native speakers of Japanese. During fMRI scanning, they performed a socio-pragmatic judgment task (auditory sentences accompanied by interlocutors’ images), which had a 2 ✕ 2 factorial design with two levels of ‘Conventionality’ (conventional and unconventional expressions) and two levels of ‘Social Status’ (lower and higher social status speakers’ expressions). The lower social status speakers’ expressions required a distinction between honorific and humble forms, but the higher social status speakers’ expressions did not. In the first-level analysis, four regressors with correct trials were created to model the hemodynamic response for each subject. In the second-level analysis, using a flexible three-way ANOVA implemented in SPM12, we tested the main effects of Conventionality and Social Status (i.e., grammaticalized honorifics), and the Group ✕ Conventionality and Group ✕ Social Status interactions (FWE p < 0.05 at the voxel level). For the main effects, conventional expressions showed greater activation in the bilateral anterior temporal lobe (ATL) than unconventional expressions, and the right insula showed significant activation for lower social status than higher status in both groups. For the interaction effects, the medial prefrontal cortex (mPFC) was more activated for conventional expressions in the L1 group than in the L2 group. Similarly, the L1 group showed higher activation than the L2 group in the left inferior frontal gyrus (LIFG) while processing the lower social status speakers’ expressions (p < 0.001 unc.).
Taken together, we argue that the ATL and the right insula may serve as common brain areas that integrate linguistic information with socio-pragmatic conventions, irrespective of language proficiency. The interaction results suggest that both social cognition (mPFC) and language-related (LIFG) regions are engaged in the processing of grammaticalized honorifics in L1. However, the grammaticalized honorifics may not be readily accessible for L2 speakers due to the cross-linguistic differences. Meaning: Lexical Semantics C30 fMRI Representational Similarity Analysis Reveals the Information Structure Underlying Word Semantics Leonardo Fernandino1, Jia-Qing Tong1, Colin Humphries1, Lisa Conant1, Jeffrey Binder1; 1Medical College of Wisconsin The question of how word meaning is encoded in the brain remains a central problem in the neurobiology of language. Most prominent models favor one of three organizational principles: experiential/embodied features, taxonomic relationships, or word co-occurrence statistics. Here, we used representational similarity analyses (RSA) of fMRI data to evaluate several models in terms of their predictions for the degree of semantic similarity between words. Given a distributed pattern of fMRI activation for each word in the stimulus set, the degree of correspondence between the pairwise similarities computed from the fMRI activation patterns and the pairwise similarities predicted by a given model is a measure of the extent to which neural activity patterns encode the type of information on which that model is based. This approach allowed us to directly compare, for the first time, models of semantic representation based on different types of information in terms of their predictions for neural activation patterns elicited by words. We evaluated a model based on experiential feature ratings (CREA; Binder et al., 2016), a model based on taxonomic relationships (WordNet), and 3 models based on word co-occurrence statistics (LSA, HAL, and Word2Vec). Methods: Participants were 7 right-handed native speakers of English. They performed a semantic task (familiarity rating) on a set of 300 English nouns including concrete and abstract concepts. Words were presented on a computer screen, one at a time, in pseudo-random order. The entire stimulus set was presented 6 times over the course of 3 scanning sessions. Preprocessing and GLM analyses were conducted in each participant’s original coordinate space. A GLM was used to generate a whole-brain activation map for each word. Nuisance regressors included word length and RT for each trial. The resulting maps were subsequently masked to include only cortical regions involved in semantic word processing. We used two independent semantic masks: one encompassing all voxels that were modulated by word concreteness in a separate GLM analysis, and the other defined by an ALE meta-analysis of 120 fMRI studies of word semantics (Binder et al., 2009). Both masks consisted mainly of heteromodal cortical regions in temporal, parietal, and frontal lobes. From each semantic model, we computed the dissimilarity matrix (DSM) for the 300 words included in the task. We then computed the Pearson correlation between each model-based DSM and the two fMRI-based DSMs (one for each semantic mask). Statistical significance was estimated via the Mantel test with 10,000 permutations.
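A sketch of the RSA-plus-Mantel-test logic described in these Methods follows, assuming per-word activation patterns and a model DSM are already available; the file names, shapes, and use of a condensed (upper-triangle) DSM format are assumptions.

```python
# Illustrative RSA sketch: correlate an fMRI DSM with a model DSM, then
# estimate significance with a Mantel permutation test.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

patterns = np.load("word_patterns.npy")  # hypothetical (300 words, n_voxels)
model_dsm = np.load("crea_dsm.npy")      # hypothetical condensed model DSM

fmri_dsm = pdist(patterns, metric="correlation")  # 1 - r for each word pair
r_obs, _ = pearsonr(fmri_dsm, model_dsm)

# Mantel test: permute the word labels of the model DSM to build a null
rng = np.random.default_rng(0)
model_sq = squareform(model_dsm)  # condensed -> square
n_words, n_perm = model_sq.shape[0], 10000
null = np.empty(n_perm)
for i in range(n_perm):
    p = rng.permutation(n_words)
    null[i] = pearsonr(fmri_dsm, squareform(model_sq[np.ix_(p, p)], checks=False))[0]
p_value = (np.sum(null >= r_obs) + 1) / (n_perm + 1)
print(f"r = {r_obs:.3f}, Mantel p = {p_value:.5f}")
```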
Results: The two semantic masks produced a similar pattern of results, with the concreteness-defined mask generating higher correlation coefficients for all models. All models predicted fMRI activation patterns in this mask (CREA: r=.24; WordNet: r=.16; HAL: r=.14; LSA: r=.12; Word2Vec: r=.11; all p < .0001). In the ALE-defined mask, the RSA was significant for all models except LSA. Importantly, the experiential CREA model was superior to all other models in both masks (all p < .001). These results strongly suggest that the neural representation of word meaning primarily encodes information about experiential features, rather than taxonomic or word co-occurrence information. C31 Representational analysis of abstract word meaning Karen Meersmans1, Rose Bruffaerts1, Simon De Deyne2, Gert Storms3, Patrick Dupont1, Rik Vandenberghe1; 1Laboratory for Cognitive Neurology, Department of Neurosciences, KU Leuven, 2Computational Cognitive Science Lab, University of Melbourne, 3Laboratory of Experimental Psychology, KU Leuven Concrete and abstract words are processed differentially in the semantic brain network, with concrete words activating bilateral posterior regions, reflecting their association with the visuoperceptual system, and abstract words activating left inferior frontal and temporal regions, reflecting their relation with the symbolic system. To further clarify the representation of meaning of abstract and concrete words, we conducted an event-related fMRI experiment and investigated how semantic similarity between concrete and abstract words is coded in the brain. Twenty-four subjects performed a word repetition task on visual and auditory stimuli (32 abstract and 32 concrete nouns, matched for valence, arousal, frequency, age of acquisition, and prevalence). Consonant strings and contorted speech (rotated spectrogram) were included as control stimuli. The experiment comprised 8 runs of 80 pseudorandomised trials each (64 stimulus and 16 control trials) with an intertrial interval of 8.25 s. Every stimulus was presented 4 times in each modality. Structural and functional data were acquired using a 3T Philips Achieva, equipped with an OptoActive-II active noise cancellation system (OptoAcoustics Ltd.). Functional images were subsequently realigned, slice-timing corrected, normalised to MNI space and smoothed. A univariate analysis using SPM12 showed activation for abstract relative to concrete nouns in the left lateral anterior temporal pole extending into the anterior superior temporal sulcus (aTP/aSTS; -48, 8, 22; kE=171), in left rostroventral inferior parietal cortex, extending into posterior middle temporal gyrus (rvIPL/pMTG; -54, -43, 26; kE=255), bilaterally in medial superior frontal gyrus (mSFG; RH 15, 47, 5, kE=61; LH 63, -13, -4, kE=92) and in right cerebellum (22, -52, -43, kE=176; all voxel-level uncorrected p<0.001, cluster-level FWE-corrected p<0.05). Before multivariate pattern analysis, pMTG and rvIPL were separated into two unconnected regions by excluding white matter voxels (inclusive masking with the across-subjects average grey matter map; threshold>0.5). The activated regions (minus the cerebellum) were then superimposed with the subject-specific grey matter probability maps (threshold>0.5). fMRI similarity matrices were constructed by calculating the cosine similarity between the trial-specific response patterns, defined as the integral of the BOLD response 2-8 seconds post-stimulus onset.
These matrices were averaged across trials, modalities and subjects in order to obtain a stimulus-specific fMRI similarity matrix, which was correlated with a semantic similarity matrix obtained from word association data. For abstract words, this yielded a significant semantic similarity effect in rvIPL (Spearman’s rho=0.14, p=0.0006), pMTG (rho=0.11, p=0.006) and mSFG (rho=0.10, p=0.01) but not aTP/aSTS (rho=0.07, p=0.06). No semantic similarity effect for concrete words was found in these regions (p>0.1). Our observation of increased BOLD amplitude in lateral aTP corresponds to the previously described concreteness-related gradedness of this region from dorsolateral (social/abstract concepts) to ventromedial (concrete object information). However, our similarity analysis, verging on significance, does not unequivocally confirm the representational nature of the anterior temporal effect. Abstract words are characterised by means of social, event, and introspective features and rely on thematic relations. Both mSFG and rvIPL are part of the Default Mode Network, involved in automatic semantic processing. pMTG/rvIPL have previously been implicated in the processing of thematic associations, event and relational semantics. We hypothesise that pMTG and DMN regions support the retrieval of abstract semantics by providing thematic, event and relational information. C32 Neural Correlates of Semantic Processing for Abstract Verbs Emiko Muraki1, Alison Doyle1, Andrea B. Protzner1, Penny M. Pexman1; 1University of Calgary Introduction: It has been well documented that different types of nouns are associated with behavioural and neural differences. In contrast, studies of verbs are less common and have tended to focus on concrete verbs that refer to specific physical actions (e.g., kick, run). The neural correlates of, and potential distinctions between, abstract verbs (verbs not associated with sensorimotor experience, e.g., think, dissolve) have yet to be systematically addressed. Although abstract verbs tend to be treated as a homogeneous category, there may be distinct types of abstract verbs. We used EEG recording during a syntactic classification task (SCT) to determine whether modality-based distinctions amongst abstract verb types correspond to different dimensions of underlying semantic representation in their associated brain responses. Methods: We analyzed the data for 36 participants who completed a go/no-go SCT (is it a verb?) during EEG recording. Stimuli for the SCT consisted of four verb types: those that represent 1) abstract mental states, 2) abstract emotional states, 3) abstract, external, non-bodily states, and 4) concrete verbs, in addition to a large set of nouns. A multivariate, data-driven analysis technique, partial least squares (PLS) analysis, was employed to investigate differences in neural activity between the four different verb types. Results: The PLS analysis identified one significant latent variable (p = .009), which highlighted a distinction between the external, non-bodily state verbs compared to the mental state and concrete verbs (emotional words did not contribute to this pattern). Bilateral frontocentral electrodes showed sustained negativity at 400-600 ms post-stimulus onset, with larger negative amplitudes for the external, non-bodily state verbs relative to concrete and mental state verbs.
This same effect appeared bilaterally as sustained positivity at parietal and occipital electrodes, suggesting centrally located sources. Lastly, the non-bodily state/concrete and mental state verb differentiation was more dominant in anterior electrodes on the left, and posterior electrodes on the right. Discussion: Verbs differ in the modalities referenced by their meanings, and our results suggest that these differences are evident in their neural correlates. The abstract, external state verbs appear most different from concrete and mental state abstract verbs in terms of neural source generators. These findings underscore the idea that abstract verbs are not a homogeneous category and that different types of abstract verbs may be associated with representational differences. C33 Memory benefits of expectation violations Laura Giglio1,2, Peter Hagoort1,2, Kara D. Federmeier3, Joost Rommers1,2; 1Max Planck Institute for Psycholinguistics, 2Donders Institute for Brain, Cognition and Behaviour, Radboud University, 3Beckman Institute, University of Illinois Expectations can facilitate the processing of predictable input, but it is less clear whether they have downstream consequences, in particular when the input violates those expectations. On the one hand, expectation violations could be beneficial because the resulting prediction error signals may promote learning and drive memory encoding of the input. On the other hand, expectations may be so strong that they interfere with processing of the input, impairing encoding and yielding worse memory. In the present EEG study, we presented expectation violations in sentences, followed by a surprise memory test that allowed us to look at the consequences for memory and to link these back to electrophysiological signatures of sentence comprehension. Forty-two participants were presented with unexpected but plausible words following either a strongly constraining context wherein they violated a likely expectation (“Be careful, the stove is very dirty”, where “hot” was expected) or a weakly constraining control context that did not afford a strong expectation (“He is surprised, because the second object is very dirty”). Additional filler sentences ended in predictable words. Participants then provided old/new judgements on the two types of unexpected words and matched new words. Constraint affected sentence reading: relative to the weakly constraining condition, expectation violations were characterized by a larger late parietal positivity, as well as beta power decreases prior to and following the expectation violation. Behaviourally, we found better recognition memory for expectation violations than for weak constraint controls, suggesting that expectation violations promote memory for the input. In addition, we found downstream effects of prediction disconfirmation during the memory test, with a smaller (more positive) N400 and a larger late positive complex (LPC) for expectation violations than for new items. The N400 has previously been associated with priming, and the LPC with explicit recollection; both of these memory processes seemed to be influenced here. Finally, back-sorting items based on subsequent memory responses showed that the late parietal positivity observed during reading was larger in subsequently remembered than subsequently forgotten items, although there was no interaction with constraint. Beneficial effects of expectation violations on subsequent memory may in part originate from processes indexed by this positivity, as it was larger in successfully remembered items as well as in the condition characterised by better memory. Overall, these results show that, although expectation violations are likely costly during on-line processing, they can ultimately be beneficial for memory of the input.
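The back-sorting step described above is conceptually simple; here is a minimal sketch of it, assuming single-trial epochs and the later old/new responses are available. The file names, channel indices, and the exact positivity window are assumptions.

```python
# Illustrative subsequent-memory back-sorting: split reading-phase trials by
# later memory outcome and compare a late parietal mean amplitude.
import numpy as np

epochs = np.load("violation_epochs.npy")     # hypothetical (n_trials, n_channels, n_times)
remembered = np.load("remembered_mask.npy")  # hypothetical boolean, one entry per trial
times = np.linspace(-0.2, 1.2, epochs.shape[-1])

parietal = [20, 21, 22]                      # hypothetical parietal channel indices
window = (times >= 0.6) & (times <= 1.0)     # assumed late-positivity window
amp = epochs[:, parietal][:, :, window].mean(axis=(1, 2))

# Subsequent-memory effect: remembered minus forgotten trials
sme = amp[remembered].mean() - amp[~remembered].mean()
print(f"late positivity, remembered minus forgotten: {sme * 1e6:.2f} µV")
```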
C34 Dissociating Grammatical Categories in Temporal and Perisylvian networks: Evidence from Neurodegenerative Disease Sladjana Lukic1, Valentina Borghesani1, Wendy Shwe1, Elizabeth Weis1, Zachary Miller1, Jessica Deleon1, John Neuhaus1, Bruce Miller1, Maria Luisa Gorno-Tempini1; 1Memory and Aging Center, Department of Neurology, University of California, San Francisco INTRODUCTION. A frequent symptom across neurodegenerative disorders is difficulty with naming, which can differentially affect grammatical categories such as nouns (i.e., naming objects) or verbs (i.e., naming actions). Successful naming relies on multiple cognitive processes supported by brain networks which can be selectively disrupted in different disorders. It is currently debated whether different patterns of impairment in naming nouns and verbs can be related to distinct patterns of atrophy in neurodegenerative diseases. This study aims to investigate the neural correlates of naming impairments in a large group of patients with frontotemporal dementia-spectrum disorders (FTD) and Alzheimer’s disease (AD). METHODS. One hundred sixty-four subjects (30 nonfluent/agrammatic, 30 semantic and 18 logopenic variants of primary progressive aphasia (PPA); 40 behavioral variant FTD; 16 AD; 30 healthy controls) underwent an extensive neuropsychological battery, an experimental noun and verb naming test, and structural MRI scanning. We compared overall and category-specific naming performance across clinical groups to investigate syndrome-specific patterns of naming deficits. We then correlated naming scores with cortical thickness across all subjects using whole-brain surface-based morphometry in SPM. Age, gender, and disease severity were included as covariates in all analyses. Significance was set at p < 0.05 corrected for multiple comparisons. RESULTS. Results indicated that all clinical groups had significantly lower overall naming performance relative to healthy controls. Semantic PPA patients were more impaired than the other FTD and AD groups. Semantic PPA showed a grammatical category effect by naming significantly fewer nouns than verbs, whereas nonfluent/agrammatic PPA showed the opposite dissociation by naming fewer verbs than nouns. Neuroimaging results showed that overall naming significantly correlated with cortical thickness in left temporo-parietal areas. Performance on nouns correlated with left anterior temporal regions, while performance on verbs correlated with left inferior frontal, inferior parietal (supramarginal gyrus) and posterior temporal regions. We then performed a post-hoc analysis on the behavioral data to investigate the cognitive processes underlying noun and verb naming deficits. Based on the neuroimaging results, we hypothesized that noun and verb naming scores would correlate with measures of semantic and lexico-syntactic processes, respectively. We found that both noun and verb naming scores correlated with semantic abilities, while only verb naming correlated with syntactic abilities. CONCLUSION.
Taken together, these findings suggest that different neurocognitive mechanisms support naming of specific grammatical categories in neurodegenerative diseases. Specifically, nouns appear to rely on semantic processes occurring in the left anterior temporal lobe, while verbs rely on syntactic and lexical processes in the left perisylvian regions. These findings improve our fundamental understanding of the neural basis for spared and impaired naming processes. C35 A functional gradient for semantic cognition: graded transitions from default mode to executive cortex Beth Jefferies1, Xiuyi Wang1, Daniel Margulies2, Jonathan Smallwood1; 1University of York, 2Centre National de la Recherche Scientifique (CNRS) UMR 7225, Frontlab, Institut du Cerveau et de la Moelle Épinière, Paris Semantic cognition recruits regions of the default mode network (DMN) alongside semantic control and multiple demand network (MDN) regions. However, it is unclear how these networks are differentially engaged to support flexible patterns of semantic retrieval that are appropriate to the circumstances. In a verbal semantic feature matching task (which involved matching concepts on the basis of colour, shape or size features), we parametrically manipulated the global semantic similarity of the words to create a ‘psychological gradient’ varying in the need to constrain retrieval. On some trials, task demands were well-aligned with long-term memory, since the words to be matched on a specific feature, such as COLOUR, also had strong general conceptual overlap (STRAWBERRY-TOMATO). On other trials, participants were required to match words which were largely unrelated (e.g. COLOUR: STRAWBERRY-POST BOX). We tested the hypothesis that neural recruitment within this task would change linearly along a connectivity gradient from DMN to MDN as the concepts to be matched shared fewer features. In line with this hypothesis, we found that semantic responses changed along the cortical surface in a graded fashion, depending on the requirement for constrained retrieval. This functional gradient within semantic cognition was apparent in data from individual runs within individual brains, suggesting it is not a product of spatial averaging. To examine whether these graded transitions capture the layout of networks involved in semantic processing, we defined DMN and MDN using non-semantic localizer tasks and established that the peak response for semantic control in previous studies fell midway between DMN and MDN in temporal and frontal cortex. Furthermore, there was an orderly decrease in values corresponding to both the connectivity gradient and the effect of the psychological gradient on the BOLD response, captured by the transition between networks from DMN, through the semantic control network, to MDN. These findings show gradual transitions between multiple networks supporting semantic cognition, which are organised along a functional gradient: activation towards the DMN reflects patterns of conceptual retrieval that closely align with the structure of long-term knowledge, while activation towards MDN reflects more adaptive coding of current conceptual demands. C36 Electrophysiological Evidence for the Processing of Predicative Metaphors George Spanoudis1; 1Psychology Department, University of Cyprus Despite the growing literature on metaphor comprehension, very few studies have attempted to investigate the electrophysiological basis of predicative metaphor processing.
Predicative metaphors elicit the creation of a semantic link either between the subject and the verb or between the verb and its object. We used event-related potentials (ERPs) to examine the time course of processing metaphorical and literal sentences. Nineteen healthy participants (sixteen female, three male, 19-28 years) read sentences silently and judged, by pressing one of two buttons, whether the sentences had a metaphorical or literal meaning. ERPs were measured to mid-sentence target verbs (TVs) as participants read familiar predicative metaphors (‘the boy hides his feelings’) or literal sentences (‘the boy hides the cakes’) of the same form. Reading metaphors in contrast to literal sentences revealed a robust N400 effect; TVs in the metaphorical, in comparison to the literal, sentences evoked an early localized N400 effect around 400 ms after TV onset, signifying that, by this time, their metaphorical meaning had been obtained. These findings are consistent with electrophysiological studies of nominal metaphor, indicating that predicative metaphor comprehension shares common electrophysiological activity with other metaphors. C37 Language Background and Lexical Representations during Naturalistic Reading: An RSA Analysis of Fixation Related fMRI Data on L1 and L2 Readers of English Benjamin Schloss1, Friederike Seyfried1, Chun-Ting Hsu1,2, Ping Li1; 1Pennsylvania State University, 2Kyoto University During reading, lexical access happens at the early stage of text comprehension, leading to later stages of coherent mental representations such as situation models. When this input is missing or impoverished (small vocabulary size), reading comprehension suffers. Although bilinguals reading in their L2 also show poorer reading comprehension than monolinguals, it is unclear to what degree deficits in their high-level text representations are caused by impoverished low-level input versus difficulties in later stages of processing. In the current study, we simultaneously obtained fMRI and eye tracking data from 52 monolingual and 56 bilingual participants (L1 = Chinese); of the bilinguals, 28 were living in the US (immersed) and 28 in China (non-immersed). Participants read five English texts on five scientific topics in a self-paced reading paradigm. We modeled brain responses for each of 346 content words occurring in the texts from 12 brain regions along the left ventral “what” stream and conducted a representational similarity analysis between each group’s data and ELMo, a deep learning model which uses a recurrent neural network to capture both sentential context and general word meaning and has been shown to outperform traditional language models that only capture general word meaning on a variety of natural language processing tasks. Representations of the words were extracted by estimating the double gamma (DG) or initial dip (ID) hemodynamic response which coincided with fixation on one of the words. This, in turn, was input to a singular value decomposition (SVD) or independent component analysis (ICA). The final RDM (representational dissimilarity matrix) for each brain region in each group was then calculated by either computing RDMs for each individual and averaging them, or hyperaligning the extracted neural features and averaging the representations, from which the final RDM was created.
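The two RDM-construction routes mentioned at the end of these methods differ only in the order of operations; the following sketch makes that concrete. The file name, shapes, and distance metric are assumptions, and the "aligned" features stand in for the hyperalignment step, which is not shown.

```python
# Route A: compute one RDM per subject, then average the RDMs.
# Route B: average (already aligned) feature representations, then one RDM.
import numpy as np
from scipy.spatial.distance import pdist

features = np.load("group_word_features.npy")  # hypothetical (n_subjects, 346 words, n_dims)

rdm_a = np.mean([pdist(s, metric="cosine") for s in features], axis=0)
rdm_b = pdist(features.mean(axis=0), metric="cosine")
print(rdm_a.shape, rdm_b.shape)  # both condensed, one entry per word pair
```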
We observed that the ID model of the HRF provided a better fit to the ELMo model for monolinguals compared to bilinguals (Z=-3.32, p=.00045). This is in line with the difference between monolinguals and bilinguals in reading speed. We also observed evidence for a gradient along the posterior-to-anterior axis of the ventral stream (excluding V1, which was an outlier) that explained 13.9% of the variation (rank correlation=.37) in the observed cosine similarities with the model, but did not reach significance due to insufficient data. Finally, we found partial evidence in support of language background affecting similarity with ELMo. While immersed bilinguals did show consistently higher similarity to the model than non-immersed bilinguals, monolinguals’ neural representations were systematically more dissimilar than those of both bilingual groups. Further research is needed to understand the mechanisms underlying this effect. Context-dependent models have not been studied in detail with regard to how well they capture representations in L2 learners and in native speakers. It is possible that quicker fixation/reading pace (shorter inter-stimulus intervals) as well as more experience with words (possibly less detailed processing of low-level lexical information) lead to worse performance of these models for monolinguals. C38 Implicit and explicit access to partial word knowledge in school-aged children Brittany Sharp1, Alyson Abel1, Chanel Konja1; 1San Diego State University Vocabulary is one area of language that continues to develop through the school years and into adulthood. Children in 3rd through 9th grade learn an average of 3,000 words per year, about 8 per day. Vocabulary learning switches from direct teaching to incidental learning without explicit instruction around 3rd grade. Incidental learning requires multiple exposures to the new word and its associated meaning. However, one challenge in the study of word learning is how to determine whether a word has been learned. Prior work has shown that pre-familiarization with word form alone versus word form plus meaning differentially aided learning in 4th-6th grade students. Similarly, in 5-9-year-old children, novel words learned with meaning were retained better in long-term memory than novel words learned without meaning. In this study, we examine whether partial word knowledge can be differentially accessed using implicit (EEG) and explicit (behavioral) methods. Thirty-five (35) children aged 8 to 11 (M=9;4) completed two tasks while their EEG was being recorded: 1) a Word Learning task and 2) a Recognition task. In the Word Learning task, participants listened to sentence triplets with the same nonword at the end; some of the novel words had meanings attached (Meaning condition) and some did not have a related meaning (No Meaning condition). The Recognition task directly followed the Word Learning task and asked the children to identify, by indicating yes or no via button press, whether they had previously heard the word in the Word Learning task, with three conditions (old with Meaning attached, old with No Meaning attached, and New, not previously heard). This study focused on the Recognition task results. Behaviorally, the children demonstrated below-chance performance (A’: Meaning=0.41, No Meaning=0.42, New=0.41) for identifying words they had/had not heard previously.
EEG analysis produced significant findings for both the N400 and P200 ERP components. The N400, associated with semantic processing, showed a significant effect of meaning, with a more positive amplitude for the Meaning than the No Meaning and New conditions. The P200 component revealed a significant interaction, though no significant effects of condition or location were found. Follow-up univariate analysis revealed that the P200 was more positive for the Meaning and No Meaning than the New conditions, consistent with a previously determined old/new effect. However, the P200 component is not yet associated with indexing a specific semantic process, though it has been proposed to be related to memory. Very few auditory studies eliciting significant P200 ERP components have been conducted, so additional studies are needed. Together, these results indicate that children can access word form and meaning together implicitly, even if they are unable to explicitly indicate word recognition. In addition, the results reveal that the N400 and P200 ERP components differentially index word form and meaning for recently learned words. Signed Language and Gesture C39 Neural correlates for comprehending spatial language in American Sign Language and English Karen Emmorey1, Stephen McCullough1, Christopher Brozdowski1; 1San Diego State University In American Sign Language (ASL), spatial relationships are conveyed by the location of the hands in space. To express “The candle is on the box,” a 1-handshape representing the candle is positioned on top of a flat handshape representing the box. To understand perspective-dependent expressions (e.g., “The candle is to the right of the ball”), a 180° mental transformation is required for face-to-face signing. In contrast, English expresses spatial relationships with prepositional phrases, and no linguistic spatial transformation is required. Previous research has shown that the production of spatial language differs for ASL and English, with greater involvement of bilateral superior parietal cortex for ASL (Emmorey et al., 2002; 2005; 2013). We investigated whether the neural regions involved in the comprehension of spatial language differ for ASL signers and English speakers. In an event-related fMRI experiment, 14 deaf ASL signers and 14 hearing English speakers viewed ASL or audio-visual English descriptions of either a perspective-independent relation (in, on, below, above) or a perspective-dependent relation (left, right, behind, in front of) between two objects. The control condition was non-spatial descriptions of the colors of two objects (e.g., “The candle is blue and the ball is red”). After 20% of trials, a picture of two colored objects was presented that either matched or mismatched the spatial configuration or the colors described in the preceding sentence. Two 6-minute scans with 24 trials in each condition (perspective-dependent; perspective-independent) were presented. Trials consisted of a 4-second ASL video or 3-second English video, a 2-second fixation ISI, and a picture (2 seconds; on 20% of trials), followed by variable fixation periods (2-10 seconds). Accuracy and response times for the sentence-picture matching task did not differ between signers and speakers. In contrast to the non-spatial control, perspective-dependent expressions engaged the superior parietal lobule (SPL) bilaterally for both ASL and English.
This result is consistent with Condor et al. (2017), who reported bilateral SPL activation during comprehension of English spatial expressions using a similar experimental design. For perspective-independent expressions, activation in SPL was more right-lateralized for ASL and more left-lateralized for English. Right parietal regions may support the required visual-spatial mapping in ASL between the position of the hands in signing space and a mental representation of the location of referent objects. The direct contrast between ASL spatial expression types revealed greater SPL activation for perspective-dependent expressions, while the direct contrast for English revealed no difference in activation between spatial expression types. Increased SPL activation for perspective-dependent expressions in ASL may reflect the cognitively demanding 180° mental transformation required to understand these expressions (Brozdowski et al., 2019). Overall, the results suggest both overlapping and distinct neural regions support spatial language comprehension in ASL and English. Methods C40 Using MVPA of intertrial phase coherence of neuromagnetic responses to words to classify lexical, semantic and morphosyntactic processes in young vs. older participants Mads Jensen1, Rasha Hyder1, Yury Shtyrov1; 1Aarhus University Background: Passive auditory designs have been successfully applied for tracking neural activity associated with different language processes in the human brain, making them a potentially useful tool for a variety of applications in those cases when subjects are unable to cooperate with an active assessment task. Using multivariate pattern analysis (MVPA) of MEG data acquired in a passive listening paradigm, we have previously shown that this technique could successfully classify meaningful words from meaningless pseudowords, correct from incorrect syntax, and semantic differences. This was done in healthy young participants. However, individuals with neurological disorders that compromise active assessment tasks are typically of older age, which necessitates investigating the applicability of our novel approach to older participants and comparing younger and older participants’ classification results. Methods: Spoken stimuli (500 ms duration) that diverged lexically (words/pseudowords), semantically (action-related/abstract) or syntactically (grammatically correct/ungrammatical) were presented in a non-attend pseudorandom equiprobable sequence while MEG was recorded. Raw data were band-pass filtered into five bands (α: 8-12 Hz, β: 13-30 Hz, γ-low: 30-45 Hz, γ-medium: 55-70 Hz, and γ-high: 70-90 Hz), epoched from -100 ms pre- to 900 ms post-word onset and downsampled to 500 Hz. Inter-trial phase coherence (ITPC) was calculated for Hilbert-transformed data in sensor space using planar gradiometers. In order to assess the statistical significance of differences between the ITPC extracted for different stimulus types, we applied MVPA to each group and condition across the different time points and frequency bands independently across subjects. Results: Using MVPA on the ITPC data, we find a difference in decoding accuracy across the different groups and different frequency bands. Decoding the semantic condition yielded the best results, with a significant difference between the two age groups in the β and γ-medium bands: the older participants showed better classification around the divergence point compared to young participants.
In the γ-medium range, there was a late (from 722 ms until the end) difference, with better classification in young than older participants. For the correct-incorrect syntax differentiation, the γ-high band showed a long-lasting difference from 150 ms after the divergence point until the end of the trial, with better classification scores in younger participants. In the β band, there was a difference between groups around 200 ms after stimulus onset, with young participants having a higher classification score than older participants. We found no significant difference for the classification of meaningful words from meaningless pseudowords in the older group, unlike the previous young participants’ results. Discussion: We show that by combining a passive auditory equiprobable paradigm with multivariate analysis of phase data, we can classify the type of linguistic information automatically processed by the brain. Furthermore, we show that decoding accuracy over time is not the same for different age groups, potentially indicating the decline of neurocognitive linguistic ability and/or compensatory mechanisms in older age. More generally, this shows that changes in language processing can be detected using MVPA and that this method may therefore provide information not attainable with more conventional ERF/ERP peak amplitude analyses.
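A minimal sketch of the ITPC computation in this pipeline (band-pass filter, Hilbert transform, phase coherence across trials) follows; the sampling rate and beta-band edges come from the abstract, while the file name, array shape, and filter order are assumptions.

```python
# Illustrative ITPC for one sensor: ITPC(t) = |mean over trials of e^{i*phase(t)}|
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                    # Hz, after downsampling (per abstract)
epochs = np.load("gradiometer_epochs.npy")  # hypothetical (n_trials, n_times), one sensor

# Band-pass into the beta band (13-30 Hz), one of the five bands analysed
b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, epochs, axis=-1)

# Instantaneous phase from the analytic signal
phase = np.angle(hilbert(filtered, axis=-1))

# Length of the mean phase vector across trials, per time point
itpc = np.abs(np.exp(1j * phase).mean(axis=0))
print(itpc.shape, itpc.max())
```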
C41 Towards an Optimized Paradigm for N400-Effect Elicitation in Single-Subject Applications Carsten Eulitz1, Anna-Maria Waibel1; 1University of Konstanz, Germany Event-related potentials (ERPs) provide insight into cognitive processing without requiring an overt response from the subject. This supports, for example, clinical applications in non-responsive as well as non-communicative, low-responsive patients. For patients with disorders of consciousness, research has shown that the N400-effect might be predictive of recovery. To this end, the data have to be interpretable on a single-subject basis. Only a few studies have investigated N400-effects in single subjects. Passive paradigms are particularly suitable for this target group, due to an inability to follow commands. However, using passive paradigms, detection rates in healthy control subjects have been reported to be only around 50%. The present study aimed to maximize a possible N400-effect by controlling linguistic factors in the stimulus materials, as well as to further enhance the detection rate by using two different types of expectancy violations (high-cloze sentence-final nouns and antonyms). The final words of semantically congruous and incongruous German sentences were matched on linguistic parameters (length, word class, countability, phonological properties, concreteness, animacy). The final set of sentences was then selected based on cloze probability and acceptability rating studies. To determine the sensitivity in an ERP study, 19 healthy subjects passively listened to the sentences while watching a film without sound. In an initial data analysis using t-tests and ANOVAs on a single-subject basis, we identified clusters of at least three electrodes showing a significant difference between the congruous and incongruous conditions in the N400 latency range. We were able to measure an N400-effect for at least one of the two types of expectancy violations in 79% of the healthy subjects. This is a considerable improvement of the detection rate compared to previous studies. In addition, measurements with brain-damaged but communicative patients have demonstrated the design’s applicability in a clinical setting. More sophisticated single-subject analysis methods will be explored to possibly further enhance the reported detection rate for N400-effects in single subjects. Syntax C42 Towards a functional interpretation of sustained anterior negativities Aura A L Cruz Heredia1, Bethany Dickerson2, Ellen Lau1; 1University of Maryland, Linguistics, 2University of Massachusetts, Amherst |INTRODUCTION| Prior work in neuroscience has implicated persistent neural activity as an underlying mechanism engaged during working memory (WM) tasks. Nevertheless, what exactly is being encoded by this sustained activity remains a topic of much research and debate. In the language domain, previous ERP work has observed a sustained anterior negativity (SAN) during the processing of filler-gap dependencies – which are widely regarded as being taxing on WM resources – relative to sentences with no such dependencies. While SANs have been interpreted as an index of WM, exactly what mechanism drives the response is still underspecified. One popular proposal has been that SANs index the carrying forward of filler-related information until a gap is encountered. Here, we explore this hypothesis, as well as one that emphasizes syntactic predictions, across three EEG experiments and one MEG experiment, in order to replicate and evaluate the response’s functional profile and neural generators. |METHODS| All experiments used RSVP of the sentences at a rate of 500-600 ms per word. For experiments involving matrix WH-questions, subjects were tasked with evaluating whether a follow-up was a good response to the given question or sentence. In the embedded-WH experiment, subjects answered traditional comprehension questions. For EEG, SANs were evaluated as an interaction between condition and anterior electrodes, such that these appear more negative in the dependency’s time window relative to controls. |RESULTS| Across our experiments we found that: (1) SANs can be observed for short dependencies in fairly simple sentences. In two EEG experiments (n=14, n=22) we observed significant sustained negativities for simple (WH) object questions during the dependency region (‘What did *the cover of the magazine* feature?’) as compared to a yes/no control. (2) SANs do not seem to be a simple function of syntactic dependency or syntactic prediction. In a third EEG experiment (n=25), we failed to observe a SAN for similar, but embedded, object questions, relative to a complement clause control (‘John asked *what…’ vs ‘John asked *whether…’) (see also Sprouse et al. in prep. for similar results). We also failed to observe a SAN in response to a different kind of syntactic prediction – that of an additional clause after a subordinating adverb relative to a temporal adverb (‘Although...’ vs. ‘Today...’) – while still observing the response to the matrix object questions (n=22). Finally, we are currently collecting MEG data to determine the neural generators of the SAN response as a way to constrain the hypothesis space. An early analysis of the first set of participants (n=12) reveals a significant cluster localized to left inferior/middle frontal cortex. |CONCLUSION| These results motivate several new research questions: if the SAN does not index syntactic WM per se, then what does it index? And what governs its presence or absence across different paradigms?
In line with the suggestion of a non-syntactic component for the SAN by Yano and Koizumi (2018), we suspect that the SAN may be related to interpretive processes, such as creating a richer situation model in order to more rapidly answer an open-ended question, or to understand complex scenarios. C43 The oscillatory mechanisms supporting syntactic binding in healthy aging Katrien Segaert1,2, Charlotte Poulisse1, Linda Wheeldon3, Ali Mazaheri1,2; 1School of Psychology, University of Birmingham, 2Centre for Human Brain Health, University of Birmingham, 3University of Agder, Kristiansand, Norway Older adults frequently display differential patterns of brain activity compared to young adults when performing the same task. These age-related changes occur alongside widespread neuroanatomical decline. The differential functional activity patterns in older adults are commonly interpreted as being compensatory in nature (e.g., Wingfield & Grossman, 2006; Cabeza et al., 2002). However, most language studies on healthy ageing have not directly examined whether there is a relationship between the changes in neural activity patterns and behavioural performance levels. In the current study, we investigated the relationship between functional neural activation as measured using EEG and behavioural performance during a syntactic binding task. We used a two-word phrase task to minimize contributions of semantics and working memory (Segaert, Mazaheri, & Hagoort, 2018). 41 healthy older adults (26 women, mean age 69, SD 3.37; 15 men, mean age 69, SD 5) listened to two-word phrases that differentially load on morpho-syntactic integration: correct syntactic binding (morpho-syntactically correct; e.g. “I dotch”), incorrect syntactic binding (morpho-syntactic agreement violation; e.g. “they dotches”) and no syntactic binding (minimizing morpho-syntactic binding; e.g. “dotches spuff”). Syntactic comprehension performance, assessed in a syntactic judgement task for the correct and incorrect syntactic binding conditions, was characterized by high inter-individual variability, with accuracy ranging from 58-100%. Syntactic processing, assessed as the difference in oscillatory activity between the correct- and no-binding conditions, was associated with a smaller theta (4-7 Hz) power increase and a larger decrease in both alpha (8-12 Hz) and beta (15-20 Hz) power in the correct-, relative to the no-binding condition (cluster-based permutation tests, Maris & Oostenveld, 2007). There were no ERP differences between the syntactic binding and no-binding conditions. Similar to the behavioural syntactic performance levels, there was large individual variability in the oscillatory signatures of syntactic processing. However, we found no evidence for a relationship between behavioural comprehension performance and the neural signatures of syntactic processing (also not when accounting for variability in age, gender, processing speed and working memory in the regression models, as motivated by Poulisse, Wheeldon, & Segaert, 2019). In conclusion, the neural signatures of syntactic processing in older adults at the group level are qualitatively different from those of young adults, who show a differential alpha and beta power increase, instead of a decrease, in the same task (Segaert et al., 2018). In the absence of evidence for a relationship between the neural signatures of syntactic processing and behavioural performance, our findings do not support the predictions of compensatory models of language and aging.
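For readers unfamiliar with the cluster-based permutation approach cited here (Maris & Oostenveld, 2007), the following MNE-Python sketch shows the general form of such a test on condition-wise time-frequency power; the file names and array shapes are assumptions, not the study's data.

```python
# Illustrative cluster-based permutation test comparing time-frequency power
# between two conditions across participants.
import numpy as np
from mne.stats import permutation_cluster_test

# (n_subjects, n_freqs, n_times) power per condition; hypothetical files
power_binding = np.load("power_correct_binding.npy")
power_no_binding = np.load("power_no_binding.npy")

F_obs, clusters, cluster_pv, H0 = permutation_cluster_test(
    [power_binding, power_no_binding],
    n_permutations=1000,
    seed=42,
)
print("clusters with p < 0.05:", np.sum(np.array(cluster_pv) < 0.05))
```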
Morphology C44 Form, meaning, and morphology in Arabic masked priming: An ERP study Ali Idrissi1, Tariq Khwaileh1, Eiman Mustafawi1, John Drury1; 1Qatar University INTRODUCTION. Transposed letter priming (jugde-JUDGE) and other orthographic/form priming effects found in Indo-European languages (Forster et al. 1987; Ferrand & Grainger 1992; Perea & Rosa 2000; Brysbaert 2001) have not been reliably found for Semitic languages (Frost et al. 2005; Velan & Frost 2009, 2011; Perea et al. 2010). Coupling this with demonstrations of Semitic root priming (Frost et al. 2000; Boudelaa & Marslen-Wilson 2005) has led to the suggestion that lexical memory for Hebrew/Arabic may be qualitatively different in organization. However, non-word primes derived from real word targets by single root letter replacement have been shown to yield priming in Arabic (Perea et al. 2014), dovetailing with literature questioning the status of the consonantal root as a morphological unit of lexical organization in Semitic (Idrissi 2018). PRESENT STUDY. Our masked priming ERP study manipulated prime duration (40/120 ms) between participants and examined six conditions: (i) identity; (ii) root/meaning identity ([Saliib - maSluub] “cross – crucified”); (iii) root without meaning identity ([Saliib - Salaaba] “cross – hardness”); (iv) transposed real root orthographic overlap ([Saliib - baSal] “cross – onions”); (v) semantic relatedness ([Saliib – qasaawisa] “cross – pastors”); (vi) unrelated prime-target pairs. Condition (iv) involved either local/adjacent root letter transpositions (e.g., tri-consonantal roots with 123 order preceded by real-word primes with identical consonants in 213 or 132 orders), or non-adjacent transpositions (e.g., 123 targets with 321, 312, or 231 primes). Real root primes like these have previously been found not to yield priming in Arabic (Perea et al. 2010; note that other studies have used non-word primes). METHODS. Trials consisted of #-marks (#######; 500 ms), prime (40 or 120 ms), target (300 ms; using a larger font than the primes), and response/blink prompts. The task was go/no-go semantic categorization: half of the pairs (180) used names of animals/objects. Participants responded only if they saw an animal name. Critical trials (30 items in each of (i)-(vi) = 180 critical pairs) did not require a behavioral response. EEG was continuously recorded from 24 scalp electrodes (250 Hz sampling; 25 Ag/AgCl electrodes; ground: AFZ; left-mastoid reference, re-referenced to linked mastoids offline; EEG pre-processed with a 0.1–30 Hz band-pass filter). ERPs were examined within a 1000 ms epoch time-locked to prime onset (-200 to 0 ms baseline). Preliminary results reported here are based on 29 Qatari Arabic native-speaker adult participants (N=15 for the short prime (40 ms); N=14 for the long prime (120 ms)). RESULTS & DISCUSSION. When prime duration was short (40 ms), priming reduced ERP amplitudes for all conditions except semantics (v). Importantly, this includes our root letter transpositions (iv), contra expectations based on most previous literature. With long prime exposure (120 ms), all conditions (including semantics/(v)) showed priming, but with differences in effect size. Identity showed the largest effects, followed by root priming (ii/iii), with semantic primes and root transpositions (iv/v) showing (equivalently) the smallest effects.
We argue (1) that there may not, in fact, be such qualitative cross-linguistic differences with respect to orthographic priming, and (2) that previously observed Arabic consonantal root effects in masked priming may be due only to orthographic rather than morphological overlap. Multilingualism C45 How do phonological competitors affect gender processing in heritage Spanish speakers? Alisa Baron1; 1The University of Rhode Island Spanish has a rich system of inflectional morphology relative to English. Articles are especially important because they precede nouns in most contexts, are required to maintain the grammaticality of a sentence, and are therefore used with high frequency in all aspects of language. Grammatical gender is an important morphosyntactic cue to identify words and build syntactic representations in real time (e.g., Foucart & Frenck-Mestre, 2011; Hopp, 2013; Wicha, Moreno, & Kutas, 2004). Prior gender information provided by an article can reduce the search space in the lexicon to only those elements with a particular gender (Friederici & Jacobsen, 1999). Thus, gender cues help listeners keep track of the referents in a sentence (Bates et al., 1996) and facilitate the interpretation of speech. To understand what heritage Spanish speakers attend to during comprehension of gendered articles, 36 heritage Spanish speakers participated in a visual world paradigm. Three groups of experimental stimuli were prepared: one group with informative (different-gender) articles, one group with uninformative (same-gender) articles, and one group with incorrect (ungrammatical) articles. A target noun was preceded by the correct (or incorrect) gendered article and was surrounded by a phonological competitor and two distractors with the same or different gendered articles. The Bilingual Input Output Survey was administered to all participants in order to calculate current language use. Like their monolingual Spanish-speaking peers, as a group, heritage Spanish speakers in this experiment showed sensitivity to gender. There were earlier fixations to the target in the informative trials than in the uninformative trials, and fixations in uninformative trials were earlier than in the ungrammatical trials. To evaluate the hypothesis that gender-sensitivity variability may be due to language experience, looks to the phonological competitors were analyzed. For the phonological competitor within the informative trials, adults with more current Spanish use did not look at the phonological competitor more than at the other distractors, while adults with less current Spanish use did so. All participants fixated on the phonological competitor within the uninformative trials more than on the other distractors. All participants were slower on the ungrammatical trials and spent more time looking at the phonological competitor, as it was the most viable option with the same article and initial consonant and vowel as the auditory stimulus. As the rest of the target noun unfolded auditorily, the target noun needed to re-enter the competitor set. In turn, participants were significantly slower in settling on the target noun as the correct response, as it may have already been discarded from the competitor set earlier. Thus, current Spanish use appears to mediate whether a phonological competitor enters or exits the competitor set during gender processing. C46 Was the ship thinking or was the sheep sinking?
A tDCS study in speakers of English as a second language Katy Borodkin1, Tamar Gassner1; 1Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University Background. Second language speakers find non-native contrasts (e.g., ship/sheep or think/sink) challenging to perceive and subsequently to pronounce correctly. However, musical training can improve these skills (Slevc & Miyake, 2006). Neuroimaging studies suggest that music and speech processing may be linked through a shared neural substrate, located in the left planum temporale (Elmer, Meyer, & Jäncke, 2011). The present study was devised to further examine the neural mechanisms mediating the effects of musical training on second language speech perception and production using transcranial direct current stimulation (tDCS). Method. The training experiment included 20 participants (10 men), aged 26–38. They were native Hebrew speakers who had learned English mainly in school and did not have extensive musical training. Participants were randomly assigned to the experimental group (active stimulation, n = 10) or control group (sham stimulation, n = 10). The participants and the experimenter were blinded to group assignment (double-blind design). All participants underwent a 20-min feedback-based musical training combined with anodal (1.5 mA) or sham tDCS over the left posterior superior temporal gyrus (pSTG) (electrode site CP5 according to the EEG 10–20 system). The reference electrode was placed above the contralateral orbit (both electrode sizes: 35 cm²). Prior to and following the training, musical skills were tested, as well as discrimination and pronunciation skills in English. In a follow-up experiment (listeners experiment), training-induced changes in English pronunciation were assessed by 30 native English speakers (16 men, aged 18–61) using an identification task. Participants in the listeners experiment were unaware of the training experiment procedures. Results. All participants, regardless of tDCS group, showed an improvement in musical skills following training, as evidenced in reduced reaction times. In contrast, tDCS had a differential effect on speech processing skills in English. Following training, participants in the active stimulation group, but not the control group, showed better discrimination skills (as indexed by increased accuracy) and pronunciation skills in English (as manifested by reduced identification times of listeners who were native English speakers). Discussion. We conclude that musical skills were affected by musical training but not by tDCS stimulation over the left pSTG, since improvement was observed in both tDCS groups. On the other hand, discrimination and pronunciation skills in English as a second language improved only in the active stimulation group, not the sham stimulation group. These findings do not provide evidence that the left pSTG (of which the planum temporale is a part) is the shared neural substrate for processing music and speech sounds. They do suggest that this region plays an important role in the perception and production of speech sounds and that tDCS can be utilized to improve these skills, which are often difficult to master, in second language speakers. References Elmer, S., Meyer, M., & Jäncke, L. (2011). Neurofunctional and behavioral correlates of phonetic and temporal categorization in musically trained and untrained subjects. Cerebral Cortex, 22(3), 650-658. Slevc, L. R., & Miyake, A. (2006).
Individual differences in second-language proficiency: Does musical ability matter? Psychological Science, 17(8), 675-681. C47 Phrase-Final Lengthening as a Word Segmentation Cue in French: A Bilingual ERP Investigation Annie C. Gilbert1,2, Inbal Itzhak1,2, Max Wolpert1,2, Jasmine Lee1,2, Shari R. Baum1,2; 1McGill University, 2Centre for Research on Brain, Language and Music, Canada The literature on word segmentation has demonstrated that different languages rely on different strategies to isolate lexical items from the speech stream. For instance, English listeners tend to rely on lexical stress to locate word onsets, whereas French listeners tend to rely on phrase-final lengthening to locate word offsets. Interestingly, later work demonstrated that English listeners also use phrase-final lengthening as a word offset cue in artificial language learning paradigms. Therefore, one might expect English-L1 listeners to use phrase-final lengthening to segment words in French, as French-L1 listeners do. To test this, we developed an EEG task involving a cross-modal (audiovisual) priming paradigm and tested 23 English-L1 / French-L2 (hereafter French-L2) listeners and 22 French-L1 listeners. The audio stimuli involved sentence pairs produced by a native speaker, built around syllable strings that can be interpreted as either two words (le vendeur d’or loge …) or as one bisyllabic word (le vendeur d’horloge …). Visual stimuli involved picture prompts representing the interpretation of either the first syllable by itself (or), the two syllables as one word (horloge), or an unrelated word. Picture presentation was time-locked to the onset of the second syllable of the string, and each picture was presented with each sentence of a pair. EEG signals were pre-processed, and artefact-free ERP trials were time-locked to the onset of picture presentation. Mean amplitudes for electrodes Fz, FCz and Cz were extracted from a 100 ms time window between 350 and 450 ms after picture onset for each participant. Linear mixed effects models (LMEs) of N400 amplitudes triggered while listening to sentences from the two-word condition (or loge) revealed significant effects of picture type in both French-L1 and French-L2 listeners (although with slightly different magnitudes). Images of the first syllable alone (or) yielded the smallest N400s, followed by pictures of the two-syllable words (horloge); unrelated pictures yielded the largest N400s. These results suggest that both listener groups processed the lengthened first syllable as a marker of word offset, which impeded the activation of the bisyllabic interpretation without blocking it completely. A picture type effect was also found in the one bisyllabic word condition (horloge), but here the picture of the first syllable (or) and the unrelated picture yielded similar N400 amplitudes, while the picture of the two-syllable word (horloge) yielded the smallest N400 amplitude. These results suggest that listeners had already parsed the two syllables as belonging to the same word, leading them to treat the other two pictures as unrelated to the sentence. Taken together, the results show that, as a group, these French-L2 listeners can rely on phrase-final lengthening to locate word offsets in French, leading to native-like segmentation patterns. Further analyses will investigate the impact of individual differences in language experience on performance.
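A minimal sketch of the kind of linear mixed effects model described in the preceding abstract; the column names and data layout are hypothetical, since the exact model specification is not given.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per trial, with the mean N400
    # amplitude (350-450 ms at Fz/FCz/Cz), picture type, and listener group.
    data = pd.read_csv("n400_amplitudes.csv")  # assumed columns: amplitude,
                                               # picture_type, group, subject

    # Fixed effects of picture type, listener group (French-L1 vs French-L2)
    # and their interaction; random intercepts by participant.
    model = smf.mixedlm("amplitude ~ picture_type * group",
                        data=data, groups=data["subject"])
    print(model.fit().summary())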
C48 Early brain changes while learning a second language - Relearning how to listen, read, and learn, from words to sentences Tomás Goucha1, Helyne Adamson1, Alfred Anwander1, Matthias Schwendemann1, Angela D. Friederici1; 1Max Planck Institute for Human Cognitive and Brain Sciences Second language (L2) acquisition has repeatedly been considered a puzzle in recent years, especially in terms of plastic brain changes (1). This is mainly due to the reported involvement of a multitude of brain regions, inconsistently across different studies and different phases of learning (2), and an inherent difficulty in attributing these changes to specific functions, in particular in cross-sectional studies. Here, we observed changes in white matter microstructure over the first three months of learning German as a second language in an intensive course administered to Arabic native speakers (N=54) in an immersive context. Concurrently, we acquired measures of language learning together with well-established measures of language aptitude (3) and executive function (e.g., cognitive control). Finally, the participants also took part in an fMRI experiment on word- and sentence-level processing. In this first phase of second language acquisition, participants start by learning in an item-based fashion, by chunks, only acquiring more creative skills, including the internalisation of productive rules, towards the end. This phase is rarely taught in an immersive environment, which may explain why strong changes are frequently not observed. Another often ignored aspect of the initial phase of second language learning that is rarely observable at later periods is the acquisition of a new sound system and the re-acquisition of word chunking in speech, as well as of reading and writing in a foreign language (particularly important in this study due to the change of script). Concerning longitudinal effects in the first three months of L2 learning, we find changes in cortical regions and underlying white matter, in subcortical grey matter structures, and in the brain stem, with a predominance of the right hemisphere. We find changes both in typical language areas and their right-hemisphere homologues, and also in primary sensory and motor areas. In agreement with these results, the fMRI task also showed less lateralised brain activation than in typical L1 processing. We also found changes in areas responsible for cognitive control, previously shown mainly for early bilinguals or bilinguals in immersion, together with more basal areas in the auditory and visual pathways, as well as dopaminergic regions involved in reward learning. When considering the behavioural measures and their correlations with brain plastic changes, we observed that already speaking a previous language fluently was associated with higher word learning skills and was a good predictor of learning success. Results converged to show the preponderance of this type of learning in this early phase. Finally, we note that more naturalistic measures of the participants’ spontaneous language production were the best indicators of brain changes, especially in typical language-related brain regions. References: (1) García-Pentón, L., et al. (2016). The neuroanatomy of bilingualism: how to turn a hazy view into the full picture. Language, Cognition and Neuroscience; (2) Pliatsikas, C. (2019). Understanding structural plasticity in the bilingual brain: The Dynamic Restructuring Model.
Bilingualism: Language and Cognition; (3) Rogers, V., et al. (2017). Examining the LLAMA aptitude tests. Journal of the European Second Language Association. Multisensory or Sensorimotor Integration C49 The role of motor activation during action verb processing: an EEG study Ana Zappa1,3, Dierdre Bolger1,4, Cheryl Frenck-Mestre1,2,3,4; 1Aix-Marseille Université, 2Centre National de la Recherche Scientifique, 3Laboratoire Parole et Langage, 4Institute of Language, Communication and the Brain Neuroimaging and behavioral evidence points to the recruitment of sensorimotor systems during linguistic processing. However, the timing and functionality of this activation remain elusive. In the current study we used an action-sentence compatibility effect (ACE) paradigm to manipulate motor and semantic compatibility while measuring participants’ cortical activity using EEG. Participants listened to third-person action sentences indicating a movement away from or towards the subject of the sentence (Emilie a pris son verre de vin et l’a bu [Emilie picked up her glass of wine and drank it]). In a sensibility judgment task, they accepted sentences by performing a compatible or incompatible action, i.e. by moving their hand either away from or towards their body. We measured motor-related cortical activity, as reflected by desynchronization in the μ frequency band (8–12 Hz), and language-related ERP components during the auditory processing of action sentences at frontal, central and centro-parietal electrodes. Contrary to previous studies using EEG to measure the neurophysiological correlates of the ACE, ERP analyses showed a greater negative deflection of the N400 for compatible versus incompatible trials, suggesting an inhibitory effect of compatible motor processes on action verb comprehension. We are currently investigating whether our results also provide evidence of action-related μ suppression at centro-parietal sites during action-sentence processing, and whether this activation varies as a function of condition (Compatible/Incompatible). Greater action-related μ suppression during action-sentence processing for compatible versus incompatible action-language sentences would bolster the claim that action language and motor processes use shared neural circuits. On a larger scale, this study adds to an important new vein in cognition research which, rather than focusing on the embodied vs. disembodied debate, prioritizes determining the exact role of motor activation in cognition. Language Production C50 The left Frontal Aslant Tract supports sentence planning: Evidence from direct electrical stimulation and longitudinal diffusion MRI in brain tumor patients Benjamin Chernoff1, Webster Pilcher2, Bradford Mahon1,2; 1Carnegie Mellon University, 2University of Rochester Medical Center Sentence planning unfolds at multiple levels of processing, including planning of message, syntactic, and morpho-phonological elements. Patients with damage to a recently discovered white matter pathway in the brain connecting the inferior frontal gyrus (IFG) to the pre-supplementary motor area (pre-SMA), the left Frontal Aslant Tract (FAT), exhibit impaired sentence production and dysfluent speech in the absence of impairments to semantic processing, lexical access, articulation, or non-speech motor function (e.g., limb or orofacial apraxia). However, the specific role of the FAT has not been situated within existing neurocognitive models of language processing.
Here, we test mechanistic hypotheses about the language computations supported by the FAT, using two different approaches. The first approach examined how sentence production is impaired by FAT damage compared to damage to other white matter pathways connecting the IFG, through a longitudinal case series of patients with low-grade tumors affecting the frontal aslant tract, the arcuate fasciculus, and the uncinate fasciculus. We studied these patients pre- and post-operatively using structural and functional MRI, as well as standardized neuropsychological tests of speech production. We found that impairments in verbal fluency, picture naming, and sentence repetition are respectively associated with damage to the left FAT, uncinate fasciculus, and arcuate fasciculus. Damage to these pathways was quantified using both Diffusion Tensor Imaging and functional connectivity. In the second approach, we designed a novel sentence production task to test our hypothesis that the left FAT is a key pathway for integrating syntagmatic and positional-level planning during sentence production. We refer to this as the ‘Syntagmatic Constraints On Positional Elements’ (SCOPE) hypothesis. A core prediction made by the SCOPE hypothesis is that disruption of the FAT should specifically disrupt sentence production at phrasal boundaries, with no impairment of articulation. We tested this prediction by measuring sentence production latencies in a patient undergoing direct electrical stimulation (DES) mapping of the frontal aslant tract during an awake craniotomy to remove a left hemisphere brain tumor. The patient produced cued sentences such as ‘The red square is above the yellow circle’, and we measured the intra-word and inter-word durations as a function of stimulation (on, off, and location relative to the tract). We found that stimulation prolonged inter-word pauses before the start of noun phrases and at the verb, while inter-word durations internal to noun phrases were, if anything, shorter with stimulation than without. Stimulation of the frontal aslant tract had no effect on articulation time. These results provide initial support for the SCOPE hypothesis, and motivate novel directions for future research exploring the functions of this recently discovered component of the language system. C51 Identifying the cognitive components of the morphological fluency task through neurocognitive correlations Galit Agmon1,2, Maya Yablonski1, Michal Ben-Shachar1; 1Bar-Ilan University, 2The Hebrew University of Jerusalem Verbal fluency tasks assess the speed of lexical access based on a predefined criterion. In a semantic fluency task, participants are given one minute to produce as many words as possible that belong to a given semantic category (e.g., animals); in a phonological fluency task, the criterion is an opening sound (e.g., words that begin with /f/). We focus here on another variant, the morphological fluency task, in which the criterion for word production is morpheme-based. Participants are presented with a spoken target word, and are requested to produce as many words as possible that share the same root morpheme as the target word. The morphological fluency task involves morphological decomposition and morpheme-guided lexical search. Other cognitive components are shared across the three fluency tasks: lexical access, retrieval, production, inhibition and switching.
In this study, we combined fMRI with behavioral assessment to identify the distinct contributions of different fluency components in explaining the cortical responses during a morphological fluency task. Forty-five native Hebrew speakers (29 females, ages 20–35) performed a covert morphological fluency task in fMRI, and completed the morphological-, semantic- and phonological-fluency tasks in a separate behavioral session. In fMRI, participants were presented with written Hebrew roots, and were asked to covertly generate words that incorporate that root (Siemens 3T, TR = 2000 ms, 2 mm isotropic voxels). Roots were presented in blocks (4 roots in each 12 s block), interleaved with fixation blocks (10 s each). Baseline blocks consisted of phase-scrambled words, to control for visual responses. Responses to the morphological fluency task (vs. baseline) involved multiple regions, including the left inferior frontal gyrus, left caudate nucleus and the left inferior parietal lobule (p<0.001, corrected). Next, we identified brain regions in which the activation for the morphological fluency task is modulated by individual performance on each of the behavioral fluency tasks. Significant clusters were identified based on a simulation of noise-only brain activity. One cluster in the left middle frontal gyrus (LMFG) was positively correlated with participants’ scores on the behavioral morphological fluency task (1816 mm³; p<0.01, corrected). This finding supports the involvement of the LMFG in morphological tasks (Bick et al., 2008). A second cluster in the right cerebellum was positively correlated with participants’ scores on the behavioral semantic fluency task (3040 mm³; p<0.01, corrected). This finding converges with studies showing the involvement of the right cerebellum in language production (e.g., Jansen et al., 2005). No significant correlations were found with phonological fluency scores. The results separate the contribution of shared fluency components from that of morphological fluency proper. At a broader level, we propose that neurocognitive correlations between general fMRI contrasts and selective cognitive measures assessed outside the scanner are a promising tool for studying the neurobiology of language. This approach capitalizes on the considerable individual variability characteristic of psycholinguistic measures, which increases the power of neurocognitive correlation analyses. By assessing behavioral sensitivity carefully and selectively in the psycholinguistics lab, we can decompose complex patterns of brain activation based on the contribution of separate, well-defined cognitive processes. C52 Similarity of cortical semantic representations during language production and comprehension Hiroto Yamaguchi1,2, Tomoya Nakai1,2, Shinji Nishimoto1,2; 1CiNet (NICT), 2Osaka University [Introduction] We use language to send and receive messages that convey semantic meanings. Using encoding model analyses, previous studies have revealed the semantic representation of language in the brain while subjects listened to radio stories (Huth et al., 2016; de Heer et al., 2017). However, it remained unclear whether the revealed representation is recruited under other conditions, including language production. To address this issue, we conducted functional MRI (fMRI) experiments under language comprehension (reading and listening) and language production (speaking and thinking) conditions.
We performed encoding model analyses to estimate the semantic representation in the brain under each condition and compared the modeled representations between conditions. [Methods] We recorded whole-brain activity using fMRI (Siemens MAGNETOM Prisma) in the following two experiments. In the language comprehension experiment, Japanese monologues were presented to 5 Japanese participants (age 22-29; all right-handed) under two conditions. Under the reading condition, participants read transcribed narratives. Under the listening condition, they listened to spoken narratives. We presented a total of three hours of narratives for each condition. In the language production experiment, on each trial a random word or picture was presented to participants. Using the presented content as a hint, participants spontaneously constructed a sentence of up to 4 seconds and articulated it within 4 seconds after a cue (speaking condition). After the articulation, they subvocalized the same sentence without making any movement (thinking condition). Each participant performed 900 sentence production trials. In this abstract, we report results only for the speaking condition. To estimate the semantic representations, we first transformed the presented or produced sentences into semantic vectors using the Wikipedia2Vec model (Yamada et al., 2018). Second, we modeled the brain activity in each voxel as a weighted linear sum of the semantic vectors using L2-regularized linear regression. The regressions were performed independently for the reading, listening, and speaking conditions. To validate the estimated representations, we calculated the model’s prediction accuracy of brain activity on a held-out test dataset for each condition. For the significantly predicted voxels, we evaluated the similarity of the acquired semantic representations between conditions by calculating the correlation coefficient between the estimated weights. [Results] The trained model for each condition provided significantly accurate predictions in broad cortical areas including frontal, temporal, and parietal regions. By comparing the weights between the reading and listening conditions, we found significantly similar semantic representations in frontal, temporal, and parietal regions. Between the comprehension and production conditions, more restricted regions, such as the inferior frontal sulcus, superior frontal sulcus, superior temporal sulcus, and intraparietal sulcus, showed significantly high similarities. Those areas were a subset of the areas similar between reading and listening. [Conclusion] Our results showed that broad brain regions represent the meaning of words during story comprehension in a modality-invariant way. A subset of those regions had similar semantic representations during sentence production, even though the semantic contents and the degree of linguistic complexity differed between the production and comprehension conditions. The current study revealed shared semantic representations for language comprehension and production.
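As a generic illustration of the encoding-model procedure just described (not the authors' code), a minimal sketch using ridge (L2-regularized) regression; the file names, array shapes, and regularization grid are hypothetical.

    import numpy as np
    from sklearn.linear_model import RidgeCV

    # Hypothetical inputs: semantic vectors (e.g., Wikipedia2Vec) aligned to
    # fMRI volumes, and the corresponding voxel responses for one condition.
    X_train = np.load("semantic_features_train.npy")  # (n_volumes, n_dims)
    Y_train = np.load("bold_train.npy")               # (n_volumes, n_voxels)
    X_test = np.load("semantic_features_test.npy")
    Y_test = np.load("bold_test.npy")

    # L2-regularized linear regression; each voxel gets its own weight vector.
    model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X_train, Y_train)

    # Prediction accuracy on held-out data: per-voxel correlation between
    # predicted and observed time courses.
    Y_pred = model.predict(X_test)
    r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                  for v in range(Y_test.shape[1])])

    # Representational similarity between conditions can then be assessed by
    # correlating the fitted weights of models trained on different conditions.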
C53 How aging affects the neural basis of phonological and semantic neighborhood density Michele T. Diaz1, Victoria H. Gertel1, Hossein Karimi1, Sara B.W. Troutman1, Abigail L. Cosgrove1, Carla B. Fernandez1, Haoyun Zhang1; 1Pennsylvania State University Although many aspects of language remain stable with age, aging is associated with declines in language production. For example, compared to younger adults, older adults experience more tip-of-the-tongue states, show decreased speed and accuracy in naming objects, make more errors in spoken and written production, and produce more pauses and fillers in speech, all of which indicate age-related increases in retrieval difficulty. Moreover, prior work has suggested that these retrieval difficulties may be phonologically based. In contrast to language production, semantic aspects of language are relatively preserved: healthy older adults generally have larger and more diverse vocabularies and demonstrate performance comparable to younger adults on semantic tasks. Age differences in phonological processes, contrasted with the relative sparing of semantic processes, suggest a fundamental difference in the organization of these two abilities. In the present picture naming study, we investigated the influence of lexical factors (phonological and semantic neighborhood densities, PND & SND) on the neural basis of word retrieval across the lifespan (N=91, ages 20-75). Prior work has demonstrated that words with large phonological neighborhoods are produced faster, while words with large semantic neighborhoods are produced slower. However, the neural bases of these effects remain unknown. Behavioral results revealed that as PND increased, naming times decreased and accuracy increased, ps < .001. In contrast, as SND increased, naming became more difficult, as reflected in increased naming times and decreased accuracy, ps < .001. Interestingly, we did not see a significant effect of age on RT, although the effect was trending in the expected direction, p = .07. Consistent with the behavioral analyses, fMRI analyses showed that decreasing PND was associated with increases in activation throughout the left hemisphere language network, as well as its right hemisphere homologues (e.g., bilateral superior temporal, inferior frontal, insula). As age increased, decreases in PND were associated with increases in activation in the left superior temporal gyrus and left posterior middle temporal gyrus (MTG) – regions that are important for phonological encoding and retrieval. In contrast to our phonological findings, both increases and decreases in SND were associated with changes in activation. Increasing SND was associated with activation in bilateral lateral occipital cortex, suggesting increased visual competition. Decreasing SND was associated with activation in left orbital frontal gyrus, left middle frontal gyrus (MFG), and bilateral MTG – which suggests that weaker semantic neighborhoods engage both domain-general cognitive control regions and semantic regions. Age effects showed that as SND decreased, reflecting sparser semantic networks, there were age-related increases in right middle and superior frontal gyri. Overall, these results suggest that increasing phonological selection demands engaged core language regions, and engagement of these phonological regions increased with age, while increasing semantic selection demands engaged both language-specific and domain-general control regions, with engagement of these domain-general control regions also increasing with age.
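Phonological neighborhood density, the key lexical variable above, is conventionally counted as the number of lexicon entries reachable from the target by one phoneme substitution, insertion, or deletion; a toy sketch of that count, with a hypothetical mini-lexicon:

    # Toy phonological neighborhood density: count lexicon entries at
    # phoneme-level edit distance 1 from the target (substitution,
    # insertion, or deletion). Transcriptions are illustrative only.
    def edit_distance(a, b):
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[m][n]

    lexicon = {"cat": "k ae t", "hat": "h ae t", "cap": "k ae p",
               "cast": "k ae s t", "dog": "d ao g"}

    def neighborhood_density(word):
        target = lexicon[word].split()
        return sum(edit_distance(target, pron.split()) == 1
                   for w, pron in lexicon.items() if w != word)

    print(neighborhood_density("cat"))  # -> 3 (hat, cap, cast)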
C54 Modulating verbal fluency performance in healthy adults with transcranial direct current stimulation over the left prefrontal cortex Jana Klaus1, Gesa Hartwigsen1; 1Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences Previous studies in healthy populations have provided equivocal evidence as to whether the application of transcranial direct current stimulation (tDCS) over the left prefrontal cortex can improve performance in verbal fluency tasks, with some reporting increased fluency rates following anodal compared to sham stimulation and others finding no effect. Critically, some methodological aspects may have confounded estimates of efficacy. First, previous studies have used sample sizes too small to reliably detect the small effect sizes associated with behavioural changes induced by tDCS. Second, the electrode montage used in previous studies may not have effectively targeted the left prefrontal cortex: simulation studies have shown that the strongest electric field is in fact evoked between the electrodes, and not under the “active” electrode, which is routinely placed over the cortical area of interest. Third, typically no task relevant to the studied function is administered during the stimulation period, although such additional functional recruitment may increase the effectiveness of the ongoing stimulation. Fourth, previous studies used single-blind designs, which may have introduced (unconscious) experimenter bias, thus either over- or underestimating the efficacy of tDCS. In the current study we aim to resolve these methodological caveats. We are currently collecting data from 44 healthy, native German speakers who perform a phonemic and a categorical fluency task after having received 20 minutes of 2 mA anodal or sham tDCS over the left prefrontal cortex. The montage is based on electric field simulations to optimally target the left prefrontal cortex, with the anode placed between FC5 and C5 and the cathode placed over AF3 (electrode size: 5 × 5 cm, current density: 0.08 mA/cm²). During stimulation, participants perform a picture naming task for 13 minutes, which is expected to increase neuronal activity in the targeted regions, as inferior frontal regions have been shown to be reliably involved in language production. Contrary to previous studies, we use a double-blind design to ensure both participant and experimenter blinding. If this improved approach effectively decreases between-participant variability in the response to tDCS, we expect higher fluency rates following anodal compared to sham tDCS. By contrast, if tDCS is not effective in modulating verbal fluency performance, no differences should be found between the two tDCS conditions. Thus, the study will provide important fundamental insights into the potential of tDCS to improve higher cognitive functions by investigating the relevance of parameters which may increase the efficacy of this method. Ultimately, these findings will be useful for clinical applications (i.e., language rehabilitation after brain damage) by shedding light on whether tDCS can alleviate language function loss. The study and accompanying hypotheses have been preregistered at the Open Science Framework (https://osf.io/4qmxs/).
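The stated current density follows directly from the applied current and the electrode area:

    J = I / A = 2 mA / (5 cm × 5 cm) = 0.08 mA/cm²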
C55 Development of the brain network supporting handwriting in middle childhood Marieke Longcamp1, Sarah Palmis1, Michel Habib1, Jean-Luc Anton2, Bruno Nazarian2, Julien Sein2, Jean-Luc Velay1; 1Laboratoire de Neurosciences Cognitives, CNRS UMR 7291, Aix-Marseille University, 2Institut de Neurosciences de la Timone, CNRS & AMU Introduction: Handwriting is among the finest movements in our repertoire and requires years of training to be perfectly mastered. The brain network supporting handwriting has previously been defined in adults, but its organization in children has never been investigated. Behaviorally, the progressive acquisition of the motor patterns is characterized by a switch in control mode: children proceed through online adjustment of the trajectory, whereas adults switch to a fully proactive, automatized control mode. We measured the changes in the handwriting network between ages 8–11 and adulthood. In adults, the network is formed of 5 key regions (left dorsal premotor cortex, superior parietal lobule, fusiform and inferior frontal gyri, and right cerebellum). We hypothesized that the automated writing of adults would rely on more focal and stronger activations in this network. We also expected that the more controlled writing of children would recruit extra visual, somatosensory and prefrontal regions. Methods: 65 right-handed native French speakers (23 adults, 42 children aged 8–11 years) were instructed to write the alphabet and the days of the week and to draw loops in consecutive 16 s blocks, while being scanned. The writing kinematics were recorded on an MRI-compatible digitizing tablet. The velocity profiles and the number of stops were analyzed. MRI data were acquired on a 3-Tesla MRI scanner (Magnetom Prisma, Siemens, Erlangen, Germany). We acquired a high-resolution T1 volume, a fieldmap, and BOLD images (gradient-echo EPI, 335 volumes in a single session). The images were processed with SPM12. They were corrected for head motion and for distortions, and normalized using a group template. Head motion and physiological noise were further accounted for by using the TAPAS toolbox with extra regressors of no interest in the individual statistical models. Second-level analyses were carried out using the GLMflex toolbox with factors condition (letters vs loops / words vs loops) and group (adults vs children). We focus on the main effect of group. Results: The tablet recordings confirmed the presence of behavioral effects within the scanner, with higher velocity and a lower number of stops in adults. The handwriting network previously described in adults was also strongly activated in children. A quantification of the coordinates of the local maxima in the 5 key regions indicated that activations in children were more diffuse than in adults, except in the right cerebellum. The left fusiform activation was more anterior in children. In addition, the primary motor cortices and the right anterior lateral cerebellum were more strongly activated in adults. Finally, we found that, contrary to adults, children recruited prefrontal regions (anterior cingulate cortex, inferior frontal gyrus pars orbitalis). Conclusions: This study constitutes the first investigation of the handwriting network in typical children. Our results suggest that the network supporting orthographic and motor processing is already established in middle childhood. Its elements are less focalized in children than in adults.
Our results also highlight the major role of prefrontal regions in learning this complex skill. Finally, they confirm the importance of the motor cortices and anterior cerebellum in the performance of automated handwriting. C56 Tip-of-the-Tongue: A window into neural interactions between memory and language systems James Hartzell1, Pedro Paz-Alonso1; 1Basque Center on Cognition, Brain and Language (BCBL) Verbal recall is a complex and only partly understood brain function. In particular, it is still unclear why on many occasions we quickly and accurately remember a person’s name or the label for an object or place, yet on other occasions we experience variable delays in recall. During Tip-of-the-Tongue (ToT) experiences, neurologically healthy individuals transiently fail to recall target names or words yet nevertheless recall context, eventually accessing the target via associations and circumlocutions. ToTs provide an excellent window into the interaction of neural circuitries typically associated with mnemonic and language processes: ToTs can occur both for newly learned episodic target-word memories and for long-established semantic target-word memories. To investigate the ToT phenomenon, we therefore contrasted recall of target words from both episodic and semantic memories in a behavioral-fMRI study with 30 healthy young adult participants. On Day 1, participants learned 200 new face-name pairs using testing-effect methods. On Day 2, they were scanned using fMRI while viewing the same newly learned faces randomly intermixed with 200 famous faces. Participants responded with a button-press indicating whether face-names were Known, ToT, Familiar (recognized without knowledge of the name), or Unknown, and were asked to confirm their in-scanner responses both verbally and with multiple-choice tests during a behavioral session immediately following the fMRI session. Preliminary results showed overlapping networks underpinning successful, delayed, and partial retrieval of both episodic and semantic memories for face-name pairs. Successful (Known) semantic-memory face-name recall recruited a broad bilateral frontoparietal network linked to the hippocampal formation, while successful episodic face-name recall engaged a subset of regions within the same network. Semantic-memory ToTs recruited a similar network to that for semantic-memory Known items, with the additional involvement of the basal ganglia, thalamic nuclei, insula, and cingulate cortex, together with bilateral precuneus. Episodic-memory ToTs recruited a subset of regions of the semantic-ToT network, with less widely engaged frontoparietal circuits.
Both semantic- and episodic-memory Familiar responses engaged similar frontoparietal networks, with greater right-lateralized activation for semantic, and greater left-lateralized activity for episodic memories. These results suggest that semantic memory access (Known condition), search (ToT condition) and facial recognition (Familiar condition) recruit a more widespread memory network than comparable episodic memory conditions, further elucidating the underlying neuroanatomical circuitry involved in verbal recall. Speech Motor Control C58 Cortical dynamics of the speech motor control network in the non-fluent variant of Primary Progressive Aphasia Hardik Kothare1,2, Kamalini Ranasinghe1, Leighton Hinkley1, Danielle Mizuiri1, Michael Lauricella1, Susanne Honma1, Valentina Borghesani1, Corby Dale1, Wendy Shwe1, Ariane Welch1, Zachary Miller1, Maria Luisa Gorno-Tempini1, John Houde1, Srikantan Nagarajan1,2; 1University of California, San Francisco, 2UC Berkeley-UCSF Graduate Program in Bioengineering Primary Progressive Aphasia (PPA) is a clinical syndrome in which patients progressively lose speech and language abilities. The non-fluent variant of PPA (nfvPPA) is characterised by impaired motor speech and agrammatism. These speech and language deficits are often associated with left fronto-insular-striatal atrophy in nfvPPA patients. Functional magnetic resonance imaging as well as diffusion tensor imaging studies further suggest impaired connectivity of the neural circuitry involved in speech motor control. However, none of these studies provide sufficient temporal resolution to document the dynamics of the recruitment of the speech motor control network during vocal production. In this study, we employed magnetoencephalographic (MEG) imaging to investigate sensorimotor integration during an altered auditory feedback paradigm in 18 nfvPPA patients and 17 healthy controls. Participants were prompted to phonate the vowel /ɑ/ for ~2.4 s. Unbeknownst to them, following a randomly jittered delay of 200 to 500 ms after voice onset, the pitch of their feedback was shifted either up or down by 100 cents (1/12th of an octave) for a period of 400 ms. Vocal pitch responses were examined as the participants responded to this pitch perturbation. Task-induced neural oscillations relative to a pre-perturbation baseline were examined in the theta-alpha (4–13 Hz) and beta (13–30 Hz) bands, associated with attention and sensorimotor integration respectively. Nonparametric statistical tests were performed to examine differences in neural activity between patients and healthy controls, with cluster-threshold corrections for multiple comparisons. Behaviourally, nfvPPA patients showed a smaller compensation response to pitch perturbation than controls. Baseline pre-perturbation pitch variability did not differ significantly between the two groups, indicating that reduced vocal compensation cannot simply be attributed to insufficient vocal control range in patients. Patients also exhibited reduced task-induced theta-alpha neural activity in the right superior temporal gyrus, right superior temporal sulcus, right middle temporal gyrus and the right temporoparietal junction, and showed increased task-induced beta-band activity in the left dorsal sensorimotor cortex, left premotor cortex and the left supplementary motor area. Collectively, these results suggest significant impairments in the processing of auditory feedback during vocal production in nfvPPA patients.
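For reference, a perturbation expressed in cents corresponds to a multiplicative frequency ratio of 2^(cents/1200), so ±100 cents shifts the fundamental frequency by one equal-tempered semitone (about ±5.9%); a minimal sketch of the conversion, with a hypothetical baseline F0:

    # Convert a pitch shift in cents to a frequency ratio (1200 cents = 1 octave).
    def cents_to_ratio(cents):
        return 2.0 ** (cents / 1200.0)

    f0 = 220.0                        # hypothetical baseline F0 in Hz
    print(f0 * cents_to_ratio(100))   # +100 cents -> ~233.1 Hz
    print(f0 * cents_to_ratio(-100))  # -100 cents -> ~207.7 Hz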
C59 “Mean Speech Rate” Doesn’t Mean Much: Analysis of Speech Rhythm Benefits from Quantising Inter-Onset Intervals Alexis MacIntyre1, Sophie Scott1, Ceci Cai Qing1; 1University College London Speech rhythm describes the temporal patterns and structure that emerge as speech unfolds in time, and is usually analysed in terms of recurring units, such as syllables; however, how to characterise and compare speech rhythms—within and across languages—is a controversial topic, partly because no recurring unit is known to be isochronously timed (equal in duration). The apparent irregularity of natural speech challenges theories of neural entrainment as a mechanism facilitating speech perception, given that simplistic models of entrainment presuppose an at least quasi-periodic input. Despite this incongruence, speech rhythm is typically quantified using a mean speaking rate or inter-onset interval (IOI) calculated as syllables per second; yet closer examination of actual speech data reveals that the arithmetic mean can be a poor model. The current project addresses this limitation by investigating the viability of quantising IOIs derived from vowels and stressed vowels across two relatively unrelated languages, English and Mandarin, collected under a variety of speaking conditions chosen for their differing rhythmic qualities. Rather than treating IOIs as a normal distribution of random continuous durations, this approach attempts to represent speech rhythm as a mixture of discrete, recurring values generated from short sequences of speech, thereby preserving some of the local-sequential information lost in aggregate statistics. Moreover, quantifying how well individual values are represented by multiple modal peaks, rather than a single mean, may shed light on how it is that listeners report the percept of temporal regularity in speech, despite a current lack of evidence for its physical correlate. Finally, in addition to acoustic data, the timing of respiratory kinematics was also measured via inductance plethysmography, providing a complementary signal directly relating to processes that are largely inaudible but nonetheless essential to speech production. Speech was segmented according to inhalation cycles. Model goodness of fit and comparison with traditional techniques indicate that ecologically valid speech is best described in more complex terms than simple isochrony-based explanations allow, and that, together with breathing effort, the interpretation of both vowel and stressed vowel IOIs reveals rich rhythmic similarities and differences across English and Mandarin. For example, in the case of stressed vowel IOIs, sequence mode values for both languages hovered consistently close to 380 milliseconds (ms) across speakers (n = 8, range = 350–400 ms), describing nearly a third of the data to within a threshold of 20 ms; in contrast, the sequence arithmetic mean (450 ms) failed to capture 10% of real values, and varied comparatively more by speaker (range = 400–490 ms). Shuffling stressed vowel IOIs into pseudo-sequences resulted in significantly fewer data points that fell within an arbitrary threshold of .05 of the modal peak (t(8,568) = 5.37, p < .001), suggesting that local temporal dependencies may be lost when data are pooled across longer timescales. More detailed results are discussed with a view to the role of regularity in speech timing, cross-linguistic comparisons, and possible implications for future studies of neural speech entrainment.
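One way to make the idea of modal IOI values concrete is to estimate modes of the IOI distribution, e.g., from a kernel density estimate, and ask what share of individual intervals falls near a mode rather than near the mean; a toy sketch with hypothetical onset times (not the authors' analysis):

    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical stressed-vowel onset times (s) from one breath-delimited
    # sequence; the inter-onset intervals are their successive differences.
    onsets = np.array([0.00, 0.37, 0.76, 1.14, 1.60, 1.98, 2.35])
    iois = np.diff(onsets)

    # Kernel density estimate over IOI durations; the global maximum is the
    # modal peak (local maxima would give additional candidate modes).
    kde = gaussian_kde(iois)
    grid = np.linspace(0.1, 1.0, 500)
    density = kde(grid)
    mode = grid[np.argmax(density)]

    # Share of intervals within 20 ms of the modal peak, versus the mean IOI.
    share_near_mode = np.mean(np.abs(iois - mode) <= 0.02)
    print(mode, share_near_mode, iois.mean())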
C60 SimpleDIVA: A 3-Parameter Model for Examining Adaptation in Speech and Voice Production Elaine Kearney1, Alfonso Nieto-Castañón1, Ayoub Daliri2, Frank H. Guenther1,3,4; 1Boston University, 2Arizona State University, 3Massachusetts Institute of Technology, 4Massachusetts General Hospital Sensorimotor adaptation paradigms have become an important experimental technique for examining the neural mechanisms of motor control, including speech and voice production. In a typical adaptation paradigm, participants produce speech while they receive perturbed auditory feedback (e.g., shifts in formants or fundamental frequency). When the perturbation is sustained over several trials, participants gradually learn to adjust their movements to compensate for it (i.e., participants adapt). This process relies on an interplay between feedback control (detecting and correcting errors within a trial) and feedforward control (updating the motor command for the following trial). However, it is challenging to determine the relative contribution of each system from behavioral data alone. Here, we describe a simple 3-parameter computational model (SimpleDIVA) that estimates the relative contributions of feedback and feedforward control mechanisms to sensorimotor adaptation. The model is based on the DIVA model of speech production (Guenther, 2006; 2016), and the three parameters reflect the three subsystems underlying speech motor control: auditory feedback control, somatosensory feedback control, and feedforward control. The model is tested through computer simulations that identify optimal model fits to six existing datasets (Abur et al., 2018; Ballard et al., 2018; Chao & Daliri, unpublished data; Daliri et al., 2018; Haenchen et al., 2017; Heller-Murray, 2019). Through the simulations, we show how SimpleDIVA can be used in the interpretation of adaptation experiments involving first and second formants and fundamental frequency. The results highlight the model’s sensitivity to changes in experimental protocols, for example, when using masking noise (to eliminate the effect of auditory feedback) and when perturbing more than one auditory dimension at the same time (e.g., first and second formants). The model also captures the differential roles of feedback and feedforward control early and late in the production of a trial. The final property of the model revealed through the simulations is its power in predicting average group responses from one experimental condition to another. To illustrate this, we model data from an adaptation paradigm with a gradual onset of the perturbation and use the resulting parameters to predict performance in a paradigm with a sudden perturbation onset (from the same groups of participants). Across all simulations, the model fits were excellent and showed strong positive correlations with the experimental data (range of Pearson correlation coefficients = .86–.97). SimpleDIVA offers new insights into speech and voice motor control by providing a mechanistic explanation for behavioral responses to adaptation paradigms that are not readily interpretable from the behavioral data alone. In future work, the model can be used to develop clear, testable hypotheses that can be evaluated empirically, ultimately advancing our understanding of speech motor control and informing future directions of rehabilitation research for individuals with communication disorders. Compiled SimpleDIVA code, including an easy-to-use graphical user interface, is available to facilitate the use of the model by other groups in future studies (http://sites.bu.edu/guentherlab/software/simplediva-app).
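The abstract does not spell out the model equations, but a toy trial-by-trial simulation in the spirit of a three-gain feedback/feedforward architecture can convey the logic; the update rule and parameter values below are illustrative assumptions, not the published SimpleDIVA equations.

    # Toy adaptation loop with three gains echoing the three subsystems named
    # above; values and update rule are illustrative assumptions only.
    alpha_A = 0.3    # auditory feedback control gain
    alpha_S = 0.2    # somatosensory feedback control gain
    lambda_FF = 0.2  # feedforward learning rate

    target = 0.0        # intended output (arbitrary units)
    perturbation = 1.0  # sustained shift applied to auditory feedback only
    ff_command = target # feedforward motor command, updated across trials

    for trial in range(50):
        produced = ff_command
        auditory_error = target - (produced + perturbation)  # hears shifted output
        somatosensory_error = target - produced              # feels true output
        # Within-trial feedback correction:
        produced += alpha_A * auditory_error + alpha_S * somatosensory_error
        # Across-trial feedforward update incorporates that correction:
        ff_command += lambda_FF * (produced - ff_command)

    # The command converges to partial compensation, -alpha_A/(alpha_A+alpha_S)
    # of the perturbation here, because the somatosensory channel pulls back.
    print(ff_command)  # ~ -0.6

In this toy version, the somatosensory feedback gain is what caps compensation below 100%, which is one way such a model can tease apart the relative contributions of the two feedback channels from adaptation curves.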
C61 A comprehensive evaluation of the Spontaneous Speech Synchronization phenomenon Arianna Zuanazzi1, M. Florencia Assaneo1, Pablo Ripolles1, Joan Orpella1, David Poeppel1,2; 1Department of Psychology, New York University, 2Neuroscience Department, Max Planck Institute for Empirical Aesthetics Spontaneous synchronization of a motor output to an auditory input (i.e., without explicit training) is a basic trait present in humans from birth and has important cognitive implications. For instance, infants’ proficiency in following a beat is a predictor of language skills. From a phylogenetic perspective, spontaneous audio-motor synchronization is argued to be a unique characteristic of vocal learning species (e.g.
parrots), including humans. Audio-motor synchrony in the context of speech perception/production remains largely unexplored. Here we evaluate the extent to which speech motor output synchronizes to speech auditory input. In a previous study, we designed a simple new behavioral task (Spontaneous Speech Synchronization Test, SSS-test) in which participants listened to a rhythmic train of syllables while concurrently whispering the syllable ‘tah’. Using this task, we found that some listeners are compelled to spontaneously align their own speech output to the input (high synchronizers), whereas others remain impervious to the external rhythm (low synchronizers). In this study, we assess whether the ability of the SSS-test to segregate the population into two different groups (i.e. a bimodal distribution) depends on the specific set of parameters previously employed (specifically, fixed 4.5 Hz syllable rate and implicit task instructions). We tested two different variations of the SSS-test: in experiment 1, the syllable rate was fixed at 4.5 Hz but participants were explicitly instructed to synchronize to the external rhythm (i.e., fixed rate 4.5Hz and explicit); in experiment 2, task instructions were the same as in experiment 1 but the syllable rate was increased from 4.3 to 4.7Hz during the task (i.e., accelerated and explicit). The results of experiments 2 replicated the findings of the previous study, showing a bimodal (high versus low synchronizers) outcome. Surprisingly, this result was not replicated in the fixed rate 4.5Hz and explicit condition (experiment 1). We suggest The Society for the Neurobiology of Language Computational Approaches C62 Modelling incremental development of semantic prediction Hun S. Choi1, Barry J. Devereux2, Billi Randall1, William Marslen-Wilson1, Lorraine K. Tyler1; 1Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, 2School of Electronics, Electrical Engineering and Computer Science, Queen’s University Belfast 161 Poster Session C Our results confirmed all three hypotheses above: a) The constraint (entropy) placed by the subject on its object is generated as early as the onset of subject noun, lasting for about 300ms in right anterior and superior temporal areas (RTP/RSTG) and soon re-emerging around the verb-onset until the verb was recognised in right IFG. This was, then, replaced by the integrated (SN+verb) entropy effect on the object. b) The integrated (subject + verb) constraint entropy effect was observed around the uniqueness point of the verb, replacing the subject-alone entropy effect in the right middle temporal gyrus (RMTG). c) The surprisal effect of the integrated (SN+verb) constraint on the object was clearly reflected around the offset of the object noun, lasting for more than 200ms in RSTG/RMTG. Taken together, these results provide neurobiological evidence for cyclical development of semantic constraint to complete the event representation in the regions involved in semantic constraint and integration (Jung-Beeman, 2005). C63 Learning words by encoding the sequence of sounds: A computational model of speech perception Meropi Topalidou1, Gregory Hickok1; 1Department of Cognitive Sciences, University of California, Irvine In Topalidou et al. 2018c, we propose a simple computational model of speech production that produces sequences. 
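For reference, the two quantities invoked here have standard information-theoretic definitions (generic forms; the abstract's versions are computed over LDA-derived topic distributions rather than raw word distributions):

\[
H(t) = -\sum_{w} P(w \mid w_1,\dots,w_t)\,\log_2 P(w \mid w_1,\dots,w_t),
\qquad
S(w_{t+1}) = -\log_2 P(w_{t+1} \mid w_1,\dots,w_t).
\]

Low entropy marks a strongly constraining context; high surprisal marks an incoming word that violates that constraint.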
C63 Learning words by encoding the sequence of sounds: A computational model of speech perception Meropi Topalidou1, Gregory Hickok1; 1Department of Cognitive Sciences, University of California, Irvine

In Topalidou et al. (2018c), we proposed a simple computational model of speech production that produces sequences. The novelty of this method is that the sequences are encoded in the synaptic weights of the network, which reduces spatial and temporal complexity compared to existing models. For example, speech production models generally contain buffers or working-memory modules to encode sequences (Bohland et al., 2010; Grossberg, 1978a) or use slots to label the kind of unit (Foygel and Dell, 2000). The goal of this work is to demonstrate how the proposed sequence encoding in the weights emerges as a result of initial learning of auditory-lexical associations. Thus, here we present a computational model of speech perception that learns the mapping between sound sequences and representations of individual words. The organization of the model is derived from psycholinguistic models that propose a higher-level lexical (abstract word) system and a lower-level phonological system. Accordingly, the proposed model contains lexical and auditory-phonological structures bidirectionally connected to each other. These components map onto the cortical regions of the mid-posterior superior temporal sulcus/middle temporal gyrus (pSTS/pMTG) for the lexical component, and the posterior superior temporal gyrus (pSTG) for the auditory-phonological one. Initially, the units at the lexical level are randomly connected in an all-to-all manner with the units at the auditory-phonological level. Furthermore, the model contains a soft winner-take-all mechanism implemented through self-excitatory and lateral-inhibitory connectivity among the units at each level. On each trial, input is sent to the "phonemes" of a word with a short delay between them. A consequence of the lateral inhibition among the auditory units is that the activity of a unit receiving an earlier input is higher than that of a unit receiving a later one. The random connectivity between the two levels results in activation of only a few of the lexical units by these auditory units. At the end of each trial, Hebbian learning is applied among the active units of the two levels. During a simulation, a number of "words" (sequences of phonemes) are presented multiple times to the model. Analysis of the network behavior shows that after a simulation is completed, the lexical unit that represents a word is more strongly connected with the first "phoneme" than with the second one, and so on. This results from (i) the different maximum activity of the auditory-phonological units on each trial and (ii) Hebbian learning, whereby the more active a pre- and a post-synaptic unit are, the more strongly they become connected. A limitation of the model is that multiple lexical units can learn the same sequence and, in rare cases, a single unit can learn multiple sequences. This might be remedied by adding a mechanism for pattern separation, e.g., modeling the function of the dentate gyrus in the hippocampus. To conclude, our model proposes a new method for encoding sequences in speech perception that can easily be extended to the encoding of sequences in speech production, as we introduced previously.
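A minimal sketch of the mechanism described above (toy network sizes, learning rate, and the reduction of the soft winner-take-all to an argmax are my assumptions, not the authors' implementation): lateral inhibition leaves earlier phoneme units more active, and Hebbian updates imprint that primacy gradient onto the weights of the winning lexical unit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_phon, n_lex = 10, 20
W = rng.uniform(0.0, 0.1, size=(n_lex, n_phon))  # random all-to-all connectivity

def present(word, drive=1.0, inhibition=0.15):
    """Drive phoneme units in sequence; each new unit is suppressed by
    the units already active, so earlier phonemes end up more active."""
    a = np.zeros(n_phon)
    for p in word:
        a[p] = drive - inhibition * a.sum()
    return a

def learn_trial(word, eta=0.05):
    a = present(word)
    winner = int(np.argmax(W @ a))   # soft winner-take-all, reduced to argmax
    W[winner] += eta * a             # Hebbian: strengthen co-active pairs
    return winner

word = [3, 7, 1]                     # one "word" as a sequence of phoneme indices
for _ in range(30):
    winner = learn_trial(word)
print(W[winner][[3, 7, 1]])          # decreasing: first phoneme bound most strongly
```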
Speech Perception

C64 Neural processing of speech-in-noise in autism spectrum disorder Stefanie Schelinski1,2, Katharina von Kriegstein1,2; 1Technische Universität Dresden, 2Max Planck Institute for Human Cognitive and Brain Sciences

Introduction: Recognising what another person is saying under noisy conditions (i.e., speech-in-noise perception) is an everyday challenge. There is evidence that speech-in-noise perception is restricted in people with autism spectrum disorder (ASD) (Alcantara et al., JChildPsycholPsychiatry, 2004; Schelinski & von Kriegstein, under review). However, the underlying neural mechanisms of this speech perception difficulty are unclear. A recent meta-analysis showed that three cerebral cortex regions are particularly involved in speech-in-noise processing (Alain et al., HBM, 2018). Here we tested whether atypical responses in these brain regions might explain speech-in-noise perception difficulties in ASD. Methods: 17 adults with ASD (Mage = 30.53 years; 14 males) and 17 typically developing adults (matched pairwise on age, sex, handedness, and full-scale intelligence quotient (IQ)) performed an auditory-only speech recognition task during functional magnetic resonance imaging (fMRI). All participants had normal hearing (confirmed with pure-tone audiometry) and did not take psychotropic medication. Participants in the ASD group had previously received a formal clinical diagnosis and underwent additional clinical assessment including the ADOS and ADI-R (Lord et al., JADD, 1994, 2000). During the fMRI experiment, we presented blocks of sentences either with or without noise (noise / no-noise condition). Sentences were semantically neutral and phonologically and syntactically homogeneous. In the noise condition, sentences were presented together with pink noise (signal-to-noise ratio = -8). In both conditions, the first sentence of a block was the target sentence, and participants decided whether the content of the following sentences matched the content of the target sentence. Both conditions included the same set of sentences. All sentences were spoken by six male speakers. Before the fMRI session, participants were familiarised with the voices of all speakers, together with their faces, during an audio-visual training phase. For the fMRI analysis, we used a general linear model implemented in SPM12. Results: Both groups showed typical speech-sensitive blood-oxygenation-level-dependent (BOLD) responses in the no-noise condition, including bilateral superior and middle temporal sulcus, inferior parietal and inferior frontal brain regions (p < .05 family-wise error (FWE) corrected for the whole brain; e.g., Friederici, TiCS, 2012). For recognising speech in the noise as compared to the no-noise condition, we found higher BOLD responses in the control group than in the ASD group in the left inferior frontal gyrus (left IFG), whereas both groups showed similar responses in the two other regions particularly involved in speech-in-noise processing (i.e., right insula and left inferior parietal lobule; p < .05 FWE corrected for the three regions of interest). An ANOVA revealed no significant group differences in speech recognition performance in any of the conditions (at p < .05). The ASD and control groups also did not differ significantly in the average amount of head movement (at p < .05). Conclusion: Our findings suggest that in ASD the processing of speech under noisy conditions is particularly reduced in the left IFG. These differences might be important in explaining restricted speech comprehension in noisy environments in ASD.
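As a side note on the noise manipulation, a minimal sketch of how speech is typically mixed with noise at a fixed signal-to-noise ratio (the scaling rule below is the standard power-ratio definition, assuming the ratio is expressed in dB; it is not the authors' stimulus code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db=-8.0):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db,
    then add it to the speech waveform."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```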
C65 Speech production rate modulates perception only for a subgroup of the population M. Florencia Assaneo1, Johanna M. Rimmele2, David Poeppel1,2; 1Department of Psychology, New York University, 2Max Planck Institute for Empirical Aesthetics, Frankfurt/Main

Recent studies suggest that auditory perception relies on temporal predictions from the motor system to increase its performance (see Rimmele et al., 2018). However, there is little behavioral evidence for this conjecture in the speech domain. To test this prediction, we designed a behavioral protocol capable of testing the influence of rhythmic speech production on speech perception. In line with previous results (Assaneo et al., 2019), we hypothesized that individual differences in the degree of audio-motor coupling would modulate the strength of the behavioral effect. Thus, we first measured and subsequently classified participants into two groups according to the strength of their spontaneous audio-motor synchronization (high or low). Next, during the main experiment, participants were instructed to produce rhythmic sequences of syllables. Immediately following speech production offset, a syllable was presented, embedded in noise, and participants performed a syllable discrimination task. Using a decoding approach, we assessed whether task performance was modulated by the phase of syllable presentation with respect to the motor rhythm. The motor rhythm was derived from the oscillation generated by the produced speech envelope. We show that, only for individuals with high audio-motor coupling, performance is modulated by the speech production rhythm; i.e., participants' perceptual performance is predicted by stimulus occurrence with respect to motor production phase.

C66 Effects of Musical Training on White Matter Diffusivities and Speech in Noise Perception Yi Du1, Xiaonan Li1; 1Institute of Psychology, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences

Introduction: Musical training is associated with pervasive plasticity in the brain. However, evidence linking specific neural reorganization to a specific behavioral advantage after musical training is still lacking. Speech perception in noisy environments is one of the critical abilities improved in musicians. Since sensorimotor synchronization is ubiquitous in playing music, musicians are an effective model for understanding the nature of sensory-motor coordination in speech perception. Although studies have found that musicians and non-musicians differ in the morphology of white matter (WM) tracts implicated in sensorimotor integration, none has directly associated those changes with speech-in-noise (SIN) perception ability. Methods: In the current diffusion tensor imaging (DTI) study, a deterministic tracking algorithm was used to obtain averaged diffusivity values (FA, fractional anisotropy; AD, axial diffusivity; RD, radial diffusivity; MD, mean diffusivity) of three representative tracts and their subcomponents which connect sensory and motor regions: the superior longitudinal fasciculus (SLF), anterior thalamic radiation (ATR) and corpus callosum (CC), in a group of young musicians (n = 14) and a group of young non-musicians (n = 14). Participants' SIN performance, tested with a syllable-in-noise identification task, pure-tone hearing thresholds, auditory working memory as measured by forward and backward digit span, and non-verbal IQ as measured by Cattell's Culture Fair Intelligence Test were also recorded.
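For reference, the four diffusivity measures just listed are standard functions of the eigenvalues of the diffusion tensor, with the eigenvalues ordered so that lambda_1 is the principal (axial) direction (textbook definitions, not specific to this study):

\[
\mathrm{AD} = \lambda_1,\qquad
\mathrm{RD} = \frac{\lambda_2+\lambda_3}{2},\qquad
\mathrm{MD} = \frac{\lambda_1+\lambda_2+\lambda_3}{3},
\]
\[
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
\frac{\sqrt{(\lambda_1-\mathrm{MD})^2+(\lambda_2-\mathrm{MD})^2+(\lambda_3-\mathrm{MD})^2}}
{\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}}.
\]

Lower RD with preserved AD is commonly (though not uniquely) interpreted in terms of stronger myelination, which is why the training effects in the results that follow are discussed as myelination effects.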
Results: Compared with non-musicians, musicians had higher FA values in the right arcuate fasciculus (AF) and the orbital and anterior frontal portions of the CC, lower RD values in the left anterior SLF, right ATR and orbital portion of the CC, as well as a lower MD value in the right ATR. Moreover, higher FA and lower RD and MD values in those tracts significantly correlated with better SIN performance across all participants after controlling for hearing level, auditory working memory and non-verbal IQ. Additionally, the RD value of the left AF and the MD value of the left anterior SLF correlated negatively with SIN accuracy, although no significant group difference was found in these measures. Surprisingly, longer musical training time was associated with a decrease in the FA value of the right AF and an increase in the RD value of the left anterior SLF, indicating a complex effect of long-term musical training on the myelination of these tracts. Conclusion: Our findings suggest that white matter reorganization in the right AF, left anterior SLF, right ATR, and orbital and anterior frontal portions of the CC, which connect intra- and inter-hemispheric sensorimotor regions, may serve as a neural foundation of the musician advantage in understanding speech under noisy circumstances.
C67 Motor engagement relates to accurate perception of phonemes and audiovisual words, but inaccurate perception of auditory words Kelly Michaelis1, Makoto Miyakoshi2, Andrei V. Medvedev1, Peter E. Turkeltaub1,3; 1Georgetown University Medical Center, 2Swartz Center for Computational Neuroscience, University of California San Diego, 3MedStar National Rehabilitation Network

Prior studies have demonstrated motor activity during speech perception, but have not systematically examined the conditions under which the motor system is engaged, including perception of whole words and meaningful non-speech sounds. We examined an EEG signature of motor activity (sensorimotor μ/beta suppression) to test the hypothesis that motor regions are engaged when ventral-stream processing mechanisms are insufficient to identify a word (e.g., during isolated syllable perception or noisy conditions) or when additional information, such as seeing the speaker, obligatorily engages motor speech systems. In contrast, we hypothesized that during unambiguous word-level perception, processing should occur solely in the ventral stream. In 24 healthy adults (mean age 23.6 years, 16 female, right-handed), we measured the EEG signal during the perception of auditory single words (AudWord), auditory CVC phonemes (Phoneme), audiovisual single words (AVWord), and auditory environmental sounds (EnvSound). The task was an adaptive four-alternative forced-choice task that manipulated signal-to-noise ratios to achieve two levels of difficulty for each stimulus type (Easy, 80% correct; Hard, 50% correct). EEG was recorded using a 128-channel Geodesic net. Using EEGLAB, data were preprocessed and subjected to independent components analysis, dipole fitting, and clustering. Component clusters included four a priori areas of interest: left IFG, left sensorimotor cortex, right sensorimotor cortex, and left auditory cortex. We performed time-frequency decomposition and measured stimulus-related μ/beta power (8-30 Hz). A series of 2x2 mixed-effects models examined effects of condition (AudWord vs. each of the other conditions) by accuracy (Correct, Incorrect), and condition by difficulty (Easy vs. Hard, including only correct trials). Sensorimotor μ/beta suppression was left-lateralized. Within the left sensorimotor cluster, AVWord and Phoneme stimuli showed enhanced μ/beta suppression for correct relative to incorrect trials, while AudWord stimuli showed the opposite pattern: enhanced suppression for incorrect trials and synchronization for correct trials (AVWord vs. AudWord, condition*accuracy, p<.001; Phoneme vs. AudWord, condition*accuracy, p=.001). As expected, there was little modulation of μ/beta power by EnvSound (AudWord vs. EnvSound, condition*accuracy, p=.042). For correct responses in the AVWord and Phoneme conditions, μ/beta suppression was observed for Easy and Hard trials, whereas in the AudWord condition there was no suppression for Easy trials and increased power for Hard trials (AVWord vs. AudWord, main effect of condition, p=.003; Phoneme vs. AudWord, condition*difficulty, p=.012). In both the left IFG and auditory clusters, μ/beta suppression was present in all conditions but was greatest for the AudWord Easy trials (IFG: AVWord vs. AudWord, condition*difficulty, p=.007; Phoneme vs. AudWord, condition*difficulty, p=.012; auditory cortex: AVWord vs. AudWord, main effect of difficulty, p=.012; Phoneme vs. AudWord, condition*difficulty, p=.009). Our results suggest that motor involvement in perception is left-lateralized and specific to speech. Furthermore, motor engagement relates to correct perception of sublexical stimuli and audiovisual words but incorrect perception of auditory-only words. These findings support a model in which the motor system is flexibly engaged to aid perception depending on the nature of the speech stimuli, and suggest that processing auditory-only words via this mechanism is ineffective. The results also suggest that the left IFG and auditory cortex preferentially process clear lexical items.
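A generic sketch of how event-related μ/beta suppression of the kind reported above can be quantified (a simplified band-power ratio rather than the authors' EEGLAB time-frequency pipeline; window lengths and the dB convention are illustrative):

```python
import numpy as np
from scipy.signal import welch

def mu_beta_erd(trial_seg, baseline_seg, fs, band=(8.0, 30.0)):
    """Event-related power change in dB within the mu/beta band:
    negative values indicate suppression (desynchronization) of
    8-30 Hz power relative to the pre-stimulus baseline."""
    def band_power(x):
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[sel].mean()
    return 10.0 * np.log10(band_power(trial_seg) / band_power(baseline_seg))

# e.g., average mu_beta_erd over correct vs. incorrect trials of a
# component cluster, then compare conditions in a mixed-effects model
```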
C68 Structural neural correlates of native-language speech perception and non-native speech sound learning Pamela Fuhrmeister1, Emily Myers1,2; 1University of Connecticut, 2Haskins Laboratories

Individuals show a wide range of variability in how well they learn non-native speech sounds, but the cognitive and neural sources of this variability are poorly understood. Individual differences in native-language category representations may be one such source. Specifically, individuals vary in how categorical or gradient their perception of native-language sounds is (e.g., Kapnoula et al., 2017; Kong & Edwards, 2016), but the relationship between native-language speech perception and non-native speech sound learning has not yet been tested. Just as the behavioral connections between native and non-native speech processing are unknown, it is not yet known whether individual differences in brain structure relate to native as well as non-native speech sound perception. Structural differences in brain morphology (for example, in early auditory areas) have been shown to be related to success on non-native speech sound learning tasks (Golestani et al., 2002, 2007; Turker et al., 2017). However, for native-language speech perception, no work has yet established the structural differences that underlie variation in performance. The current study seeks to identify whether structural differences in a common area underlie behavioral differences in native and non-native speech processing. To test this, native English-speaking participants rated speech sounds from two different native contrasts (stop and fricative continua) on a scale designed to measure individual differences in native-language sound processing. Participants additionally learned to categorize a non-native contrast (Hindi voiced dental and retroflex stops), completed assessments of their learning in the evening, and returned the next morning for reassessment. Anatomical images were collected in a separate session. Results suggest no strong behavioral relationship between native-language speech perception and non-native speech sound learning. However, surface area and gray matter volume of the left transverse temporal gyrus positively predicted learning on the Hindi task in both the immediate and next-day assessments, and surface area of the pars opercularis region of the left inferior frontal gyrus was negatively correlated with non-native speech sound learning. In addition, volume of the left hippocampus was positively related to overnight change on the non-native speech sound learning task. These findings are consistent with the view that individual differences in auditory cortex morphology support the perception of difficult non-native speech contrasts.

C69 Interactions of voice identity and emotion in speech processing: the role of attention Ana Pinheiro1, João Sarzedas1; 1Faculdade de Psicologia - Universidade de Lisboa, Portugal

During speech comprehension, multiple cues need to be integrated within milliseconds. From the perspective of a listener, it is important not only to understand "what" is being said and "how", but also to relate that information to "who" is saying it. A processing advantage has been demonstrated for self-related compared with non-self stimuli, and for emotional relative to neutral stimuli. However, few studies have investigated how emotional valence and voice identity interactively modulate speech processing. In the present study we probed how the attentional focus demanded by the task affects the interaction of speaker identity and emotion during word processing. Thirty participants (15 females; mean age = 24.30 years, SD = 2.51) listened to 210 prerecorded words differing in voice identity (self vs. other) and semantic valence (neutral, positive and negative) while electroencephalographic data were recorded. In Experiment 1, participants were instructed to decide whether words were spoken in their own voice, another voice, or whether they were unsure. In Experiment 2, they were instructed to rate the emotional quality of the words (neutral, positive or negative). The N1, P2 and Late Positive Potential (LPP) were analyzed. In both experiments, the N1 was more negative for self-generated words compared to words uttered by an unfamiliar speaker (Experiment 1: β=-1.774, SE=0.433, p<.001; Experiment 2: β=-0.822, SE=0.391, p=.036). The P2 was decreased for positive compared to neutral words uttered by an unfamiliar speaker (β=-1.822, SE=0.718, p=.011) in Experiment 1, but not in Experiment 2. In Experiment 1, an increase in the LPP was observed for positive relative to neutral words irrespective of speaker identity (β=1.649, SE=0.591, p=.005). In Experiment 2, the LPP was reduced for positive compared to neutral words uttered by an unfamiliar speaker (β=1.647, SE=0.595, p=.005).
Task (focus on voice identity or emotion) modulated the LPP amplitude only: the LPP was larger when attention was focused on the emotional quality of the words (β=2.331, SE=0.647, p<.005). ERP differences between self and other speech occurred despite similar accuracy in the recognition of both types of stimuli. Together, these findings confirm that speech and speaker information interact during spoken word processing. They further suggest that attention (focus on voice identity vs. emotion) affects how the processing of emotional words is modulated by self-relevance, particularly at later processing stages.

C70 Speech perception under effortful listening conditions in older adults Laura Jagoda1, Pia Neuschwander1, Ira Kurthen1, Nathalie Giroud2, Martin Meyer1,3; 1University of Zurich, 2Concordia University, 3Charité - Universitätsmedizin Berlin

Understanding speech under adverse listening conditions (environmental noise, multiple-talker situations) becomes more and more demanding with increasing age. Even with intact peripheral hearing, understanding speech in noise is often impaired in older adults. As speech-in-noise perception constitutes a difficult task in general, requiring not only sufficient hearing acuity but also a well-functioning central auditory system and a substantial amount of cognitive resources, deficient speech perception can have multiple causes. At the neurophysiological level, cerebro-acoustic coherence, the entrainment of neural oscillations to rhythmic characteristics of the speech input, represents a key mechanism in the most influential models of speech decoding and has been shown to be a correlate of successful comprehension. To disentangle the possible sources of speech-understanding problems in older adults and their neurophysiological underpinnings, we investigated a sample of subjects aged between 65 and 80 years with normal hearing (n = 31) and with mild to moderate hearing loss (n = 44). Participants underwent a comprehensive cognitive assessment (attention, inhibition, working memory) and audiometric testing, including pure-tone audiometry and suprathreshold measurements of frequency selectivity and temporal compression. During EEG recording, participants listened to ~10-s-long sentences presented clearly above hearing threshold in quiet, in pink noise and in babble noise. We analysed cerebro-acoustic coherence by estimating the phase-locking value between extracted speech envelopes and filtered EEG (2-8 Hz). In both groups, the quiet condition induced significantly higher phase-locking than the babble and pink-noise conditions, respectively. There was no group effect on phase-locking, indicating no significant influence of peripheral hearing loss on the auditory system's capacity to entrain to the speech input. Cognitive functioning, especially attention and inhibition, was found to play a more essential role and exhibited an impact on phase-locking in both groups, highlighting the importance of considering and investigating multiple factors affected by aging when trying to understand speech perception difficulties in older adults.
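A minimal sketch of the phase-locking value between the speech envelope and band-passed EEG used in analyses of this kind (the 2-8 Hz band follows the abstract; the filter order and single-channel framing are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def cerebro_acoustic_plv(eeg, envelope, fs, band=(2.0, 8.0)):
    """Phase-locking value between band-passed EEG and the speech
    envelope for one channel and one sentence: the magnitude of the
    mean phase-difference vector, from 0 (no locking) to 1 (perfect)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    phase_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env))))
```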
C71 Listening experience and syntactic complexity modulate neural networks for speech and language: A functional near-infrared spectroscopy study Bradley White1, Clifton Langdon1; 1Gallaudet University

INTRODUCTION: A large body of work suggests that experience listening to acoustically degraded speech modulates how the brain perceives and processes speech and language information, ultimately shaping the brain's structure and function (White, Berger, & Langdon, 2019; White & Langdon 2018a-c; White, Kushalnagar, & Langdon 2018; Alain et al., 2018; Peelle, 2018; Mattys et al., 2012; McKay et al., 2016; Werker & Hensch, 2015). Investigations of long-term, experienced hearing aid (HA) and cochlear implant (CI) users further suggest that the type of developmental listening experience may also predict how the brain functions at rest (White, Berger, & Langdon, 2019; McKay et al., 2016). However, the effect of early, life-long exposure to acoustically degraded speech on active speech and language networks remains elusive, especially in the presence of increased cognitive demand. We assess this relationship and hypothesize that the duration (naïve vs. experienced) and type (narrow- vs. wide-band) of auditory experience modulate speech and language networks differently for simple compared to complex conditions, possibly indicative of developmental neurocognitive changes in early, life-long HA and CI users. METHODOLOGY: Participants. Right-handed, healthy, monolingual, young adult typically-hearing listeners (TH, N=15), HA users (N=6), and CI users (N=4). HA and CI participants received and began using their devices before age 5 and have good speech perception. Procedures. Participants completed a battery of language and cognitive assessments. The fNIRS task presented participants with 192 sentences for a plausibility judgment task, during which we recorded cortical hemodynamic activity from 50 channels spanning the frontal and bilateral temporal-parietal cortices. The sentences varied linguistically (i.e., simple subject-relative and complex object-relative clause structures) for all participants and, for TH participants only, acoustically (i.e., clear, HA-simulated, and CI-simulated speech). We performed task-based functional connectivity (tbFC) analysis to assess the role of auditory experience and cognitive demand in active language processing networks. All analyses were thresholded at pFDR<0.05 or more stringently. RESULTS: (A) Narrow-band speech: Naïve and experienced listeners of narrow-band speech exhibit little difference in neural network patterns for simple and complex grammar. (B) Wide-band speech: Naïve and experienced listeners of wide-band speech exhibit significantly dissimilar neural network patterns for simple grammar. For complex grammar, preliminary behavioral and tbFC results suggest that HA users disengage from the task and perform independent thinking unrelated to the task. (C) Clear speech: Experienced TH listeners of clear speech exhibit different neural network patterns for simple and complex grammar conditions than HA and CI users and all simulated listening conditions. CONCLUSION: The observed tbFC results for naïve and experienced listeners of narrow-band speech are corroborated by previous findings of functional hemodynamic activation in TH listeners and CI users (White, Kushalnagar, & Langdon 2018), suggesting that naïve and experienced listeners of narrow-band speech utilize similar neurocognitive mechanisms.
Comparisons to the HA users, however, suggest that their listening experiences can result in distinct modulations of the speech and language processing networks despite similar performance on standardized speech perception assessments. Taken together, these findings advance our understanding of neuroplasticity and resilience in speech and language perception.

Perception: Auditory

C72 Commensurability of Cognitive Neuroscience of Language and Music Nicolas Araneda Hinrichs1, Rie Asano2, Greta Kaufeld3, Courtney Hilton4; 1University of Concepción, 2Institute of Musicology, University of Cologne, 3Max Planck Institute for Psycholinguistics, 4University of Sydney

Cognitive Neuroscience of Language and Music (CNLM) is a rapidly growing research niche (Peretz et al., 2015) which has provided a broader epistemological framework for the experimental exploration of a diverse range of questions regarding the structure and evolution of the biological mechanisms underlying human and animal linguistic and musical processing and production, in the fashion that Kording et al. (2018) have described for computational neuroscience, as these niches strive towards interdisciplinarity (in the case of CNLM, spanning at least cognitive and behavioural neuroscience, ethnomusicology and psycholinguistics). Nonetheless, it lacks a completely coherent semiotic framework, in great part because several key concepts, recurrently borrowed from CNLM's constituent disciplines, emerge as mutually incommensurable (Kuhn, 1962; Feyerabend, 1970; Popper, 1996) conceptual metaphors (Lakoff and Johnson, 1980), with their implicit ontological orientation and particular implementation becoming obfuscated, thus leading to recurring culs-de-sac within the field. The aim of this article is to review the main articles of CNLM in which such key concepts appear, in an attempt to clarify their particular ontological provenance and experimental implementation. A recent example of this phenomenon is found in the critique by Martins and Boeckx (2019) of Berwick and Chomsky (2016); according to the former, the latest iteration of the minimalist program extrapolates what occurs at the computational level of language towards the algorithmic and implementational levels, carrying an ontological approach and complexity inherent to one domain of our niche into another (which constitutes the central argument of their critique of the latest iteration of the minimalist program regarding the evolution of language), in the form of a fallacy. Several more examples are found within the history of the cognitive sciences, particularly since the emergence of the theory of embodiment. Nonetheless, as Craver (2014) points out: "Not all of the facts in an ontic explanation are salient in a given explanatory context, and for the purposes of communication, it is often necessary to abstract, idealize, and fudge to represent and communicate which ontic structures cause, constitute, or otherwise are responsible for such phenomena". Metaphors thus still have pedagogical value, as they cross-map ontic views. Since the lack of a common ontology in CNLM is rendered visible in many respects, it needs to be addressed, and for this purpose the present article proposes a commensurable ontic glossary.
C73 Auditory neural responses to native and foreign language syllables in typical readers and in children with reading difficulties Najla Azaiez Zammit Chatti1, Otto Loberg1, Sari Ylinen2, Jarmo Hämäläinen1, Paavo Leppänen1; 1University of Jyväskylä, Finland, 2University of Helsinki

The strong link between auditory processing and reading difficulties is still not fully understood, especially in the context of foreign-language learning. In this study, we explore this link in schoolchildren with dysfluent reading, asking how auditory brain responses to first- and second-language sounds reflect reading skill. Brain event-related potentials (ERPs) of 112 sixth-grade Finnish children were recorded with a high-density electroencephalography (EEG) system (128 electrodes) in two groups: 86 typical readers and 26 children with reading deficits. Participants were exposed to foreign-language stimuli (English) and to native-language stimuli (Finnish) in an auditory oddball paradigm presented in two different blocks. One of the two blocks used foreign syllables as stimuli: /shoe/ as standard (80%), and /shy/ and /she/ as deviants (10% each). Phonologically matching Finnish syllables were used in the second block: /suu/ as standard (80%), and /sai/ and /sii/ as deviants (10% each). Cluster-based permutation statistics for ERP waveforms and topographic maps were calculated between the different conditions and between groups. Our results show that the brain responses to the non-native syllables differed from the responses to the native stimuli in each group. Overall, ERP amplitudes and latencies varied between the groups, as well as within and between groups for the Finnish and English syllables. There were also differences between the brain responses to stimuli within the same language. Participants with dysfluent reading showed atypical brain activity. These atypical responses suggest less specific phonemic representations and more reliance on stimulus features during passive auditory processing.

C74 Acquisition of word meaning: associative reward learning stimulates fast temporally coherent auditory-motor mapping in the human brain Boris Chernyshev1,2,3, Anna Butorina1, Alexandra Razorenova1,2, Tatiana Stroganova1; 1Moscow State University of Psychology and Education, 2National Research University Higher School of Economics, Moscow, 3Lomonosov Moscow State University

It is generally accepted that the meaning of action words is acquired as a result of co-activation of cortical areas supporting both speech processing and motor control. However, action word learning in the natural environment involves a temporal gap between an action and its word associate, and how the brain achieves a precise temporal coupling of auditory and motor cortical representations is largely unknown. We hypothesized that in the course of association learning such coherent auditory-motor neural representations might gradually emerge due to strengthening of the reciprocal inter-modal connections. Through activation spreading from motor areas to speech-related areas, preparation of a motor response triggered by a specific pseudoword could repeatedly activate the auditory word representation in the left temporal cortex.
To test this intriguing prediction, we recorded MEG in 28 adult subjects who took part in a novel auditory-motor learning procedure. The participants were required to discover the meaning of four novel action words from their association with specific actions by way of trial-and-error learning, in the presence of four interfering pseudowords. We explored the magnetic counterpart of the motor readiness potential, which precedes a motor response in a phase-locked manner, presumably reflecting motor initiation and planning. Cortical sources of the response-locked magnetic field were reconstructed using MNE software. We found that in the course of auditory-motor learning, the motor readiness magnetic field gradually came to be associated with response-locked activation in the perisylvian speech-related areas. This perisylvian activation was virtually absent for actions performed in the initial learning trials (before the associative learning was established), and its strength increased significantly as learning proceeded. The difference between learnt and naïve trials occurred in the event-related fields preceding the motor response onset by 150-500 ms, and it was clearly time-locked to movement onset rather than to stimulus onset. Perisylvian activation correlated with the shortening of behavioral response times during learning: the stronger this brain signal, the shorter the response time at the end of the learning sessions relative to the beginning. Thus, action preparation recurrently reactivates speech processing, guaranteeing that activation of the auditory and motor nodes of the emerging network is tightly time-synchronized. Our results demonstrate for the first time that an experimentally induced association between acoustically presented pseudowords and actions involves activation of speech-related cortical areas during action planning and initiation. Presumably, newly formed auditory-motor attractor neural networks induce such recurrent reactivation of phonological and lexical circuits, thus likely promoting a further increase in the strength and specificity of the association between newly learned pseudowords and the corresponding actions. Supported by RFBR grant 17-29-02168.

Perception: Speech Perception and Audiovisual Integration

C76 Musical background affects audiovisual modulation of speech and music at N1 - an ERP study Marzieh Sorati1, Dawn M. Behne1; 1Norwegian University of Science and Technology (NTNU)

Previous research on audiovisual speech perception has shown that mouth movements predicting the upcoming sound give an anticipatory effect which can behaviorally and electrophysiologically modulate speech perception (van Wassenhove et al., 2005; Paris et al., 2013). Similarly, in audiovisual music perception, hand movements provide predictable regularities for the note being played in a musical event. Practicing a musical instrument is a rich multimodal experience, and while extensive musical training is known to enhance audio perception (Zatorre et al., 2007), an open question is whether this enhancement is confined to unimodal auditory perception for music or transfers to audiovisual perception, and to speech, through superior prediction of what sound is coming and when.
The current study compares musicians' and non-musicians' audiovisual modulation for music and speech, based on visual cues (hand and mouth movements) that predict the upcoming sound and provide an anticipatory effect which can modulate perception. Event-related potentials (ERPs) were recorded from seven musicians and seven non-musicians while they were presented with music (a keyboard note, C4) and speech (/ba/). The music and speech stimuli were presented in three conditions: audio-only (AO), video-only (VO) and audiovisual (AV). In the AO condition, analysis of N1 amplitudes and latencies showed that for music stimuli musicians had a higher N1 amplitude than non-musicians, while for speech stimuli no group difference was observed. These results are consistent with previous ERP research showing that musicians have improved auditory perception for music (Shahin et al., 2005). Next, to isolate the anticipatory effect of hand movements in the music stimuli and mouth movements in speech, for each stimulus the ERP waveforms for the VO condition were subtracted from the AV condition (AV-VO) and compared with the corresponding AO ERPs. For music stimuli, a two-way analysis of variance with musical background (musicians vs. non-musicians) and condition (AO vs. AV-VO) showed a significant interaction for N1 amplitude. The difference between AO and AV-VO was therefore further analyzed for each group, and results showed that while musicians' N1 amplitude was significantly lower for AV-VO compared to the AO condition, no difference was observed for non-musicians. For the speech stimuli, despite no significant interaction, the same main effect of condition was observed for N1 amplitude, with the musicians' N1 amplitude lower in AV-VO compared to AO, and no N1 suppression for non-musicians. Notably, N1 latency was shorter for AV-VO than AO for both groups, and this pattern was the same for music and speech stimuli. These findings show that N1 amplitude is affected by musical experience both for music and speech perception, whereas N1 latency is shorter for AV-VO compared to the AO condition independently of expertise. In other words, while both groups showed audiovisual modulation of N1 latency with anticipatory visual cues, musicians showed a greater audiovisual modulation of N1 amplitude than non-musicians, suggesting greater prediction from the visual information of what sound is coming, and when. These findings also imply that the enhancement of audio perception in musicians transfers to audiovisual perception both for music and for speech.

C77 Task-related effects during natural audio-visual dialogues in fMRI Patrik Wikman1, Artturi Ylinen1, Alina Leminen1, Miika Leminen2, Kimmo Alho1,3; 1Department of Psychology and Logopedics, University of Helsinki, 2Department of Phoniatrics, Helsinki University Hospital, 3AMI Centre, Aalto University

The ability to attend to a single speech stream in the presence of irrelevant speech is fundamental to human audition. The effects of different speech-related task manipulations on these functions have, however, not been studied using naturalistic audiovisual dialogues. In our recent study (Leminen et al., in preparation), we used videos of dialogues between two speakers during functional magnetic resonance imaging (fMRI). The visual and auditory quality of the dialogues was modulated from high (i.e., fully comprehensible) to low (i.e., virtually incomprehensible) with masking and noise-vocoding, respectively.
The participants either listened to the dialogues, delivered with a concurrent irrelevant voice in the background, and answered questions about the dialogue afterwards, or ignored the dialogues and performed a visual control task. In Leminen et al., listening to the dialogues increased activation in the inferior parietal lobule and inferior frontal gyrus (IFG), previously associated with top-down cognitive control. Additionally, activations were stronger in medial parietal and frontal regions, previously associated with socio-emotional and semantic processing. Here we used the same paradigm as in Leminen et al., but in addition to the listening task (L task) and visual control task (V task), participants also performed a phonological detection task (P task) in which they were to report the number of occurrences of the phoneme [r] in the dialogue. We expected that, as the P task requires thorough processing of the acoustic signal, it would be associated with stronger activation in the auditory cortex than the L task. Also, as the P task is much more novel than the L task, it might be associated with stronger activations in regions associated with executive control. As expected, our results showed that the P task activated regions associated with executive control and phonological processing (IFG, premotor cortex and dorsolateral frontal cortex, dLFC) more strongly than either the L or V task. The L task, in turn, activated regions associated with semantic and socio-emotional processing (anterior temporal, inferior parietal, posterior cingulate and orbitofrontal cortices) more strongly than the other tasks. However, contrary to previous studies, which have shown that motor regions are recruited when fine phonological discrimination is needed, activations in the motor cortex were not modulated by auditory quality during the P task. Yet, unexpectedly, activations were stronger in the left IFG, dLFC and dMFC during the clear-speech condition than during the poor-speech condition in the P task, but not in the L task. Our results indicate that although both the P and L tasks demanded attentive listening to the dialogues, they were associated with distinct activation patterns. That is, presumably because the L task demanded semantic and social processing, it activated regions previously associated with these functions. The phonetic detection task, in turn, activated regions in the premotor cortex, supporting the idea that motor areas are recruited when attention to phonetic details in speech is required. However, contrary to a prominent hypothesis, activations in regions associated with speech production, such as the left IFG, were actually stronger during the clear speech than during the distorted speech condition.

C78 A new framework for studying audiovisual speech integration: Partial Information Decomposition into unique, redundant and synergistic interactions Hyojin Park1, Robin A.A. Ince2, Joachim Gross2,3; 1University of Birmingham, 2University of Glasgow, 3University of Muenster

Network processing of complex naturalistic stimuli requires moving beyond the mass-univariate analysis of simple statistical contrasts to methods that allow us to directly quantify representational interactions, both between different brain regions and between stimulus features or modalities.
In our recent work, we quantified representational interactions in MEG activity between dynamic auditory and visual speech features using an information-theoretic approach called Partial Information Decomposition (PID). We showed that both redundant and synergistic interactions between auditory and visual speech streams are found in the brain, but in different areas and with different relationships to attention and behaviour. In the current study, we aimed to investigate how these two interactions, as well as the unique information, can be characterized spatiotemporally. We computed the redundant, synergistic and unique information that dynamic auditory and visual sensory signals carry about ongoing MEG activity localised to pre-defined anatomical regions (AAL; Automated Anatomical Labeling). We first found that behaviourally relevant synergistic information was expressed differentially in primary sensory areas and higher-order areas when participants paid more attention to matching audiovisual speech while ignoring an interfering auditory speech stream. In primary visual and auditory areas, the synergistic interaction shows low-frequency rhythmic fluctuation as a function of auditory delay. However, in inferior frontal and precentral areas, the synergistic interaction shows low-frequency fluctuation as a function of visual delay. Second, rhythmic fluctuation of the auditory unique information as a function of auditory delay was found in the right primary sensory areas (auditory, visual). This was critical for speech comprehension. The current method allows us to investigate multi-sensory integration in terms of explicitly quantified representational interactions: overlapping or common information content (redundancy), superlinear interactive predictive power (synergy), and the unique information provided by each modality alone. We hope this framework can provide a more detailed view of cross-modal stimulus representation and hence give insight into the cortical computations that process and combine signals from different modalities.
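For reference, the decomposition invoked here follows the standard PID identity (introduced by Williams and Beer, 2010): the joint mutual information that the auditory (A) and visual (V) speech signals carry about the MEG response M splits into four non-negative parts,

\[
I(M; A, V) = \mathrm{Red}(M; A, V) + \mathrm{Unq}(M; A) + \mathrm{Unq}(M; V) + \mathrm{Syn}(M; A, V),
\]

where redundancy is information available from either signal alone, the unique terms are available from only one signal, and synergy is available only from both signals considered jointly.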
C79 Lip-reading connected natural speech in normal-hearing individuals: Neural characteristics Satu Saalasti1,2, Jussi Alho2, Juha Lahnakoski2,3, Mareike Bacha-Trams2, Enrico Glerean2, Iiro Jääskeläinen2, Uri Hasson4, Mikko Sams2; 1University of Helsinki, 2Aalto University School of Science, 3Max Planck Institute of Psychiatry, Munich, 4Princeton University

Seeing a speaker's visual speech gestures (i.e., lip-reading) enhances speech perception, especially in noisy environments. However, only a few readers are also skilled lip-readers, while most struggle at the task. Previous brain imaging studies of lip-reading have found areas associated with auditory speech perception to be activated during silent visual speech. However, most of these studies used simplified linguistic units, far from the complexity of natural, continuous speech. In this study, we investigated the neural substrate of lip-reading connected natural speech during functional magnetic resonance imaging. A narrative of 8-min length was presented in three conditions: during (i) lip-reading, (ii) listening, and (iii) reading, to 29 subjects whose lip-reading skill varied extensively. The similarity of individual subjects' brain activity within and between conditions was estimated by voxel-wise comparison of the BOLD signal time courses as inter-subject correlation (ISC). Our results show specific clusters of ISC for lip-reading in the cuneus, lingual gyri and right cerebellum, after initial visual processing. When the subjects listened to the narrative or read it, ISC was found bilaterally in temporo-parietal, frontal, and midline areas, as well as specifically in superior temporal areas during listening and occipital visual areas during reading. The comparison of ISC between conditions revealed that both lip-reading the narrative and listening to it are supported by the same brain areas in temporal, parietal and frontal cortices, precuneus and cerebellum. Note, however, that lip-reading activated only a small part of the neural network that is active during listening to or reading the narrative. Thus, listening to and reading a natural narrative activate the brain extensively and similarly, whereas the similarity of brain activity during lip-reading vs. reading or listening to the same narrative is much less extensive. Further, when analyzing the lip-reading skills of the subjects, we found that skilled lip-reading was specifically associated with bilateral activity in the superior and middle temporal cortex, which also encode auditory speech, suggesting an efficient coding of visual speech gestures by the same mechanisms used in auditory coding of phonetic speech features. Our data suggest that there is an extensively shared mechanism for lip-reading and listening to natural, connected speech, consistent with the view that comprehension of narrative speech involves both modality-specific perceptual processing and more general linguistic processing that may be amodal or multimodal.
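A minimal sketch of the ISC computation described above (the pairwise-average variant; leave-one-out averaging is equally common, and the authors' exact choice is not stated):

```python
import numpy as np
from itertools import combinations

def voxel_isc(bold):
    """Inter-subject correlation for one voxel: the mean Pearson r of
    the BOLD time course over all pairs of subjects.
    `bold` has shape (n_subjects, n_timepoints)."""
    z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    n_sub, n_t = z.shape
    return np.mean([np.dot(z[i], z[j]) / n_t
                    for i, j in combinations(range(n_sub), 2)])
```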
Phonology and Phonological Working Memory

C80 Neural correlates of verbal working memory revealed through voxel-based morphometry Maryam Ghaleh1, Elizabeth Lacey1,2, Mackenzie Fama1,3, Zainab Anbari1, Candace van der Stelt1, Stephen Tranchina1, Peter Turkeltaub1,2; 1Georgetown University Medical Center, 2MedStar National Rehabilitation Hospital, 3Towson University

INTRODUCTION: The neural basis of verbal working memory (WM) is still a matter of debate. Studies have suggested two separate mechanisms for the maintenance of verbal information in WM (Leff et al., 2009; Trost & Gruber, 2012). In our recent multivariate lesion-symptom-mapping study of left-hemisphere stroke survivors, we found the sensorimotor cortex to be crucial to the articulatory-rehearsal process (backward digit span task), and the posterior superior temporal gyrus (pSTG) to be crucial to non-articulatory maintenance (forward digit span task). Although several studies have suggested a role for the left inferior frontal gyrus (IFG) and dorsolateral prefrontal cortex (DLPFC) in verbal WM (Gruber & Goschke, 2004), we did not find evidence for the involvement of these regions. It has been suggested that bilateral prefrontal areas are involved in WM such that unilateral prefrontal lesions do not cause WM deficits (D'Esposito et al., 2006). The aim of the current study was to investigate the neural correlates of WM using voxel-based morphometry (VBM). We examined the roles of the right hemisphere and bilateral prefrontal areas in particular. METHODS: Seventy-one left-hemisphere stroke survivors and 39 healthy adults completed four tasks: forward and backward digit span, and forward and backward spatial span. VBM was first used to identify regions in which gray matter volume (GMV) correlated with performance on each task in healthy adults. Additional analyses identified regions in which the correlation between GMV and performance on each task differed between healthy adults and stroke survivors, covarying for age, total GMV, and lesion volume. Threshold-free cluster enhancement was used, and family-wise error correction was applied to correct for multiple comparisons across the entire brain, along with small-volume correction in particular regions of interest, including bilateral IFG, DLPFC, and our previous lesion-symptom-mapping results (bilateral sensorimotor areas and pSTG). Reported results are thresholded at p<0.05 corrected. RESULTS: In our first set of analyses, positive correlations were found between GMV in: a) right IFG and forward and backward digit span scores, b) left IFG and right pSTG and forward digit span scores, and c) left DLPFC and backward digit span scores. No clusters survived correction for the spatial span tasks; even at lower thresholds no prefrontal results were found (p<0.5 uncorrected). Our second set of analyses revealed stronger correlations between right IFG and the forward and backward digit span tasks in healthy participants compared to patients. We found stronger correlations between bilateral cerebellum and backward digit span in patients compared to healthy adults. CONCLUSION: Our results suggest that the right IFG is involved in performing both digit span tasks. Since GMV in the IFG did not correlate with spatial spans, it is possible that the IFG's role is domain-specific, perhaps involving integration of episodic information. Our findings also demonstrate that the right pSTG is involved in forward digit span and might have a role in non-articulatory maintenance. The left DLPFC might be involved in the manipulation of verbal information in WM required for the backward digit span task. The bilateral cerebellum is involved in inner speech and might play a role in the recovery of articulatory-rehearsal functions after left-hemisphere lesions.

C81 Effects of Transcutaneous Vagus Nerve Stimulation on Statistical Language Learning and Phonological Working Memory Edith Kaan1, Ivette De Aguiar1, Megan S. Nakamura1, Atharva P. Chopde1, Chenyue Zhao1, Damon G. Lamb1, John B. Williamson1, Eric C. Porges1; 1University of Florida

Past studies have found that Vagus Nerve Stimulation (VNS) administered to epileptic patients has a positive effect on performance in memory consolidation tasks (Ghacibeh et al., 2006). More recent research has demonstrated that applying transcutaneous stimulation to the auricular branch of the vagus nerve (tVNS) can also enhance associative learning in humans, in theory via modulation of neurotransmitters related to memory and learning (Jacobs et al., 2015). Presently, very little is known about the effects of tVNS on procedural learning. The current project investigates the effects of tVNS on phonological working memory and statistical (implicit) language learning. Method: We used a within-subject design with two sessions three weeks apart. Sessions 1 and 2 were counterbalanced to be either with tVNS stimulation via the tragus of the left ear, or with sham (earlobe) stimulation of the left ear. Stimulation was continuous throughout the study and set at 80% of the level that caused slight discomfort. Participants carried out a phonological working memory task followed by a statistical learning task in each session. In the phonological working memory task (Nittrouer & Miller, 1999), participants were presented first with 8 non-rhyming words, and then with 8 rhyming words, paired with corresponding pictures. Participants then listened to the 8 words spoken in random order and were told to choose the corresponding pictures in the order they heard the words spoken. The statistical learning task was based on Isbilen et al. (2017).
Participants listened to an 11-minute sequence of 6 trisyllabic pseudowords presented without boundaries (e.g., modipalatibilomarikibudutagalu…). After the exposure phase, participants completed a forced-choice task in which they heard two items, one that matched and one that did not match the transitional probabilities of the stream they had just heard, and indicated which sounded more familiar. Next, participants were asked to repeat sequences of six syllables, which were composed either of two words of the pseudo-language or of random combinations of syllables. If statistical learning occurred, words should be repeated more accurately than the non-words. Results and Discussion: Data collected thus far (n=5) show a trend towards a positive effect of tVNS on phonological memory for the rhyming condition only. In addition, tVNS seems to be associated with better word versus non-word repetition in the statistical learning task. No systematic differences between sham and tVNS are seen in the forced-choice task. If these results persist with more participants, they suggest that tVNS enhances phonological memory when the task becomes more challenging (phonologically similar materials). We cannot say at this point whether tVNS enhances statistical learning itself or the retrieval processes involved in performing the repetition task.

Reading

C82 Universal neural anomalies in Chinese and French dyslexic children Xiaoxia Feng1, Irene Altarelli2, Le Li1, Guosheng Ding1, Franck Ramus3, Hua Shu1, Karla Monzalvo2, Stanislas Dehaene2,4, Xiangzhi Meng5,6, Ghislaine Dehaene-Lambertz2; 1State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 2Cognitive Neuroimaging Unit, CEA DRF/I2BM, INSERM, NeuroSpin Center, Université Paris-Sud, Université Paris-Saclay, Gif-sur-Yvette, 3Laboratoire de Sciences Cognitives et Psycholinguistique (ENS, CNRS, EHESS), Ecole Normale Supérieure, PSL Research University, Paris, France, 4Collège de France, Paris, 5School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, 6PekingU-PolyU Center for Child Development and Learning, Peking University

To investigate whether the neural anomalies underlying dyslexia are universal across languages or influenced by the writing system, we tested Chinese and French dyslexics and controls in a cross-cultural experimental fMRI paradigm. We compared 10-year-old children's brain activity to words, faces and houses while they were asked to detect a rarely presented star. As previously reported for alphabetic writing, we observed a correlation between reading scores and activation to words in several key regions of the reading circuit, e.g., left fusiform gyrus, superior temporal gyrus/sulcus, precentral and middle frontal gyrus. Analyses based on ROIs reported in the literature as sensitive to dyslexia revealed main effects of dyslexia with no interaction with the children's native language, suggesting a cross-cultural invariance in the neural anomalies underlying dyslexia.
Multivariate pattern-information analysis confirmed the impaired representation in the left fusiform gyrus in dyslexics, implying that an anomaly in this region is a particularly robust correlate of dyslexia across languages. However, impaired representation in the posterior superior temporal gyrus was found only in French dyslexics. This finding may reflect more severe phonological deficits in alphabetic dyslexia, or a culturally modulated compensatory strategy in Chinese dyslexics that recruits the posterior superior temporal gyrus for visual word recognition. The current study thus revealed universal neural anomalies of dyslexia across different writing systems.

C83 The Effects of Extensive Reading on Second Language Listening Proficiency: an fNIRS Study Katsuhiro Chiba1, Atsuko Miyazaki2, Satoru Yokoyama3; 1Bunkyo University, Japan, 2RIKEN, Japan, 3Chiba Institute of Science, Japan

Extensive reading (ER) has long been recognized as an effective means of enhancing L2 proficiency, and studies have reported positive effects on various aspects of L2 acquisition, in particular increases in reading speed, comprehension, and even listening proficiency (Chiba & Yokoyama, 2016). As prefrontal cortex activation has been reported to change with task difficulty and proficiency (Chiba, 2016; Takeuchi et al., 2012), this research aims to determine whether eight months of ER training intervention affects listening ability and the amount of change in blood flow in the prefrontal cortex. In this study, we measured cerebral blood flow using NIRS (two channels, wavelength 810 nm, sampling frequency 10 Hz; NeU corp.) while twenty-four healthy right-handed college freshmen performed listening tasks at three time points: at the start of an ER program within an English course at their university (pre), after having read approximately 150,000 words (post1), and after having read about 300,000 words (post2). In addition, we collected TOEIC L/R scores and reading speed data using reading tasks. Both the behavioral data and the NIRS data were analyzed with GLMMs. TOEIC reading scores showed no significant difference between the three time points. Regarding the listening scores, the pre scores were lower than those of post1 and post2, respectively, but there was no difference between the scores of post1 and post2. Reading speed at post1 was faster than at pre, and post2 was faster than post1. In the listening task, the comprehension question accuracy rate (AR) showed no significant difference between the three periods. Response times (RT) showed a significant decrease from pre to post1, but none after that point. With regard to the NIRS data, post1-right showed a significant increase from pre-right. Post2-left and post2-right showed significant decreases from post1-right. The activation increase of post1-right compared to pre-left was marginally significant. Other contrasts showed no difference. In conclusion, ER was found to enhance listening ability as reflected in TOEIC scores and RTs. In addition, there were differences in blood flow. From pre to post1, there was no difference in AR, but RTs became faster, and t-Hb increased in post1-right relative to pre-right. Previous studies have suggested that cognitive training accelerates processing speed and leads to increases in activation of the prefrontal cortex (Takeuchi et al., 2011).
Compared with the pre stage, at which students had no ER experience, 150,000 words of ER training might strengthen cognitive processing in the prefrontal cortex. This can be considered a training effect. However, from post1 to post2, there was no change in AR or RT, but t-Hb decreased. Poldrack (2000) argued that an activation decrease reflects an increase in neural efficiency. Therefore, the post1-to-post2 results can be interpreted as an indication that 300,000 words of ER training enabled neural efficiency, in other words, the same cognitive performance with reduced brain activity. However, the reason why significant changes were observed in only the right channel requires further research.

C84 Removal of Ocular Artifacts from Magnetoencephalographic Natural Reading Data: Recommended Methods Sasu Mäkelä1, Jan Kujala1,2, Riitta Salmelin1; 1Aalto University, 2University of Jyväskylä, Finland

Introduction: Reading is one of the most important forms of human communication. For understanding the cortical basis of reading, it seems essential to use natural reading paradigms in conjunction with a brain imaging modality that has both high spatial and temporal resolution, such as magnetoencephalography (MEG). However, due to its electrophysiological nature, MEG is sensitive to electromagnetic artifacts, of which ocular artifacts are especially problematic for reading studies. In this study we present two different methodological pathways for removing ocular artifacts from MEG reading measurements, with a focus on saccades and blinks. The aim is to demonstrate the effectiveness of the approaches in removing ocular artifacts from continuous reading data and to evaluate the possible benefits of using a multi-stage process. Methods: Both of our alternatives are based on blind source separation methods, but they differ fundamentally in their approach. The first alternative is a multi-stage process consisting of two methods presumably well suited for removing either saccades or blinks. In the first part, saccades are extracted by applying Second-Order Blind Identification (SOBI), which exploits the temporally coherent patterns that saccades form in normal reading situations. In the second part, an independent component analysis (ICA) method called FastICA is used to extract blinks, which appear in the measurements as independent random deviations. Our second alternative is to use only one method for removing both artifact types. For this we used Adaptive Mixture ICA (AMICA), a method shown to outperform most competitors (Delorme et al., 2012, PLoS ONE, 7(2), e30135). The alternatives were tested on MEG data recorded from 10 subjects in a natural reading task. For both methods we evaluated whether, and how clearly, they were able to identify artifacts in the data. In order to compare the spatial similarity of the artifact components yielded by the different methods, Pearson correlations of the components' sensor weights were calculated for all subjects. Results: Saccades were extracted by both SOBI and AMICA from all subjects with a single component. These components were highly similar in 9 subjects, with SOBI/AMICA correlations in the range 0.954–0.994 (0.753 for one subject). Blinks were extracted with a single component from all subjects by AMICA, and from 8 subjects by FastICA. For one subject, FastICA decomposed the blinks into two components, and for a second subject the method had to be run on only a partial dataset in order to find a blink component.
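The spatial-similarity comparison described above reduces to correlating the sensor-weight vectors of matched components. A minimal sketch, with synthetic topographies standing in for the real SOBI/FastICA/AMICA components:

```python
import numpy as np

# Hypothetical sensor-weight vectors (one per extracted component) for the
# same subject from two decompositions, over 306 MEG channels.
rng = np.random.default_rng(1)
topo = rng.normal(size=306)
sobi_saccade = topo + 0.1 * rng.normal(size=306)
amica_saccade = topo + 0.1 * rng.normal(size=306)

# Pearson correlation of the two components' sensor weights; the sign of a
# component is arbitrary, so the absolute value is the similarity measure.
r = np.corrcoef(sobi_saccade, amica_saccade)[0, 1]
print(abs(r))  # values near 1 indicate spatially similar components
```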
For the 8 completely successful runs, FastICA/AMICA correlations were 0.885–0.991 (0.825 and 0.507 for the remaining runs). Conclusion: Based on these results, the two pipelines produce very similar results for saccade and blink artifacts, except for the two less successful performances with FastICA. Discounting these instances, both alternatives seem equally recommendable. Where the different pipelines yield noticeably different results, simulated data in which the true distribution of artifacts is exactly known are needed to determine which alternative extracts the artifacts better.

C85 Phonological Visual Word Recognition in the Two Cerebral Hemispheres: Evidence from a Masked Priming Study with Cross-Script Pseudohomophones as Primes Orna Peleg1, Mor Moran-Mizrahi1, Dafna Bergerbest2; 1Tel-Aviv University, 2The Academic College of Tel-Aviv-Yaffo

To test the separate and combined abilities of the two cerebral hemispheres to activate phonological information during the initial stages of visual word recognition, the present study utilized a masked phonological priming paradigm with cross-script pseudohomophones as primes. In the study, Hebrew-English bilinguals were asked to perform a lexical decision task on Hebrew targets (e.g., אגם /agam/ [lake]) briefly preceded by cross-script pseudohomophone primes, i.e., Hebrew words written phonetically with English letters (e.g., אגם = agam). The primes were either phonologically identical to the targets (agam – אגם) or unrelated to the targets (unrelated pairs were created by re-pairing primes and targets that were clearly not related in any way). In Experiment 1, the targets were presented in the central visual field to both hemispheres. In Experiment 2, the targets were presented either in the right visual field to the left hemisphere (LH) or in the left visual field to the right hemisphere (RH). Consistent with interactive models, such as the bi-modal interactive activation model (Grainger & Ferrand, 1994), in both experiments targets were easier to recognize in the phonologically related condition than in the unrelated condition. Importantly, these pre-lexical phonological effects occurred even though the primes and the targets were orthographically dissimilar (i.e., written in completely different alphabets). Such results indicate not only that sub-lexical orthographic representations automatically activate their corresponding phonological representations, but also that these automatic bidirectional orthographic-phonological interactions operate in a language-nonselective manner (e.g., Dijkstra & van Heuven, 2002). Interestingly, no difference was found between the two hemispheres: phonological effects were obtained irrespective of visual field presentation. Thus, despite the critical role assumed for the LH in activating phonological codes during the processing of written words (e.g., Peleg & Eviatar, 2012), the current evidence suggests that both hemispheres are able to access phonological codes during the early moments of visual word recognition. It is possible that when the visual word recognition process can benefit from both phonological and orthographic sources of information, the RH tends to rely more heavily on orthographic information (e.g., Halderman & Chiarello, 2005; Peleg & Eviatar, 2009, 2012, 2017); however, when orthographic information is completely nullified, as in the case of the present study, the RH is more inclined to process phonological information (e.g., Halderman, 2011).
Additional studies are needed to clarify the conditions under which phonology affects visual word recognition in the RH.

C86 The brain regions involved in the development of kana reading skills: a longitudinal fMRI study of Japanese primary school students Ayumi Seki1,4, Hitoshi Uchiyama2,4, Tatsuya Koeda3,4; 1Faculty of Education, Hokkaido University, 2Faculty of Humanities and Education, The University of Shimane, 3The National Center for Child Health and Development, 4Child Developmental and Learning Research Center, Tottori University

The Japanese phonogram, kana, is highly transparent, and most children acquire basic decoding skills during their first year of schooling. In the following years, their reading fluency develops greatly. In a longitudinal fMRI study of children in this period, we investigated the brain activations related to the development of reading fluency. Eighteen Japanese students (12 boys and 6 girls) participated in functional MRI sessions twice, in the second grade (G2: aged 8.3±0.3 years) and the fourth grade (G4: aged 10.0±0.4 years). The participants performed a picture-word matching task during the scans. Target pictures of familiar objects were presented on a screen along with either visually or auditorily presented words/pseudowords. Twenty stimuli for each of six conditions [i.e., matched words (vWM, aWM), non-matched words (vWN, aWN), and pseudowords (vPW, aPW)] were presented in a pseudo-randomized order. In both sessions, reading skills were assessed outside the scanner. The fMRI data were analyzed with SPM8. For the first-level analysis, activation corresponding to each condition was derived with the general linear model. For the second-level analyses, a two-factor repeated-measures ANOVA was conducted to examine the effects of grade (G2/G4) and modality (auditory/visual). To further investigate task-specific activations, a two-factor ANOVA (grade (G2/G4) × task (WN/PW)) was conducted separately for visual and auditory stimuli. Additionally, brain activation at G2 that correlated with reading time at G4 was evaluated with a parametric analysis. The activated regions largely overlapped between G2 and G4. The main effects of grade were found in the bilateral parahippocampal gyrus (G2 vs. G4). The main effects of modality were noted in the bilateral superior temporal gyrus (aW>vW) and bilateral inferior occipital to fusiform gyri (vW>aW). The occipital activation extended more anteriorly on the left side and formed a distinguishable peak in the middle part of the fusiform gyrus (-42, -56, -20), in the vicinity of the visual word form area. The ROI analysis showed equivalent activations for the three visual stimulus types. Greater activations for vWN than vPW were found in the bilateral posterior cingulate cortex, the left anterior part of the fusiform gyrus, and the left angular gyrus. The activation of the anterior fusiform (-30, -32, -20) was greater for vWN than for vPW at both G2 and G4, while the other two regions showed the difference mainly at G2. The anterior fusiform gyrus also showed greater activation for aWN than for aPW, suggesting common semantic processing for both auditory and visual words. The parametric analysis with word reading times identified the left posterior superior temporal sulcus: stronger activation of this region at G2 was correlated with longer reading times at G4.
The results of the present study indicated that the middle part of the fusiform gyrus, the VWFA, was involved in reading both words and pseudowords in the transparent kana script. Subsequent reading proficiency was predicted by reduced activation of the posterior temporal area during visual stimuli, implying that the localization of brain activation may be key to the development of reading fluency.

C87 To go or not to go? Or just go? An ERP-based analysis of two-choice vs. go/no-go response procedures in lexical decision Marta Vergara-Martinez1, Pablo Gomez2, Manuel Perea1,3; 1Universitat de València, 2DePaul University, Chicago, 3Basque Center on Cognition, Brain, and Language (BCBL)

In cognitive neuroscience studies of language processing, two response procedures have been used interchangeably: go/no-go (GNG) and two-choice (2C). In the GNG procedure, participants are instructed to respond to one category of stimuli (e.g., words in a word/nonword discrimination task [i.e., lexical decision]) and to refrain from responding to the other category (e.g., nonwords). In the 2C procedure, participants are instructed to respond not only to the stimuli from one category, but also to the stimuli from the other category (e.g., right-hand responses for words and left-hand responses for nonwords in lexical decision). Prior behavioral experiments across a variety of tasks have typically shown that the GNG procedure yields greater sensitivity to experimental manipulations. A number of lexical decision experiments have shown greater orthographic, lexical, and semantic effects in the GNG than in the 2C version of the lexical decision task (Perea, Abu Mallouh, & Carreiras, 2014; Hino & Lupker, 1998, 2000). Furthermore, greater sensitivity of the GNG over the 2C procedure has also been reported in other behavioral tasks (e.g., target detection in a scene: Bacon-Macé, Kirchner, Fabre-Thorpe, & Thorpe, 2007; same-different matching: Grice & Reed, 1992; semantic categorization: Siakaluk, Buchanan, & Westbury, 2003). The apparent gains in the detectability of a number of phenomena with the GNG procedure in behavioral experiments suggest that response demands in the GNG and 2C procedures may affect core components of processing rather than merely ancillary processes such as response selection or motor response execution. To uncover the time course of information processing in the GNG vs. the 2C procedure during visual word recognition, we examined the impact of a lexical factor (word frequency) in a lexical decision task by tracking ERP (event-related potential) waves. In this set-up, the word-frequency effect was used as a marker for the activation of lexical properties. If the differences across response procedures influence relatively early lexical processing stages, we would expect word frequency to induce differences across tasks in the early epochs of the ERPs. Alternatively, if the differences across response procedures only arise at a post-access response selection stage, we would expect differences across procedures only in late time windows of the ERPs. Results showed that the word-frequency effect occurred earlier in time (starting around 200 ms post-stimulus) in the GNG than in the 2C response procedure. These results support the view of a largely interactive cognitive network in which a subtle manipulation of the response procedure can affect early components of processing.
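A hypothetical sketch of how the onset of a word-frequency effect could be compared across procedures from per-subject difference waves (a pointwise t-test for illustration only; a real analysis of this kind would require correction for multiple comparisons, e.g., cluster-based permutation):

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject difference waves (low- minus high-frequency
# words), one array per response procedure: (n_subjects, n_timepoints),
# sampled every 4 ms from stimulus onset.
rng = np.random.default_rng(2)
times = np.arange(0, 800, 4)
gng = rng.normal(size=(24, len(times))) + (times > 200) * 0.8
two_choice = rng.normal(size=(24, len(times))) + (times > 280) * 0.8

def onset(diff_waves, alpha=0.05):
    """First time point from which the frequency effect is reliable."""
    p = stats.ttest_1samp(diff_waves, 0.0, axis=0).pvalue
    sig = np.flatnonzero(p < alpha)
    return times[sig[0]] if sig.size else None

print(onset(gng), onset(two_choice))  # earlier onset expected for GNG
```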
C88 Leveraging megastudy data in neuropsychological assessment of reading J. Vivian Dickens1, Sarah F. Snider1, Rhonda B. Friedman1, Peter E. Turkeltaub1,2; 1Georgetown University Medical Center, 2MedStar National Rehabilitation Hospital, Washington, DC

INTRODUCTION: Modern cognitive neuropsychology was born of seminal studies of patients with alexia, an acquired disorder of reading. Lexical dimensions that define types of alexic reading include letter length (short vs. long), spelling-sound regularity (regular vs. irregular spelling-sound correspondences), imageability (poor vs. rich mental imagery), and lexicality (words vs. pseudowords). Traditionally, deficits along these dimensions are identified separately through the administration of carefully matched words that differ along a single dimension. Many modern neuroimaging and lesion studies of alexia make use of partially normed lists from the PALPA (Kay et al., 1996), or otherwise use tests idiosyncratic to a lab. Well-characterized assessments of alexia that leverage big data and updated measures of frequency, regularity, and imageability are notably absent. METHODS/RESULTS: We constructed a corpus of 200 monosyllabic English words matched orthogonally on SUBTLEX-US frequency (low/high), regularity (regular-consistent/irregular-inconsistent), and imageability (low/high). Frequency was measured in occurrences per million (≤10 = low, >10 = high). Imageability (Cortese & Fugett, 2004; Coltheart, 1981) ranged from 1-7 (≤4 = low, >4 = high). Regular words both adhered to typical spelling-sound patterns and had a spelling-sound body consistency of 1 (Jared, 2002), while irregular words both violated typical spelling-sound patterns and had a spelling-sound body consistency <1. Low/high-frequency words were matched on letter length, consistency, imageability, and articulatory complexity. Regular/irregular words were matched on letter length, frequency, imageability, and phoneme onset. Low/high-imageability words were matched on letter length, frequency, consistency, and articulatory complexity. We derived naming latency (ms) and accuracy norms from the English Lexicon Project (http://elexicon.wustl.edu; 815 healthy adults), against which patient performance can be compared. Replication of benchmark effects in normal reading aloud makes the corpus ideal for assessing alexia: 1) increased letter length is related to longer response latencies (r = .30, p < .0001); 2) low-frequency words are read more slowly than high-frequency words (t(198) = 6.02, p < .0001); 3) low-frequency irregular words are read more slowly than low-frequency regular words (t(98) = -6.73, p < .0001); 4) low-frequency, low-imageability irregular words are read more slowly than low-frequency, high-imageability irregular words (t(48) = -2.72, p = .009). In addition to item-level norms, we constructed a pseudorandomized list for testing in which each word type follows and precedes each other type an equal number of times, which minimizes systematic order effects on overall performance while permitting their examination. For assessing lexicality effects, we created 20 regularly spelled pseudowords, 20 orthographically unique pseudowords, and 20 pseudohomophones matched to each other and to a subset of 20 real words on length and bigram frequency. CONCLUSION: We present a new corpus tailored to assessing alexia that leverages large-scale normative data.
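Benchmark effects of this kind are straightforward to recompute from an item-level norms table. A minimal sketch with invented rows and illustrative column names (not the published corpus):

```python
import pandas as pd
from scipy import stats

# Hypothetical item-level norms in the spirit of the corpus described above.
items = pd.DataFrame({
    "word":      ["dog", "yacht", "mint", "sieve"],
    "length":    [3, 5, 4, 5],
    "freq_bin":  ["high", "low", "high", "low"],
    "reg_bin":   ["regular", "irregular", "regular", "irregular"],
    "naming_rt": [598.0, 702.0, 611.0, 748.0],
})

# Benchmark 1: longer words -> slower naming.
r, p = stats.pearsonr(items["length"], items["naming_rt"])

# Benchmark 2: low-frequency words read more slowly than high-frequency.
low = items.loc[items["freq_bin"] == "low", "naming_rt"]
high = items.loc[items["freq_bin"] == "high", "naming_rt"]
t, p_t = stats.ttest_ind(low, high)

print(f"length r={r:.2f}, frequency t={t:.2f}")
```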
We are currently extending these norms in healthy older controls and left-hemisphere stroke survivors to include pseudoword naming latencies and accuracies, which are not available via the ELP, as well as lexical decision norms. These norms will be made publicly available to enable more targeted and replicable studies of the neurobiology of phonological and lexical-semantic reading processes.

C89 Lingering word expectations in recognition memory Joost Rommers1, Peter Hagoort1, Kara D. Federmeier2; 1Donders Institute, Radboud University, 2Beckman Institute, University of Illinois

Expectations may promote rapid language processing, but it is unclear whether they have downstream consequences for what readers ultimately retain. In particular, when an expectation is disconfirmed, is it suppressed, or does it linger? Furthermore, is such suppression or lingering associated with particular memory processes during retrieval and/or comprehension processes during encoding? The present study manipulated word predictability, examined subsequent memory for words that had likely previously been expected (but were never actually presented), and characterized the associated electrophysiological signals. Forty participants read unexpected but plausible sentence endings while their EEG was recorded. The endings either completed a strongly constraining sentence frame wherein they violated a likely expectation ("Be careful, because the top of the stove is very dirty", where "hot" was expected), or they completed a weakly constraining sentence frame that did not afford a strong, consistent expectation ("He is surprised, because the second object is very dirty"). Additional filler sentences had predictable endings. After reading all of the sentences and performing a brief distraction task (solving math problems), participants took a surprise recognition memory test. The memory test featured Old words that had previously been seen ("dirty"), New words that had not been seen ("hot" after reading the weakly constraining sentence), and Expected words that had been disconfirmed ("hot" after reading the strongly constraining sentence). The EEG signal during the sentence reading phase suggested that readers formed expectations: relative to the weakly constraining condition, strongly constraining sentence frames elicited an alpha/beta power decrease that started prior to critical word onset, and the expectation violations elicited a continued alpha/beta decrease and a late positivity. Signal detection-theoretic analyses of the memory responses revealed that Old/Expected discriminability was worse than Old/New discriminability, suggesting that expectations lingered despite having been disconfirmed. During the memory test, compared with false recognition, correct rejections of Expected words tended to elicit larger N400 amplitudes and a somewhat larger late positivity. Based on previous studies, this suggests that overcoming false memories was associated with less semantic priming and more effortful recollection of episodic details. Finally, during the reading phase, actually presented words ("dirty") that were subsequently remembered (vs. forgotten) were associated with smaller N400 amplitudes and a larger late positivity.
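The signal-detection comparison reported above amounts to computing d' against two lure types. A minimal sketch with hypothetical endorsement rates (not the study's data):

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical rates of "old" responses to Old, Expected, and New items.
hits = 0.80          # Old items correctly called old
fa_expected = 0.45   # disconfirmed-but-Expected lures called old
fa_new = 0.15        # New items called old

print(d_prime(hits, fa_expected))  # Old/Expected discriminability (lower)
print(d_prime(hits, fa_new))       # Old/New discriminability (higher)
```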
However, the online precursors of lingering and suppression remained unclear, as EEG responses to expectation violations did not differ as a function of whether the originally Expected word ("hot") was subsequently falsely recognized or correctly rejected. Overall, the results show that expectations have downstream consequences beyond rapid processing.

Poster Session D, Wednesday, August 21, 2019, 5:15 – 7:00 pm, Restaurant Hall

Computational Approaches

D1 Post-hoc modification of linear models: take control of your machine learning algorithm Marijn van Vliet1, Riitta Salmelin1; 1Aalto University

Machine learning models have enabled the use of increasingly ambitious experimental designs for studying the neurobiology of language. For example, single-trial analysis of neuroimaging data (e.g., EEG, MEG, fMRI) is now possible. However, it can be daunting to figure out what a model is "learning" about the data and to assert control over it. In this study, we propose a framework for understanding linear models (e.g., OLS, linear SVM, logistic regression, LDA, etc.) that allows for a back-and-forth between the learning algorithm and the researcher. First, the model is fitted to the data as usual. Then, its weight matrix is decomposed into a covariance matrix, a pattern, and a normalizer. These subcomponents are much easier to reason about than the original weights, and it is therefore also straightforward to modify them based on domain information. Finally, the modified subcomponents are re-assembled into a weight matrix, yielding an updated linear model. We demonstrate the operation of this framework on EEG data recorded during a semantic priming experiment. The task for the machine learning model was to deduce the associative strength between two words based on the EEG response. The words were presented sequentially in written form. By manipulating the subcomponents of the model, we were able to improve its performance by leveraging domain knowledge about the characteristics of the EEG method, the time course of the N400 potential, and the recordings of other participants. The improved decoding performance serves as one example of the primary goal of this framework, which is to have more control over what a model is doing.

D2 Modelling a full-size grounded conceptual system: Categorical structure emerges spontaneously from the latent structure of sensorimotor experience Louise Connell1, James Brand2, James Carney3, Marc Brysbaert4, Dermot Lynott1; 1Lancaster University, 2University of Canterbury, 3Brunel University, 4Ghent University

Many theories of semantic memory assume that categories spontaneously emerge from commonalities in the way we perceive and interact with the world around us. However, efforts to test this assumption computationally have been hampered by a number of issues, including the use of abstracted features without clear sensorimotor grounding and an over-reliance on small samples of concepts from a limited number of categories. As such, even though theories of emergent category structure may assume grounded conceptual representations, current models do not adequately instantiate or test this assumption. In the present work, we take a radically different approach by creating a fully grounded, multidimensional sensorimotor model at the scale of a full-size human conceptual system and examining whether categorical structure emerges from the latent structure of sensorimotor experience.
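As a simplified analogue of the clustering approach detailed below: agglomerative clustering of concepts represented as points in an 11-dimensional rating space (random data; the published analysis used a two-step procedure with AoA-ordered preclustering, which this sketch omits):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical stand-in for the sensorimotor norms described below: each row
# is a concept rated on 11 dimensions (6 perceptual, 5 action effectors).
rng = np.random.default_rng(3)
ratings = rng.uniform(0, 5, size=(200, 11))   # 200 concepts x 11 dimensions

# Agglomerative hierarchical clustering (Ward linkage) on the 11-D space.
tree = linkage(ratings, method="ward")
labels = fcluster(tree, t=6, criterion="maxclust")  # cut into 6 clusters

print(np.bincount(labels)[1:])  # cluster sizes
```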
The data underlying the model come from our newly developed set of sensorimotor strength norms, in which each of 40,000 English lemmas is rated on the extent to which the referent concept is experienced via 11 separate sensorimotor dimensions: six perceptual modalities (auditory, gustatory, haptic, interoceptive, olfactory, visual) and five action effectors (foot/leg, hand/arm, head excluding mouth, mouth/throat, torso). Each concept is therefore represented as a single point within an 11-dimensional sensorimotor space, where similar concepts are located close together. To model latent structure within this space, we ordered the concepts by age of acquisition (AoA) to provide a developmentally plausible trajectory of cluster formation, and then used two-step cluster analysis (preclustering ordered by AoA, followed by agglomerative hierarchical clustering) to extract the optimal cluster solution. We found evidence for (a) a high-level separation of abstract and concrete categories (which was not enhanced by the inclusion of affective information); (b) a hierarchical structure of concrete concepts that separated categories commonly impaired in double dissociations, such as fruit/vegetables, animals, tools, and musical instruments; and (c) a flatter hierarchy of abstract concepts that separated categories such as negative emotions, units of time, social relationships, and political systems. These findings demonstrate that sophisticated categorical structure can emerge spontaneously from grounded sensorimotor representations alone, without positing high-level, abstracted features. Moreover, the findings support theoretical claims that sensorimotor information is fundamental to the representation of all conceptual knowledge, including abstract domains where it has traditionally been assumed to play a minimal role.

D3 Beyond the diffusion tensor model: probabilistic nTMS-based tractography of language pathways in patients with brain tumors Ioana Sabina Rautu1,3, Chiara Negwer1,2, Nico Sollmann1,2, Sebastian Ille1,2, Bernhard Meyer1,2, Sandro Krieg1,2; 1Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, 2TUM-Neuroimaging Center, Munich, 3University of Regensburg, Germany

Introduction: With the advent of hodotopical models describing the neurobiology of language (Catani & ffytche, 2005; Duffau, 2014), language function is considered to be distributed across a complex neuronal network involving both cortical and subcortical structures. Thus, maintaining the integrity of language-relevant subcortical tracts has also gained importance in modern neuro-oncological approaches aiming to preserve language function, especially in patients with tumors in critical locations, such as the perisylvian region. One of the most novel and promising methods used for this is navigated transcranial magnetic stimulation (nTMS), coupled with tractography of language-relevant subcortical tracts (Negwer, 2017). The use of nTMS allows for increased precision in identifying language-involved cortical regions during an object naming task, and the positive stimulation points can later be used as seed regions for the tractography stage. Although this method has proven successful in identifying the majority of known language tracts, most published protocols make use of a tensor-based deterministic fiber tracking algorithm, and more complex probabilistic algorithms are rarely employed.
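For orientation, a hedged sketch of how the two MRtrix3 tracking algorithms compared in the study below are typically invoked; file names, seed images, and parameter values are placeholders, and the required preprocessing (tensor fitting for Tensor_Det, FOD estimation for iFOD2) is assumed to have been run:

```python
import subprocess

# Placeholder options shared by both calls: nTMS-derived seeds, a minimum
# fiber length, and a fixed number of streamlines.
common = ["-seed_image", "nTMS_seeds.mif", "-minlength", "30",
          "-select", "5000"]

# Deterministic tensor-based tracking; -cutoff is the FA threshold here.
subprocess.run(["tckgen", "dwi.mif", "tensor_det.tck",
                "-algorithm", "Tensor_Det", "-cutoff", "0.15", *common],
               check=True)

# Probabilistic 2nd-order FOD integration; -cutoff is the FOD amplitude.
subprocess.run(["tckgen", "wm_fod.mif", "ifod2.tck",
                "-algorithm", "iFOD2", "-cutoff", "0.05", *common],
               check=True)
```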
The main focus of this study was to evaluate the differences between these two tractography approaches. Methods: 20 patients with perisylvian space-occupying lesions received preoperative nTMS language mapping, and their respective data were integrated into our tractography software (MRtrix3). Diffusion-weighted MR imaging was performed on a 3T Philips Achieva scanner using the following parameters: eight-channel coil, 32 diffusion directions with b=1000 s/mm2 and 1 b=0 volume, voxel size 1.75×1.75×2 mm3. Preprocessing was done in both FSL and MRtrix3. Two different fiber tracking algorithms were used: a tensor-based deterministic algorithm (Tensor_Det) and a probabilistic algorithm performing 2nd-order integration over fiber orientation distributions (iFOD2), which allows multiple fiber orientations to be resolved within each voxel. During fiber tracking, the fractional anisotropy threshold (for the tensor-based algorithm) or the FOD amplitude threshold (for the probabilistic algorithm) and the minimum fiber length were varied, and 77 different such combinations were used. Results: In each patient, the number of reconstructed language-related tracts was evaluated, as well as the number of fibers per tract, to account for the specificity of the tracking. The subcortical white matter tracts considered relevant were the cortico-nuclear tract, arcuate fasciculus, uncinate fasciculus, superior longitudinal fasciculus, inferior longitudinal fasciculus, arcuate fibers, commissural fibers, cortico-thalamic fibers, and inferior fronto-occipital fasciculus (Bohsali et al., 2015; Liegeois et al., 2016; Yang et al., 2016). Based on these two parameters (% of tracts and specificity), the best 5 settings were selected for each algorithm for comparison. Both the tensor-based deterministic algorithm and the probabilistic FOD algorithm successfully tracked the language-related fiber tracts in all 20 patients. The probabilistic algorithm, however, had higher sensitivity and was more robust in detecting certain tracts (e.g., the arcuate fasciculus), while also taking substantially longer and presenting lower specificity. These results suggest that this particular probabilistic algorithm, although superior in identifying certain tracts, may be difficult to implement in clinical practice due to its lower specificity and tracking duration.

D4 Neurocomputational model of the mental lexicon in a word naming and word retrieval scenario Catharina Marie Stille1, Trevor Bekolay2,3, Stefan Heim4,5,6, Bernd J. Kröger1; 1Department for Phoniatrics, Pedaudiology and Communication Disorders, Medical Faculty, RWTH Aachen University, 2Applied Brain Research, Waterloo, Canada, 3Centre for Theoretical Neuroscience, University of Waterloo, 4Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical Faculty, RWTH Aachen University, 5Research Centre Jülich, Institute of Neuroscience and Medicine (INM-1), Germany, 6JARA — Translational Brain Medicine, Aachen

The relationship between neural dysfunctions arising from deficits in specific brain regions and behavioral deficits such as lexical disorders is an ongoing research topic. Models of speech production and speech perception offer an approach for relating neural deficits to behavioral deficits. The aim of the present study is to simulate a naming task with semantic and phonological cues under conditions of defined neural deficits. The simulated model is a large-scale neural model of speech production.
This model is implemented with spiking neurons (leaky integrate-and-fire, LIF, neurons) using the NEF (Neural Engineering Framework) and the SPA (Semantic Pointer Architecture) (Eliasmith et al., 2012; Eliasmith, 2013). The model comprises a cognitive processing module, a three-level knowledge repository (with concept, lemma and lexeme levels; the mental lexicon), and a cortico-cortical control loop for action selection that includes the basal ganglia and thalamus (Stewart, Choo, & Eliasmith, 2010a, b). In total, the model comprises 19 neuron buffers with 3250 neurons per buffer (61,750 LIF neurons in total). Each neuron ensemble within the basal ganglia and thalamus comprises 50 neurons, yielding 2100 neurons in the basal ganglia and 400 neurons in the thalamus. Learned items at the semantic and lexeme levels are organized as semantic pointer networks (cf. Kröger et al., 2016). Here, our conceptual network comprises 874 surface concepts and 222 deep concepts (e.g., superordinates like "vegetables" and abstract items like "love"). The lexeme network comprises 889 surface forms representing syllables or words and 1194 deep forms representing subsyllabic structures like single speech sounds and sound clusters (i.e., consonant clusters). The relationship between surface and deep forms models phonological and semantic relationships between surface items in a specific language. Neural deficits are modeled by decreasing neural activity in different buffers of the mental lexicon, i.e., within specific buffers that represent concepts, lemmas or lexemes in the perception or production speech processing pathways. Speech behaviors, i.e., the naming deficits that occur for a specific neural lesion, are simulated here by using the model to perform a concrete picture naming task used in logopedic diagnostics comprising semantic and phonological cues (WWT 6-10; Word Range and Word Retrieval Test; Glück, 2011). The main question of our simulation experiments is: are phonological or semantic cues helpful in a naming task if neural deficits occur in specific cortical locations dealing with lexical processing and storage? Our simulation results indicate that (i) semantic and phonological cues facilitate naming even if neural dysfunctions occur within cortical locations dealing with speech processing and lexical storage; and (ii) phonological cues have a stronger facilitating effect on naming than semantic cues. The only case in which semantic cues are more effective than phonological cues is when a neural deficit is located strictly in the concept storage part of the mental lexicon.

Control, Selection, and Executive Processes

D5 Spatiotemporal signatures of linguistic control mechanisms in bilingual and monolingual contexts Polina Timofeeva1, Lucia Amoruso1,2, Manuel Carreiras1,2,3; 1Basque Center on Cognition, Brain and Language (BCBL), 2Ikerbasque, Basque Foundation for Science, 3University of the Basque Country UPV/EHU

An increasing number of people learn second and third languages to communicate proficiently on a daily basis. While extensive work has been conducted on how bilinguals control language selection and production [1-2], one question that remains unanswered is whether different linguistic control mechanisms subserve language switching and semantic category switching, and whether they follow the same time course.
In order to address this issue, we compared switching processes under two different linguistic contexts, requiring alternation either between languages or between semantic categories (nouns and verbs) within the same language. We recorded neuromagnetic signals with a 306-sensor Elekta Neuromag system while 20 balanced bilingual participants performed tasks in which they had to produce words in different contexts. In one context, they had to switch between languages (Spanish/Basque). In the other, they had to switch between producing nouns and verbs in a single language (Spanish or Basque). As in classical switching paradigms, there were two conditions: a switch condition (i.e., a language or category change between trials) and a repeat condition (i.e., two consecutive stimuli presented in the same language or category). Conditions were randomized while tasks were blocked. We used event-related field (ERF) analysis to examine which components contributed to control processing within the two contexts. Differences between conditions were assessed using cluster-based permutation tests. The analysis revealed significant power modulations in the M200 and M400 components, with the switching condition showing power decreases compared to the repetition condition, in both language and category switching. Interestingly, while the language switching effect was distributed over right sensors, the category switching effect was distributed over left sensors. Overall, our results point to the existence of different control mechanisms devoted to between-language and within-language switching in highly proficient bilinguals [3-4]. While both mechanisms displayed a similar time course, their lateralization differed, with language switching involving the right hemisphere and semantic switching the left, thus suggesting the recruitment of distinct cortical networks depending on the linguistic context. References: [1] H. Liu, S. Rossi, H. Zhou, and B. Chen (2014). Electrophysiological Evidence for Domain-General Inhibitory Control during Bilingual Language Switching. PLoS One, 9(10). [2] E. Blanco-Elorrieta and L. Pylkkänen (2016). Bilingual Language Control in Perception versus Action: MEG Reveals Comprehension Control Mechanisms in Anterior Cingulate Cortex and Domain-General Control of Production in Dorsolateral Prefrontal Cortex. Journal of Neuroscience, 36(2). [3] J. Sierpowska, A. Gabarrós, P. Ripollés, M. Juncadella, S. Castañer, Á. Camins, G. Plans, and A. Rodríguez-Fornells (2013). Intraoperative electrical stimulation of language switching in two bilingual patients. Neuropsychologia, 51. [4] S. Moritz-Gasser and H. Duffau (2009). Cognitive processes and neural basis of language switching: Proposal of a new model. Neuroreport, 20.

D7 Attention processes in children with attentional difficulties and in children with reading difficulties as revealed using brain event-related potentials Praghajieeth Raajhen Santhana Gopalan1, Otto Loberg1, Kaisa Lohvansuu1, Jarmo Hämäläinen1, Paavo Leppänen1; 1University of Jyväskylä, Department of Psychology, Finland

Visual attention-related processes include three functional sub-components: alerting, orienting, and inhibition. Here we examined these components using brain event-related potentials and their neuronal source activations during the Attention Network Test (ANT) in children with attentional difficulties (AD) and children with reading difficulties (RD).
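The three ANT effects are conventionally scored from reaction times in the cued flanker task described below. A hypothetical sketch using invented trial data (the conventional effect definitions from Fan et al.'s ANT, not this study's analysis code):

```python
import pandas as pd

# Hypothetical trial-level reaction times; 'cue' is one of
# none/centre/double/spatial, 'flanker' is congruent/incongruent.
trials = pd.DataFrame({
    "cue":     ["none", "double", "centre", "spatial", "none", "spatial"],
    "flanker": ["congruent", "congruent", "incongruent",
                "congruent", "incongruent", "incongruent"],
    "rt":      [612.0, 574.0, 655.0, 541.0, 668.0, 601.0],
})

mean_rt = trials.groupby("cue")["rt"].mean()
flank_rt = trials.groupby("flanker")["rt"].mean()

# Conventional ANT effect scores:
alerting = mean_rt["none"] - mean_rt["double"]
orienting = mean_rt["centre"] - mean_rt["spatial"]
inhibition = flank_rt["incongruent"] - flank_rt["congruent"]
print(alerting, orienting, inhibition)
```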
EEG was measured with 128 electrodes, combined with simultaneous eye-tracking and reaction time data, in three groups of Finnish sixth-graders aged 12-13 years (control = 83; AD = 15; RD = 23). In the ANT, participants were asked to detect the direction of the middle target fish in a group of five fish. The target stimulus was either preceded by a cue (centre, double, or spatial) or presented without a cue, in order to manipulate the alerting and orienting sub-processes of attention. The direction of the target fish could be either congruent or incongruent with the flanker fish, thereby manipulating the inhibition sub-process of attention. Reaction time (RT) performance in the AD group showed reduced orienting effects compared to the other groups. No group differences were found in the RT alerting and inhibition effects. Neuronal source analysis revealed significant group differences in the occipital, medial temporal and medial frontal lobes in the AD and RD groups. The RT performance suggests that the AD and RD groups might have a limited ability to maintain spatial attentional focus. The neuronal source differences between groups suggest reduced top-down control, reflecting a deficit in disengaging from and processing relevant visual target information.

Development

D8 The detection of repetition-based regularities from visual input at 6 months of age Irene de la Cruz-Pavía1,2, Judit Gervain1,2, Iris Berent3; 1Integrative Neuroscience and Cognition Center, CNRS, Paris, 2Integrative Neuroscience and Cognition Center, Université Paris Descartes, 3College of Science, Northeastern University, Boston

Infants' ability to learn regularities related to repetitions in linguistic sequences (e.g., ABB: "wo fe fe") has received considerable attention since Marcus et al. (1999) first showed that 7-month-olds can generalize different linguistic rules based on these patterns (e.g., ABB vs. ABA: "wo fe fe" vs. "wo fe wo"). But whether this capacity is specific to speech or whether it also extends to sign language is unclear (Rabagliati et al., 2012). Its neural correlates also remain only partly understood. We sought to answer two questions: Is the ability to learn repetition-based regularities found across linguistic modalities? If so, is it similar across all visual stimuli, or is sign language processed differently than a non-linguistic visual analogue? We used NIRS to investigate whether 6-month-old infants, never exposed to sign language, were able to extract repetition-based regularities from visual stimuli. In Experiment 1, infants were presented with 6 sequences of two novel disyllabic signs in each block, for a total of seven blocks per condition, in two conditions: in the repetition condition (AA) the signs were identical, and in the control condition (AB) they were different. In Experiment 2, signs were replaced by visual analogues matched for spatiotemporal properties. We measured infants' brain responses in bilateral temporal, parietal and frontal areas using a NIRx NIRScout system (Exp 1: 12 channels/hemisphere, Exp 2: 10 channels/hemisphere). Blocks with artifacts in the signal, or during which infants were not attending to the stimuli, were discarded. We averaged responses across the remaining blocks of each condition. The results of cluster-based permutation tests show that infants discriminated between the AA and AB patterns in both experiments. Remarkably, though, the effect of repetition differed in the two experiments.
While signs elicited greater activation to AA relative to AB sequences, visual analogues elicited the opposite pattern. When presented with signs, increased activation to AA sequences — as indexed by concentration changes of both oxy- and deoxyhemoglobin — was found in bilateral fronto-temporal areas, involving the superior temporal and inferior temporal gyri and including Broca's region. Spatial clusters included 3 channels in the LH and 4 channels in the RH (all p < 0.001). For the visual analogues, greater responses to the AB sequences were found in left fronto-temporo-parietal and right temporal regions and were limited to changes in oxyhemoglobin concentration. Spatial clusters included 4 channels in the LH and two channels in the RH (all p < 0.001). These results suggest that the ability to extract repetition-based rules is general across language modalities, and that linguistic visual stimuli may be processed differently from other visual input. Corresponding author: i.berent@northeastern.edu - This research was funded by the National Science Foundation (NSF 1528411, PI Berent)

D9 Deficits in Language Development of Children Raised in Institutional Care: Behavioral and ERP indices Marina Zhukova1,2, Sergey Kornilov1, Olga Titova2, Irina Ovchinnikova1,2, Irina Golovanova2, Aleksandra Davydova2, Tatiana Logvinenko2, Elena Grigorenko1,2,3,4; 1University of Houston, 2Saint-Petersburg State University, 3Yale University, 4Baylor College of Medicine

Children raised in institutional care (IC) demonstrate a cascade of deficits in language development in the receptive and expressive domains (Glatzhofer, 2010; Loman et al., 2009). Children raised in IC demonstrate a lack of comprehensive utterances at the age of 30 months when exposed to severe deprivation (i.e., Romanian orphanages) (Windsor et al., 2007), poor sentence comprehension (Desmarais et al., 2012), and lower overall language functioning and cognitive control. It has been argued that these deficits might result from functional alterations in neural structures due to the chronic stress that children in orphanages are exposed to (Eigsti et al., 2011). Most studies have focused on samples of internationally adopted children and used behavioral methods of language testing, providing limited information about the developmental trajectories of children who remain in institutions. To the best of our knowledge, our study is the first to investigate both behavioral and neurophysiological aspects of language development in children currently living in IC. We collected data from 28 children residing in institutional care in the Russian Federation (IC; 15 males; Mage=32.50 months, SD=7.50) and 16 age-matched peers raised in biological families (BF; 5 males; Mage=35.13 months, SD=8.08). Behavioral measures of language assessment included the Preschool Language Scales-5 (Zimmerman, 2011; Zhukova et al., 2016) and the MacArthur CDI (Fenson et al., 2006). Neurobiological indices of language development were assessed using a cross-modal picture-word paradigm aimed at eliciting the N400 ERP component. Children were presented with a picture and an auditory word that matched the picture or mismatched it in one of three possible ways (unrelated real word, phonotactically legal Russian pseudoword, or illegal pseudoword), for a total of 160 trials. EEG was recorded using a high-density 64-electrode actiCHamp EEG acquisition setup and processed offline in BrainVision Analyzer. Average amplitude was extracted for the 350-550 ms time window.
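Extracting a mean amplitude in a fixed window is a simple array operation. A minimal sketch with synthetic epochs in place of the recorded data (array sizes are illustrative):

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_times) sampled at
# 500 Hz from -100 ms to 800 ms around word onset.
rng = np.random.default_rng(4)
times = np.arange(-0.100, 0.800, 0.002)
epochs = rng.normal(size=(160, 64, times.size))

# Mean amplitude in the 350-550 ms window, per trial and channel, as in
# the N400 analysis described above.
window = (times >= 0.350) & (times <= 0.550)
n400 = epochs[:, :, window].mean(axis=2)   # (160 trials, 64 channels)

# Condition averages would then be compared across electrode clusters.
print(n400.shape)
```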
Findings suggest that children in the IC group significantly underperformed on expressive (β = -.86, SE = .18, p < .001) and receptive (β = -.59, SE = .19, p < .05) behavioral language measures compared to the BF group. Group effects were significant controlling for non-verbal intelligence, age, and gender. At the neurobiological level, children in the IC group demonstrated a significantly reduced N400 component in response to the unrelated real word condition in the midline electrode clusters compared to the BF group (β = 2.11, SE = 1.07, p < 0.05). Instead of the expected N400 component, they demonstrated a positive waveform peaking around 300 ms, resembling the non-linguistic P3a component. We also found significant associations between the N400 amplitude in the left-central electrode cluster and expressive language scores (r = -.423, p < .05) and communication (r = -.519, p < .05). These findings suggest that expressive language skills are associated with the magnitude of the neural response to semantic mismatch. The reduced neural response to incongruity suggests that children in the IC group use different functional networks to process linguistic information. Our findings expand the literature regarding the detrimental effects of institutional care on language development. Research was supported by the Government of the Russian Federation (grant № 14.Z50.31.0027; E.L.G., Principal Investigator).

D10 Developmental changes in the neural underpinnings of word learning Alyson Abel1; 1San Diego State University

Introduction: The school years are a critical time for word learning, an area of language development that is strongly tied to academic success. While recent research has expanded our understanding of developmental changes in the neural processes underlying sentence processing (Schneider, Abel, Ogiela, McCord & Maguire, 2018; Schneider, Abel, Ogiela, Middleton & Maguire, 2016), we know less about the development of the neural networks supporting word learning. Additionally, the past developmental literature has focused on differences between children and adults rather than across children of different ages. Here we use ERPs and time-frequency analysis of the EEG to examine changes in the engagement of neural processes supporting word learning in school-age children (8-10 years old) and adolescents (13-15 years old). Methods: Twenty-four children, twelve 8-10 years old (younger group) and twelve 13-15 years old (older group), completed a word learning task in which they listened to naturally paced sentence triplets that ended with a target novel word. In the Meaning condition, the three sentences in each triplet increasingly supported the novel word's meaning. The No Meaning condition provided a control for the Meaning condition in that each sentence provided little contextual support, making it difficult to derive a meaning for the novel word. After each sentence triplet, participants were asked to identify the novel word's meaning, if possible. EEG was collected during the word learning task and was analyzed in two ways: 1) ERPs, focusing on the N400, and 2) time-frequency analysis of the EEG, focusing on the theta, alpha, and beta frequency bands, often related to lexical retrieval, attention/inhibition, and syntactic integration, respectively. For this study, EEG analysis focused on data from the Meaning condition.
Results: The younger group learned significantly fewer words than the older group, 68.9% and 83.4%, respectively (t(30)=6.08, p<0.001). For the ERP analysis, mean amplitudes in the 300-500 ms time window were compared in a repeated-measures ANOVA with group as a between-subjects factor and sentence (1, 2, 3), laterality (left, midline, right), and anterior-posterior position (frontal, central, posterior) as within-subjects factors. A significant 4-way interaction was found (F(8,23)=2.93, p<0.05). The younger group showed a graded attenuation of the N400 amplitude across the sentences at parietal sites. The older group showed an amplitude attenuation from sentence 1 to sentence 2, with no further change for sentence 3, across central and parietal sites. The time-frequency analysis revealed a significant group × sentence interaction within each frequency band. The age groups differed primarily in theta synchrony and beta desynchrony: across sentences in the triplet, the older children demonstrated sustained theta synchrony and beta desynchrony that were not as robust in the younger children. These patterns are similar to those reported in the auditory sentence processing literature (Schneider et al., 2018). Conclusion: Taken together, these findings suggest that the cognitive and linguistic systems engaged by younger children during word learning are less efficient for learning outcomes than those of older children. In particular, the data indicate a neural system related to semantic processing and retrieval and to syntactic integration that is still developing through the school years.

Disorders: Developmental

D11 Neural correlates of learning novel word forms in children with developmental language disorder Saloni Krishnan1, Harriet J. Smith1, Hannah Willis1, Kate E. Watkins1; 1Department of Experimental Psychology, University of Oxford

Background: Developmental language disorder (DLD; previously known as specific language impairment) is characterised by unexplained difficulties in learning one's native language and affects at least 7% of children. We have argued that children with DLD have particular trouble with tasks that involve implicit learning of sequence structure, such as learning novel word forms. We hypothesised that dysfunction in corticostriatal systems is associated with these learning difficulties in DLD. Yet, to date, no previous fMRI work has directly tested this idea. In adults, learning to articulate novel word forms elicits changes in BOLD activity within frontostriatal systems, with decreases in activity in the left premotor cortex, inferior frontal gyrus and putamen as pseudowords are repeated. Here, we explore the neural correlates of learning novel word forms in children with DLD and in typically developing children. Methods: 19 children with DLD (14 males) and 54 typically developing controls (19 males), aged between 10 and 15 years, were scanned at 3T while overtly repeating pseudowords (2- and 4-syllable). Sixteen words were heard and repeated only once; another sixteen were heard and repeated four times (each stimulus repetition occurred at 30-s intervals, and never consecutively). Echo-planar images of the whole head were acquired during task performance (60 axial slices, voxel size 2.4 mm3, TR=0.8 s, TE=30 ms, 600 volumes). For whole-brain group analyses, age and nonverbal IQ were entered as covariates in the general linear model (thresholded at Z>3.1, cluster-corrected p<0.05).
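A simplified analogue of this thresholding step (a voxel-level height threshold plus a cluster-extent filter on a synthetic z-map; the study's cluster-level correction itself is not reproduced here, and the extent cutoff below is hypothetical):

```python
import numpy as np
from scipy import ndimage

# Hypothetical z-statistic volume standing in for a group-level map.
rng = np.random.default_rng(5)
z_map = rng.normal(size=(40, 48, 40))

mask = z_map > 3.1                        # voxel-level height threshold
labels, n_clusters = ndimage.label(mask)  # connected suprathreshold clusters
sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))

min_extent = 20                           # hypothetical extent cutoff (voxels)
keep = np.isin(labels, np.flatnonzero(sizes >= min_extent) + 1)
thresholded = np.where(keep, z_map, 0.0)
print(int(keep.sum()), "voxels survive")
```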
We used ROIs of the caudate nucleus and the putamen to probe changes within striatal nuclei. Results: When repeating pseudowords, both groups activated regions associated with speech production, namely the left inferior frontal gyrus, the premotor cortex, superior temporal gyrus, supplementary motor cortex, and cerebellum bilaterally, as well as the caudate nucleus, putamen, thalamus and parahippocampal gyrus bilaterally. Repetitions of pseudowords resulted in linear decreases in activity in the superior temporal gyrus bilaterally and, at a slightly lowered threshold, in the left inferior frontal gyrus, premotor cortex, and putamen, and the right caudate nucleus. ROI analysis showed that children with DLD had significantly reduced learning-related decreases in activity compared with controls in the caudate nucleus bilaterally and in the right putamen. Conclusions: The neural correlates of learning novel word forms in children mirror the pattern seen in adults. A subset of regions, including the left inferior frontal gyrus and the dorsal striatum, showed decreases in activity with repeated stimulus presentation, consistent with the idea that learned word forms are represented more efficiently in these areas. In children with DLD, activity in the dorsal striatum did not decrease to the same extent as in typically developing children. These results support our hypothesis that the frontostriatal system is dysfunctional in DLD, and underline the importance of corticostriatal interactions in speech and language learning. D12 Stuttering and the Social Brain Eric S. Jackson1, Swethasri Dravida2, Vincent Gracco3, Xian Zhang1, Adam Noah1, Joy Hirsch1; 1New York University, 2Yale School of Medicine, 3Haskins Laboratories Functional neuroimaging studies of individuals who stutter have revealed a number of brain-based differences such as reduced activation in left hemisphere and elevated activation in right hemisphere sensorimotor networks. However, a significant area that has seen less focus is the contribution from social-cognitive networks. This limitation is problematic, and theoretically and clinically relevant, because stuttering occurs primarily during social interaction, presumably engaging social-cognitive processes. The current work extends the study of stuttering from sensorimotor control into the social-cognitive domain. We test the general hypothesis that social-cognitive processing destabilizes the speech motor systems of adults who stutter (AWS), which is reflected by atypical neural activation in social-cognitive regions (e.g., temporoparietal junction [TPJ], dorsomedial prefrontal cortex [DMPFC]) in addition to commonly reported findings of reduced activation in left hemisphere speech-language regions (i.e., inferior frontal and superior temporal gyrus [IFG/STG]). To do this, we examine the neural impact of social interaction (i.e., the presence of a communicative partner) on 22 AWS and 22 age-matched controls (CON) using functional near-infrared spectroscopy (fNIRS) to detect changes in blood flow concentrations during relatively natural conditions (e.g., face-to-face speech interaction while participants sit upright). As a result, we evaluate neural activation in the context in which stuttering primarily occurs: social and communicative speech.
In this study, there were two conditions: participants responded to questions asked by the examiner, who was in the testing room (social), and to the same questions presented via audio recordings of the examiner while the participant was alone in the testing room (alone). The questions were followed by a five-second preparatory period after which participants verbally responded while looking at the examiner. Neural signals associated with the preparation period (prior to the onset of speech) were analyzed using the general linear model. Results indicate that compared to the CON, the AWS exhibited reduced activation in left IFG and left STG in the social > alone contrast. Post-hoc testing revealed that both groups exhibited an increase in activation during the social condition, but the increase was significantly greater in the CON, indicating that the commonly reported findings of reduced activation in left IFG/STG may be more related to social-cognitive than sensorimotor demands. In addition, the AWS exhibited greater activation in right TPJ and reduced activation in right DMPFC in the social > alone contrast. It appears that the AWS were less able to inhibit activity in right TPJ under increased social demands, as well as less able to recruit resources necessary for speech interaction (e.g., attention, inhibition) in frontal regions (right DMPFC). Overall, our results suggest that AWS exhibit atypical neural function in both speech-language and social networks during social and communicative speech. These findings highlight the importance of focusing on social-cognitive in addition to speech-language networks, as well as the benefits of using approaches such as fNIRS that facilitate increased ecological validity during neural data collection in stuttering research. D13 Frontal aslant tract differences in developmental stuttering Gabriel Cler1, Peter Howell2, Patricia Gough1, Kate Watkins1; 1University of Oxford, 2University College London The frontal aslant tract (FAT) has been identified as a white matter tract related to speech and language function. This tract connects the inferior frontal gyrus pars opercularis (BA44) and the pre-supplementary motor area (pre-SMA) and is considered an association motor pathway. Stimulation of the left FAT interferes with speech initiation and fluency in typically-fluent adults, which suggests that it could be implicated in stuttering. One previous investigation in 15 people who stutter (PWS) and 19 normally fluent controls used diffusion MRI to study the microstructure of the FAT (Kronfeld-Duenias et al., 2016). PWS showed increased mean diffusivity (MD) in the FAT bilaterally, but there were no differences in fractional anisotropy (FA). Here, we aimed to replicate and extend the previous findings by evaluating the FAT in a larger cohort of 29 PWS and 29 matched controls. We hypothesized that there would be differences in the microstructure of the FAT on the left and possibly also on the right. We predicted that FA would be reduced and, based on the previous finding, MD would be increased in PWS. We further hypothesized that in PWS, these abnormalities would correlate with greater disfluency measured with the Stuttering Severity Instrument-3 (SSI-3). We also measured the uncinate fasciculus (UF) as a control cortico-cortical language tract in the ventral pathway that connects the anterior temporal pole with orbitofrontal cortex; we predicted that the UF would be unaffected in PWS.
Twenty-nine PWS (age range: 14–42 years; mean age: 22.6) and 29 controls (age range: 14–45; mean: 22.3) were scanned at 1.5T to acquire structural and diffusion-weighted MRI images of the whole head. The PWS ranged in stuttering severity on the SSI-3 from very mild to very severe. Diffusion data were processed using the FMRIB Diffusion Toolbox. Probabilistic tractography (via ProbtrackX) was used to reconstruct fibre tracts for each participant. Average FA and MD were calculated for each tract separately. One-tailed t-tests were used to compare groups, using false discovery rate correction. The PWS had significantly lower FA in the left FAT compared with controls (controls mean=0.359, SD=0.02; PWS mean=0.345, SD=0.01) but did not differ on the right. There was no relationship between FA in the left FAT and stuttering severity in the PWS. MD in the FAT did not differ between the two groups in either hemisphere. As hypothesized, there were no differences between groups in the UF. This analysis in a larger cohort of people who stutter suggests that the microstructure of the left FAT is abnormal. However, this was manifest as reduced FA (a measure of white matter integrity); no differences in MD (as had previously been reported) were detected. Although these results indicate that there are differences in the microstructure of this tract in people who stutter, it is unknown whether these differences are a cause or a consequence of stuttering. Further analyses of additional datasets will be conducted with the aim of replicating the finding in the left FAT and increasing power to detect differences in MD should they exist. D14 Investigating the Microstructure of Language-Related White Matter Tracts in Developmental Language Disorder Harriet Smith1, Saloni Krishnan1, Hannah Willis1, Gabriel Cler1, Dorothy Bishop1, Kate Watkins1; 1University of Oxford Introduction: DLD (previously known as specific language impairment) is characterised by unexplained difficulties in learning one’s native language and affects at least 7% of children. White-matter differences related to DLD remain relatively under-specified, but abnormal connectivity has been identified in several tracts associated with language processing in children with DLD (Vydrova et al., 2015). We used diffusion tensor imaging (DTI) and probabilistic tractography to examine the microstructure of four language tracts: the arcuate fasciculus (AF), frontal aslant tract (FAT), extreme capsule fasciculus (ECF) and uncinate fasciculus (UF), in children with DLD. We hypothesised that children with DLD would show reduced fractional anisotropy (FA) in these tracts. Methods: 16 children with DLD (14 male, 2 female; age range: 10-15 years; mean age: 11.6 years) and 44 typically-developing controls (13 male, 31 female; age range: 10-15 years; mean age: 12.2 years) were scanned at 3T to acquire T1- and diffusion-weighted MRI images of the whole head. Diffusion images were acquired with 100 distinct directions, 50 at each of two b-values (1000 and 2000 s/mm²), at 2-mm spatial resolution, with a multiband acceleration factor of 3. We used probabilistic tractography (ProbtrackX) to reconstruct fibre tracts for each participant, and extracted mean FA for each tract separately. These data were compared independently for each tract (AF, FAT, UF and ECF) using two-way repeated-measures ANOVAs, with a within-subjects factor of hemisphere and a between-subjects factor of group (DLD or control).
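A minimal sketch of the per-tract analysis just described for D14, a two-way ANOVA with hemisphere as a within-subjects factor and group as a between-subjects factor, using pingouin on placeholder FA values; note that pingouin's mixed_anova takes no covariates, so the intracranial-volume adjustment mentioned below is omitted here.

```python
# Illustrative version of the D14 per-tract analysis: a two-way ANOVA on
# mean FA with hemisphere within subjects and group between subjects,
# using pingouin on placeholder values.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
subjects = [f"s{i:02d}" for i in range(60)]            # 16 DLD + 44 controls
df = pd.DataFrame({
    "subject": np.repeat(subjects, 2),
    "group": np.repeat(["DLD"] * 16 + ["control"] * 44, 2),
    "hemisphere": ["left", "right"] * 60,
    "FA": rng.normal(0.51, 0.02, 120),                 # placeholder FA, one tract
})

aov = pg.mixed_anova(data=df, dv="FA", within="hemisphere",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])
```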
Intracranial volume was used as a covariate of no interest due to the gender imbalance between the groups. Results: The two groups did not differ in mean FA across the whole brain (mean ± SD: DLD = 0.27 ± .013; controls = 0.27 ± .008) or in intracranial volume (DLD = 1503 ± 152 cc; controls = 1469 ± 125 cc). Boys had significantly larger intracranial volumes than girls (t(58) = 3.93; p < .001; boys = 1547 ± 128 cc; girls = 1426 ± 111 cc), as expected. For the language tracts, FA was significantly lower in the DLD group than in the control group in the AF (F(1, 57) = 6.49, p = .014; DLD = 0.51 ± 0.02; controls = 0.52 ± 0.02). There was a trend towards significance for the group difference in mean FA for the ECF (p = .06; DLD = 0.47 ± 0.02; controls = 0.48 ± 0.02). These differences were the same across hemispheres. There were no group differences in the FAT or UF. Discussion: This preliminary analysis indicates that children with DLD have abnormal microstructure of the arcuate fasciculus in both hemispheres. The arcuate fasciculus is part of the dorsal auditory processing stream connecting the posterior temporal and prefrontal cortex. The extreme capsule fasciculus, part of the ventral processing stream, also showed a trend towards lower FA in the DLD group. It remains to be seen whether the extent of these structural deficits relates to the language deficits observed in children with DLD. Furthermore, it is unknown whether these differences are a cause or a consequence of language impairment. D15 A Role for the Motor System in Minimally Verbal Individuals with Autism Spectrum Disorder Maria Mody1,2,3, Seppo P. Ahlfors1,3, Baojuan Li4, Christopher Wreh1, Christopher J. McDougle2,3; 1Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 2Lurie Center for Autism, Massachusetts General Hospital, 3Harvard Medical School, 4School of Biomedical Engineering, Fourth Military Medical University, China Introduction: Individuals with autism spectrum disorder (ASD) are characterized by deficits in speech and language. The relationship between oral motor and manual motor skills is of particular interest in this population given the notable difficulties in these areas. In a recent study, we found that fine motor skills were positively associated with both receptive and expressive language abilities in children with ASD (Mody et al., 2017). Here we use magnetoencephalography (MEG) to further explore this motor deficit in ASD. Methods: Data were collected using a 306-channel MEG system from 11 minimally-verbal adults with ASD and 11 age- and gender-matched neurotypical controls (NT) (mean age: 25 years; right-handed) while they performed a self-paced button press task. Head movements were monitored throughout the recording and compensated for in the analysis. MEG data were co-registered with anatomical MRI in each subject. We focused on left hemisphere activation corresponding to dominant (i.e., right) hand button presses. Results: There were no significant differences between the ASD and NT groups in the number of left- or right-hand button presses. Source analysis of the MEG data using minimum norm estimation (MNE) revealed a similar sequence of motor activation in the two groups. However, there was a significant difference in amplitude (p < .008, corrected) between the groups in the left superior parietal area in an early time window (20-60 ms) and in the left supplementary motor area (SMA) in a later time window (60-100 ms).
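The source analysis in D15 rests on minimum norm estimation; a hedged MNE-Python sketch of that step and of the two time windows compared across groups follows. It assumes an evoked response, forward solution and noise covariance have already been computed; the regularization value is a conventional default, not a detail from the abstract.

```python
# Hedged sketch of the D15 source analysis: a minimum norm estimate from
# evoked MEG data, then mean absolute source amplitude in the early
# (20-60 ms) and later (60-100 ms) windows compared across groups.
import numpy as np
from mne.minimum_norm import make_inverse_operator, apply_inverse

def window_amplitudes(evoked, fwd, noise_cov):
    inv = make_inverse_operator(evoked.info, fwd, noise_cov)
    stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
    early = np.abs(stc.copy().crop(0.020, 0.060).data).mean(axis=1)
    late = np.abs(stc.copy().crop(0.060, 0.100).data).mean(axis=1)
    return early, late  # one value per source vertex
```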
Morphometric analysis of the subjects’ MRI data revealed cortical thickness differences in key speech, language and somatomotor areas, along with a reversed pattern of thickness across the hemispheres between the groups. Conclusions: Structural and functional brain differences between the ASD and NT groups provide insight into the results of our previous study. The SMA finding points to motor coordination as a potential basis for the speech deficits in minimally verbal adults with ASD. Disorders: Acquired D16 Longitudinal atrophy of the left inferior frontal gyrus following post-stroke aphasia Natalia Egorova1,2, Mohamed Khlif2, Emilio Werden2, Laura Bird2, Amy Brodtmann2; 1The University of Melbourne, 2The Florey Institute of Neuroscience and Mental Health, Melbourne Previous studies of post-stroke aphasia have focused on the functional and structural brain reorganisation that signals recovery. However, a hallmark of stroke is accelerated neurodegeneration. The aim of this study was to understand 1) whether post-stroke aphasia is associated with long-term brain atrophy extending beyond the lesion, and 2) whether post-stroke neurodegeneration related to language deficits is global or specific to the language neural network. We used FreeSurfer to longitudinally quantify structural brain volume changes in a control group (N=29), as well as in stroke participants with (N=32) and without (N=59) aphasia, assessed using the 16-item Token Test at 3 months after the insult. The two stroke groups differed significantly on the Boston Naming Test scores, as well as the COWAT Animal and FAS scores. We extracted percent volume change from 3 to 12 months for 82 regions of interest (ROI) covering the whole brain. For each ROI, we performed ANCOVAs controlling for age, sex, stroke severity, level of education, and total intracranial volume, correcting for multiple comparisons at the Bonferroni p<0.0006 level. Stroke participants with aphasia symptoms at 3 months showed significant atrophy (>2%, p=0.0001) of the orbital part of the left inferior frontal gyrus (left IFG-po) over just 9 months, compared to an average yearly brain loss of 0.2–0.5% in healthy aging. This brain volume reduction was not observed in either the control or the non-aphasic stroke participants (p=0.0003). None of the participants in the aphasic stroke group had lesions that overlapped with left IFG-po. Furthermore, there were no group differences in the rate of language decline, suggesting that the atrophy in the language network was not a marker of aggravated language deficit in the aphasic group but was triggered by the mere presence of aphasia early after stroke. We conclude that post-stroke language deficits are associated with accelerated structural decline beyond the lesion location but within the language brain network. Thus, aphasia affects the course of post-stroke neurodegeneration, specifically targeting the language network.
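To make the D16 analysis concrete, the sketch below runs one ROI's ANCOVA (percent volume change against group, adjusting for the covariates listed in the abstract) with statsmodels on placeholder data, applying the same Bonferroni criterion of 0.05/82 ≈ 0.0006.

```python
# Illustrative one-ROI version of the D16 ANCOVA on percent volume change.
# All values are random placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120  # 29 controls + 32 aphasic + 59 non-aphasic participants
df = pd.DataFrame({
    "pct_change": rng.normal(-0.4, 1.0, n),
    "group": rng.choice(["control", "aphasic", "non_aphasic"], n),
    "age": rng.uniform(45, 85, n),
    "sex": rng.choice(["M", "F"], n),
    "severity": rng.normal(5, 2, n),
    "education": rng.uniform(8, 20, n),
    "icv": rng.normal(1450, 130, n),
})

fit = smf.ols("pct_change ~ C(group) + age + C(sex) + severity + education + icv",
              data=df).fit()
# Joint test of the two group terms (baseline level: 'aphasic')
res = fit.f_test("C(group)[T.control] = 0, C(group)[T.non_aphasic] = 0")
print(res.pvalue, "significant:", res.pvalue < 0.05 / 82)
```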
D17 Poor action fluency performance is associated with reduced activity in motor and cognitive control regions in Parkinson’s disease Noémie Auclair-Ouellet1,2,3,4,5, Alexandru Hanganu3,4,5,6, Erin Mazerolle4,5,7, Stefan Lang4,5, Mekale Kibreab4,5, Mehrafarin Ramezani4,5, Tazrina Alrazi4,5, Tracy Hammer4,5, Jenelle Cheetham4,5, Iris Kathol4,5, Bruce Pike4,5,7,8, Justyna Sarna4,5, Davide Martino4,5, Oury Monchi4,5,9,10; 1School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, 2Centre for Research on Brain, Language and Music, Montreal, 3Centre de recherche de l’Institut universitaire de gériatrie de Montréal, 4Department of Clinical Neuroscience, Cumming School of Medicine, University of Calgary, 5Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, 6Département de psychologie, Faculté des arts et des sciences, Université de Montréal, 7Department of Radiology, Cumming School of Medicine, University of Calgary, 8Medical Physics Unit, Faculty of Medicine, McGill University, 9Department of Neurology and Neurosurgery, McGill University, 10Département de radiologie, Faculté de médecine, Université de Montréal Introduction: The processing of action words is thought to depend on motor semantic features and motor regions. However, compared to other types of words, action words are considered to be more demanding in terms of cognitive resources. Patients with Parkinson’s disease (PD) have motor symptoms and executive function difficulties, making them vulnerable to action word processing deficits. This study investigated the clinical, behavioural and neural correlates of action word processing deficits in PD. Methods: 48 idiopathic PD and 38 control participants were recruited. PD participants with a performance one standard deviation below the norm or lower on the action fluency test were identified (n = 15). All PD participants with a poor performance (PD-P) were male. They were compared to male PD participants with good performance (PD-G) (n = 19) and male controls (n = 16). All participants had an evaluation of their motor symptoms (UPDRS-III) and completed a comprehensive neuropsychological battery. Behavioural measures were analysed with one-way ANOVAs (alpha = 0.05). Participants also completed an fMRI version of the Wisconsin Card Sorting Test (WCST). The WCST requires learning and shifting between sorting rules based on feedback received after each trial. In this version of the test, participants are trained prior to scanning and feedback (correct or incorrect) is provided as a change in screen brightness. The contrast between receiving negative feedback and receiving positive feedback represents the planning of a set-shift and has been associated with frontostriatal activation. Group differences were analysed with permutation analysis (FSL randomise; 5,000 permutations; TFCE: alpha = 0.05). Results: Motor symptoms did not differ between the PD groups. Action fluency performance was significantly poorer in PD-P compared to PD-G and controls, but there was no difference between PD-G and controls. The same pattern was observed for letter fluency, while there was no difference between groups on animal fluency, naming, and sentence comprehension. PD-P had a significantly lower performance than PD-G and controls on an executive composite score.
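The D17 group comparison ran voxelwise in FSL randomise with TFCE; the placeholder sketch below shows only the underlying permutation-test logic for a single measure (5,000 label permutations, one-sided), not the voxelwise or TFCE machinery.

```python
# Permutation-test logic in the spirit of the D17 randomise analysis,
# reduced to a single scalar measure. All scores are placeholders.
import numpy as np

rng = np.random.default_rng(9)
pd_p = rng.normal(-0.5, 1.0, 15)  # placeholder scores, PD poor performers
pd_g = rng.normal(0.3, 1.0, 19)   # placeholder scores, PD good performers

observed = pd_g.mean() - pd_p.mean()
pooled = np.concatenate([pd_p, pd_g])
n_perm = 5000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    if perm[15:].mean() - perm[:15].mean() >= observed:
        count += 1
print("permutation p:", (count + 1) / (n_perm + 1))
```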
However, when dividing measures into verbal and non-verbal executive composite scores, there was a significant difference between PD-P and PD-G, and between PD-P and controls, on the verbal executive score, but no difference on the non-verbal executive score. On the WCST, PD-P completed fewer sorting rules and were slower on average than PD-G and controls, but there was no difference between PD-G and controls. During the planning of a set-shift, PD-G had greater activity than PD-P in the left orbitofrontal region, middle temporal gyrus and temporo-parietal junction, occipital lobe and putamen. Controls had greater activity than PD-P in those regions, the cerebellum and the caudate nucleus. Discussion: Action word deficit was not associated with more severe motor symptoms in PD. However, it was associated with difficulties in verbal executive function tests and reduced activity in regions associated with executive functions, cognitive control for language, and motor control. The reliance of action words on complete motor plans rather than motor features, and the effect of sex on action word processing in PD, should be further investigated. D18 Neural Synchronization During Language Processing in Listeners with Aphasia Lisa Johnson1, Stephen Wilson4, Grigori Yourganov1, Alexandra Basilakos1, Brielle Stark2, Dana Eriksson5, Roger Newman-Norlund1, Chris Rorden1, Julius Fridriksson1; 1University of South Carolina Department of Communication Sciences and Disorders, 2University of South Carolina Department of Psychology, 3Indiana University, Bloomington, 4Vanderbilt University Medical Center, 5University of Arizona Background: Listening to meaningful speech results in extensive reliable activity shared across multiple listeners (Silbert et al., 2014), and this neural coupling during language processing is integral for successful comprehension and social communication (Hasson et al., 2012). In persons with aphasia (PWA), receptive language is often impaired, thus affecting auditory comprehension. A possible explanation is that damage to certain cortical language areas affects the ability of the listener with aphasia to synchronize cortical oscillations similarly to neurotypical listeners. The present study tests this hypothesis by investigating neural synchrony during comprehension in right hemisphere (RH) ventral stream regions in PWA and neurotypical individuals. Aims: The purpose of this study was to investigate (1) differences in neural synchrony in RH ventral stream regions between PWA and neurotypical controls, and (2) the relationship between neural synchrony in the contralesional hemisphere and aphasia severity. Methods: Seven PWA (1 F, age M = 60.7±9.7 years) in the chronic phase of left-hemisphere stroke (months post-onset, M = 85±103.7) and 5 neurotypical controls (NT; 4 F, age M = 63.5±5.8) were included in the study. Participants underwent a battery of language and working memory assessments. In addition to structural MRI scans, we performed two functional MRI scans in which participants watched an abbreviated television show (7 minutes in length). We computed the mean BOLD response across RH ventral-stream regions of interest (ROIs) (Fridriksson et al., 2016). Neural synchrony within a group (PWA, NT) was estimated as the correlation between individual timecourses. We also computed the correlation between each PWA’s individual timecourse and the mean of the NT timecourses as a measure of similarity of each PWA to the NT group.
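A compact sketch of the synchrony measures just described for D18, on placeholder timecourses: pairwise Pearson correlations within a group, Fisher z-transformed and compared with a Welch t-test, plus each patient's correlation with the mean control timecourse.

```python
# Illustrative sketch of the D18 neural-synchrony measures.
# All timecourses are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
nt = rng.standard_normal((5, 420))   # controls x timepoints (placeholder)
pwa = rng.standard_normal((7, 420))  # patients x timepoints (placeholder)

def pairwise_rs(ts):
    """Pearson r for every pair of subjects' timecourses."""
    r = np.corrcoef(ts)
    return r[np.triu_indices_from(r, k=1)]

t, p = stats.ttest_ind(np.arctanh(pairwise_rs(nt)),   # Fisher z-transform
                       np.arctanh(pairwise_rs(pwa)), equal_var=False)
print("group difference in synchrony: t =", t, ", p =", p)

# Similarity of each patient to the control group
nt_mean = nt.mean(axis=0)
similarity = [np.corrcoef(ts, nt_mean)[0, 1] for ts in pwa]
print("per-patient similarity to the NT mean:", np.round(similarity, 2))
```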
Results: Timecourses were positively correlated across subjects within the NT (r = .73) but not the PWA (r = -.09) group. A t-test between z-transformed correlation coefficients revealed a significant difference between groups (t(45.82) = 2.7; p = .009). To identify which PWA demonstrated successful neural synchrony, timecourses from each PWA were correlated with an NT timeseries, which was the average of all members of the NT group. PWA with higher correlations to the NT timeseries were thus taken to demonstrate successful neural synchrony. PWA demonstrating successful neural synchrony were those with greater spontaneous speech (r = .65, p = .11), naming (r = .66, p = .10) and repetition (r = .60, p = .15) abilities, as well as milder aphasia (r = .68, p = .09). These correlations did not reach statistical significance, likely due to the small sample size. Summary: These preliminary results confirm findings shown elsewhere: typical listeners, over the course of a naturalistic comprehension paradigm, demonstrate neural coupling in expected regions (e.g., RH ventral stream). Here, we extend this to PWA, who showed heterogeneity in timecourses in the contralesional hemisphere. PWA whose timecourses most resembled those of neurotypical listeners demonstrated milder aphasia and milder impairments of naming, spontaneous speech and repetition. More broadly, this early finding suggests that neural activation within the intact hemisphere in PWA is related to impairment in language processing. D19 Establishing brain-to-behaviour prediction models of post-stroke aphasia: A systematic investigation of brain parcellations, multimodal imaging, and machine learning algorithms. Ajay Halai1, Anna Woollams2, Matthew Lambon Ralph1; 1MRC Cognition and Brain Sciences Unit, University of Cambridge, 2Neuroscience and Aphasia Research Unit, Faculty of Biology, Medicine and Health, University of Manchester In recent decades, structural and functional neuroimaging have radically improved our understanding of how speech and language abilities map to the brain in normal and impaired participants, including the diverse, graded variations observed in post-stroke aphasia. Despite the potential for a paradigm shift in neuroscience theory and clinical practice, only recently have a handful of studies begun to explore the reverse inference: creating brain-to-behaviour prediction models. In order to establish some key foundations for successful prediction models, in this large-scale study we systematically investigated four critical issues to determine the optimal: (1) behavioural measures to use as targets; (2) partitioning of the brain space for use as predictive features; (3) combination of structural and connectivity measures from multimodal neuroimaging; and (4) type of machine learning algorithms to generate predictions. There is increasing agreement that binary aphasia classifications are limited; furthermore, while it is possible to predict performance on individual neuropsychological tests, any assessment taps multiple underlying component abilities and hence performance across tests is often intercorrelated over patients. An alternative approach places patients as points in a continuous multidimensional space, where the axes represent primary neuro-computational processes. Therefore, we explored the influence of the core model factors while predicting four principal dimensions of language and cognition variation in post-stroke aphasia.
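As a hedged illustration of such a brain-to-behaviour prediction model (D19), the sketch below cross-validates a kernel regression from ROI-wise structural features to a continuous behavioural dimension score. scikit-learn has no multi-kernel learner, so kernel ridge regression stands in for the multi-kernel learning reported in the results below, and all data are synthetic.

```python
# Hedged sketch of a brain-to-behaviour prediction model in the spirit of
# D19: ROI-wise structural features predicting a behavioural dimension
# score with cross-validation. Kernel ridge regression is a stand-in for
# multi-kernel learning; all data are synthetic placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.standard_normal((70, 400))                   # patients x ROI features
y = X[:, :10].sum(axis=1) + rng.standard_normal(70)  # behavioural dimension

model = make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```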
Across all four behavioural dimensions, we consistently found that the best prediction models were derived from structural measures extracted from T1 scans, parcellated using a larger number of ROIs and submitted to a multi-kernel learning algorithm. Adding information on white matter connectivity (in vivo patient-specific diffusion-weighted data) did not improve the models. Our results provide a set of principles to guide future work aiming to predict outcomes in language disorders from brain imaging data. From a clinical implementation perspective, they suggest that the majority of the information needed for effective outcome prediction in stroke aphasia can be obtained using standard clinical MR scanning protocols, obviating the need for acquisition and processing of diffusion-weighted data. D20 French version of the Phonological Component Analysis: Preliminary results with nine participants Michele Masson-Trottier1,2, Karine Marcotte1,3, Carol Leonard4,5,7, Elizabeth Rochon5,6,7, Ana Inés Ansaldo1,2; 1Université de Montréal, 2Centre de recherche de l’Institut universitaire de gériatrie de Montréal, 3Centre de recherche de l’Hôpital Sacré-Coeur de Montréal, 4University of Ottawa, 5University of Toronto, 6Toronto Rehabilitation Institute, 7Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation Introduction. Anomia is the main and most persistent sign of aphasia. Among anomia therapy procedures, Phonological Component Analysis (PCA) [1] has proven effective in improving the naming capacities of some English-speaking persons with chronic aphasia. PCA uses phonological cues associated with the target word to elicit naming. The present study aims to identify the effects of an adapted French Canadian PCA therapy [2] on the accuracy and response time (RT) of native French speakers with aphasia. Methodology. The present work, part of a larger ongoing study, presents preliminary results from nine participants with chronic aphasia. Participants received 1-hour therapy sessions, 3 times per week, for 5 weeks (15 hours in total). Performance (accuracy and RT, when available) before therapy was compared to performance after therapy using related-samples Wilcoxon tests. Pre-therapy, post-therapy and 3-month follow-up scores on the Test de Dénomination de Québec (TDQ-60) [4] and the DVL-38 [5] were used to measure generalization and maintenance. Pre- and post-therapy functional connectivity (FC) changes during resting-state fMRI were also used to study the neurofunctional impact of the therapy in five of the nine participants (the other four were not fMRI-compatible). Preliminary results. At the group level, the therapy was effective at improving accuracy (rate) (from μacc-pre=0.67±0.02 to μacc-post=0.84±0.01, Z=8.776, p<0.0001) and RT (in seconds) (from μRT-pre=8.00±0.60 to μRT-post=5.84±0.42, Z=6.219, p=0.012). On the TDQ-60, six of nine participants improved at post-therapy, and three of those six maintained or continued improving at 3 months post-therapy. The therapy also significantly increased the FC between the canonical language network [6] and the action observation network [7], and decreased the FC between the action observation network [7] and the mental imagery network [7]. Conclusions. The results replicate those obtained in previous work [1]: PCA leads to improvements in naming for some participants with aphasia. It is important to continue this work to find markers predicting the effectiveness of this therapy.
In addition, future studies will examine the neurobiological substrates supporting the effectiveness of PCA. [1] Leonard et al. Aphasiology 22, 923-947, doi:10.1080/02687030701831474 (2008). [2] Masson-Trottier et al. In Academy of Aphasia 55th Annual Meeting, November 5-7, 2017 (Baltimore, USA, 2017). [3] Jacqueline et al. Frontiers in Psychology 7, doi:10.3389/conf.fpsyg.2016.68.00113 (2016). [4] Macoir et al. Aging, Neuropsychology, and Cognition, 1-14 (2017). [5] Hammelrath (Isbergues: Ortho-Édition, 2005). [6] Baldassarre et al. Neurology 92, e125-e135, doi:10.1212/WNL.0000000000006738 (2019). [7] Courson et al. In Tenth Annual Society for the Neurobiology of Language Meeting. D21 No selective action verb impairment in Parkinson’s disease: evidence from Danish patients reading naturalistic texts Andreas Højlund1, Marie Louise Holm Møller1,2, Sabine Grene Thomsen1,2, Karen Østergaard1,3, Mikkel Wallentin1,2; 1Center of Functionally Integrative Neuroscience (CFIN), Aarhus University, 2Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, 3Department of Neurology, Aarhus University Hospital The present study investigates whether patients with Parkinson’s disease (PD) are impaired in their processing of action-related verbs when reading naturalistic stories. Previous research suggests that PD patients exhibit difficulties in naming, producing, remembering and identifying action verbs. García et al. (2018) recently found that this specific deficit in PD patients seems to be present even when reading naturalistic stories (in Spanish) rather than isolated words or sentences. The present study is a replication of García et al.’s (2018) study with Danish-speaking PD patients and texts in Danish, and at the same time an extension of the original study, adding a crucial new contrast with both action and non-action verbs embedded in the same text. To this end, we constructed 2 × 2 naturalistic stories in Danish, with each pair of stories closely matched on several linguistic factors (e.g., word frequency and readability), similar to García et al.’s stimuli. The first pair of texts, closely mirroring García et al.’s design, included one text with a high degree of action content and one with a high degree of non-action content, while the second pair of texts integrated action and non-action content in both texts. With the second pair of texts, we sought to investigate whether the specific deficit for action content found in García et al.’s study could be due to a substantial build-up of action content over the course of one text compared to the other. 28 PD patients and 28 age- and gender-matched controls read the four stories and answered questions about situational (mainly time and place), action-related and non-action-related content. This allowed us to test the hypothesis that PD patients would perform worse on action content than on situational and non-action content compared to controls. Results showed no significant differences in performance between PD patients and controls. In fact, for several contrasts, equivalence tests showed no practical difference between the two groups’ performances.
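Equivalence claims like the one just reported for D21 are commonly assessed with two one-sided tests (TOST); a placeholder sketch using statsmodels follows, with an assumed ±5% accuracy margin that is not taken from the abstract.

```python
# TOST equivalence check in the spirit of the D21 equivalence tests.
# Accuracies and the ±5% margin are placeholder assumptions.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(5)
pd_acc = rng.normal(0.82, 0.08, 28)    # placeholder accuracies, PD group
ctrl_acc = rng.normal(0.83, 0.08, 28)  # placeholder accuracies, controls

p, lower, upper = ttost_ind(pd_acc, ctrl_acc, low=-0.05, upp=0.05)
print("TOST equivalence p:", p)  # small p: difference lies within the margin
```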
However, we did see a significant main effect of verb type, as both groups generally performed worse on non-action-related questions than on action-related and situational questions, suggesting that non-action content may be generally harder to remember (also when embedded in naturalistic stories). Our finding that Danish-speaking PD patients do not seem to be specifically impaired in their action language processing underlines the importance of testing such claims in cross-linguistic and cross-cultural designs. Further research is needed to properly delineate whether typological differences between Spanish and Danish may affect PD patients’ language processing differently or whether the originally reported effects are not replicable. In broader terms, research within the field of action language processing in PD patients is still in its early stages and it is thus still unclear whether action verb impairment is a sui generis deficit in PD.
D22 Longitudinal Recovery of Auditory Comprehension following Stroke Juliana Baldo1, Timothy Herron1, Brian Curran1, Sandy Lwi1, Krista Parker1, Maria Ivanova1,2, Nina Dronkers1,2; 1Veterans Affairs Northern California Health Care System, 2University of California, Berkeley Introduction: Aphasia recovery research has largely focused on changes during the first year following stroke, but it has been suggested that clinical progress continues to take place after this time. Also, recovery studies have typically focused on speech and language broadly, with few studies focusing on the recovery of auditory comprehension. To address these gaps, we examined the demographic and neuroanatomical variables associated with natural recovery of auditory comprehension deficits following stroke during the first year and beyond. Methods: We retrospectively analyzed data from 46 stroke patients in our database who met the following criteria: 1) single, left hemisphere stroke; 2) right-handed; 3) native English speaker; 4) language testing at two or more time points with no intervening treatment; 5) available CT or MRI lesion reconstruction; and 6) no previous neurologic or severe psychiatric history. Auditory comprehension scores were taken from the Western Aphasia Battery (WAB), which includes both single-word and sentence-level auditory comprehension subtests. Patients’ average auditory comprehension scores ranged from very impaired to mildly impaired (range = 18-90% correct; mean = 69% correct). The sample included patients with Broca’s aphasia, Wernicke’s aphasia, global aphasia, conduction aphasia, and anomic aphasia. A mixed effects model using a single, fixed recovery slope was used to test for the best auditory comprehension recovery time transformation by finding the best fit to the longitudinal patient data, indicated by the lowest Bayesian Information Criterion (BIC) score under the model. To identify the neural predictors of recovery, Voxel-based Lesion Symptom Mapping (VLSM) was used to determine brain regions associated with improvements in patients’ comprehension scores over time. Results: Curves used in survival analysis across time, such as the Log-Logistic, the Lomax, and the Exponential-Log, provided the best-fit transformations for the longitudinal recovery data and showed that auditory comprehension scores often improved substantially in both year 1 and year 2 of stroke recovery, and to a lesser extent in subsequent years. With respect to predictor variables, age was a significant factor in auditory comprehension recovery, but gender, education, and lesion volume were not significantly related. The VLSM analysis using the best Log-Logistic recovery transformation identified a significant region of left superior temporal cortex and underlying white matter that was associated with poor recovery of auditory comprehension. Conclusions: Findings from this study of 46 patients with aphasia confirm suggestions that spontaneous language improvement continues well beyond the first year following stroke, and that the degree of improvement is affected by age and by the presence of lesions in left superior temporal cortex and adjacent white matter. D23 Neural correlates of spoken and written language comprehension in chronic stroke aphasia Niki Drossinos Sancho1, Ajay D. Halai2, Lauren Cloutman1, Anna M. Woollams1; 1University of Manchester, 2MRC Cognition and Brain Sciences Unit, University of Cambridge Aphasia affects around 30% of stroke survivors subacutely and persists chronically in 20%. Although much research has considered spontaneous recovery in the first year post-stroke (Hillis et al., 2018), very little is known about ongoing behavioural change in the chronic phase. In this study we considered the neural predictors of change in spoken and written sentence comprehension (as assessed by the Comprehensive Aphasia Test, CAT) over an interval of at least one year in 34 chronic stroke aphasic patients. Detailed T1 and DWI imaging was acquired at the first assessment. Normalised change per year in sentence comprehension was used in Voxel-Based Correlational Methodology (VBCM) analyses (Tyler et al., 2005) to investigate the neural predictors of behavioural change in T1 and in Anatomical Connectivity Maps (ACM; Embleton et al., 2007) derived from DWI. Although the magnitude of change was modest, whole-brain analyses using a voxel-level p-threshold of p<.005 revealed a number of clusters associated with change over time, as evaluated with the AAL and NatBrainLab atlases. Amount of spoken CAT change was positively associated with a 498-voxel cluster in T1, centred on the right temporal pole (extending into the inferior temporal gyrus, amygdala, fusiform gyrus and parahippocampal gyrus, and including the uncinate fasciculus and inferior longitudinal fasciculus). Amount of written CAT change was positively associated with a 249-voxel cluster in T1 in the left cerebellum (extending into the vermis and superior cerebellar peduncle). Amount of spoken CAT change was positively associated with a 1197-voxel cluster in ACM centred on the right cuneus (extending into the right precuneus and cingulum, left lingual gyrus and occipital and calcarine cortices, and the right corpus callosum and cingulum). Amount of written CAT change per year was positively associated with a 1177-voxel cluster in ACM centred on the right anterior cingulum (extending into medial and superior frontal gyri and orbitofrontal cortex, and the right cingulum and right corpus callosum).
ACM effects for both CAT modalities fell below FWE-corrected p<.05 at the cluster level. These results indicate that structural aspects of the intact right hemisphere can predict ongoing behavioural change in chronic stroke aphasia due to a left MCA stroke. The results implicating the right anterior temporal lobe are consistent with previous work on prediction of recovery from the subacute to the chronic stage using fMRI of speech comprehension (Saur et al., 2010), and with recent work showing right temporal regions underpinning changes in chronic stroke aphasia (Hope et al., 2017). Moreover, our results indicate that measurements of structural connectivity are more sensitive than standard T1, in line with recent work concerning prediction of recovery from the subacute to the chronic stage with DWI (Forkel et al., 2018). These results motivated our ongoing work using a wider range of in-depth assessments to provide greater sensitivity to change across a variety of language domains in chronic stroke aphasic patients. D24 Parsing trimorphemic words in sentences: Evidence from aphasia Roberto G. de Almeida1, Maude Brisson-McKenna1; 1Concordia University Introduction: Linguistic productivity is intrinsically dependent on the ability to compute complex hierarchical structures. In morphology, this ability is determined by selectional restrictions of roots and affixes. For instance, in a word such as unstoppable the prefix un- attaches to the complex adjective stoppable, not to the verb stop (thus ruling out *unstop). In the case of unlockable, however, both possibilities exist: [[un][lockable]] “not able to be locked” or [[unlock][able]] “able to be unlocked”. Proper parsing of these trimorphemic structures is thus key to accessing their meaning in natural language use. Few experimental studies have investigated the parsing and interpretation of these types of words in isolation and in sentence contexts (de Almeida & Libben, 2005; Libben, 2003; Libben, 2006; Pollatsek, Stockall, Drieghe, & de Almeida, 2010), with results pointing either to a right-branching or to a left-branching preference, with factors such as context and frequency affecting later, not initial, stages of analysis. We investigated morphological parsing in individuals with stroke and aphasia aiming to understand (a) whether there is a default parsing strategy, (b) the potential influence of sentential-semantic context on parsing preferences, and (c) the breakdown of morpho-semantic processing. Method: Participants were 14 individuals with stroke, 12 of whom were diagnosed with aphasia (3 fluent [FL], 2 mixed [MX], 2 mixed but predominantly non-fluent [MN], 5 non-fluent [NF]) and 2 of whom were RH-damaged. Controls were 30 healthy individuals matched to the clinical groups in age, sex, and education. All were native speakers of English. Stimuli were 48 sentences containing ambiguous trimorphemic words (e.g., unlockable), with 24 biasing towards the left-branching and 24 towards the right-branching analysis of the trimorphemic word (e.g., ‘When the zookeeper went to unlock/lock the cage, he found it was unlockable’). In addition, the materials included 24 sentences containing left-branching words ([[refill][able]]) and 24 sentences containing right-branching words ([[un]stoppable]). These sentences were divided into two booklets, with each participant completing one booklet.
Participants rated each sentence on a 5-point scale (Rating task) and were then asked to indicate, by drawing a line on a target word, where in the word they thought a separation could be made (Parsing task). Results and Discussion: Analyses by items (Mycroft et al., 2002) were based on deviations from the control group. With the exception of NF, all groups showed a strong preference for right-branching. Of note were the NF and MX groups, which showed greater deviations in parsing strategies (F(1, 11) = 5.43, p = .04, ηp² = .56), with NF showing a significant left-branching preference for all word types, irrespective of meaning, even with unambiguous right-branching words (thus yielding *[[unthink]-[able]]). MX showed the reverse pattern, with a significant preference for prefix-stripping (right-branching, thus yielding *[[re]-[fillable]]). The results are compatible with online studies (de Almeida & Libben, 2005; Pollatsek et al., 2010) suggesting that both parses are available during initial stages of analysis, regardless of context. But the results also show that the NF and MX aphasia groups are the least sensitive to morpho-semantic restrictions, applying default parsing strategies that lead to semantic anomalies. Language Therapy D25 Differential effects of intensive language-action therapy on processing grammatical word class and semantics demonstrated by functional magnetic resonance imaging Felix R Dreyer1, Lea Doppelbauer1, Benjamin Stahl2, Guglielmo Lucchese3, Bettina Mohr2, Friedemann Pulvermüller1,4,5,6; 1Brain Language Laboratory, Freie Universität Berlin, 2Charité Hospital Berlin, 3University Medicine Greifswald, 4Humboldt Universität zu Berlin, Cluster of Excellence Matters of Activity. Image Space Material, Berlin, Germany, 5Berlin School of Mind and Brain, 6Humboldt Universität zu Berlin, Cluster of Excellence Matters of Activity. Image Space Material Introduction: Previous discussions of the neural correlates of aphasia therapy effects have centered predominantly on the issue of differential hemispheric contributions to aphasia recovery, with studies showing rather heterogeneous results in favor of a compensatory role of perilesional left hemispheric areas, as well as (homologue) areas of the right hemisphere (Saur & Hartwigsen, 2012). The extent to which these neural correlates depend on the processing of specific semantic and grammatical word types has received comparatively little attention. The current study therefore investigates differential effects of aphasia therapy between semantic and grammatical word types using fMRI and standard behavioural aphasia measures. Methods: Sixteen chronic stroke patients suffering from aphasia received intensive language-action therapy (ILAT; also known as Constraint-Induced Aphasia Therapy, CIAT) for 32 hours. Before and after therapy, all patients participated in a passive reading fMRI paradigm. Stimuli consisted of hash-mark strings, which served as a visual baseline, as well as nouns and verbs with either abstract or concrete semantics, with categories matched on psycholinguistic properties. Therapy outcome for each patient was assessed by comparing pre- and post-therapy results on the Aachener Aphasie Test (AAT) battery. Changes in behavioral performance were correlated with changes in fMRI signal strength across the different stimulus categories applied.
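A minimal sketch of the behaviour-imaging correlation just described for D25: per-cluster post- minus pre-therapy signal change against AAT score change, Spearman-correlated with Bonferroni correction. All values are placeholders.

```python
# Illustrative version of the D25 behaviour-imaging correlation.
# Signal-change and score-change values are random placeholders.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
signal_change = rng.standard_normal(16)                      # 16 patients
aat_change = 0.5 * signal_change + rng.standard_normal(16)   # placeholder

rho, p = spearmanr(signal_change, aat_change)
n_tests = 2  # e.g., one test per cluster examined
print("rho =", rho, "; significant after Bonferroni:", p * n_tests < 0.05)
```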
Results: In a 2 (Time: pre/post therapy) × 2 (Word Class: noun/verb) × 2 (Semantics: abstract/concrete) second-level design on first-level contrasts of proper words against the visual baseline, a main effect of Time was revealed in the Thalamus of both hemispheres at p < .005 (unc.) and k > 50. Furthermore, at the same statistical thresholds, a Time × Semantics interaction was found in frontal regions, the Precuneus and Thalamus of the right hemisphere, and the Cingulum of both hemispheres. Correlation analysis revealed that only the post- vs. pre-therapy signal increase for abstract stimuli in the Time × Semantics clusters correlated significantly with language perception measures (Spearman’s rho = .68, p = .025, Bonferroni corrected). Cluster-specific analysis showed that signal change for abstract stimuli in the right Precuneus (Spearman’s rho = .71, p = .008, Bonferroni corrected) and the Cingulum (Spearman’s rho = .72, p = .006, Bonferroni corrected) correlated strongly with increases in AAT Language Perception measures. In contrast, no such correlations were observed for concrete-stimulus signal changes, or for noun/verb or general time effects in the respective clusters. Discussion: The current results indicate that therapy effects transfer between semantic word types. The training material of ILAT consists of pictures showing concrete objects, while abstract material is not used in therapy. In contrast, the passive reading fMRI paradigm showed correlations between fMRI signal change and AAT score improvement specifically for abstract but not for concrete stimuli. Furthermore, in line with previous approaches targeting the neural correlates of conventional therapy (Menke et al. 2009) or CIAT/ILAT effects (Mohr et al. 2014), the current findings highlight activation increases in the right hemisphere as being related to behavioural language improvement. Meaning: Lexical Semantics D26 Developmental changes in the inferior frontal cortex for processing Chinese classifiers Shu-Hui Lee1, Chi-Lin Yu1, Chuan-Ching Liao2, Ting Chen2; 1National Tsing Hua University, 2National Taiwan University Reading ability is associated with maturational changes in the brain and is essential for functioning in our daily life (Johnson, 2011). Although the specialization of the language network is well established in alphabetic languages (Brauer & Friederici, 2007), we do not know whether these findings generalize to Chinese, especially for children. We studied violations of Chinese classifiers, a relatively constrained phrase structure. By adopting this classifier violation paradigm, the present functional magnetic resonance imaging (fMRI) study aimed to clarify the neural correlates of processing phrase-level semantics. Forty-six typically developing children (aged 7-15, 27 females) were asked to perform semantic congruency judgments on congruent, intra-classifier (IA) violated, and inter-classifier (IE) violated phrases. The IA and IE violations involved changing a correct classifier to an incorrect classifier of the same category (e.g., count-count or mass-mass) or of a different category (e.g., count-mass or mass-count), respectively. The comparison of the IE violation vs. the IA violation produced greater activation in the left middle temporal gyrus (MTG) and left inferior frontal gyrus (IFG) for both age groups.
Moreover, greater activation was found in the IFG for the adolescents (aged 11-15) compared to the children (aged 7-10) on the contrast of [IE vs. IA]. The left IFG is proposed to be specialized for selecting relevant semantic information. Altogether, these results suggest that older children may have better executive control in selecting the appropriate classifier for the phrasal structure, resulting in greater activation in the left IFG. D27 The dynamic influence of newly established associations between concepts on the semantic network Jinfeng Ding1,2, Yufang Yang1,2; 1CAS Key Laboratory of Behavioral Science, Institute of Psychology, 2Department of Psychology, University of Chinese Academy of Sciences Previous studies on semantic memory have mainly focused on the relatively static neuro-cognitive architecture of conceptual representation, leaving its dynamic changes underexplored. Using the EEG (electroencephalography) technique, the current study aimed to investigate the updating of the conceptual system resulting from new associations established between known concepts during contextual reading. We constructed two types of contexts in which critical words were embedded in two sentences. In the new association (NA) condition, the critical words were associated with other known concepts. In the original meaning (OM) condition, the critical words represented their original meanings. Participants read the contexts and inferred the meaning of the critical words in the NA condition. Immediately after reading and 24 hours later, they performed a lexical decision task in the semantic priming paradigm and a cued-recall memory task. In the lexical decision task, the critical words from the contexts served as primes. Words semantically related and unrelated to the original meanings served as targets. In the cued-recall memory task, participants were presented with the critical words and asked to write down their meanings in the contexts. Immediately after reading, the EEG results revealed a semantic priming effect in the OM condition, with semantically related words eliciting a smaller N400 than unrelated words. However, this semantic priming effect was smaller in the NA condition, indicating that the new associations established between known concepts interfered with the semantic spreading activation of the original meanings in the semantic network. Twenty-four hours later, the semantic priming effects in the OM and NA conditions were not significantly different, although the cued-recall task showed that participants could remember more than half of the new associations on the next day. These results imply that the interference might be context-dependent and modulated by the strength of the new associations. Our findings suggest that the semantic network changes dynamically with language usage. D28 Identifying distinct functional subdivisions of the anterior temporal lobes Andrew S Persichetti1, Stephen J Gotts1, Alex Martin1; 1Laboratory of Brain and Cognition, NIMH/NIH The functional role of the anterior regions of the temporal lobes is a contentious issue in cognitive neuroscience. A wide range of functions has been posited, and not all are mutually exclusive. For example, different theories claim that the anterior temporal lobes serve as a general convergence zone or hub for all semantic information, whereas other models posit more selective roles in object recognition, language processing, and social cognition.
However, given the anatomical heterogeneity of this brain region, the anterior temporal lobes almost certainly support a wide array of cognitive functions. Our goal is to identify functionally distinct regions within the anterior temporal lobes and to understand the extent to which each region is involved in specific cognitive functions. To this end, we used resting-state fMRI from 88 participants (24 female) and a novel clustering method to identify subdivisions within anterior temporal cortex, as well as in medial temporal lobe structures (i.e., the amygdala and hippocampus). Specifically, we calculated the functional connectivity (Pearson correlation of the resting-state time series) between voxels within the anterior temporal lobes (defined as any temporal lobe voxels anterior to y=-35 in Talairach space) and all voxels outside of the temporal lobes. We then thresholded the resultant correlation matrices across a wide range of correlation values and clustered the group-average matrices using both Infomap and Louvain Modularity. Finally, the identified parcels were required to replicate across halves of the data (44 participants in each half) and across 10 randomized split-half samples. We found at least six functionally distinct cortical parcels in each hemisphere, as well as parcellations in the medial temporal lobe structures. Thus, the parcellation profile of the anterior temporal lobes is consistent with its involvement in diverse cognitive functions. Next steps of this project will involve identifying non-temporal-lobe targets of the subdivisions found in the current work and using this information to design experimental task manipulations that can more precisely dissociate these parcels based on their unique contributions to specific cognitive tasks. D29 Controlled Semantic Cognition Necessitates a Deep Multimodal Hub Rebecca Jackson1, Timothy T. Rogers2, Matthew A. Lambon Ralph1; 1MRC Cognition & Brain Sciences Unit, Cambridge University, 2University of Wisconsin-Madison Semantic cognition, or the controlled representation and use of conceptual knowledge, is a core process underlying language. The semantic system must satisfy a number of essential properties. Principally, it must 1) learn to form coherent, context-invariant conceptual representations by abstracting over episodes across time and by learning the complex non-linear relationships between features across different sensory modalities, and 2) dynamically use subsets of features to create a context-appropriate similarity space and produce context-dependent behaviours. These performance criteria are non-trivial to achieve, particularly because they necessitate the presence of, and interaction between, context-variant and context-invariant processes. A variety of architectures can be, and have been, theorised to subserve the semantic system; however, the ability of these architectures to synthesise context-invariant representations and task-specific outputs has never been formally tested. We designed a framework in which to test different possible architectures of a semantic network, to assess the plausibility of various theorised candidate cortical architectures. We investigated the importance of five architectural features: a hub, a multimodal hub, depth, hierarchical convergence across modalities, and the inclusion of sparse long-range connections.
An architecture employing a single, deep multimodal hub with sparse long-range connections from modality-specific inputs was identified as optimal. We also explored where the control signal should connect into the network, and the consequences of lesioning control and representation regions of the model. Implications for the neurobiology of the cortical semantic system, in health and disorder, are considered. D30 Modulation of functional connections from temporal cortex during second language word recognition in noise: does L2 - L1 phonological similarity matter? Sara Guediche1, Angela de Bruin1, César Caballero-Gaudes1, Martijn Baart1,2, Arthur G. Samuel1,3,4; 1Basque Center on Cognition, Brain and Language (BCBL), 2Tilburg University, 3Stony Brook University, 4Ikerbasque Basque Foundation for Science Spoken word recognition can be hindered by noisy listening conditions, especially when listening in a second language. The current study aims to elucidate the functional networks, and the modulations in connectivity, that support accurate word recognition in a second language under adverse listening conditions. In particular, we investigate two factors known to affect word recognition: 1) the phonological similarity of a target word to its native-language translation equivalent, and 2) the semantic relationship of a preceding prime word. To this end, a fast event-related fMRI experiment (using a multi-echo sequence) was conducted. Highly proficient bilingual participants (Spanish-Basque, L1-L2) heard prime-target pairs in their second language (Basque) and performed a lexical decision task on the targets. Word targets varied in the degree of phonological-lexical overlap (cognate type) with the native language (identical cognates, partial cognates, or non-cognates). The preceding prime words were either semantically related or unrelated (semantic priming) to the word targets. We examine whether L2 phonological similarity to L1 affects the integration of information across acoustic and semantic levels of processing, and whether the interaction between the two factors involves areas implicated in executive/language control. Not surprisingly, semantic priming improved performance. In contrast, having an L1 cognate impaired performance (replicating a prior behavioral study). For the fMRI analysis, correct and incorrect responses were modeled separately in the GLM. A whole-brain voxelwise ANOVA (N = 23) on correct responses with Semantic Priming (Related, Unrelated) x Cognate Type as factors showed a significant interaction between semantic relatedness and cognate type in a network consisting of temporal, parietal, and frontal areas. Previous studies in monolingual listeners have shown that connections between frontal and temporal cortical areas are modulated by a predictive context (e.g., Obleser et al., 2007; Gow, 2015; Sohoglu and Davis, 2016). Much of this research has focused on the influence of frontal areas on activity in temporal areas. Here, we conduct a context-dependent functional connectivity analysis (generalized psychophysiological interactions, gPPI) using a voxel in the posterior superior temporal gyrus (pSTG) as a seed, in order to determine how the relatedness of the prime and the cognate type modulate its functional connections during accurate word recognition (correct responses).
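In outline, gPPI augments a standard GLM with one interaction regressor per condition, formed from the seed timeseries and that condition's task regressor, alongside the seed and task regressors themselves. The Python sketch below shows this construction on placeholder data; it simplifies by forming the interaction at the BOLD level, whereas standard gPPI implementations first deconvolve the seed signal to the neural level, and all onsets, durations, and names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical inputs: seed (pSTG) timeseries and per-condition onsets (s)
tr, n_vols = 2.0, 300
t = np.arange(n_vols) * tr
seed = np.random.default_rng(1).normal(size=n_vols)    # placeholder seed signal
onsets = {"related": [10, 70, 130], "unrelated": [40, 100, 160]}

def hrf(t):
    """Simple double-gamma canonical HRF."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def task_regressor(ons, dur=4.0):
    """Boxcar for one condition, convolved with the HRF."""
    box = np.zeros(n_vols)
    for on in ons:
        box[(t >= on) & (t < on + dur)] = 1.0
    return np.convolve(box, hrf(np.arange(0, 32, tr)))[:n_vols]

# gPPI design: seed + one task regressor per condition + one interaction
# regressor per condition (mean-centred seed x task) + intercept
psych = {c: task_regressor(o) for c, o in onsets.items()}
seed_c = seed - seed.mean()
ppi = {c: seed_c * p for c, p in psych.items()}
X = np.column_stack([seed] + list(psych.values()) + list(ppi.values())
                    + [np.ones(n_vols)])

# Fit the GLM at each voxel y; the betas on the ppi columns index how
# strongly coupling with the seed changes with condition
y = np.random.default_rng(2).normal(size=n_vols)       # placeholder voxel
betas = np.linalg.lstsq(X, y, rcond=None)[0]
```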
We predict that semantic relatedness modulates the functional connections between pSTG and more anterior and inferior temporal areas associated with lexical and semantic processing. Multi-echo fMRI has been shown to mitigate signal loss in regions susceptible to dropout; we therefore expect to better capture potential changes in functional connections to these regions. We also predict that native-language co-activation induced by cognate status may modulate cognitive demands, and thus connections to frontal areas, especially when the preceding semantic context is unrelated. Preliminary results from the gPPI analysis seem consistent with these predictions. Specifically, they suggest that, in noisy listening conditions, cognate status affects the need for cognitive control to coordinate lexical knowledge with the acoustic speech signal. D31 Adaptation of a semantic picture-word interference paradigm to language mapping with transcranial magnetic stimulation Magdalena Jonen1, Stefan Heim1,2, Marie Grünert1, Georg Neuloh1, Katrin Sakreida1; 1RWTH Aachen University, 2Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Germany Language mapping with neuro-navigated transcranial magnetic stimulation (TMS) helps to identify language-related cortical regions and is mostly applied for clinical purposes. A high-spatial-resolution approach has been proposed to explore cortical sub-areas, such as Broca's region. Functional imaging data suggest segregated semantic, syntactic and phonological processing in an anterior-to-posterior direction along the inferior frontal gyrus. To date, qualitative aspects of language processing in TMS language mapping studies have been rated subjectively. We therefore recently introduced a reaction-time-based picture naming paradigm with phonological picture-word interference (PWI) for more objective TMS mapping, which makes it possible to specifically address and control the level of phonological processing within Broca's region. The inhibitory effect of TMS on language processing reduced the behavioral phonological priming effect, which is characterized by accelerated naming responses to target pictures accompanied by auditorily presented, phonologically related distractor words. Here, we complementarily adapted a semantic PWI paradigm. Semantic relations, particularly categorical relations, instead tend to produce a behavioural semantic interference effect, i.e., inhibition of naming responses to target pictures accompanied by visually presented related distractor words. However, semantic priming has been shown for associative semantic relations, with a stronger effect for part-whole than for functional relations. We therefore chose part-whole relations so that our hypotheses concerning TMS-induced inhibition can be transferred. In this study we compared unimodal presentation (visual distractors) with multimodal presentation (auditory distractors) to test the efficiency of naming-response acceleration. We used 30 target pictures from a standardized set and combined them with a related distractor and an unrelated distractor from the same set. The item set was controlled for number of syllables, phonemes and graphemes, as well as for low lexical frequency in spoken German and for semantic category. In line with the constraints of the planned TMS study, we created six blocks of 30 target-distractor pairs and a pseudo-randomized session for each participant. Importantly, each target noun was required to be presented with the related and the unrelated distractor at the same position within the block in two out of the six blocks.
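One way to satisfy this positional constraint is to build the six blocks as three yoked pairs: within a pair, each target occupies the same serial position twice, once with its related and once with its unrelated distractor. The Python sketch below illustrates that scheme on placeholder item names; the authors' actual randomization procedure is not fully specified in the abstract, so this is an assumption-laden reconstruction.

```python
import random

random.seed(0)
targets = [f"target_{i:02d}" for i in range(30)]

def block_pair(targets):
    """Two blocks in which each target occupies the same serial position,
    once with its related and once with its unrelated distractor."""
    order = random.sample(targets, len(targets))   # positions for this pair
    related = [(t, f"{t}_related") for t in order]
    unrelated = [(t, f"{t}_unrelated") for t in order]
    return related, unrelated

# Three yoked pairs -> six blocks of 30 trials; shuffle block order per session
blocks = [b for _ in range(3) for b in block_pair(targets)]
random.shuffle(blocks)
session = [trial for block in blocks for trial in block]   # 6 x 30 trials
```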
After familiarisation prior to the experiment, 15 healthy German native speakers completed both conditions. A 2 × 2 ANOVA with the factors RELATION and MODALITY yielded only a marginally significant main effect of RELATION. The facilitatory effect, i.e., the difference between unrelated and related distractors, was, however, numerically higher in the unimodal condition than in the multimodal condition, in which no facilitation effect was shown. In a post-hoc item analysis we selected the ten most facilitating target pictures. The restricted analysis of these items supported a stronger semantic facilitation effect for unimodal presentation with visual distractors as compared to multimodal presentation with auditory distractors. Aiming at the most effective design of a semantic PWI paradigm in a TMS setting, we thus found a preference for visually presented part-whole distractors. The specific stimulus set showing a stronger semantic facilitation effect may provide a basis for efficient TMS language mapping that directly controls the level of semantic processing through the task. D32 Category-Level Semantic Top-Down Modulation of the N170 Jack E. Taylor1, Sara C. Sereno1, Guillaume Rousselet1; 1University of Glasgow INTRODUCTION: A common feature of current models of visual word recognition is the notion that readers' predictions of upcoming words, given semantic context, extend to the top-down prediction of visual word forms. One electroencephalography (EEG) component consistently shown to be sensitive to features of visual word forms is the N170. The present EEG study examined whether the N170 is sensitive to category-level semantic top-down modulation. METHODS: Participants (N=31) were presented with 496 stimuli, comprising equal numbers (N=124) of category-relevant words, control words, pseudowords, and random consonant strings. These were presented in two tasks: a lexical decision task (LDT) and a semantic categorisation task (SCT). Category-relevant words could belong to one of 6 categories of concrete nouns (presented in blocks). Control words were category-irrelevant and matched closely in terms of length and word frequency. Pseudowords and consonant strings were matched in terms of length. The order of tasks and the laterality of button responses were both counterbalanced randomly. It was expected that if participants actively predicted the visual word forms of words in the relevant category, there would be a task-stimulus interaction in the N170. Specifically, reduced amplitude (and possibly earlier latency) would be expected in the N170 for category-relevant words in the SCT relative to the LDT. Control words, meanwhile, would show no task effect. RESULTS: Left-lateralised N170 amplitude and latency were analysed at the trial level with a linear mixed effects model. While the previous finding of bottom-up sensitivity to word-like visual word forms was replicated in differences in amplitude and latency between random consonant strings and control words, no evidence of an interaction between task and stimulus was found. To quantify the extent to which there was evidence for the null hypothesis, a Bayesian mixed effects model was fit with the same maximal structure as the linear mixed effects model.
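A trial-level mixed model of this kind can be sketched in Python with statsmodels, as below. Note that this shows only a frequentist model with by-subject random intercepts; the maximal random-effects structure described here, and the Bayes factors reported next, would require richer tooling (e.g., lme4 and brms in R). All data and column names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000   # placeholder single trials
df = pd.DataFrame({
    "amplitude": rng.normal(-4, 2, n),   # single-trial N170 amplitude (uV)
    "task": rng.choice(["LDT", "SCT"], n),
    "stimulus": rng.choice(["relevant", "control", "pseudo", "consonant"], n),
    "subject": rng.integers(1, 32, n).astype(str),
})

# Mixed model with the critical task x stimulus interaction and by-subject
# random intercepts; random slopes would be added via the re_formula argument
model = smf.mixedlm("amplitude ~ task * stimulus", df, groups=df["subject"])
print(model.fit().summary())
```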
The hypothesis that the interaction between task and stimulus was equal to zero was favoured 100.74 times over the alternative hypothesis for N170 amplitude, and 105.18 times for N170 latency. CONCLUSION: It is suggested that if word-form prediction indexed by the N170 is indeed sensitive to semantic-level top-down modulation, then this is only detectable with a narrower range of candidate predictions than is possible at the level of semantic category. D33 Can computers understand word meanings like the human brain does? Assessing the correlation between EEG responses and NLP-generated word similarity Xing Tian1,2, Linmin Zhang1,2, Lingting Wang2,3, Jinbiao Yang4, Peng Qian5, Xuefei Wang6, Xipeng Qiu6, Zheng Zhang1,7; 1NYU Shanghai, 2NYU-ECNU Institute of Brain and Cognitive Science, 3East China Normal University, 4Max Planck Institute for Psycholinguistics, 5MIT, 6Fudan University, 7AWS Shanghai AI Lab Semantic representation, a crucial window into human cognition, has been studied mostly independently in several disciplines, including cognitive neuroscience and computer science. Within cognitive neuroscience, semantic relatedness can elicit N400 priming effects: target words in unrelated word pairs (e.g., apple-moon) elicit larger responses than those in related word pairs (e.g., star-moon) around 400 ms after target onset. Within computer science, it has been assumed that semantically similar words tend to appear in similar contexts, and thus that semantic similarity can be computed via co-occurrence frequency. Recently, with deep learning methods, natural language processing (NLP) models have learned semantic similarity from large corpora. Can the representational formats established independently in two complex systems – our brain and computers – be related? More specifically, to what extent can NLP models predict humans' N400 priming effects? Do distinct NLP models differ in their predictive power? To address these questions, we implemented a typical two-word priming paradigm with EEG recordings. We then evaluated several representative NLP models and tested which model best predicted the EEG responses. Assessing the correlation between the measurements of semantic similarity in these two representational formats can not only shed light on the nature of the N400, but also potentially provide a reliable evaluation for NLP models. We collected 32-channel EEG data from 25 participants (Chinese native speakers). Each participant read 240 critical trials (pairs of words) and 120 fillers (pairs of a word and a nonword) and performed a lexical decision task. For each millisecond and each channel, a correlation coefficient (r) was computed across critical trials between the averaged (over participants) EEG signals and the cosine similarity, given by an NLP model, between the prime and the target. Among the three NLP models involved in this study, CBOW (Mikolov et al., 2013) is based on word co-occurrence within local context; GloVe (Pennington et al., 2014) is based on word co-occurrence within both global and local context; and CWE, a model similar to CBOW, also captures single-Chinese-character-level information (Chen et al., 2015). For each model, the 32 × 1000 r values (channels × milliseconds) formed a heat map, showing how these r values evolve over time and across channels. We used permutation tests to find statistically significant r values.
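The channel-by-millisecond correlation map and its permutation threshold can be sketched as follows in Python; the embedding dimensionality, permutation count, and all data are placeholders, and the maximum-statistic permutation scheme shown is one common choice rather than necessarily the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_chan, n_ms = 240, 32, 1000

# Placeholder inputs: participant-averaged EEG per trial and per-trial
# prime/target word vectors (e.g., from GloVe)
eeg = rng.normal(size=(n_trials, n_chan, n_ms))
prime_vec = rng.normal(size=(n_trials, 300))
target_vec = rng.normal(size=(n_trials, 300))
cos = np.sum(prime_vec * target_vec, axis=1) / (
    np.linalg.norm(prime_vec, axis=1) * np.linalg.norm(target_vec, axis=1))

def corr_across_trials(x, Y):
    """Pearson r between vector x (trials,) and Y (trials, ...) along axis 0."""
    xz = (x - x.mean()) / x.std()
    Yz = (Y - Y.mean(axis=0)) / Y.std(axis=0)
    return np.tensordot(xz, Yz, axes=(0, 0)) / len(x)

r_map = corr_across_trials(cos, eeg)       # 32 x 1000 heat map of r values

# Permutation null: shuffle similarities across trials and keep the maximum
# |r| over all channels and milliseconds, controlling for multiple comparisons
null_max = np.array([np.abs(corr_across_trials(rng.permutation(cos), eeg)).max()
                     for _ in range(200)])
threshold = np.quantile(null_max, 0.95)    # |r| above this is significant
```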
We found that, for each of the three models, there were significant correlations between NLP-generated word similarity and EEG signals elicited between 200 and 300 ms after target onset in most posterior channels. The spatiotemporal profile of these EEG signals is consistent with the known spatiotemporal profile of N400 priming effects, suggesting that, overall, these NLP models can indeed predict elicited N400 responses. Moreover, the most robust correlation was found between the GloVe model and EEG responses in channel Oz around 300 ms after target onset (r = 0.173, p = 0.007). In sum, we found correlations between N400 responses and NLP-generated word similarity. Our findings revealed a specific time course of semantic processing, linked semantic representation in the human brain and in NLP models, and provided an objective and reliable evaluation for NLP models. Methods D34 Navigating the turbulent seas of lesion symptom mapping: Comparative analyses of univariate and multivariate lesion symptom mapping methods Maria V. Ivanova1,2,3, Timothy Herron2, Brian Curran2, Nina F. Dronkers1,2,4, Juliana V. Baldo2; 1University of California, Berkeley, 2Center for Aphasia and Related Disorders, VA Northern California Health Care System, 3National Research University Higher School of Economics, Moscow, 4University of California, Davis Lesion symptom mapping (LSM) tools are used to identify brain regions critical for a given behavior. Univariate lesion symptom mapping (ULSM) methods provide statistical comparisons of behavioral test scores in patients with and without a lesion on a voxel-by-voxel basis. Multivariate lesion symptom mapping (MLSM) methods consider the effects of all lesioned voxels in one model simultaneously and analyze their joint contribution to behavior. Advantages and disadvantages of both techniques have been extensively discussed in the literature, but very little systematic work has been done to empirically test these claims. In the current study we conducted a comprehensive comparison of ULSM and MLSM methods by analyzing their performance under varying conditions with artificial behavioral scores. We used lesion masks from 404 left-hemisphere chronic stroke patients. Thirteen LSM methods were compared: 5 ULSM (voxel-based linear LSM methods with different permutation-based FWER thresholds for multiple comparisons) and 8 MLSM (6 data reduction (DR) and 2 regression methods). Using artificial behavioral scores (based on lesion load to atlas-based anatomical ROIs), we investigated mapping power and accuracy for single and dual (network-type) anatomical target simulations. We tested the spatial precision of mapping across anatomical target location, sample size, noise level, and lesion smoothing, using different distance- and overlap-based metrics as indices of spatial accuracy. Additionally, we performed a false positive simulation, in which the behavioral target variable consisted of pure Gaussian white noise and thus should not lead to the detection of any anatomical areas as significantly related to behavior. Here, we evaluated the size and number of the false positive clusters.
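For concreteness, the basic ULSM building block, a voxelwise comparison of behavioral scores between lesioned and spared patients with permutation-based FWER control, can be sketched as follows in Python; the voxel count, minimum-lesion cutoff, and permutation count are illustrative placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_voxels = 404, 1000               # small placeholder voxel grid

lesions = rng.random((n_patients, n_voxels)) < 0.1   # binary lesion masks
scores = rng.normal(size=n_patients)                 # behavioral scores

def ulsm_t(lesions, scores, min_n=5):
    """Voxelwise t comparing scores of lesioned vs. spared patients."""
    tvals = np.zeros(lesions.shape[1])
    for v in range(lesions.shape[1]):
        les = lesions[:, v]
        if min_n <= les.sum() <= len(les) - min_n:   # minimum-lesion cutoff
            tvals[v] = stats.ttest_ind(scores[les], scores[~les]).statistic
    return tvals

t_obs = ulsm_t(lesions, scores)

# Permutation-based FWER control: permute scores across patients, record the
# maximum |t| over voxels, and threshold at its 95th percentile
null_max = np.array([np.abs(ulsm_t(lesions, rng.permutation(scores))).max()
                     for _ in range(100)])
significant = np.abs(t_obs) > np.quantile(null_max, 0.95)
```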
Single anatomical target simulations revealed: a) good spatial accuracy for ULSM methods with conservative FWER thresholds and for some of the simpler DR (e.g., SVD-based) and regression-based (e.g., SVR) MLSM methods; b) variable accuracy across spatial locations, with especially poor performance in cortical locations on the edge of the lesion masks (areas of lower power); c) more accurate localization with lesion mask smoothing for all LSM methods; d) the importance of having a sample of ≥ 64 patients (with the majority of MLSM methods requiring on average 10-20 more patients to achieve a ULSM level of spatial accuracy); and e) robustness of the weighted centroid as a measure of LSM statistical map location. Simulations with a dual anatomical target showed: a) more accurate localization with some of the DR MLSM techniques (e.g., LPCA), as well as with ULSM methods with relatively liberal cluster-based FWER thresholds; and b) the importance of having a sample of ≥ 100 patients. False positive cluster sizes were generally lowest for ULSM methods with conservative FWER thresholds and for regression-based MLSM methods. In summary, our simulations show no clear superiority of MLSM techniques over ULSM methods. Depending on the design of a particular LSM study and the specific hypotheses regarding the expected brain-behavior relationship, different LSM methods are indicated. In general, it is advantageous to implement both ULSM and MLSM methods in tandem to enhance confidence in the results, as significant matching foci identified with both methods are unlikely to be spurious. D35 How does eye-tracking in the MRI scanner compare to the lab? Evidence from a linguistic prediction task Jennifer Mack1, Colleen Ward1, Sofia Stratford1; 1University of Massachusetts-Amherst Introduction. Over the past two decades, eye-tracking (ET) has become one of the primary tools used to examine online language processes such as prediction. More recently, ET has been combined with functional magnetic resonance imaging (fMRI) to investigate the neurocognitive mechanisms of these processes. However, little is known about how ET measurements obtained in typical laboratory settings compare to those obtained in the scanner, given cross-environment differences such as lighting, noise, participant position, and ET camera mount. These factors might affect data quality, the sensorimotor processes underlying oculomotor control, and/or higher-level cognitive and linguistic processes (e.g., the likelihood of prediction). To examine these questions, in the present study we compared ET patterns in a linguistic prediction task performed in the lab and in the scanner. Methods. 50 neurotypical young, right-handed, English-speaking adults performed a linguistic prediction task. 25 participants did so under typical lab conditions (desktop-mounted ET system, brightly lit room, seated position, minimal background noise) and 25 in the scanner under typical conditions (long-range ET mount, brightly lit scanner bore, supine position, background scanner noise). Apart from these factors, the ET system and data collection procedures were matched across environments. In the task, two object pictures appeared and disappeared, one at a time. One contained two objects and the other a single object (e.g., two apples, one shirt).
Then, the participant heard a predictive cue ("Here is one …"/"Here are two …") and, following a brief interval, the picture that matched the cue in number (the target picture) re-appeared in its original position. Simultaneously, the participant heard a word, and indicated whether it matched the target picture. To test for differences in ET data quality and sensorimotor processes, we examined ET measures extracted from participants' eye movements following the appearance of the first picture at the beginning of each trial. To examine linguistic prediction, we extracted ET measures time-locked to the onset of the predictive cue, specifically the likelihood of an eye movement to the target picture's anticipated position prior to its re-appearance. Results. With respect to the appearance of the first picture, saccadic latencies were approximately 40 ms longer in the scanner than in the lab, and there was a significantly higher level of noise in the data (i.e., an increase in the rate of data loss and blinks). With respect to prediction, a higher rate of predictive eye movements was found in the scanner environment than in the lab. Conclusion. The study demonstrates that ET measures vary between typical laboratory and scanner environments, in part due to differences in data quality and sensorimotor processes. Higher-level cognitive processes such as linguistic prediction are also affected. Specifically, we observed a higher likelihood of prediction in ET measures collected in the scanner vs. the lab, which was evident despite the higher level of noise in the scanner-based data. Although further research is needed to identify the cause of this increase in prediction, one possibility is that reduced intelligibility of the auditory stimuli due to scanner noise contributes to compensatory predictive processing. Meaning: Combinatorial Semantics D36 Traveling back in time: does switching the focus to the initial state of the changed object come at a cost? Yanina Prystauka1,2, Gerry Altmann1,2; 1University of Connecticut, 2The Connecticut Institute for the Brain and Cognitive Sciences The theory of Intersecting Object Histories (IOH; Altmann & Ekves, 2019) postulates that events are "ensembles of intersecting object histories", and that the processing of a previously encountered object entails (at least transient) activation of its previous states, which compete for selection. For example, the processing of the final "onion" in "The chef will chop the onion, and then she will smell the onion" entails activation of both the intact and the chopped states of the onion. fMRI evidence (Hindy, Altmann, Kalenik, & Thompson-Schill, 2012) suggests that such reactivation and competition manifest as increased activation in posterior ventrolateral prefrontal cortex (pVLPFC), an area recruited during Stroop interference, and that this competition occurs regardless of whether the target state required by the context is the most recent (current) or the initial state of the object.
Perhaps surprisingly, switching the focus from the current to the initial state (as in "The chef will chop the onion, but first she will smell the onion") did not elicit stronger conflict (and higher activation in VLPFC) than selecting the most recent state representation, although previous behavioral (Clark & Clark, 1971; Mandler, 1986) and electrophysiological (Nieuwland, 2015; Politzer-Ahles, Xiang, & Almeida, 2017; Münte, Schiltz, & Kutas, 1998) research suggests that comprehending events described out of chronological order comes at increased processing cost. The current EEG study directly tests whether reversing the order of events (via language) has consequences for the interplay between alternative object states. EEG was acquired while participants (N=24) read sentences presented one word at a time. In a 2 × 2 design, we manipulated the degree of change that the object underwent ("The chef will chop/weigh the onion") and the order of events ("and then/but first, she will smell the onion"). A time-frequency analysis of EEG power, time-locked to the sentence-final determiner phrase, revealed a stronger suppression of alpha/beta power in sentences describing substantial change ("chop") and presenting events in chronological order ("and then") compared to all other sentences. This effect was most pronounced around 250-500 ms after the determiner onset. Such pre-target alpha/beta decreases have been associated with anticipatory processes and preparation for the input (Li, Zhang, Xia, & Swaab, 2017; Rommers, Dickson, Norton, Wlotko, & Federmeier, 2017; Piai, Roelofs, & Maris, 2014; Piai, Roelofs, Rommers, & Maris, 2015; Wang, Hagoort, & Jensen, 2018). This finding raises interesting questions regarding the constraining nature of state-change verbs and their effect on downstream sentence processing. The fact that no such preparatory signal was observed in response to sentences describing events in reverse order ("but first") suggests that in such scenarios our comprehension system is less likely to assume that the unfolding language will refer back to the previously introduced object. To summarize, the interplay between the order in which the events were presented and the degree of change that the events entailed manifested in the anticipatory region as increased prediction for the substantial-change & chronological-order condition.
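Alpha/beta power suppression of this kind is typically estimated by convolving single-trial EEG with complex Morlet wavelets and expressing power as change from a pre-stimulus baseline. A minimal single-channel Python sketch follows; the sampling rate, epoch limits, baseline window, and wavelet width are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np

sfreq = 500
rng = np.random.default_rng(0)
# Placeholder single-channel epochs (trials x samples), spanning -500..1000 ms
# around the sentence-final determiner onset
epochs = rng.normal(size=(60, 750))
times = np.arange(-0.5, 1.0, 1 / sfreq)

def morlet_power(data, freq, n_cycles=7):
    """Trial-averaged power at one frequency via complex Morlet convolution."""
    sd = n_cycles / (2 * np.pi * freq)
    t = np.arange(-4 * sd, 4 * sd, 1 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sd**2))
    conv = np.array([np.convolve(tr, wavelet, mode="same") for tr in data])
    return (np.abs(conv) ** 2).mean(axis=0)

# Alpha/beta power (8-30 Hz), expressed in dB relative to a pre-stimulus baseline
freqs = np.arange(8, 31, 2)
power = np.array([morlet_power(epochs, f) for f in freqs])
baseline = power[:, times < -0.2].mean(axis=1, keepdims=True)
power_db = 10 * np.log10(power / baseline)
# Negative values in the 250-500 ms window would reflect the kind of
# alpha/beta suppression reported for the chop / and-then condition
```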
D37 Bag of words precedes bag of arguments: The time course of computing argument identity in sentence comprehension Chia-Hsuan Liao1, Wing-Yee Chow2, Ellen Lau1; 1University of Maryland, College Park, 2University College London Unpredictable words typically elicit larger N400 responses than predictable words, but "role-reversed" sentences ("The waitress that the customer served…") are a notable exception. This observation has often been taken to provide insight into the speed or accuracy of argument structure computation. Chow et al. (2015) proposed that initial verb prediction is driven by the arguments in the same clause as the verb (the "bag-of-arguments" mechanism), even though it takes up to 1200 ms for argument role information to impact predictions of the verb (Momma et al., 2015). Here we focus on mapping the time course of one part of this computation that must either precede or co-occur with argument role assignment: identifying which noun phrases are arguments of the clause at all. We did this by extending the standard paradigm to include structures containing a clause boundary ("The waitress thought that the customer served…"), and evaluating sensitivity to this boundary on the N400 to the verb in two experiments that varied in stimulus-onset asynchrony. Notably, these experiments were conducted in Mandarin Chinese rather than English, because Mandarin allows tight matching of word position across the relevant conditions and is known to show English-like N400 insensitivity to role reversal (Chow & Phillips, 2013; Chow et al., 2018). Our materials were sentences modified from Chow et al. (2018). In the Baseline condition, the two arguments were in the same clause (the millionaire ba the servant fired, meaning: the millionaire fired the servant). The predictability of the target verb was 40%, based on offline cloze norming. In the Complement condition, the second argument was the subject of a separate complement clause (the millionaire thought the servant fired …), which reduced the predictability of the target verb to 0%. Sentences were presented with RSVP, with 20% of the sentences followed by a comprehension question. In this design, facilitation due to simple associations between the nouns and the verb would be equal across conditions, and an N400 predictability difference should only be observed if comprehenders have enough time to identify the arguments based on the clause boundary. In Experiment 1 (n=33), with a presentation rate of 800 ms/word, we did observe an N400 difference between conditions. This suggests that the grouping of arguments into clauses impacts verb predictions fairly rapidly, perhaps more rapidly than role information (Momma et al., 2015; Chow et al., 2018). In Experiment 2 (n=38) we aimed to identify a lower time limit for this "bag-of-arguments" mechanism by speeding up the presentation rate to 600 ms/word. With this slightly faster rate, we no longer observed N400 differences. These findings suggest that 600 ms was not long enough for the parser to compute argument identity information; at this point in time, the parser relies on simple word associations, a "bag-of-words" mechanism, to predict the verb. It takes up to 800 ms for the "bag-of-arguments" mechanism to exert its effects, such that an argument outside of the clause domain no longer constrains predictions of the verb in the embedded clause. D38 No language unification without neural feedback: How awareness affects combinatorial processes Valeria Mongelli1,2,3, Erik L. Meijs4, Simon van Gaal2,3,4, Peter Hagoort1,4; 1Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, 2Department of Psychology, University of Amsterdam, 3Amsterdam Brain and Cognition (ABC), University of Amsterdam, 4Donders Institute for Brain, Cognition and Behaviour, Radboud University How does the human brain combine a finite number of words to form an infinite variety of sentences? Describing the neural network subserving sentence processing, and what differentiates this network from single-word processing, is a major, still unachieved challenge in brain research. According to the Memory, Unification and Control (MUC) model, both semantic and syntactic combinatorial processes require feedback from the left inferior frontal cortex (LIFC) to the left posterior temporal cortex (LPTC). Single-word processing, however, may only require feedforward propagation of linguistic information from sensory regions to LPTC.
Here, we tested this core prediction of the MUC model by reducing visual awareness of words using a masking technique. Masking disrupts long-range feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that, at least under certain conditions, masked single words still elicit modulations of ERP components like the N400, a neural signature of linguistic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed four experiments in which we measured electroencephalography while 40 subjects performed a masked priming task. In Experiments 1 and 2, we investigated semantic combinatorial processing. Masked or unmasked words were presented either successively or simultaneously, thereby forming a short sentence (e.g. man pushes woman) that could be congruent or incongruent with a target picture. This sentence condition was compared with a single-word condition, in which single words (e.g. man) were followed by congruent/incongruent pictures. Experiments 3 and 4 aimed to test syntactic combinatorial processing. A masked/unmasked prime (e.g. he) was followed by an unmasked target (e.g. drives), forming syntactically correct or incorrect combinations (he drives vs. he *drive). This sentence condition was compared with a typical semantic priming task, in which masked/unmasked primes and unmasked targets formed congruent or incongruent combinations (e.g. cat-dog vs. cat-nurse). Overall, we found that both semantic and syntactic combinatorial processes were impaired when long-range feedback was disrupted by masking. Indeed, no ERP modulation was found in the masked sentence condition in any of the experiments. On the contrary, in all unmasked sentence conditions incongruent trials triggered an N400 effect. We also found an N400 effect in the masked single-word condition of Experiments 1 and 2, but not in Experiments 3 and 4. This suggests that experimental settings strongly affect masked single-word priming. Our results suggest that feedback processing from LIFC to LPTC is required for semantic and syntactic combinatorial processes but not for single-word processing, supporting a core prediction of the MUC model. These findings provide an important contribution to ongoing debates about the specific roles of different brain regions in a distributed language network. D39 Disentangling semantic association from semantic composition in the LATL Jixing Li1, Julien Dirani1, Liina Pylkkänen1,2; 1New York University Abu Dhabi, 2New York University Introduction: Although the process of composing two meanings (e.g., "coffee cup") is conceptually different from associating two words together in memory (e.g., "coffee, cup"), the brain regions supporting semantic composition have also been implicated in associative encoding. For example, compositional phrases such as "red boat" elicit increased left anterior temporal lobe (LATL) activity as compared to non-compositional phrases (e.g., Bemis & Pylkkänen, 2011; Pylkkänen et al., 2014), yet this region is also involved in associative memory tasks such as face-location associations (e.g., Nieuwenhuis et al., 2012).
The current study applies distributional semantic models to disentangle semantic association from semantic composition, and tests whether the LATL is indeed sensitive to semantic composition or simply tracks the association between words. Methods: 24 right-handed native English speakers (12 female, M=21.8 years) participated in the study. The experiment was a 2 × 2 design with associative strength (Low, High) and compositionality (List, Comp) as factors. Associative strength between the two words was determined by the cosine similarity between their word embeddings in the GloVe model (Pennington et al., 2014). High-association phrases (e.g., "French/France cheese") had a cosine similarity score greater than 0.3 and low-association pairs (e.g., "Korean/Korea cheese") had a cosine similarity score lower than 0.15. The composition condition consisted of country adjective and food noun pairs (e.g., "French/Korean cheese") and the list condition consisted of country noun and food noun pairs (e.g., "France/Korea cheese"). The critical stimuli (i.e., "cheese") were held constant across all four conditions. In each trial, participants indicated whether a target picture matched the preceding words. MEG data were recorded at 1000 Hz (200 Hz low-pass filter), noise-reduced, and epoched from 100 ms before to 600 ms after the onset of the critical word. A main analysis was conducted in the LATL (BA38), which has previously been implicated in both associative encoding and composition. MEG activity was averaged over all sources within the ROI. A non-parametric cluster permutation test (Maris & Oostenveld, 2007) with 10,000 permutations was performed to identify temporal clusters during which the localized activity differed significantly between conditions (p<0.05). Results: The 2 × 2 ANOVA yielded a highly significant cluster for association in the LATL from 227 to 264 ms (p=0.0012), within which the low-association phrases had greater activity than the high-association pairs. A main effect of composition was also found from 313 to 332 ms (p=0.013), where the adjective-noun phrases elicited higher activity than the list conditions. No interaction was found between association and composition. We further examined association and combination activity within subsets of the data and found that within the low-association condition, the combinatory effect was significant from 319 to 331 ms (p=0.048); within the list condition, association was significant from 214 to 263 ms (p=0.0014). Conclusion: Our results suggest that the LATL is modulated by both semantic association and semantic composition. Specifically, low-association phrases induced increased LATL activity at ~220 ms after word onset, whereas the combinatory effects came later, at ~310 ms.
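The cluster-based permutation logic of Maris & Oostenveld (2007), as applied to a single ROI timecourse, can be sketched as follows: compute paired t values at each timepoint, group contiguous supra-threshold timepoints into clusters, and compare each cluster's summed |t| against a null distribution built by randomly exchanging condition labels within subjects. The Python sketch below uses placeholder data; the cluster-forming threshold, permutation count, and epoch length are illustrative.

```python
import numpy as np
from scipy import stats, ndimage

rng = np.random.default_rng(0)
# Placeholder ROI timecourses: subjects x timepoints for two conditions
a = rng.normal(size=(24, 700))
b = rng.normal(size=(24, 700)) + 0.1

def max_cluster_mass(x, y, t_crit):
    """Largest summed-|t| cluster of contiguous supra-threshold timepoints."""
    tvals = stats.ttest_rel(x, y).statistic
    labels, n = ndimage.label(np.abs(tvals) > t_crit)
    if n == 0:
        return 0.0
    masses = ndimage.sum(np.abs(tvals), labels, index=range(1, n + 1))
    return masses.max()

t_crit = stats.t.ppf(0.975, df=23)          # cluster-forming threshold
obs_mass = max_cluster_mass(a, b, t_crit)

# Null distribution: randomly flip condition labels within subjects
null = []
for _ in range(1000):
    flip = rng.random((24, 1)) < 0.5
    x = np.where(flip, a, b)
    y = np.where(flip, b, a)
    null.append(max_cluster_mass(x, y, t_crit))
p = (np.sum(np.array(null) >= obs_mass) + 1) / (len(null) + 1)
```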
Syntax D40 Dynamic updating of native syntax Malathi Thothathiri1, Kelly Sharer1; 1The George Washington University Background: Recent results suggest that native syntax learning is life-long and not restricted to childhood. Adults update sentence production and comprehension based on the statistical properties of new input. Prediction-and-error-based learning has been posited as a likely mechanism: specifically, that during language perception, the parser predicts how a sentence is going to unfold and updates the language system based on match or mismatch between the predicted and actual structures. Hypotheses: We hypothesized that (1) such learning might utilize frontal-cortex-based cognitive control mechanisms that can detect and resolve mismatch between expected and actual outcomes; and (2) the recruitment of these regions would vary according to (2a) individual differences in cognitive control and (2b) the statistical associations between verbs and sentence structures. Methods: In multiple studies using different sentence structures, we exposed participants to native language (English) input that manipulated which verbs were experienced in which structures, and investigated the neural substrates of subsequent naturalistic comprehension and production using functional MRI. Based on prior findings, we focused on two well-known control regions: the anterior cingulate cortex (ACC) and the left inferior frontal cortex (LIFC). Individual differences in cognitive control were measured using Stroop. The production study examined a common syntactic alternation in English between double-object (DO) and prepositional-object (PO) datives (Mika gave Joan the pencil vs. Mika gave the pencil to Joan). Participants heard some English verbs only in DO, some only in PO, and others equally in both. Subsequently, they described new events using structures of their own choice. The comprehension study examined a commonly studied syntactic ambiguity in English between main-verb (MV) and relative-clause (RC) interpretations (The soldiers warned about the dangers… before the raid vs. …conducted the raid). Participants read MV and RC sentences, indicated reading completion, and answered periodic comprehension questions. Results: Behavioral results confirmed that adults adjust production and comprehension based on new input (not discussed). Neurally, for production, (1) ACC and LIFC were recruited; and (2) recruitment for producing the less common structure (DO) varied with (2a) individual differences in Stroop, with better performers showing greater activation, and (2b) a verb's statistical associations, with the highest ACC activation for producing verbs in the statistically unassociated structure. For comprehension, (1) LIFC was recruited when reading the less common structure (RC), and this recruitment decreased as participants received more exposure; and (2) the decrease in LIFC activation varied with (2a) individual differences in Stroop, with better performers showing a greater decrease, and (2b) a verb's statistical associations, with quicker adaptation for verbs that led to the greatest mismatch (i.e., prediction error). Conclusions: These studies reveal new insights about how adults dynamically update their native language. Frontal cognitive control regions help integrate new language input with expectations from prior language experience, with potentially different regions used at the response (motor production, ACC) versus representational (LIFC) levels. The neural substrates of updating also vary according to an individual's cognitive control abilities and a particular verb's statistical associations, suggesting flexible weighting of different brain regions during sentence processing. D41 Lower LIFG activation for higher syntactic complexity: MEG evidence from conceptually-matched Arabic stimuli Suhail Matar1, Julien Dirani2, Alec Marantz1,2, Liina Pylkkänen1,2; 1New York University, 2NYU Abu Dhabi Research Institute INTRODUCTION.
During language comprehension, many different types of combinatory operations are tightly correlated, including conceptual-semantic and syntactic ones. While a consensus is emerging that the left anterior temporal lobe participates in some form of conceptual combination, the hypothesis space for the neural underpinnings of syntax remains relatively wide. Currently, prominent hypotheses range from the localization of the syntactic Merge operation to the LIFG (Zaccarella & Friederici, 2015), to the virtual inseparability of syntactic processing from other computations during comprehension (Blank et al., 2016). However, it is precisely this difficulty in experimentally isolating and manipulating syntax (while controlling for semantic variables and avoiding pseudo-language) that has posed a major paradigmatic challenge to the syntax question. Here, we used the grammatical properties of adjectival modification in Arabic to vary the size of the projected syntactic tree while keeping conceptual combination identical. Specifically, by introducing or omitting (orthographically contiguous) determiners from noun-adjective combinations, we can create structurally smaller noun phrases (1a-b) and larger full sentences (1c) that all feature the same open-class lexical items: (1)a. Indefinite phrase – small structure: sha:ḥina ḥamra:' (gloss: truck red) = 'a red truck'; b. Definite phrase – small structure: al-sha:ḥina al-ḥamra:' (gloss: THE-truck THE-red) = 'the red truck'; c. Sentence – large structure: al-sha:ḥina ḥamra:' (gloss: THE-truck red) = 'The truck is red.' METHODS. In this MEG study, fifteen participants read grammatical phrases or sentences, as in (1). One third of the stimuli were followed by a task item: participants read a sentence with a gap, mentally filled the gap with the stimulus, and indicated whether the resulting sentence was grammatical and plausible. Our main aim was to identify activation sensitive to the phrase vs. sentence contrast, for which we tested four main ROIs typically cited in the syntax literature: LIFG, angular gyrus, LATL, and posterior superior temporal cortex. Additionally, given the evidence suggesting that the LATL is an integrative hub sensitive to conceptual, rather than syntactic, variables, we expected the combinations in (1a-c) to elicit more LATL activation than single-word controls ('red'/'the-red'), but not to elicit significantly different levels of activation compared to one another. RESULTS. Replicating prior work on the LATL, all combinatory conditions in (1) showed increased LATL activation compared to single-word controls in an early time window (140-200 ms), with no modulation by the complexity of the syntactic structure. In contrast, the LIFG's pars opercularis was sensitive to syntactic complexity, but in the opposite direction to that predicted by the Merge hypothesis: the syntactically simpler phrases generated more activation than the sentences (270-320 ms). CONCLUSION. Our results show a dissociation between semantic and syntactic processing of minimal phrases and sentences. In the LATL, conceptual combination elicited more activation compared to single words, regardless of syntactic complexity.
As for syntactic processing, the LIFG's role appears to be something other than syntactic structure building, perhaps reflecting structure projection, as has recently been proposed (Matchin et al., 2017): unlike our sentences, the noun phrases could be driving the anticipation or projection of upcoming syntactic structure. D42 Syntactic Priming in Brazilian Portuguese Sentence Comprehension: An EEG Study Mailce B. Mota1,2, Daniela Brito de Jesus1, Ali Mazaheri3, Katrien Segaert3; 1Federal University of Santa Catarina, 2CNPq, 3University of Birmingham In online sentence comprehension, different types of constraints are taken into account very quickly during reading/listening. How these constraints are implemented in the architecture of sentence processing is an active area of inquiry. A key aspect of this debate concerns how syntactic knowledge is organized in memory and how comprehenders make use of this knowledge to build syntactic structures in different languages. Syntactic (or structural) priming can provide evidence regarding access to this knowledge. The purpose of the present study was to investigate the nature of priming effects in the comprehension of complex syntactic structures in Brazilian Portuguese, by assessing electrophysiological effects of structural repetition (passive voice) using the syntactic priming paradigm during a reading task. Event-related brain potentials were recorded from 60 scalp sites as 26 adult native speakers of Portuguese (14 women; mean age = 30.5 years; SD = 6.9) read active and passive sentences. We contrasted two conditions, in which the prime was presented in the passive voice (primed condition) or in the active voice (unprimed condition). Targets were in the passive voice in both conditions, and there was no lexical (verb) repetition in the past participle in either condition. The order of presentation of the conditions was alternated across participants: the primed>unprimed order was presented to half of the participants and the unprimed>primed order to the other half. Eighty sentence sets were constructed per condition, each set containing one prime and one target sentence followed by one to three filler sentences. To encourage participants to read the sentences attentively, one comprehension question was inserted every 20 sentences. ERP data were analyzed with repeated-measures ANOVAs on the mean amplitude at the main verbs of the targets in five epochs, to verify whether the ERP effects suggested by scalp topographies and visual inspection were statistically significant. Separate ANOVAs were conducted for the targets in both conditions, both for the whole head at the critical word (the past participle) and for five scalp regions of interest (ROIs). Post-hoc contrasts were formulated to include order of condition presentation. We found that ERPs to verbs in the past participle (critical word) were associated with an N400 reduction in the primed condition. No P600 effect of Condition was found. However, significant effects were found for the N100 in the central-posterior region and for the P600 over posterior sites, indicating a main effect of order of presentation and an interaction of Condition × Order for half of the participants, respectively. The N400 effect could be related to the past participles, which seemed to serve as more powerful primes for their corresponding target forms than simple past forms.
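A reduced Python sketch of such a repeated-measures ANOVA on mean ERP amplitudes is shown below, limited to the single priming factor; the full design would add ROI as a further within-subject factor and presentation order as a between-subject factor, which statsmodels' AnovaRM (within-subject factors only) cannot accommodate. All data are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# Placeholder long-format data: per subject and condition, the mean ERP
# amplitude at the critical word (past participle) in the N400 window
subjects = np.repeat(np.arange(26), 2)
condition = np.tile(["primed", "unprimed"], 26)
# Simulate a larger (more negative) N400 in the unprimed condition
amplitude = rng.normal(size=52) + (condition == "unprimed") * -0.8
df = pd.DataFrame({"subject": subjects, "condition": condition,
                   "amplitude": amplitude})

# One-way repeated-measures ANOVA over the priming factor
res = AnovaRM(df, depvar="amplitude", subject="subject",
              within=["condition"]).fit()
print(res)
```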
Morphology D43 Morpheme-specific neural representations in skilled adult readers: Evidence from fast periodic visual stimulation Maria Ktori1, Mara De Rosa1, Yamil Vidal1, Davide Crepaldi1; 1International School for Advanced Studies (SISSA), Trieste Morphemes constitute the smallest meaning-bearing units of language and are combined to create complex words (e.g., kindness consists of the stem kind and the suffix -ness). Despite considerable behavioral evidence that morphologically complex written words are processed and represented via their constituent morphemes, the neural underpinnings of morphological processing remain poorly understood (Leminen, Smolka, Duñabeitia & Pliatsikas, 2018). The present study investigated whether morphemes are selectively represented in the brain of skilled readers as independent sublexical units in the absence of lexical context. Specifically, we used a fast periodic oddball paradigm (Lochy, Van Belle & Rossion, 2015) to measure EEG discrimination responses to morphological word endings (i.e., suffixes) presented in isolation. Skilled readers (N = 36; native Italian speakers) were presented with Italian suffixes (e.g., eria) appearing as every fifth item in rapid streams of visual stimuli (F = 6 Hz) that varied in terms of their likeness to Italian word endings: non-alphabetic pseudofonts, unpronounceable nonwords (e.g., pnsm), pronounceable pseudowords (e.g., beft), pronounceable pseudowords that would be legitimate word endings in Italian (e.g., enfa), and frequency-matched non-morphological word endings (e.g., enso). Participants engaged in a non-linguistic task, monitoring the color change of a central cross. Within a few minutes of visual stimulation, suffixes evoked a specific EEG response at their presentation frequency and its harmonics (i.e., nF/5: 1.2 Hz, 2.4 Hz, 3.6 Hz, 4.8 Hz), located predominantly over the left occipito-temporal cortex. This response was present in all experimental conditions and reflected the successful and, in the absence of explicit linguistic processing, automatic discrimination of suffixes from all other types of stimuli, irrespective of the degree to which they resembled Italian word endings. Critically, the discrimination response was significant even for the contrast between suffixes and other non-morphological sublexical units that occur equally frequently at the end of words in the lexicon, thus establishing its genuinely morpho-semantic nature. The findings of the present study provide novel evidence for the selective neural representation of meaningful sublexical linguistic units and reveal the automaticity with which such representations are activated in the brain. They also inform us about the nature of the computations that may be carried out by the ventral occipito-temporal cortex in response to written input characterized by systematic correlations between orthographic form and meaning.
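In frequency-tagging analyses of this kind, the discrimination response is read out from the amplitude spectrum of the stimulation period: a signal-to-noise ratio (SNR) is formed at the oddball frequency (1.2 Hz) and its harmonics by dividing the amplitude in each target bin by the mean of neighbouring bins. A minimal Python sketch on placeholder data follows; the recording length, channel count, and number of neighbouring bins are illustrative assumptions.

```python
import numpy as np

sfreq = 500
rng = np.random.default_rng(0)
eeg = rng.normal(size=(64, sfreq * 60))   # placeholder: 60 s of stimulation

# Amplitude spectrum of the full stimulation period; resolution 1/60 Hz
amp = np.abs(np.fft.rfft(eeg, axis=1)) / eeg.shape[1]
freqs = np.fft.rfftfreq(eeg.shape[1], 1 / sfreq)

def snr(amp, freqs, f, n_nb=10):
    """Amplitude at frequency f divided by the mean of neighbouring bins,
    skipping the bins immediately adjacent to the target bin."""
    i = int(np.argmin(np.abs(freqs - f)))
    nb = np.r_[i - n_nb - 1:i - 1, i + 2:i + n_nb + 2]
    return amp[:, i] / amp[:, nb].mean(axis=1)

# Suffixes appear as every fifth item in a 6 Hz stream, so the oddball
# (discrimination) response falls at 6/5 = 1.2 Hz and its harmonics,
# excluding 6 Hz itself, which indexes the general visual response
oddball_snr = sum(snr(amp, freqs, h) for h in (1.2, 2.4, 3.6, 4.8))
base_snr = snr(amp, freqs, 6.0)
```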
D44 Semantic information facilitates memory trace formation for novel morphology: neuromagnetic investigation Viktória Roxána Balla1, Yury Shtyrov2,3,4, Miika Leminen5, Alina Leminen1; 1University of Helsinki, 2Aarhus University, 3Saint Petersburg State University, 4Higher School of Economics, 5HUS Helsinki University Hospital Learning to recognize morphemic boundaries is crucial for fluent language comprehension and production. Therefore, in languages with a rich morphology, such as Finnish, the question of morphological learning is particularly relevant. A crucial part of consolidating memories is assimilating new information into existing knowledge. Neurocognitive studies propose that morphologically complex words are decomposed into their constituent elements, which are stored as separate units in lexical memory. However, the neural mechanisms underlying the acquisition and consolidation of novel morphological units remain obscure. To address this question, we presented complex words incorporating four novel derivational suffixes to 19 right-handed native Finnish-speaking participants. For half of the new suffixes, we provided semantic information through a word-picture association task. Following this short training session, we used magnetoencephalography (MEG) to record the participants' brain responses to the novel semantically trained and untrained suffixes, combined with real stems and pseudo-stems, in an auditory passive listening task, a well-known tool for probing long-term memory traces for spoken stimuli. The stems used in this test were not themselves used in the preceding semantic training. To assess the online acquisition dynamics, we compared the responses measured early (first 25% of trials, ~5 minutes after the onset of the passive exposure) and late (last 25% of trials/final ~5 minutes) in passive exposure, to investigate the online learning of the novel suffixes. We investigated neural source activation in time windows around the main peaks of the event-related field (ERF) responses. We found increased activation in left frontal and temporal regions of interest, present throughout the passive exposure, for the semantically trained compared to the untrained suffixes in the 60-80 ms and 120-140 ms time windows following suffix onset. This effect reflects more efficient decomposition of the suffixes for which semantic information had been provided through the word-picture association task. The effect of semantic training was strongest when the suffixes were combined with real word stems rather than unfamiliar pseudo-stems, for which decomposition might be less successful. However, in the 220-260 ms time window, we found a reduction in the difference between responses to real-word and pseudoword stems towards the end of the exposure, which may indicate online acquisition of novel stems even in the absence of semantic reference. Finally, we detected a general response attenuation over time in the earlier time windows that appeared to be greater for trained suffixes, which might indicate predictability, priming or repetition suppression effects that are more prominent for elements with stronger memory traces. Overall, our findings suggest that a short semantic training of novel affixes significantly facilitates morphological decomposition and speeds up suffix memory trace formation in left fronto-temporal neocortical networks. Meaning: Discourse and Pragmatics D45 Counterfactual reasoning fails when its premise is not true: Evidence from ERP Xiaodong Xu1, Lijuan Chen1; 1School of Foreign Languages and Cultures, Nanjing Normal University Counterfactual reasoning about what might have been is pervasive in everyday life (Byrne, 2002) and is closely related to human cognition and emotion. On the one hand, counterfactual reasoning allows people to learn from experience and avoid making the same mistakes.
On the other hand, it can evoke strong emotional reactions, such as regret and relief, and thus help people regulate behavior and emotions in order to adapt to the physical and social environment (Kulakova & Nieuwland, 2016). Using event-related potentials, this study investigated how counterfactual reasoning is affected by the truth value of a premise context. The results showed that logically invalid sentences evoked a larger P600 compared to logically valid sentences when the premise was true (e.g., New technology has made the quality of today's buildings generally improved. If the San Francisco earthquake had occurred in this century, the number of casualties would be small/large, surely.), whereas P600s of the same size were elicited at the critical word when the premise was false (New technology has made the quality of today's buildings generally declined. If the San Francisco earthquake had occurred in this century, the number of casualties would be large/small, surely.). Moreover, for the logically valid sentences, the critical word evoked a larger P600 when the premise was false than when it was true. These results suggest that counterfactual reasoning depends on the truth value of the premise context: counterfactual reasoning fails when its premise is not true. Finally, there was a significantly positive correlation between the P600 effect difference (invalid vs. valid) and readers' empathy scores (from the Reading the Mind in the Eyes Test, RMET) when the premise was true but not when it was false, suggesting that readers with higher Theory of Mind/perspective-taking abilities are more sensitive to counterfactual reasoning. In addition, a significant negative correlation between the P600 effect and readers' working memory span suggests that readers with low working memory abilities have more difficulty in understanding and carrying out counterfactual reasoning. D46 Gender differences in the neural processing of pitch focus Katharina Spalek1, Yulia Oganian2, Xaver Koch1; 1Humboldt-Universität zu Berlin, 2University of California San Francisco Linguistic focus signals that alternatives to the focused element are relevant for the interpretation of an utterance. The sentence "The SNL meeting 2019 takes place in [HELSINKI]F.", with a pitch focus accent on 'Helsinki', expresses the literal content of the sentence, but also implies that the meeting does not take place in Québec City or London. {Helsinki, Québec City, London, ...} are focus alternatives. While focus alternatives may come from the same semantic category as the focused element, this is not a requirement. Neurally, we previously found effects of pitch focus on alternative processing in a discourse-integration network, including the precuneus and the fronto-median wall, distinct from semantic priming effects in the bilateral temporal lobes. Processing focus and its alternatives belongs to the field of pragmatics, an area in which individuals have been found to differ to a large extent. In two studies, we investigated potential differences between the sexes during focus processing. Study 1 tested whether focus makes alternatives more salient and therefore aids memory for alternatives. Participants (n = 94, 47 female) listened to short narratives in which three items were mentioned (e.g., 'shirts', 'socks', and 'jumpers'). The next sentence discussed one of these items again (e.g., 'socks'), either with a focus accent or without.
Alternatives (here: ‘shirts’, ‘jumpers’) to the latter item were recalled better in a subsequent memory test if this item had been marked with a focus accent than if it had been unmarked. However, this memory benefit only occurred in women, not in men. In Study 2, we used fMRI to test how this difference is reflected in the neural processing of focus alternatives. Participants (n = 38, 19 female) listened to spoken sentences (e.g., “Angela put the coke in the fridge.”), followed by a written target word. Targets were either semantically unrelated to the sentence (e.g., “book”, UNR) or semantically related (e.g., “lemonade”). Semantically related items were either alternatives (relA) to the focused element of the sentence (e.g., “Angela put [the COKE]F in the fridge.”) or not (relN, e.g., “[ANGELA]F put the coke in the fridge.”, where the alternative set consists of individuals). We expected semantic priming (related-unrelated) in bilateral temporal areas independently of gender, whereas focus alternative processing (relA – relN) was predicted to differ between the sexes. The semantic priming contrast activated the superior temporal gyri in women. In contrast, in men only the relA condition showed semantic priming in these areas. The focus alternative contrast showed effects in areas associated with coherence processing (left SFG and precuneus) in women, whereas men showed semantic priming (unr < rel) in these areas. Finally, in men, activation patterns in left IFG and lingual gyrus reflected task difficulty, whereas no significant differences between the conditions were found in women. To summarize: in women, semantic priming and focus alternative processing activate distinct networks, whereas both networks show semantic priming effects in men, convergent with their lack of a memory benefit for focus alternatives. Our results demonstrate the importance of individual differences for models of language and discourse comprehension. D47 The neuro-cognitive interplay between respectfulness and lexical-semantics in reading Chinese: Evidence from ERPs Liyan Ji1, YaXu Zhang1; 1Peking University In the area of neurolinguistics, an interesting question is how the neuro-cognitive activities underlying the use of pragmatic and lexical-semantic information interact during language comprehension. Interlocutor identities are a type of pragmatic information and have usually been distinguished from lexical semantics. The present study investigates how pragmatic information relevant to interlocutor identity interacts with lexical-semantic information during Chinese sentence comprehension. In Mandarin Chinese, the status of the speaker and addressee can constrain the use of the second-person singular pronoun. Using event-related potentials (ERPs), the present study manipulated both the semantic coherence of a verb phrase (VP) and the respectful coherence of the object noun phrase (NP) in the VP, resulting in 4 types of critical sentences. The object NP was the critical word (CW) for ERP recording. The semantic incoherence was realized by substituting the correct verb of the VP with a semantically anomalous verb, resulting in a violation of the verb’s selectional restriction on its object at the CW. The respectful incoherence was realized by changing the relative social status of the interlocutors, resulting in an anomaly of respectfulness (being over-respectful) at the CW.
In addition, in the double condition, there was a simultaneous incoherence of the semantic phrase structure of the VP and of respectfulness at the CW. Participants read 160 critical Chinese sentences, together with 220 filler sentences. After the ERP recording, they completed both the AQ Communication Subscale and a sentence acceptability judgment task. The ERP results showed that both the semantic violation and the double violation elicited N400 and P600 responses. However, there was no significant main effect of respectful coherence. We speculate that the ERP responses to the respectfulness violation were modulated by individual differences in pragmatic abilities. We therefore split participants into two subgroups according to their AQ scores. In the early (150-250 ms) time window, there was a larger P200 to the respectful violation condition compared to the control condition among pragmatically skilled participants (as indexed by a low score on the AQ Communication Subscale, Low AQ-Comm). In contrast, the semantic violation elicited a larger P200 among the pragmatically less skilled participants (High AQ-Comm). These results suggest that pragmatically skilled participants initially directed their attention to pragmatic information, whereas pragmatically less skilled participants focused on lexical-semantic information. In the 300-500 ms and 550-1000 ms time windows, respectful violations elicited a larger N400 and a late negative activity in the high AQ-Comm subgroup. In contrast, respectful violations elicited a more positive activity and a sustained late positive activity in the low AQ-Comm subgroup. Crucially, the double violation condition elicited an ERP pattern (N400 + P600) similar to that of the semantic violation for both subgroups, suggesting that the respectful violation effects were present only when the VP was semantically coherent. Taken together, these results suggest that a semantic violation can preclude readers from engaging in pragmatic inference or pragmatic information processing, regardless of participants’ pragmatic skill. The strategy for resolving the respectful violation and the corresponding brain activities vary according to participants’ pragmatic abilities. D48 Anaphoric distance dependencies in the sequential structure of wordless visual narratives Neil Cohn1; 1Tilburg University Language has long been characterized as a “unique” facet of human cognition, particularly because of complex characteristics like anaphoric relations and distance dependencies. However, recent work has argued that visual narratives of sequential images, like those in comics, use sequencing mechanisms analogous to syntax. Within this structure, visual narratives use “refiner” panels that “zoom in” on the contents of another panel in a metonymic (part-whole) relationship, such as one panel showing a character (“A”) extending their hand, with a subsequent panel zooming in on that hand (“a”). Similar to anaphora in language, refiners connect referential information in one unit (refiner, pronoun) with that of another unit, the “referring expression.” Also as in syntax, refiners can follow their referring expressions (anaphor) or precede them (cataphor). In addition, refiners can be presented either locally or separated at a distance, with intervening panels showing other characters.
Crossing these traits creates four sequencing patterns, with anaphoric refiners following their antecedents either locally (AaB, where “A” and “B” represent images of single characters, and lowercase “a” represents a refiner of character A) or at a distance (ABa), and cataphoric refiners either local (aAB) or at a distance (aBA). We thus presented participants with these patterns (24 of each type) embedded within 6-panel-long wordless visual narrative sequences, with the manipulation always occurring at positions 2, 3, and 4, depicting the “rising action” of the narrative. We measured event-related brain potentials (ERPs) to panels presented one at a time (1350 ms duration, 300 ms ISI) using a 32-channel BrainVision ActiChamp. We found that, at the final position of the 3-panel manipulation, distance dependencies (ABa, aBA) evoked a late frontal negativity from 400-1100 ms compared to local dependencies (AaB, aAB), suggesting a cost for the distance connection, regardless of the information content (zoom “a” and non-zoom “A”). This is consistent with late frontal negativities (Nref) to anaphoric relations in language processing. When looking at refiner panels alone (i.e., “a” panels across all patterns), cataphoric refiners (aAB, aBA) evoked larger widespread fronto-central N400s from 200-500 ms than anaphoric refiners (AaB, ABa). While this attenuation for anaphoric refiners could have been caused by sequence position, with cataphoric refiners preceding the anaphoric ones, distant anaphoric refiners (ABa) also had a larger N400 than local anaphoric refiners (AaB). This suggests that accessing the meaning of zoomed-in information benefited more from immediately following its referring expression (local anaphoric: AaB) than from occurring before it (cataphoric: aAB, aBA) or separated at a distance (distant anaphoric: ABa). These findings demonstrate that phenomena characteristic of sentence structure—anaphora and distance dependencies—also manifest in visual narrative sequencing and evoke similar neurocognitive responses as in language (Nref, N400). Such work raises questions about the domain-specificity of anaphora and distance dependencies as linguistic representations, and about the neurocognition of their concomitant processing. Methods D50 The Use of Language Proficiency Assessments in Neurobiological Studies of L2, Bilingualism, and Multilingualism: A Systematic Review Jamie Herron Lee1, David Corina1; 1University of California, Davis There is increasing interest in exploring the neurobiological effects of language proficiency in second language, bilingual, and multilingual speakers. Current studies investigate the impact of language proficiency on a wide range of issues, including patterns of functional and structural connectivity for language comprehension and production and cognitive control for language and non-linguistic behaviors. When studies are conducted on these topics, researchers often rely on one or more language proficiency assessments to define participant groups. However, the construct of language proficiency itself is often poorly defined and difficult to operationalize, leading to the potential for variability in the treatment of proficiency in the experimental methodology of such studies.
This systematic review investigates the types of language proficiency assessments utilized in functional neuroimaging studies of L2, bilingual, and multilingual speakers in order to characterize the ways in which language proficiency is conceptualized and applied in experimental research. A survey comprising 59 studies was collected via the PubMed database. Selected studies met the following eligibility criteria: 1) involved healthy adult subjects; 2) investigated an issue or issues related to bilingualism or multilingualism and proficiency as a construct; 3) utilized at least one of the following functional neuroimaging methods: fMRI, EEG, PET, or MEG; 4) were published between the years 2010 and 2018; 5) were published in a peer-reviewed journal. We sought to address the following questions: 1) what proficiency assessments are being used in this research, and 2) whether there are biases in the use of particular proficiency exams as a function of neuroimaging modality. Findings to date indicate a great deal of variability in research practice. The number of proficiency assessments reported in each study varied between zero and six, and many specific assessments were used in only one study. Despite this, several categories of similar assessments were characterized, ranging from informal assessments (e.g., self-reporting of language history or ability) and indirect measures (e.g., reliance upon results of assessments not directly administered by researchers) to more formal standardized tests (e.g., University of Cambridge Quick Placement Test, Versant English Test). Initial findings based on the reported use of assessments from these categories suggest differences as a function of imaging modality. For example, MRI studies tended to utilize standardized tests more often than informal assessments (standardized 47.8% vs. informal 39.1%), while EEG studies showed the opposite pattern (standardized 35.9% vs. informal 46.2%). Additional analyses will explore the use of proficiency assessments as a function of cognitive task (e.g., comprehension, production, cognitive control) and target language(s) examined. Taken together, our findings suggest that a more consistent definition of language proficiency and common methodological standards for assessing it should be sought in order to avoid variable conclusions in research. These data draw attention to the need for better accuracy and consistency in language proficiency assessment. Multilingualism D51 Markedness modulates person agreement differently in L1 and L2 speakers: An ERP study José Alemán Bañón1, David Miller2, Jason Rothman3; 1Centre for Research on Bilingualism, Stockholm University, 2Department of Hispanic and Italian Studies, University of Illinois at Chicago, 3UiT The Arctic University of Norway INTRODUCTION: Current theoretical models of L2 acquisition make different claims regarding how adult L2ers represent and utilize syntactic features. For example, McCarthy (2008) argues that adult L2ers cannot acquire the full specification of morphological features due to a representational deficit, and instead overuse morphological defaults, such as supplying third-person verbal morphology with first-person subjects, even at high levels of proficiency. Other proposals (Grüter et al., 2012) assume that adult L2ers can represent L2 features in a native-like manner, but may have a reduced ability to access them online and use them predictively.
STUDY/METHODS: We address these issues in an ERP study investigating how markedness modulates person agreement processing in L1-English L2-Spanish learners. The study builds on previous work by Alemán Bañón & Rothman (2019) with L1 Spanish speakers. In that study, we manipulated person markedness by probing both first-person singular subjects (marked for person: speaker; 1a) and third-person singular ones (unmarked: default person; 2a). Agreement was manipulated by crossing first-person subjects with third-person verbs (1b) and vice versa (2b). STIMULI: (1a) Yo a menudo lloro en las películas (I often cry-1ST-PERSON-SG in the movies) (1b) Yo a menudo *llora… (I often cry-3RD-PERSON-SG...) (2a) La viuda a menudo llora en la capilla (the widow often cry-3RD-PERSON-SG in the chapel) (2b) La viuda a menudo *lloro… (the widow often cry-1ST-PERSON-SG...). The study included 40 items/condition (RSVP: 450/300 ms). RESULTS/DISCUSSION: Native speakers (n=28) showed a P600 (500-1000 ms) for both error types relative to grammatical sentences. “Marked subject + unmarked verb” errors (1b) yielded a larger P600 than the reverse error type (2b). Since the P600 is argued to reflect the reanalysis processes triggered by violations of top-down expectations, we interpreted these findings as evidence that person-marked subjects allow the parser to generate stronger predictions regarding the form of upcoming verbs, via feature activation (Nevins et al., 2007). When that prediction is unmet, the result is a larger P600. The same design was used with 22 English-speaking learners of Spanish (intermediate/advanced). Similar to L1 speakers (Alemán Bañón & Rothman, 2019), learners showed a P600 for both error types. Unlike L1 speakers, the P600 was reduced for “marked subject + unmarked verb” errors (1b) relative to the opposite error type (2b), failing to provide evidence for markedness-driven predictive processing. Although the reduced P600 for “marked subject + unmarked verb” errors appears consistent with claims that L2ers over-rely on defaults (McCarthy, 2008), regression analyses showed that P600 size for this violation type increased as a function of development (score on a standardized proficiency test), speaking against permanent representational deficits. These results suggest that markedness modulates both L1 and L2 processing, but differently. In natives, person markedness at the subject allows the parser to generate predictions regarding upcoming verbs (stronger predictions than with unmarked subjects), a mechanism that L2ers are less likely to use. In L2ers, markedness impacts (but does not constrain) agreement when the dependency is established (at the verb). Overall, this is more consistent with claims that L2ers can fully represent features but are less likely to use them predictively.
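The individual-differences regression reported above (P600 size as a function of proficiency score) can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs, with hypothetical file and column names (learner_p600.csv, p600_amp, proficiency), not the authors' actual pipeline.

```python
# Minimal sketch: regress per-learner P600 size on proficiency.
# Assumed input format: one row per learner, with the mean
# ungrammatical-minus-grammatical amplitude (microvolts, 500-1000 ms window)
# and a standardized proficiency score. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("learner_p600.csv")  # hypothetical file

model = smf.ols("p600_amp ~ proficiency", data=df).fit()
print(model.summary())  # a positive proficiency slope would parallel the reported effect
```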
D52 Early crosslinguistic ERP effects in bilinguals: Automatic parallel access of both L1 and L2 lexicons Anna Petrova1, Nikolay Novitskiy2,3, Andriy Myachykov1,4, Yury Shtyrov1,5; 1Center for Cognition and Decision Making, Higher School of Economics, Russia, 2Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, 3Brain and Mind Institute, The Chinese University of Hong Kong, 4Department of Psychology, Northumbria University, Newcastle-upon-Tyne, 5Center of Functionally Integrative Neuroscience, Institute for Clinical Medicine, Aarhus University In the modern globalized world many people are multilingual: recent estimates suggest that bilinguals constitute the majority of the world’s population (Bialystok et al., 2012). One of the most debated research questions in the field (for review, see Dunabeitia et al., 2015) is whether two or more languages coexisting in the multilingual mind have separate lexico-semantic storages (Perani et al., 2003; Grosjean, 2014) or a common storage and activation mechanism (Costa et al., 2008; Van Heuven & Dijkstra, 2010; Kroll et al., 2010). This question can be addressed by using a priming task to examine how primes from L1 influence the processing of target words in L2. The present study investigated crosslinguistic phonological and semantic similarity effects on the bilingual lexicon in late unbalanced bilinguals. Our masked priming paradigm used L1 (Russian) words as masked primes and L2 (English) words as targets. The primes and the targets either overlapped – only phonologically, only semantically, or both phonologically and semantically – or did not overlap at all. Participants had to maintain the targets in memory and match them against occasionally presented catch stimuli. Language-related differences in N170 and N400 components were previously reported (Novitskiy et al., 2019); however, recent investigations into L1 processing suggest that lexico-semantic access commences much earlier, around 30-80 ms (Kimppa et al., 2015, 2016; Shtyrov et al., 2014; Shtyrov and Lenzen, 2014), i.e., in the P50 component interval. This raises the question of whether L2 lexical access and interactions between the L2 and L1 lexicons may also commence much earlier than previously suggested by studies focused on later components. Our analysis of amplitudes in a 40-60 ms post-stimulus time window demonstrated a marginal main effect of semantics (F=3.62, p=0.057), as well as a reliable interaction between semantic and phonological overlap (F=21.09, p<0.0001), underpinned, as confirmed by post-hoc analyses, by a specific semantically driven increase of responses to L2 targets that shared phonological similarity with masked L1 primes. These findings suggest that lexico-semantic co-activation of the two lexicons happens as early as 50 milliseconds after visual word presentation, and that a semantic match between prime and target may facilitate the perception of the target. We interpret the increased positive peak as an allocation of resources for the processing of a meaningful stimulus, as opposed to mismatched stimuli that do not elicit such an enhancement. The effect of the interaction between semantics and phonology is driven by differences in amplitudes across semantic conditions, while phonology does not show significant differences in and of itself. This evidence of an ultra-rapid cross-linguistic effect in a masked priming task suggests a high degree of automaticity and parallelism in the access of both L1 and L2 items.
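The early-window analysis described above reduces to extracting one mean amplitude per participant and condition and testing the semantics-by-phonology interaction. Below is a minimal sketch with simulated amplitudes standing in for real EEG; with MNE-Python, the per-condition value could instead come from something like epochs[label].copy().crop(0.040, 0.060).get_data().mean().

```python
# Minimal sketch of the 40-60 ms window analysis: one mean amplitude per
# subject and condition, then a 2 x 2 repeated-measures ANOVA. Simulated
# values stand in for real data; condition labels and n are illustrative.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(20):                      # hypothetical n=20 participants
    for sem in ("sem+", "sem-"):            # semantic overlap with the L1 prime
        for phon in ("phon+", "phon-"):     # phonological overlap with the L1 prime
            amp = rng.normal(loc=1.0 if (sem, phon) == ("sem+", "phon+") else 0.0,
                             scale=1.0)     # build in an interaction-like pattern
            rows.append({"subject": subj, "semantic": sem,
                         "phonology": phon, "amp": amp})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="amp", subject="subject",
              within=["semantic", "phonology"]).fit()
print(res.anova_table)   # main effects and the semantics-by-phonology interaction
```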
In fact, if a word in one language can prime the neural processing of a word in another, this supports the notion of a shared storage with common access. We conclude that the semantic and phonological interplay between L1 and L2 suggests an integrated bilingual lexicon. D53 ERP responses to simple vs. complex English morphology in native Mandarin speakers Chu-Hsuan Kuo1, Lee Osterhout1; 1University of Washington Much research in neurolinguistics has attempted to clarify the factors that influence the acquisition of a second language. Chinese, unlike English, has very little morphology, which presents a linguistic contrast between the two languages. Using event-related potentials, we investigated whether native Mandarin speakers can acquire English morphology to the same degree as native English speakers. Twenty-eight native Mandarin speakers who first arrived in the United States to attend college were recruited to complete a sentence judgment task consisting of English sentences that were either well-formed or grammatically ill-formed, in which the critical word was either morphologically simple or complex. Analyses revealed no main effects but an interaction between grammaticality and morphological complexity, such that the P600 was significantly larger for ungrammatical simple stimuli but absent for ungrammatical complex stimuli. This contrasts with prior work showing larger P600 amplitudes for ungrammatical complex stimuli in native English speakers. The present findings cannot be explained as a function of accuracy, as accuracy was comparable between the simple and complex stimuli. Our results suggest that native Mandarin speakers may be able to learn English morphology but have difficulty acquiring native-like brain responses to complex morphological grammatical errors. D54 Effects of bilingualism and lifestyle choices on the neural indices of cognitive control in older adults Caitlin E. O’Riordan1, Gretta Bunn1, Megan Woodruff1, Debra L. Mills1; 1Bangor University, Wales Older adult bilinguals often outperform monolinguals on non-verbal tasks of executive functioning, including measures of inhibitory control, task switching and working memory. Some research suggests bilingualism can even delay the onset of symptoms associated with dementia. Bialystok argues that the use of domain-general executive functions to control and suppress language results in a “bilingual advantage” on tasks of executive function. However, a bilingual advantage is not consistently observed. For example, it is not observed in behavioural measures of executive function nor in the delay of the onset of dementia in Welsh-English older adults. The factors associated with the presence or absence of the bilingual advantage are not well understood. The present study examined how age, several lifestyle factors, and facets of bilingualism such as age of second language (L2) acquisition, percentage of daily L2 use and L2 proficiency affected brain activity associated with cognitive control. We employed the event-related potential (ERP) technique with a visual Go/NoGo paradigm. The EEG of adults aged 65 to 85 years was recorded as individuals detected a cartoon target and executed a manual response (Go stimuli) or withheld the button press to any other characters (NoGo stimuli).
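The conflict-resolution measure in such a design is the within-subject NoGo-minus-Go N2 difference, compared within and across groups. A minimal sketch on simulated subject-level amplitudes follows; the group sizes mirror the abstract, but the effect sizes, time window, and channels mentioned in the comments are illustrative assumptions.

```python
# Minimal sketch of the NoGo-vs-Go N2 contrast at the subject level.
# Simulated mean amplitudes (microvolts, fronto-central sites, ~200-350 ms)
# stand in for real data; with MNE-Python one could instead compute, e.g.,
# epochs["nogo"].average().copy().pick(["Fz", "FCz", "Cz"]).crop(0.2, 0.35).data.mean()
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_bi, n_mono = 20, 16                        # group sizes from the abstract
# Bilinguals simulated with a NoGo-minus-Go N2 effect (more negative NoGo),
# monolinguals without one; both assumptions for illustration only.
bi_diff = rng.normal(-1.5, 1.0, n_bi)
mono_diff = rng.normal(0.0, 1.0, n_mono)

print(stats.ttest_1samp(bi_diff, 0.0))       # N2 effect within bilinguals
print(stats.ttest_1samp(mono_diff, 0.0))     # N2 effect within monolinguals
print(stats.ttest_ind(bi_diff, mono_diff))   # group difference in the effect
```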
Previous research suggests that the conflict resolution associated with withholding a response to NoGo stimuli is linked to larger N2 amplitudes for young adult bilinguals relative to monolinguals. Moreover, it has been reported that older adults have attenuated N2 amplitudes in comparison to younger adults on a visual Go/NoGo paradigm. In the current research, participants included monolingual speakers of English (N = 16) and Welsh-English bilinguals who reported using their L2 in at least 25% of their daily life (N = 20). Participants also completed a questionnaire on alcohol intake, smoking, sleeping and exercise habits, which was used to generate a lifestyle score. Monolinguals and bilinguals did not differ in terms of SES, lifestyle score, age, or scores on the Montreal Cognitive Assessment (MoCA). Monolinguals and bilinguals achieved comparable accuracy rates and reaction times. However, analysis of the brain activity associated with conflict resolution revealed significant differences. Consistent with previous research, monolinguals over age 65 did not show larger N2 amplitudes to the NoGo than to the Go trials. In contrast, for bilinguals, success in withholding a prepotent manual response (NoGo trials) was associated with significantly larger mean N2 amplitudes relative to Go trials. Analysis of questionnaire responses revealed a significant correlation between lifestyle score and the N2 effect for bilinguals, wherein an increased N2 effect (larger NoGo-N2 relative to Go-N2) was correlated with a healthier lifestyle score. This correlation was not present for monolinguals. These findings suggest that bilinguals who utilise their L2 at least 25% of the time show patterns of brain activity associated with conflict resolution similar to those reported for younger adults. Lifestyle factors such as alcohol intake, smoking frequency, sleep quality and exercise interact with the advantage elicited by bilingualism. The results have implications for targeted interventions to stimulate second language use in older adults. Language Production D55 Syllabic and phonemic effects in Chinese spoken language production: Evidence from ERPs Qingqing Qu1,2, Chen Feng1,2, Markus Damian3; 1Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 2Department of Psychology, University of Chinese Academy of Sciences, Beijing, 3School of Psychological Science, University of Bristol Existing models of language production generally appeal to phonemes, but the bulk of the evidence comes from speakers of European languages in which the orthographic system codes explicitly for sound-sized segments. By contrast, in languages with non-alphabetic scripts such as Mandarin Chinese, individual speech sounds are not orthographically represented, raising the possibility that speakers of these languages do not use phonemes as functional units. Indeed, recent behavioral studies have suggested that languages rely differentially on phonological planning units in spoken word production: speakers of alphabetic languages use phonemes as the primary processing unit when they produce words, whereas syllables constitute the primary unit of phonological encoding for speakers of Chinese.
However, in previous work with electrophysiological (EEG) measurement (Qu, Damian, & Kazanina, 2012, 2013, PNAS), we provided preliminary evidence for phonemic representations in Chinese, which contrasts with a number of reported null behavioral findings concerning the role of the phoneme in Chinese. This preliminary evidence required further confirmation. More importantly, the critical question then arises as to how and when the two types of phonological representations are activated. In Experiment 1, we used event-related potentials combined with the form preparation task, which is widely used in the literature on spoken word production and has provided critical evidence regarding phonemic representations in Indo-European languages. Chinese speakers named pictures that were blocked by initial phoneme overlap, so that picture names within a block either shared the initial phoneme or were phonologically unrelated. Whereas naming latencies were unaffected by phoneme overlap, ERP responses were modulated from 230-360 ms after object onset. In Experiment 2, we adopted the identical task, in which object names overlapped in their word-initial syllable or word-initial phoneme, or did not overlap. Participants’ naming responses were faster when object names in a block shared their word-initial syllable and were not modulated by word-initial phonemic overlap, which is consistent with previous findings from behavioural studies. ERP responses were influenced by syllabic overlap from 190-300 ms after object onset and by phonemic overlap from 180-340 ms. We interpret these results as evidence for the claim that phonemic segments constitute fundamental units of phonological encoding in Chinese spoken word production. The finding that access to syllabic information began in parallel with phonemic processing around 180 ms indicates early parallel activation of syllabic and phonemic representations in Chinese word production. D56 Optimization in non-native speech sound production Clara Martin1,2, Kirk Goddard1, Maria Koutsogiannaki1, Natalia Kartushina3; 1Basque Center on Cognition, Brain and Language (BCBL), 2Ikerbasque, 3University of Oslo Several studies on speech motor control have shown that online alteration of auditory feedback often leads to motor adjustments in speakers’ phoneme production. For example, when a speaker produces /pep/ but hears through her headphones a more open vowel, such as /pap/ (via an online perturbation of her own voice), the speaker tends to adjust her production towards a more closed vowel, as in /pip/ (i.e., in the opposite direction to the alteration). This is a manifestation of a speaker unconsciously adjusting their speech sound production via altered auditory feedback. The present study investigated whether this type of production training can be used to improve non-native speech sound production and, if so, whether production improvements transfer to perception. Thus, unlike traditional training paradigms, the current study attempted to influence and improve production of a non-native sound (i.e., the articulatory target) without going through the classical perception-to-production transfer. We used an online feedback alteration device (FAD) to lead native Spanish speakers (N=11) with low English proficiency to produce the English vowel /ɪ/ (frequently assimilated to the Spanish /i/).
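Adaptation of this kind is typically quantified acoustically, for example as a change in first-formant (F1) frequency between pre- and post-training vowel tokens. Below is a minimal sketch using praat-parselmouth; the tool choice and file names are assumptions for illustration, not necessarily the authors' pipeline.

```python
# Minimal sketch: estimate the F1 shift between pre- and post-training tokens
# of /pip/. Uses praat-parselmouth (assumed tool); file names are hypothetical.
import parselmouth

def midpoint_f1(wav_path):
    """F1 (Hz) at the temporal midpoint of a recorded vowel token."""
    sound = parselmouth.Sound(wav_path)
    formants = sound.to_formant_burg()
    midpoint = sound.get_total_duration() / 2
    return formants.get_value_at_time(1, midpoint)

# An increase in F1 would indicate movement from a Spanish /i/-like production
# towards the more open English /I/ target.
shift = midpoint_f1("post_pip.wav") - midpoint_f1("pre_pip.wav")
print(f"F1 shift: {shift:.1f} Hz")
```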
During three one-hour FAD sessions (on three consecutive days), participants had to repeatedly produce the Spanish pseudoword /pip/ while their auditory feedback was altered online to induce articulatory adaptation towards /pɪp/. Participants were tested on a series of perception and production tasks before and after the FAD sessions, and one week later. At each testing session, we measured their production (picture naming), their discrimination (AXB task), and their categorization (3-alternative forced-choice task) of the target English vowel /ɪ/ as well as the closest vowels /i/ and /e/. A control group (N=10) performed the exact same sequence of tasks and sessions, but their auditory feedback was never altered. Over the three-day training with the FAD, we found that participants unconsciously shifted their production towards the target English vowel /ɪ/. By comparing pre- and post-tests, we observed that participants improved their production of the target sound during picture naming, and this improved production generalized to other words. In addition, we found that training improved their ability to categorize the English vowel /ɪ/ when presented with the vowels /i/ and /e/. No significant training effect was observed on discrimination, due to a ceiling effect already present at pre-test. Importantly, improvements in speech sound production and categorization remained one week later. The control group did not significantly improve in any of the tasks, showing that the effect was due to feedback alteration and not task repetition. These results have important implications for theories of motor control: compensation to feedback alteration takes place with non-native speech sounds, can be long-lasting, and transfers to perception. In addition, this study opens interesting avenues for foreign language learning: by showing that foreign accent in phoneme production can be minimized, we propose that the FAD can be used as a training tool to efficiently drive second language speakers towards native-like pronunciation. This is essential given that foreign accent is known to be associated with negative judgments from native listeners (perceived lack of credibility and intelligence). D57 The frontal aslant white matter tract (FAT) and semantic selection in word production Joanna Sierpowska1,2, Nikki Janssen1,2, Roy P.C. Kessels1,2, Ardi Roelofs1, Vitória Piai1,2; 1Radboud University, Donders Institute for Brain, Cognition and Behaviour, 2Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Department of Medical Psychology, Nijmegen, The Netherlands [Introduction] The frontal aslant tract (FAT) is a white matter structure joining the anterior supplementary and pre-supplementary motor area (SMA and pre-SMA) with the inferior and middle frontal gyri. The FAT shows leftward asymmetry (Catani et al., 2012), suggesting a role in language processing. Although previous studies showed a contribution of the FAT to speech motor control (Kinoshita et al., 2015; Kemerdere et al., 2016), the exact character of the FAT as a functional connection remains poorly understood. Previously, we reported that direct electrical stimulation applied at the level of the left FAT during brain surgery provoked overregularization errors in verb generation (Sierpowska et al., 2015).
This case study motivated us to examine the relationship between the microstructural properties of the FAT and performance on a demanding verb generation task in healthy individuals, and to test whether it follows a left-lateralization pattern. [Methods] 50 healthy participants (mean age=43.7±21.6) were scanned with high-resolution diffusion-weighted imaging (DWI) and underwent behavioral testing using a verb generation task (dataset shared by Janssen et al., in prep). Stimuli consisted of 100 high-frequency concrete Dutch nouns (e.g., “hond” [dog]). Verbs could not be formed by morphological derivation of the noun. Based on the responses given by the participants, we sorted the items into high and low selection demands (Thompson-Schill et al., 1998). The FAT connection was obtained using two regions of interest (ROIs) and one exclusion mask. The ROIs were defined within MNI space according to the following neuroanatomical landmarks: (1) the SMA and pre-SMA, conjoined, and (2) the pars triangularis and pars opercularis of the inferior frontal gyrus (IFG), conjoined. The exclusion mask was defined around the midline on a sagittal section. Subsequently, the masks were transferred to each individual’s diffusion space and the corresponding white matter circuitry was calculated using a probabilistic approach (FSL probtrackx). All individual results were then binarized and used as masks to derive fractional anisotropy (FA) values, which were further used for comparison with behavioral data (Spearman correlations). [Results] FA values of the left FAT correlated with accuracy in verb generation, but only for the high selection condition (r(49)=.278; P=.025). Additionally, strong correlations were found between right FAT FA values and accuracy for both the high (r(49)=.611; P<.001) and low selection conditions (r(49)=.459; P<.001). [Conclusion] This study confirms that the previously found relationship between the left FAT and performance on verb generation is also apparent in healthy individuals. Additionally, the fact that this relationship only holds for items with high selection demands suggests that semantic control mechanisms underlie the left FAT’s role in language. Finally, the newly observed relationship between verb generation and the same tract in the right hemisphere, typically non-dominant for language, opens a new direction for further studies. In the neurosurgical context, these results may aid in further establishing language monitoring protocols for protecting white matter connectivity in surgeries for removal of tumors subjacent to both the left and right (pre)SMA. D58 The time-course of lexical and sub-lexical processing in language production versus perception as revealed by event-related brain potentials Kristof Strijkers1, Amie Fairs1, Amandine Michelas1, Sophie Dufour1; 1Université Aix-Marseille & CNRS, Laboratoire Parole et Langage While our knowledge of the brain basis of language production and perception is impressive, one major drawback is that the different language behaviors are typically studied in isolation. Language production and perception were long considered to rely on mainly separate processing systems, and consequently followed their own research traditions, in which evidence from one modality only sparsely entered theories of the other.
Nonetheless, by now most researchers would agree that there is more interaction and potential overlap between the production and perception of words than originally assumed. Therefore, an important question concerns the degree of overlap (and dissociation) between the different linguistic behaviours, in order to establish brain language models that embrace both modalities. In the current study we systematically compared the temporal dynamics of word component activation between production and perception. The same participants (N=26) spoke (object naming) and listened to (semantic classification of spoken words) the same stimuli while their electroencephalography (EEG) was recorded online. To assess lexico-semantic processing we manipulated the lexical frequency of the words, and to assess phonological-phonetic processing we manipulated their biphone frequency. The objective was to track ‘when’ the lexical versus biphone frequency effects would emerge in the event-related brain potentials (ERPs) and to compare that time-course between the language production and perception tasks. Preliminary analyses of the data show that in the production task the lexical and biphone frequency effects emerged simultaneously between 210 and 270 ms after stimulus onset. In the perception task the biphone frequency effect emerged 180 to 335 ms after stimulus onset, while the lexical frequency effect did not reach significance. Taken together, these preliminary results suggest that lexical and sub-lexical processing manifest in parallel in speech production, with an overlapping time-course for sub-lexical processing in speech perception. Such a data pattern is hard to reconcile with traditional hierarchical sequential models of language production and perception, respectively, which would predict that lexical access precedes sub-lexical encoding in production and vice versa in perception. Instead, the data may indicate that both language behaviours rely on similar parallel processing dynamics. D59 When Do Neurophysiological Correlates of Word Production Change Across the Adult Lifespan? Giulia Krethlow1, Tanja Atanasova1, Raphaël Fargier2, Marina Laganaro1; 1Faculty of Psychology and Educational Sciences, University of Geneva, 2Institute of Language, Communication and Brain, Aix-Marseille University, Laboratoire Parole et Langage Language skills are among those most maintained during aging when compared to other cognitive functions. Nevertheless, age-dependent differences in word production performance can be observed in both accuracy and production latencies (1), especially in older populations (2,3). Neurophysiological activity underlying word production has also been shown to vary between young and older adults (4,5). For instance, Valente & Laganaro (2015) reported different brain activations between young and older adults in picture naming in the time period compatible with lexical-semantic processes and suggested that word production modifications in aging could be influenced by age-related changes affecting the semantic system and its processing dynamics. As these studies compared groups of young adults (usually 20-30 years old) to older adults, it is unclear when such neurophysiological changes occur in the adult lifespan.
In this study we aimed to investigate the electrophysiological (EEG) event-related (ERP) patterns in a picture naming task, not only across the two extremities of the adult lifespan, but also by including intermediate age groups. High-density EEG was recorded in 80 French native speakers aged 20 to 80 years, divided into four age groups. Behavioral results show that only the oldest group (70-80 years old) displays slower production latencies relative to the other groups. However, ERP microstates differ only in the group of young adult speakers (20-30 years old) relative to the other age groups. In the time window between 125 and 155 milliseconds after image onset, a specific microstate is present in all age groups over 40 years but not in the group of young adults. Hence, distinct behavior is observed only in the older adults, but a different neurophysiological pattern, in the time window associated with lexical-semantic processes, is observed only in the youngest adults. This unexpected pattern may indicate either that changes in brain activity start in the forties but without behavioral decline until the age of 70, or, more likely, that the group of 20-30 year-olds presents a particular pattern relative to the rest of adulthood, possibly due to still-ongoing maturation (6). These results may also call into question the relevance of relying mostly on groups of undergraduates (20-30 years old) to study language processing. References: 1. Kavé, G., Knafo, A., & Gilboa, A. (2010). The rise and fall of word retrieval across the lifespan. Psychology and Aging, 25(3), 719. 2. Kavé, G., & Yafé, R. (2014). Performance of younger and older adults on tests of word knowledge and word retrieval: independence or interdependence of skills? American Journal of Speech-Language Pathology, 23, 36–45. 3. Stine-Morrow, E., & Shake, M. (2009). Language in aged persons. Encyclopedia of Neuroscience, 5, 337–342. 4. Valente, A., & Laganaro, M. (2015). Ageing effects on word production processes: an ERP topographic analysis. Language, Cognition and Neuroscience, 30(10), 1259-1272. 5. Mohan, R., & Weber, C. (2018). Neural activity reveals effects of aging on inhibitory processes during word retrieval. Aging, Neuropsychology, and Cognition. 6. Lebel, C., & Beaulieu, C. (2011). Longitudinal development of human brain wiring continues from childhood into adulthood. Journal of Neuroscience, 31, 10937-10947. D60 Motor imagery of speech increases activation of tongue motor area – a Transcranial Magnetic Stimulation study Gwijde Maegherman1, Helen E. Nuttall2, Joseph T. Devlin1, Patti Adank1; 1University College London, 2Lancaster University Motor imagery is hypothesised to involve motor cortex activation in the absence of muscle movement. Most research in this area focuses on manual actions, with a particular emphasis on motor rehabilitation. However, people engage in other types of imagery, such as imagery of speech, yet this type of orofacial motor imagery has not been the focus of intensive research. Motor imagery of speech is similarly thought to involve activation of speech motor areas - those of the articulators. In the current study, we investigated the activation of tongue motor cortex by testing whether motor-evoked potentials (MEPs) would be facilitated during motor imagery of speech. Finding such facilitation would have important implications for theories of auditory hallucination and would elucidate the role of forward models of speech.
Twenty participants took part in an experiment using transcranial magnetic stimulation (TMS) to evoke 240 MEPs from tongue motor cortex during three conditions: articulation observation, articulation imagery, and a baseline condition. The articulation observation condition involved listening to an exemplar of the non-native consonant cluster /tr/. Participants had been trained to produce this cluster prior to the start of the experiment, even though they were not required to actually produce it. The articulation imagery condition involved imagining producing this sound, while the baseline condition instructed participants to maintain their baseline position. Participants were shown a countdown, ending on a cue to perform the required action. Electrodes were fitted to a tongue depressor, which the participants pushed lightly onto the roof of the mouth during the TMS session. MEPs were evoked at 200 ms and 500 ms post-cue to assess the temporal characteristics of motor activation during imagery. We performed a repeated-measures analysis of variance (ANOVA) on the MEP data to establish the effects of condition (articulation observation, articulation imagery, baseline) and time point (200 ms & 500 ms) on the area under the curve of the MEPs. The results showed increased MEPs in the articulation imagery condition versus the other conditions, but no such difference for the articulation observation condition. No effect of time point and no interactions were found. This study aimed to examine whether we could see differences in tongue motor cortex activation across a task involving articulation observation, articulation imagery and a baseline condition. We compared these three conditions using MEPs, which show facilitation of the motor cortex of relevant effectors. The results show a primary effect of condition, indicating that articulation imagery facilitated MEPs more than the articulation observation or baseline conditions did. Our results suggest that motor imagery of speech involves motor plan activation in primary motor cortex. This has important implications for the understanding of the generation of forward models in speech processing, suggesting that motor simulation may take place not only in somatosensory areas but also in articulator-specific motor areas. Additionally, these results help clarify the origins not only of egocentric speech imagery but also of the imagined perception of speech from a non-self source, as in auditory hallucinations. D61 Grey and white matter structures associated with temporal aspects of speech: Are different genres supported by distinct brain networks?
Georgia Angelopoulou1, Dimitrios Kasselimis1,2, Michel Rijntjes3, Marco Reisert4, Dimitrios Tsolakopoulos1, Georgios Papageorgiou1, Anna-Maria Psaroba5, Christina Petrakou5, Maria Varkanitsa6, Georgios Velonakis7, Efstratios Karavasilis7, Dimitrios Kelekis7, Dionysios Goutsos8, Michael Petrides9, Cornelius Weiller10, Constantin Potagas1; 1Neuropsychology and Language Disorders Unit, 1st Neurology Department, Eginition Hospital, Faculty of Medicine, National and Kapodistrian University of Athens, 2Division of Psychiatry and Behavioral Sciences, School of Medicine, University of Crete, 3Department of Neurology and Neurophysiology, University Hospital Freiburg, 4Medical Physics, Department of Diagnostic Radiology, Faculty of Medicine, University of Freiburg, 5Panteion University of Athens, 6Massachusetts General Hospital - Harvard Medical School, 7Radiology and Medical Imaging Research Unit, University of Athens, 8Department of Linguistics, School of Philosophy, National and Kapodistrian University of Athens, 9Montreal Neurological Institute, Department of Neurology and Neurosurgery and Department of Psychology, McGill University, 10Department of Neurology, University Medical Center, University of Freiburg Introduction: During the last decades, accumulating evidence from different disciplines indicates that there are quantitative and qualitative differences between narratives derived from various elicitation tasks, in clinical populations as well as in healthy speakers (Armstrong, 2000; Efthymiopoulou et al., 2017). It has been argued that such discrepancies may reflect disparate cognitive demands and are possibly associated with discrete brain networks. This study aims to investigate grey and white matter correlates of silent pauses, a measure thought to reflect speech planning and word retrieval, in two distinct narrative tasks, i.e., a picture description and a personal event narration, in healthy individuals. Methods: Sixty healthy participants, monolingual Greek speakers (27 males), 19-65 years old, with 6-24 years of education and no history of neurological or psychiatric disorders, participated in the study. Narration tasks consisted of the Cookie Theft picture description and a personal medical event narration. Speech samples were recorded and then transcribed, and silent pauses were annotated with ELAN software. Cortical surfaces were reconstructed from the 3D T1-weighted images using the automated pipeline of FreeSurfer 6.0.0 (http://surfer.nmr.mgh.harvard.edu/). We ran separate whole-brain general linear models for pause frequency and duration for each narration task and three brain metrics (surface area, cortical thickness, grey matter volume). Monte Carlo simulations were used to correct all vertex-wise results at an individual vertex level of p < 0.05 (Hagler, Saygin, & Sereno, 2006). A 30-direction DTI protocol was also acquired, and white matter fibers were reconstructed using the global tractography approach implemented in DTI&Fibertools (Reisert et al., 2011). Results: Picture description: Whole-brain GLMs revealed an inverse association between pause duration and surface area in two clusters (pars opercularis, p = 0.0002, and superior frontal gyrus, p = 0.0026). Pause duration was also negatively correlated with arcuate fasciculus mean FA (r = -0.348, p = 0.009) and positively correlated with arcuate fasciculus volume (r = 0.361, p = 0.006), while no significant correlation appeared for pause frequency.
In all analyses, age, years of education and total duration of narratives were used as nuisance variables. Personal story: Whole-brain GLMs revealed a weak positive association between pause frequency and surface area in one cluster (superior parietal lobule, p = 0.048). Pause duration was positively correlated with temporofrontal extreme capsule (tfEmC) mean FA (r = 0.418, p = 0.001). Additionally, pause frequency was negatively correlated with tfEmC volume (r = -0.342, p = 0.010) and positively correlated with tfEmC mean FA (r = 0.348, p = 0.009). The same nuisance variables were used in these analyses. Discussion: Our results clearly indicate a dissociation pattern regarding the grey and white matter structural correlates of pauses in the two narration genres. Pauses during picture description are associated with dorsal stream areas, while pauses during free narration are correlated with cortical and subcortical structures involved in the ventral stream (Saur et al., 2008). Overall, we argue that silent intervals during speech follow distinct patterns depending on task type, possibly reflecting different cognitive processing routes, which in turn may be supported by different brain networks. D62 Alpha Band ERD Associated with the Increased Use of Formulaic Language in a Naturalistic Production Task Seana Coulson1,2, Claudio Hartmann1, Jared Gordon1, Jacob Momsen1,2; 1University of California San Diego, 2San Diego State University Here we examine event-related spectral perturbations (ERSPs) associated with the emergence and use of formulaic language in a demanding speech production task. In this task, participants viewed videos of actors performing simple actions and were asked to verbally describe the clips as a sportscaster might do. Actions included standing up, sitting down, picking up a box, putting down a box, jumping, and walking, performed by a variety of different actors in six different manners. Although all videos were unique, we hypothesized that the structured nature of this video corpus would induce participants to make increasing use of speech formulas, that is, sequences of words or structures including slots for variable content. In view of previous evidence that the retrieval of well-integrated semantic information is associated with event-related desynchronization (ERD) in the alpha band of the electroencephalogram (EEG), we hypothesized that the use of speech formulas (retrieved and produced as integrated units) would be indexed by changes in the amplitude of induced alpha-band activity. Participants were 12 adults, 18-30 years of age, who were fluent English speakers and had no history of neurological or psychiatric disorders. As the study was designed to capture qualitative changes in speech over time and the corresponding shifts in participants’ ERSPs, we continuously recorded participants’ speech and EEG as they described the action in a total of 200 videos, divided into four blocks of 50 clips apiece. Two seconds after the onset of each clip, a tone sounded to signal participants to begin their narration, and the appearance of a fixation cross after the offset of the video signaled participants to stop. Linear mixed effects models suggested that the number of words per utterance (video clip) increased as a function of Block (F=11.25, p<0.0001), with coefficients of 0.7 for the second block, 1.5 for the third block, and 1.8 for the final block.
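A block-wise model of that form could be specified as below: a minimal sketch on simulated utterance counts, with Block as a categorical fixed effect and by-participant random intercepts (all variable names and simulation parameters are illustrative, not the authors' actual model).

```python
# Minimal sketch: words-per-utterance as a function of Block, with random
# intercepts per participant. Simulated counts stand in for the real corpus.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# Simulated corpus: 12 participants x 4 blocks x 50 clips, with utterance
# length drifting upward across blocks, as in the reported behavioral pattern.
rows = [{"participant": p, "block": b, "n_words": rng.poisson(8 + 0.6 * b)}
        for p in range(12) for b in range(1, 5) for _ in range(50)]
df = pd.DataFrame(rows)

model = smf.mixedlm("n_words ~ C(block)", data=df, groups=df["participant"]).fit()
print(model.summary())  # C(block)[T.2..4] terms parallel the reported per-block coefficients
```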
However, linguistic diversity, expressed as the percentage of unique words used by each participant, decreased overall (Block F=13.63, p<0.0001), with significantly negative coefficients emerging only in the third (-1.78) and fourth (-2.6) blocks. The behavioral data are thus consistent with our prediction that experience with the task would lead participants to increasingly utilize speech formulas. Oscillatory brain activity was examined by extracting EEG from 500 ms before the onset of the beep that prompted speech production (the baseline interval) until 3 seconds afterwards. Artifactual activity due to eye movements and overt speech production was minimized via adaptive mixture ICA and blind source separation with cross-correlation analysis (BSS-CCA), and remaining noisy trials were rejected by hand. Alpha band activity was measured between 7-11 Hz during the time period 700-1200 ms for analysis with linear mixed effects models. Relative to the pre-stimulus baseline, all measurements were negative, indicating ERD. Alpha band activity differed significantly as a function of Block (p<0.001), with a positive coefficient for the second block (0.2) and negative coefficients in the third (-0.30) and fourth (-0.38) blocks. Greater alpha band ERD during speech production thus paralleled the decrease in linguistic diversity due to formulaic language. Speech Motor Control D63 Failure to replicate the effects of transcranial Direct Current Stimulation (tDCS) on articulation of tongue twisters: a pre-registered study Charlotte Wiltshire1,2, Emily West1, Kate E. Watkins1,2; 1University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford Introduction: tDCS has been shown to modulate the cortex in a polarity-specific way. Local cortical excitability, measured by the size of transcranial magnetic stimulation (TMS)-induced motor-evoked potentials (MEPs), is up-regulated by anodal stimulation and down-regulated by cathodal stimulation (Nitsche and Paulus, 2000). When used in combination with a behavioural task, a single session of tDCS can also modulate performance (Stagg and Nitsche, 2011). Fiori et al. (2014) showed that a single session of 2-mA tDCS applied to the left inferior frontal gyrus (IFG) leads to modulated performance on a speech motor task (repetition of tongue twisters) during stimulation. Here, we aimed to replicate this finding and extend it by also measuring MEPs from the lip representation of the motor cortex. The study design and analysis plan were pre-registered on the Open Science Framework (https://osf.io/p84ys/). Method: Sixty right-handed, native English-speaking volunteers (age: M=22.3, SD=4.8) took part in a double-blind randomized sham-controlled study. Three gender-balanced groups received: 1) anodal tDCS to the left-hemisphere IFG/lip M1 and cathodal tDCS to the right-hemisphere homologue; 2) cathodal tDCS (the reverse montage); or 3) sham stimulation. In the active stimulation groups, a 1-mA current was applied via 5 x 7 cm saline-soaked electrodes for 13 minutes. In the sham group, the current was ramped up to 1 mA over the first 15 seconds and then turned off. The behavioural task was performed prior to, during, and 10 minutes after the stimulation. Participants heard and repeated 36 sentences with complex articulation (tongue twisters; TT) and 36 syllable- and word-form-matched simple sentences (SS).
The primary outcome of the study was the change in response duration (post- minus pre-stimulation), which was measured from the offset of the recorded sentence to the end of the utterance. MEPs were elicited using TMS over the lip representation in left primary motor cortex before and immediately after the tDCS. Here, the change (post- minus pre-stimulation) in peak-to-peak amplitude was the dependent measure. Results: There were no significant differences in response times for either TT or SS among the three stimulation groups (anodal vs. cathodal vs. sham). The magnitude of the reduction in response time was significantly greater for TT than for SS (p=.003). For the change in MEP size, neither the anodal nor the cathodal group differed from sham (all p>.15), and none were different from zero (no change). Conclusion: We failed to replicate previous findings that tDCS modulates performance on a tongue twister task (Fiori et al., 2014). There was greater learning for TT compared with SS, as expected, but there was no difference in learning among the three stimulation groups. Furthermore, the tDCS applied concurrently with the behavioural task had no measurable effect on motor excitability measured with MEPs from the lip. The failure to replicate the behavioural findings of Fiori et al. (2014) could be due to changes in the stimulation protocol. Nevertheless, our large sample size gave us 80% power to detect a medium-sized effect at p<.05 should one exist. Signed Language and Gesture D64 Neural correlates of American Sign Language production revealed by electrocorticography Jennifer Shum1, Lora Fanda1, Beenish Mahmood1, Daniel Friedman1, Patricia Dugan1, Werner K. Doyle1, Orrin Devinsky1, Adeen Flinker1; 1New York University School of Medicine The spatiotemporal dynamics underlying sign language production remain difficult to study, as the majority of the literature in this area uses techniques with limitations in either spatial or temporal resolution. Here we report a unique case of electrocorticography (ECoG) recordings obtained from a neurosurgical patient with intact hearing who is bilingual in English and American Sign Language (ASL). The patient suffered from pharmaco-resistant epilepsy requiring surgical implantation of electrodes in the left hemisphere to clinically identify seizure onset zones. We designed a battery of clinically relevant cognitive tasks to capture multiple modalities of language processing and production, mirroring the clinical paradigms employed during electrical stimulation mapping. The tasks involved picture naming, visual word reading, auditory word repetition, auditory naming, and auditory sentence completion, with the patient responding either in spoken English or in ASL. We focused our analyses on changes in high gamma activity, as this has previously been shown to be a robust marker of local cortical activity. Here we show activation maps during ASL and speech processing, identify regions with preferential activity during ASL versus speech production, and show the neural propagation map during ASL output. The patient also underwent electrical stimulation mapping, which revealed face and limb sensorimotor findings that matched with the preferentially active ASL and speech production regions, respectively.
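High gamma activity of the kind analyzed here is commonly estimated by band-pass filtering each channel and taking the Hilbert amplitude envelope. Below is a minimal sketch on a simulated channel; the band edges and sampling rate are illustrative assumptions, not the study's exact parameters.

```python
# Minimal sketch: estimate a high-gamma (~70-150 Hz) amplitude envelope from
# one ECoG channel via band-pass filtering plus the Hilbert transform.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)
signal = np.random.default_rng(3).normal(size=t.size)   # stand-in for raw ECoG

b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
high_gamma = filtfilt(b, a, signal)           # zero-phase band-pass
envelope = np.abs(hilbert(high_gamma))        # instantaneous amplitude

print(envelope.mean())                        # e.g., averaged per task epoch
```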
To our knowledge, this study is the most extensive investigation of the spatiotemporal differences between a spoken and a signed language utilizing direct cortical recordings in a hearing, ASL-bilingual patient, and it offers a unique window into the neural underpinnings of ASL and speech production.

Perception: Auditory

D65 Steady use of phonological detail from preschool to 2nd grade Anne Bauch1, Claudia Friedrich1, Ulrike Schild1; 1University of Tuebingen Previous research indicated that readers appear to process speech in more detail than illiterates. In this longitudinal study we investigated plasticity of implicit lexical representations in relation to growing reading proficiency in children from a developmental perspective. 31 children were invited at preschool, after the 1st grade, and after the 2nd grade of school to complete a spoken word identification task. We collected response time latencies and event-related potentials (ERPs) at all points of measurement. We aimed to evaluate facilitated processing of words (targets) which were preceded by identical syllables (primes, e.g. ki – kino, Engl. cinema) in comparison to prime-target combinations with phonological variation in the onset phoneme (e.g. gi – kino). Response times as well as ERPs revealed that priming was less effective when the prime diverged phonologically from the target. We found the same activation pattern across all age groups. Already at preschool age, children showed high sensitivity to a small phoneme mismatch, indexed by an early N100 effect. Enhanced reading and writing skills did not correspond to more detailed phonological processing in 1st and 2nd graders. Together these findings imply that phonological sensitivity at preschool age might relate to the development of precursor functions of reading, such as enhanced phonemic awareness.

D66 Neural sensitivity to speech distributional information underlies statistical learning Julie Schneider1, Yi-Lun Weng1, Violet Kozloff1, An Nguyen2, Zhenghan Qi1; 1The University of Delaware, 2Johns Hopkins University Human minds are apt to detect and extract statistical regularities from the environment (Saffran et al., 1996; Conway & Christiansen, 2005). The ability to rapidly learn frequencies, variabilities, and co-occurring information embedded in the input, known as statistical learning (SL), is foundational for various aspects of cognition, including language development (Newport & Aslin, 2004). Conditional and distributional statistical regularities are the two types of information encoded by learners in numerous SL tasks. Conditional statistics refer to how frequently adjacent or non-adjacent elements co-occur in the input, while distributional statistics refer to the bare frequency of occurrence of exemplars (Thiessen, 2017). The current study seeks to understand the neural processes underlying speech distributional SL and how they relate to behavioral measures of conditional SL. Using event-related potentials (ERPs), we measured participants' neural sensitivity to speech distributional statistics in a passive auditory oddball task (N = 27). We manipulated the type of deviant stimulus (syllable or voice), as well as the global probability (rare or frequent) and the local probability (preceded by a long or short sequence of standard stimuli) of deviant stimuli.
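As an illustration of this design, the sketch below constructs a hypothetical oddball sequence in which the average run of standards sets the global deviant probability and the length of the run preceding each deviant defines its local probability; all counts and run lengths are invented, not the authors' parameters:

# Hypothetical sketch of an oddball sequence with manipulated global
# probability (rare vs. frequent deviants) and local probability
# (long vs. short runs of standards before each deviant).
import random

def build_oddball(n_deviants, run_lengths, deviant="D", standard="S"):
    """Concatenate runs of standards, each terminated by a deviant."""
    seq = []
    for _ in range(n_deviants):
        seq.extend([standard] * random.choice(run_lengths))
        seq.append(deviant)
    return seq

random.seed(0)
# Global manipulation: rare (~10%) vs. frequent (~25%) deviants,
# approximated here by the average run length (illustrative values).
rare_block = build_oddball(n_deviants=20, run_lengths=[8, 9, 10])
frequent_block = build_oddball(n_deviants=20, run_lengths=[2, 3, 4])

# Local manipulation: tag each deviant by the length of the preceding run.
def tag_local(seq, short_max=3):
    tags, run = [], 0
    for item in seq:
        if item == "S":
            run += 1
        else:
            tags.append("short" if run <= short_max else "long")
            run = 0
    return tags

print(tag_local(frequent_block)[:5])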
Participants also completed a target detection task while being exposed to a continuous stream of speech stimuli containing conditional statistical information (i.e., triplet patterns; Saffran et al., 1996). The linear acceleration slope of participants' response times was computed as an index of conditional SL performance. A cluster-based mass univariate analysis between all deviants and standards resulted in two significant negativity effects: 22-180 msec (early window) at central electrodes and 324-500 msec (late window) at fronto-central electrodes. A repeated-measures ANOVA on the negativity amplitude in the late window yielded a significant main effect of deviant type (F(1,26) = 67.40, p = 0.02), with syllable deviants eliciting a greater negativity than voice deviants, and a significant interaction between local and global probability (F(1,26) = 82.74, p = .005). Post-hoc pairwise comparisons revealed a significant global probability effect (rare vs. frequent) when deviants were preceded by a short sequence of standards (t(53) = -2.27, p = .03), and a significant local probability effect (long vs. short distance) when deviants occurred frequently (t(53) = -1.95, p = .05). Individual participants' ERP effects were then extracted to correlate with their performance in the conditional SL task. A greater sensitivity to global probability was associated with steeper acceleration of response time and faster response time in the conditional SL task (RT slope, rs = .58, one-tail p = .003; RT mean, rs = .55, one-tail p = .006). Greater sensitivity to the syllable deviants (as opposed to the voice deviants) was moderately related to steeper acceleration of response time (RT slope, rs = .43, one-tail p = .03). These findings suggest that adults are sensitive to distributional statistical information embedded in both local and global contexts. The relationship between neural sensitivity to deviant type, local/global probability, and conditional SL behavior provides new evidence highlighting the important role of distributional statistical processing in learning conditional statistics embedded in speech.

D67 Neural indices of voice stream segregation in monolinguals and bilinguals Melissa Baker1, Clara Liberov2, Katherine Wang2, Yan Yu3, Valerie Shafer2; 1NSF REU Site Intersection of Linguistics, Language, and Culture, 2City University of New York Graduate Center, 3St. John's University The purpose of this study is to examine speech processing in the auditory context of competing background speech "noise" to elucidate how early Spanish-English bilingual experience modulates speech processing. We measured the Mismatch Negativity (MMN) and Late Negativity (LN) components of event-related potentials (ERPs). The MMN reflects discrimination of auditory or speech contrasts (Näätänen et al., 2007) and can be elicited at a fairly automatic level, but can be modulated by attention. The LN appears to reflect reorienting to the stimulus change. We hypothesized that bilinguals would monitor the auditory environment differently than American English monolinguals because of their different auditory experience and because studies indicate differences in performance between bilinguals and monolinguals on executive function tasks. Specifically, we predicted that bilinguals would show less "suppression" of a non-target speaker voice. ERPs to speech stimuli were recorded from 64 scalp sites in two conditions.
In the Passive condition, participants' attention was directed away from the speech (by watching a muted movie); the speech consisted of a female voice uttering /ɑpə/ as standard and /æpə/ as deviant, mixed with a male voice uttering /epə/ as standard and /ɑpə/ as deviant. In the Attend condition, participants were required to focus on the female voice /æpə/ by counting the deviants and to ignore the male voice. Preliminary results with six monolinguals and four bilinguals revealed that attention enhanced neural discrimination of the target /æpə/ for all participants, as expected. For the non-target deviant (male voice /ɑpə/) in the Attend condition, the MMN and LN were 60% smaller for monolingual listeners. In contrast, the bilingual listeners showed no reduction in amplitude. These findings suggest that bilingual experience leads to differences in monitoring speech information in the auditory environment. Further manipulations will be necessary to determine whether this pattern is related to differences in how monolingual and bilingual listeners inhibit interfering information in the Attend condition, or rather to differences in how monolingual and bilingual participants attend to speech in the classic "passive" condition, where they are instructed to ignore the speech and watch a movie.

D68 Effects of structural complexity on sentence comprehension: An fMRI study of late second language learners Kaoru Koyanagi1, Hyeonjeong Jeong2, Fuyuki Mine1, Yoko Mukoyama3, Hiroshi Ishinabe2,4, Haining Cui2, Kiyo Okamoto2, Ryuta Kawashima2, Motoaki Sugiura2; 1Sophia University, 2Tohoku University, 3Musashino University, 4Higashiosaka Junior College Previous neuroimaging studies on first languages (L1) have demonstrated that processing complex sentences (e.g., scrambled word order; O-S-V) imposes heavier cognitive demands for linking syntactic and semantic information than processing their canonical (S-O-V) counterparts (Kim et al., 2009; Makuuchi & Friederici, 2013). These cognitive demands are handled by the core language systems, such as the left inferior frontal gyrus (LIFG) and the posterior temporal areas. However, this may not be true for L2 processing, as it requires additional cognitive resources due to various factors, such as age of acquisition and L2 proficiency level. The current fMRI study first attempted to investigate the effect of structural complexity (i.e., scrambled vs. canonical sentences) on the brain during L2 sentence comprehension by comparison with L1 comprehension. The study then attempted to identify the effect of L2 proficiency level on the underlying brain mechanism. Participants were 33 healthy right-handed native Chinese speakers and 20 native Japanese speakers. The Chinese speakers had learned Japanese as an L2 (mean age 24.21, 20 females); their L2 proficiency was tested with the Tsukuba Test-Battery of Japanese. The Japanese speakers were undergraduate and graduate students at a university (mean age 21.95, 10 females). Participants were asked to perform a semantic-plausibility judgment task with auditorily presented Japanese sentences during fMRI scanning. A total of 168 Japanese sentences were created, along with 56 semantically implausible sentences as a filler condition. After dividing the sentences into two sets for canonical and scrambled word order, we counterbalanced the sets across participants. We modeled four regressors of Canonical, Scramble, Filler, and Error (incorrect responses) for each participant.
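A first-level model of this kind is conventionally built by convolving one boxcar per condition with a canonical HRF; a minimal sketch with nilearn, in which the TR, scan count, onsets, and durations are invented for illustration:

# Hypothetical sketch: building a first-level design matrix with one
# regressor per condition (Canonical, Scramble, Filler, Error), as in
# a standard SPM-style GLM. Timing values are invented.
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr = 2.0                                   # assumed repetition time (s)
frame_times = np.arange(200) * tr          # 200 scans (assumed)

events = pd.DataFrame({
    "trial_type": ["Canonical", "Scramble", "Filler", "Error"] * 10,
    "onset": np.sort(np.random.default_rng(0).uniform(0, 380, 40)),
    "duration": 4.0,                       # assumed sentence duration (s)
})

# Each condition becomes a boxcar convolved with a canonical HRF.
design = make_first_level_design_matrix(frame_times, events, hrf_model="spm")
print(design.columns.tolist())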
For second-level analyses, we tested the 2 (Group: L2, L1) x 2 (sentence complexity: Canonical, Scramble) factors and their interaction with the flexible two-way ANOVA implemented in SPM12 (corrected to p<0.05 by cluster size). Furthermore, to examine the effects of L2 proficiency level on the brain during complex sentence comprehension, a correlation analysis was conducted on the contrast [Scramble>Canonical] for the L2 group with the proficiency scores of each participant at the whole-brain level. First, as a main effect of group, higher activation was observed in the left middle frontal gyrus and left caudate for the L2 than the L1 group, whereas semantic areas such as the left middle temporal gyrus (LMTG) and the ventral part of the IFG were involved much more heavily for the L1 than the L2 group. Second, the main effect of complexity [Scramble > Canonical] was observed in the left middle frontal gyrus (LMFG), LIFG, and supplementary motor area. Third, there was no significant interaction between group and complexity. However, a significant positive correlation between activation in the L2 [Scramble>Canonical] contrast and proficiency levels was found in the LMFG, left hippocampus, and LMTG (p<0.001, uncorrected). Taken together, these findings suggest that L2 learners rely on heavier cognitive control and working memory demands during L2 sentence comprehension than L1 speakers do. As L2 proficiency increases, L2 learners may more efficiently integrate working memory and semantic areas in order to comprehend the meanings of sentences with complex structures.

Prosody

D69 Is developmental dyslexia children's prosody processing modulated by syntactic structure? An ERP study on Chinese developmental dyslexia children Jiexin Gu1,2, Qiu Meng1, Yamei Wang1, Yiming Yang1,2; 1Jiangsu Normal University, Xuzhou, 2Collaborative Innovation Center for Language Ability, Xuzhou Previous work has shown that a deficit occurs during prosody processing in children with developmental dyslexia (DDC). But whether or not the DDC's prosody processing is modulated by syntactic structure remains unclear. Therefore, we investigated the influence of syntax (normal vs. violation) on Chinese DDC's prosody (normal vs. violation) processing through event-related potentials (ERPs). In the present ERP study, thirteen 9- to 12-year-old Chinese DDC and thirteen typically developing children (TDC) were instructed to perform a semantic decision task on sentences they listened to. We found a different and earlier influence of syntax on the Chinese DDC's prosody processing (300-600ms) compared to the syntactic influence on the Chinese TDC's prosody processing (600-900ms). Specifically, for the DDC, only when the syntax was violated did sentences with violated prosody elicit a more negative anterior negativity (AN, 300-600ms) than sentences with normal prosody; for the TDC, only when the syntax was normal did sentences with violated prosody elicit a more positive P800 (600-900ms) than sentences with normal prosody at electrodes over the right hemisphere. Furthermore, unlike the TDC, the DDC showed no main effect of prosody (a more negative AN) in the 300-600ms time window. In addition, and later than in the TDC, sentences with violated prosody elicited a more positive P800 (900-1200ms) in the DDC relative to sentences with normal prosody.
The results showed that syntactic structure does modulate prosodic processing for both the DDC and the TDC, but the pattern is distinct. This possibly indicates that the two groups assign different priorities to syntax and prosody during sentence processing: the DDC prioritize syntax, whereas the TDC prioritize prosody. Moreover, the DDC's phonological deficit affected Chinese prosodic construction processes, as reflected by the absence of a prosodic effect on the AN (300-600ms) for the DDC relative to the TDC.

Reading

D70 Shared Cortical Activation for Children's Phonological Awareness in Signed and Spoken Language: A Functional Near Infrared Spectroscopy Study Diana Andriola1, Clifton Langdon1; 1Gallaudet University Introduction. Phonological awareness (PA) is the ability to identify and manipulate the phonological structure of words, and decades of research indicate that PA is highly correlated with reading outcomes in children. This correlation is interpreted as arising from the mapping of phonological segments to orthographic representations; thus, PA skill in a signed language without its own orthographic system should not support learning to read in a distinct spoken language (e.g., American Sign Language PA would not be predictive of English reading outcomes). Results from recent behavioral studies challenge this view, finding that PA skill in a signed language is correlated with reading ability (Corina et al., 2014; McQuarrie & Abbott, 2013; Holmer et al., 2016). This raises the question: Does a common set of neural mechanisms underlie this behavioral correlation across signed and spoken languages? Here we use functional near-infrared spectroscopy (fNIRS) to examine cortical activation during PA tasks in deaf children. We test the hypothesis that signed and spoken language PA are supported by a common mechanism. If this is true, we predict 1) correlations between reading and American Sign Language (ASL) PA task performance will be similar to correlations between reading and English PA task performance, and 2) similar patterns of cortical activation during ASL PA and English PA tasks in regions associated with phonological awareness processing (e.g., left inferior frontal gyrus) (Burton, 2001; Katzir, Misra, & Poldrack, 2005; MacSweeney et al., 2008). Methods. Deaf children who are fluent in ASL and enrolled in K-6th grades (n = 17; mean age = 9.14, SD = 2.0) participated in this study. Participants were administered a battery of reading comprehension, language, and cognitive assessments. During fNIRS scanning, children completed two novel picture-based PA decision tasks designed to elicit PA processing for ASL and English. The tasks required the children to make phonological similarity judgments about the signs or words corresponding to pairs of pictures (i.e., Do the ASL signs for the pictures have a similar location? Do the English words for the pictures rhyme?). Results. Reading comprehension showed similarly strong positive correlations with ASL PA and English PA. We also found positive correlations between ASL proficiency and English PA and between ASL proficiency and reading comprehension. A conjunction analysis of the fNIRS data reveals shared activation for ASL PA and English PA in left frontal regions. Conclusion.
This study is the first to investigate the relationship between PA and reading development at both the brain and behavioral levels, and it finds that both signed and spoken language PA are supported by a shared neural mechanism. These findings indicate that early sign language experience supports the development of PA for both signed and spoken language, as well as early reading skills, in deaf children. Thus, it is necessary to revisit classic views of the necessity of sound-to-print mapping mechanisms for reading success. Instead, it may be necessary to consider a broader-level connection between PA skills and reading outcomes.

D71 Changes in resting-state functional brain connectivity and reading ability following reading intervention Alexandra Cross1, Christine L. Stager2, Karen A. Steinbach3, Maureen W. Lovett3, Jan C. Frijters4, Lisa M.D. Archibald1, Marc F. Joanisse1; 1University of Western Ontario, 2Thames Valley District School Board, 3The Hospital for Sick Children, 4Brock University Past studies of adults and children have demonstrated changes in the structure and function of the brain following reading intervention. However, neuroimaging studies examining effects of reading intervention have relied on explicit reading tasks, making it difficult to determine whether brain differences are due to differences in task-based performance or in the underlying neural organization supporting reading. Here we use resting-state fMRI to measure inter-regional correlations of spontaneous fluctuations in neural activity in children with reading disability (dyslexia). Using a longitudinal design, we examine the significant variability in the degree to which poor readers benefit from explicit reading intervention. This permits us to better understand the neural correlates of response to intervention. Participants were children aged 8 to 12 years who had been identified with reading disability and were enrolled in a 110-hour small-group intervention for struggling readers (Empower Reading). The intervention has previously been shown to result in significant and generalizable gains in decoding, word recognition, reading accuracy, reading rate, and reading comprehension. Prior to beginning the reading program, children completed standardized and laboratory-developed measures of reading subskills. They next participated in two resting-state fMRI sessions, spaced nine months apart and corresponding to starting and completing the Empower program. The behavioural measures of reading were also administered following the Empower program. Analyses examined how changes in connectivity within the brain's functional reading network were related to behavioural changes in reading ability pre- and post-intervention. Changes in behavioural reading ability were positively associated with increased resting-state functional connectivity within both left hemisphere and right hemisphere nodes making up the classical reading network. Interestingly, changes in reading ability were negatively associated with changes in functional connectivity between hemispheres. Specifically, this included changes in connectivity between the right inferior frontal gyrus and left hemisphere areas including the left inferior frontal gyrus, left intraparietal sulcus, and left thalamus, as well as changes in connectivity between the right intraparietal sulcus and the left thalamus, and between the right superior temporal gyrus and the left intraparietal sulcus.
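The logic of relating connectivity change to reading gains can be illustrated with a small simulation: Fisher-z-transformed seed-to-seed correlations are computed per session, and their pre-to-post change is correlated with behavioural change. All values below are simulated stand-ins, not the study's data:

# Hypothetical sketch: correlate pre-to-post change in seed-to-seed
# resting-state connectivity with change in reading scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_children, n_tr = 30, 150

def connectivity(ts_a, ts_b):
    """Fisher-z-transformed correlation between two ROI time series."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return np.arctanh(r)

# Simulated ROI time series for two sessions (post minus pre) per child.
conn_change = np.array([
    connectivity(rng.standard_normal(n_tr), rng.standard_normal(n_tr))
    - connectivity(rng.standard_normal(n_tr), rng.standard_normal(n_tr))
    for _ in range(n_children)
])
reading_gain = rng.standard_normal(n_children)  # stand-in behavioural change

r, p = pearsonr(conn_change, reading_gain)
print(f"r = {r:.2f}, p = {p:.3f}")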
These results confirm that intrinsic functional networks of the brain are changed as a result of reading intervention, and they suggest that growth in reading skills is related to both increased functional connectivity within the left hemisphere and reduced interhemispheric connectivity. We also comment on how connectivity patterns prior to intervention might be used to identify individual differences in subsequent response to intervention.

D72 Print related activation in left superior temporal gyrus predicts future reading outcomes in Chinese beginning readers Tian Hong1,2, Lan Shuai1, Kaja K. Jasińska1, Stephen J. Frost1, Kenneth R. Pugh1,3, Hua Shu2; 1Haskins Laboratories, Yale University, 2State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, 3University of Connecticut, Department of Psychological Sciences Reading acquisition is a process of building mappings between printed symbols and spoken sounds. However, there has been debate about the cross-language universality of the role played by print-speech mapping in reading development. In this longitudinal study, we followed a group of young Chinese children for three years, from a pre-reading stage (kindergarten, Time 1) to a beginning reading stage (Grade 2, Time 3). During this period, we measured how the children (7-year-olds) processed print and speech using functional near-infrared spectroscopy (fNIRS) at the end of the first semester of Grade 1. We found that the activation in left superior temporal gyrus (STG) induced by print stimuli had a positive correlation with later reading outcomes (Time 3), whereas no significant correlation between speech-related activation and later literacy skill was observed. In addition, the print-related neural activation in the left STG was also correlated with early phonological awareness (PA) scores in kindergarten (Time 1). In this study of early reading development in Chinese, we found that the print-related activation in left STG mediated the effect of PA at kindergarten on literacy skill two years later. These findings point to the centrality of automatic print-speech mapping in reading acquisition at the beginning reading stage, even in a language with an opaque orthography.

D73 Resolving language processing from decision making: a multimodal imaging approach Mia Liljeström1,2, Annika Hulten1,2, Jan Kujala1,3, Riitta Salmelin1,2; 1Aalto University, 2Aalto NeuroImaging, 3University of Jyväskylä Language use is intimately connected to cognitive systems such as attention, decision making, and memory. Yet how language-specific vs. domain-general processes contribute to performing language tasks is still an open question in studying the organization of language. We examined how task (semantic vs. perceptual decisions on the same written words) and choice difficulty affect the neural response by utilizing the complementary views provided by MEG and fMRI. To disentangle processes related to the underlying decision making, we modeled the data as a drift diffusion process. 14 healthy volunteers participated in three sessions, performing the tasks (300 trials per task) either overtly (reaction time measurements) or covertly (MEG and fMRI measurements). The task was either a semantic (size) judgment task (e.g., "Is this item smaller or larger than a rubber boot?") or a perceptual (color) judgment task ("Is this font written in blue or green?").
Within each category, the choice difficulty was varied (small/medium/large size; blue/turquoise/green color). A hierarchical Bayesian drift diffusion model of decision making was used to estimate the non-decision time and the drift rate for the two tasks and the different difficulty levels (HDDM; Wiecki et al., 2013, Front. Neuroinform. 7:14). MEG evoked responses were analyzed using cortically constrained minimum norm estimates in MNE-Python. fMRI data analysis was performed in SPM12. An effect of choice difficulty was determined using a parametric model with reaction times for each item as a regressor. Reaction times for the semantic decision task were significantly slower (951 ± 286 ms) than for the perceptual decision task (727 ± 270 ms; one-way ANOVA on ranks, p<0.001). Drift-diffusion modeling of semantic vs. perceptual decision making revealed a distinction in non-decision time between the two tasks, suggesting that the slower reaction times for semantic vs. perceptual decisions are due to additional processing stages in extracting semantic information to inform choices regarding item size. In addition, significantly slower drift rates as a function of choice difficulty indicated that the slower reaction times for difficult choices were linked to the decision process. fMRI results (FDR, p<0.05) showed that attention to words and performing the semantic judgment task increased activation within the left lateral prefrontal cortex, inferior frontal gyrus, and inferior occipito-temporal cortex, and the bilateral ACC. Attention to colors and perceptual judgment increased activation in the ventromedial prefrontal cortex and the precuneus bilaterally. Choice difficulty modulated activation within the left lateral prefrontal cortex and bilateral ACC in the semantic judgment task, and in the ventromedial prefrontal cortex, precuneus, and posterior parietal cortices in the perceptual judgment task. The results suggest that the two judgment tasks draw partially on different neural substrates underlying decision making. MEG showed the time course of the decision process, with early activation of the visual cortex followed by activation in the superior/middle temporal gyrus and middle/inferior frontal cortex. The results highlighted processing within the left middle temporal cortex around 400 ms, similar in both tasks, possibly reflecting highly automatized lexical processing irrespective of task demands.

D74 Phonological Processing During Implicit Task in Chinese Deaf Readers: An ERP Investigation Jian'e Bai1,2,3, Jing Wang1,2,3, Yiming Yang1,2,3,4; 1School of Linguistic Sciences and Arts, Jiangsu Normal University, 2Jiangsu Collaborative Innovation Center for Language Ability, 3Jiangsu Key Laboratory of Language and Cognitive Neuroscience, 4Institute of Linguistic Science, Jiangsu Normal University Previous studies showed that deaf people have difficulties in reading alphabetic scripts, and many researchers have attributed these reading difficulties to difficulties in phonological processing, which plays an important role in alphabetic reading. However, whether Chinese deaf readers can process the phonological information of logographic Chinese characters is still unknown. In this study, we recruited Chinese deaf readers and hearing controls to investigate this question by comparing the results of the two groups.
Fourteen deaf readers with a high school or higher educational level and fourteen hearing readers matched to them on non-verbal intelligence, educational level, and Chinese literacy took part in our study. We conducted a priming experiment with two conditions defined by the phonological relation between prime and target: in the experimental condition, prime and target were homophones; in the control condition, prime and target were characters with different phonology. Subjects were asked to perform a lexical decision task and to respond via keyboard. We recorded behavioral and event-related potential (ERP) data for both groups. The behavioral data showed a significant phonological priming effect for both groups: response times to targets in the homophone condition were significantly shorter than in the control condition for hearing people and also for deaf people. The ERP data showed that N250 and N400 components were elicited in both deaf and hearing readers. We compared the ERP waveforms elicited by targets in the homophone condition with those in the control condition for each group. In the 200-250ms time window, no significant N250 effect was observed for deaf readers, while significant effects were observed at the electrodes in the midline of the left and right hemispheres for hearing readers. In the 250-300ms time window, significant N250 effects were observed at the central parietal region and the parietal region of the right hemisphere for deaf readers, and marginally significant N250 effects were observed at electrodes in the parietal region of the right hemisphere for hearing readers. In the 300-400ms time window, no N400 effect was found for either group. The results showed that deaf readers could process phonological information automatically during an implicit task, mainly reflected in the N250 effect in the ERP waveforms. Based on the behavioral and ERP results, we conclude that deaf Chinese readers with a high school or higher educational level can process the phonological information in Chinese characters automatically. Compared with hearing readers, phonological processing in deaf readers started later and was reflected in ERP waves elicited over the middle parietal area and the parietal area of the right hemisphere. These results imply that the mechanisms of phonological processing in deaf and hearing Chinese readers may be different. The work was supported by grants from the National Natural Science Foundation of China (31400866), the Natural Science Foundation of Jiangsu Province (14KJB180006), the Jiangsu Provincial Foundation for Philosophy and Social Sciences (12YYC015), the Foundation for Doctors in Jiangsu Normal University (12XLR009), and the Jiangsu Province Postdoctoral Foundation. Corresponding authors: Jiane Bai (baije9972@gmail.com) and Yiming Yang (yangym@jsnu.edu.cn). First co-authors: Jiane Bai and Jing Wang.

D76 In search for intergenerational similarities in reading-related white matter tracts Maaike Vandermosten1,2, Cheng Wang1, Klara Schevenels2, Maria Economou2, Fumiko Hoeft1,3; 1brainLENS, Department of Psychiatry, UCSF, 2Experimental ORL, Department of Neuroscience, KU Leuven, 3Department of Psychological Sciences, University of Connecticut Parents have large genetic and environmental influences on offspring's cognition, behavior, and brain.
These intergenerational effects are observed in the domain of literacy, including relationships between parents' and offspring's reading and phonological skills (van Bergen et al., 2014). In addition, at the neural level, intergenerational relationships have been observed between parental reading and a child's early white matter (Vandermosten et al., 2017) and grey matter (Black et al., 2012) brain development. Yet no study has thus far directly examined the intergenerational similarities between reading-related brain circuits. Given that a complex task such as reading relies on widespread brain regions, we focus on the intergenerational transfer of reading-related white matter tracts, which can capture this connectivity. More specifically, we examined parent-offspring associations in the reading white matter network by means of diffusion MRI, focusing on two major reading-related white matter tracts: the dorsally running left Arcuate Fasciculus (AF) and the ventrally running left Inferior Fronto-Occipital Fasciculus (IFOF). In addition, we investigated their right hemispheric homologues, since lateralization for reading does not seem to be clearly established in early readers. In a total of 35 healthy families, consisting of parents and their biological offspring, we analyzed the diffusion MRI (dMRI) data by means of diffusion tensor deterministic fiber tracking. We found positive associations of fractional anisotropy in the ventral reading circuit between children and fathers (left IFOF: r=.561, p=.009; right IFOF: r=.458, p=.038) and between children and mothers (left IFOF: r=.435, p=.039; right IFOF: r=.560, p=.006). In contrast, no significant intergenerational correlations were observed for the dorsal reading circuit. In sum, these results show that the ventral reading tracts of offspring show similarities with both parents, whereas the dorsal reading tracts provide no evidence of intergenerational transmission. Stronger intergenerational correlations in the ventral relative to the dorsal reading circuit might imply that the impact from parents on offspring's ventral tracts is more genetically mediated, whereas it is more driven by environmental factors for the dorsal tracts. In a next step, we will investigate how these intergenerational relationships in white matter are mediated by environmental and/or reading-related cognitive measures. In the long term, this type of research can provide insights into gene expression in the brain and clinical reading-related outcomes.

D77 Unfolding the temporal dynamics of phonological interference (PI) effects in meaning access with ERPs Lin Zhou1,2,3, Charles Perfetti1,2,3; 1Department of Psychology, University of Pittsburgh, 2Learning Research and Development Center, University of Pittsburgh, 3Center for the Neural Basis of Cognition Phonological interference (PI) effects during meaning access are one line of evidence that automatic phonological activation occurs during meaning access. Such effects have been reported in Chinese reading (e.g., Perfetti and Zhang, 1995): in judging whether two sequentially presented Chinese characters were related in meaning, participants required more time and were less accurate when the character pairs were homophonic (but not related in meaning). However, the locus of these PI effects has remained unclear: some research concludes that PI effects arise from phonological processing at the lexical level, whereas other research shows that they arise at the sub-lexical level.
In an attempt to decide between these alternatives, we carried out an ERP study to examine the temporal dynamics of PI effects by orthogonally manipulating the second characters' key orthography-to-phonology mapping properties, which were defined at the lexical (pronunciation consistency) and sub-lexical (pronunciation regularity) levels. The results showed that PI effects occurred in (1) the N170, which was modulated by both the consistency and the regularity of the characters, (2) the P200 and N400, which were modulated by consistency only, and (3) the P600, which was modulated by neither consistency nor regularity. These results suggest that PI effects arise from phonological processing at both the lexical and sub-lexical levels. We interpret these results within a two-stage framework of meaning judgment. The first stage begins with lexical access of the second character, where its consistency and regularity play a role (indexed by their impacts on the N170, P200, and N400). The second stage is a meaning comparison process that requires reactivation of the first character, where neither the second character's consistency nor its regularity matters (indexed by their null impact on the P600).

Writing and Spelling

D78 Left Perisylvian Cortex Damage Selectively Impairs Pseudoword Spelling Brenda Rapp1, Jennifer Shea1, Gianni Petrozzino1, Robert Wiley1, Jeremy J. Purcell1; 1Johns Hopkins University Introduction. Spelling involves retrieving orthographic knowledge from orthographic long-term memory (OLTM) or computing phoneme-to-grapheme correspondences (PGC), and then processing the letter information via orthographic working memory (OWM) prior to producing a written or oral spelling response. Although recent work has used lesion symptom mapping to identify regions associated with OLTM in the inferior frontal gyrus and ventral occipitotemporal cortex and with OWM in the left parietal cortex (Rapp et al., 2015), this work did not test for regions uniquely associated with PGC. In this study, we address this knowledge gap by using a recently refined multivariate lesion-symptom mapping technique (DeMarco and Turkeltaub, 2018) to examine the brain basis of PGC while simultaneously accounting for impairments in either OLTM or OWM (or both). Methods. Participants were 19 individuals with post-stroke, chronic dysgraphia (age range 45-80; 8 female). The JHU Dysgraphia Battery (Goodman and Caramazza, 1985) was used to measure pseudoword spelling, length effects (i.e., worse spelling for long vs. short words), and frequency effects (i.e., worse spelling for low- vs. high-frequency words). Linear mixed effects models were used to obtain beta estimates for the severity of impairment in pseudoword spelling, frequency effects, and length effects in each participant. These served as estimates of damage severity to the PGC, OWM, and OLTM systems, respectively. Although no participant had abnormal auditory word comprehension, to account for any variability in phonological input processing we also included measures of auditory comprehension (NNB; Thompson & Weintraub, 2014) and minimal pair pseudoword discrimination (PALPA1; Kay et al., 1992). Each participant had a T1-weighted MRI scan, and MRIcron was used to draw each lesion. Enantiomorphic normalization to standard MNI space was carried out using SPM12 (Nachev et al., 2008). To identify brain regions associated with PGC while accounting for OLTM and OWM deficits, we used support vector regression lesion-symptom mapping (SVR-LSM; DeMarco and Turkeltaub, 2018). This approach estimated the relationship between the lesion status of all voxels simultaneously and the independent variable of pseudoword spelling ability. Further, the analysis accounted for other variables such as frequency effect, length effect, lesion volume, spelling severity, age, NNB, and PALPA1.
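A schematic of the SVR-LSM idea (not the DeMarco and Turkeltaub implementation) is a support vector regression relating the lesion status of all voxels simultaneously to a behavioural score, with significance assessed by permuting the behavioural labels. Data and parameters below are invented:

# Hypothetical sketch of the SVR-LSM logic, on simulated data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patients, n_voxels = 19, 500
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))  # 1 = lesioned
pgc_score = rng.standard_normal(n_patients)                # behaviour

model = SVR(kernel="linear", C=1.0).fit(lesions, pgc_score)
observed_w = model.coef_.ravel()                           # voxel weights

# Permutation test: null distribution of weights under scrambled labels.
n_perm = 100
null_max = np.empty(n_perm)
for i in range(n_perm):
    perm = SVR(kernel="linear", C=1.0).fit(
        lesions, rng.permutation(pgc_score))
    null_max[i] = np.abs(perm.coef_).max()

# Voxels whose |weight| exceeds the permutation maximum at alpha = .05.
thresh = np.quantile(null_max, 0.95)
print((np.abs(observed_w) > thresh).sum(), "suprathreshold voxels")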
Label-scrambling permutation testing was used to evaluate chance (corrected threshold of 0.05). Results. Severity of impairment in PGC was associated with damage to left perisylvian regions including the pre/postcentral gyri, insula, and anterior superior temporal gyrus. Discussion. Using a multivariate lesion symptom mapping technique, we determined that portions of the left perisylvian cortex are associated with the severity of PGC impairment. These findings fit with previous literature reporting that impairments in PGC in spelling, along with phonological input deficits, were associated with damage to left perisylvian cortex (Henry et al., 2007). We extend this work by demonstrating that damage to the left perisylvian cortex selectively impaired PGC in spelling, independent of auditory comprehension deficits.

Speech Perception

D79 Neuromagnetic evoked responses to unattended speech as indices of language processing in young adults and in healthy aging Rasha Hyder1, Andreas Højlund1, Mads Jensen1, Karen Østergaard2, Yury Shtyrov1; 1Center of Functionally Integrative Neuroscience (CFIN), Aarhus University, 2Department of Neurology, Aarhus University Hospital (AUH) Assessing brain activity related to language comprehension is required in a range of situations (e.g., clinical or developmental assessment). Particularly in cases where the subjects' cooperation with instructions cannot be guaranteed (e.g., due to neurological conditions, disability, etc.), a protocol is needed that can evaluate the neurocognitive status of the language system independently of overt attention and behavioural tasks. To scrutinise the functioning of language neural circuits without relying on focussed attention and behavioural responses, we designed a novel paradigm which allows quantifying a range of neurolinguistic processes by recording the brain's automatic responses to different speech sounds with carefully manipulated linguistic properties. This procedure is carried out using magnetoencephalography (MEG) combined with individual MR images to guide the source reconstruction of the event-related brain responses. In the first experiment, this paradigm was tested in healthy young participants who were presented with a sequence of speech stimuli, which included meaningful words of different semantic categories, meaningless pseudowords, and (morpho)syntactically correct and incorrect forms, while they focused on watching a silent movie and ignored the auditory language input. Meaningful words included action verbs, abstract verbs, and concrete nouns. Syntactic word properties were manipulated using counterbalanced word-affix combinations expressing the definiteness of nouns as well as the past participle of verbs. All stimuli were tightly controlled acoustically and presented equiprobably and pseudorandomly in a short (~30 minute) sequence. Event-related responses were analysed using minimum-norm source estimates computed on individual MRI-based boundary element models.
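For readers unfamiliar with this pipeline, a minimum-norm source estimate can be sketched in MNE-Python along the following lines, here using the package's built-in sample dataset rather than the authors' individual recordings and MRI-based models:

# Hypothetical sketch: minimum-norm source estimation with MNE-Python,
# using the bundled sample dataset (downloaded on first use).
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()
meg_dir = data_path / "MEG" / "sample"

evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif",
                          condition="Left Auditory", baseline=(None, 0))
fwd = mne.read_forward_solution(meg_dir / "sample_audvis-meg-oct-6-fwd.fif")
cov = mne.read_cov(meg_dir / "sample_audvis-cov.fif")

# Build the inverse operator and apply it to the evoked response.
inv = make_inverse_operator(evoked.info, fwd, cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
print(stc.data.shape)  # sources x time points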
Permutation tests were employed to assess the significance of each linguistic contrast in an a priori defined set of language-related cortical areas of the left hemisphere. The results of this experiment validated the applicability of our proposed paradigm for an objective assessment of a range of language functions, including lexical access (visible as an enhanced response to words vs. pseudowords, reflecting automatic word memory trace activation in the brain), referential semantics (evident through a more frontal cortical distribution of responses to action vs. non-action words, likely due to semantically specific motor cortex involvement), and morphosyntax (asyntactic inflections led to an activity increase over syntactically correct forms, indicating grammatical/morphological processing). In our second experiment, this paradigm was applied to healthy elderly participants. The results from the elderly group revealed a range of effects of aging on different levels of linguistic processing: reduced, delayed, and topographically shifted lexico-semantic ERFs, and fully absent correlates of automatic syntactic parsing. In conclusion, the results from both healthy groups indicate that the new paradigm may be a subject-friendly and time-efficient tool to test multiple language comprehension processes in the brain in the absence of attention to the linguistic input or any stimulus-related tasks, which makes it potentially applicable for assessing the neurolinguistic status of individuals/patients who are unable to comply with an active behavioural assessment protocol. We will discuss implications of this approach for the study of neurolinguistic processing in healthy ageing and in neurological conditions.

Signed Language and Gesture

D80 Neural representations for spoken words are influenced by the iconicity of their sign translation equivalents in hearing, early sign-speech bilinguals Samuel Evans1,2, Cathy J Price3, Joern Diedrichsen4, Eva Gutierrez-Sigut2, Mairéad MacSweeney2; 1Psychology Department, University of Westminster, 2Institute of Cognitive Neuroscience, University College London, 3Wellcome Trust Centre for Neuroimaging, University College London, 4Brain and Mind Institute, University of Western Ontario How do representations in one language influence the structure of representations in another? Sign-speech bilinguals use languages that differ in their articulators and modality of expression. The languages also differ in many of their linguistic features; for example, in speech, words rarely sound like the meaning that they convey, whereas signs have greater potential to exploit visual iconicity. Despite these differences, there is evidence of significant co-dependence between speech and sign in sign-speech bilinguals, such that the properties of sign translation equivalents influence performance on tasks involving spoken or written words (Morford et al., 2001; Shook & Marian, 2012; Giezen & Emmorey, 2016). Here, we extend these findings by testing the hypothesis that experience of sign language changes the structure of neural representations of translation-equivalent spoken words in sign-speech bilinguals. We scanned seventeen right-handed participants in a 3T MRI scanner. All participants were hearing British Sign Language (BSL) users who learned BSL before 3 years of age and self-reported a high level of BSL proficiency.
In the scanner, participants took part in a semantic monitoring task whilst they attended to signs and equivalent spoken words produced by male and female language models. Outside the scanner, participants rated the iconicity of the signs (mean across participants = 3.98/7, min = 2.24, max = 5.44). Representational similarity analysis (RSA) was used to quantify the dissimilarity between neural patterns evoked by the same lexical items presented as spoken words and as signs. The observed speech-speech and sign-sign representational distances were correlated with two models of brain function: (1) an item-based dissimilarity model that predicts greater dissimilarity between different as compared to the same lexical items, and (2) an orthogonal, iconicity-based dissimilarity model generated by calculating the absolute difference between the mean iconicity ratings provided for each item. Note that the iconicity model did not correlate with a model expressing the semantic dissimilarity between items (r = -0.126, n = 17, p = 0.465). To identify regions of interest, we used a searchlight analysis to find speech- and sign-specific neural responses, defined as regions in which there were larger representational distances for sign relative to speech, and vice versa. Correcting for tests in five regions, the response in left V1-V3 (peak at [-6 -98 16]) showed a fit to both the item-based and iconicity-based models in the sign-sign distances, indicating a sensitivity to sign iconicity in the primary and secondary visual cortices. For speech, correcting for tests in four regions, the response in the left STG (peak at [-56 -8 2]) showed a significant fit to the item-based model and, crucially, also to the iconicity-based model in the speech-speech distances, indicating that neural patterns evoked by spoken words are influenced by the iconic properties of the equivalent signs. These findings suggest that the structure of neural representations for speech may be changed by long-term, early exposure to sign language. Further work comparing these findings to hearing non-signers, with a larger number of lexical items, is necessary to confirm this.

Speech Perception

D81 An electrophysiological marker of unexpected interruptions of natural speech flow Irina Anurova1, Aleksandra Dobrego2, Alena Konina2, Nina Mikusova2, Nitin Williams1, Anna Mauranen2, Satu Palva1; 1University of Helsinki, Neuroscience Center, 2University of Helsinki, Department of Languages As suggested by the theoretical model of Linear Unit Grammar (Sinclair & Mauranen, 2006), efficient comprehension of continuous speech involves segmentation of the speech flow into manageable chunks. Furthermore, it has recently been shown that people's choices regarding the locations of boundaries separating two consecutive chunks tend to converge with high probability (Vetchinnikova & Mauranen, 2017). In the present study, we recorded brain activity during two contrasting types of pauses, 'natural' and 'unnatural', inserted into auditory speech stimuli in order to test whether unexpected interruption of the speech flow affects natural speech processing. We conducted simultaneous EEG-MEG recordings in 21 healthy volunteers during the performance of a comprehension task. In a forced-choice (yes/no) paradigm, participants had to answer a comprehension question presented after each auditory speech stimulus.
The stimuli were 10-45-second extracts selected from natural speech events in a corpus of academic English, reproduced by a trained speaker who mimicked the original intonation patterns with high precision. We inserted 2-second silent gaps into the stimuli at predictable locations (natural boundaries) separating successive chunks, and at unpredictable locations (unnatural boundaries). Selection of natural boundaries was based on the results of a prior behavioral experiment conducted in a separate group of 53 subjects. Half of the selected natural boundaries had been marked in the behavioral experiment with high probability (by more than 75% of participants), and half with medium probability (around 50%). Unnatural boundaries were selected at locations where the probability of boundary markings did not exceed 5%. We found that unexpected interruption of the speech flow elicited both electric and magnetic counterparts of the emitted potential, a prominent negative peak with a latency of around 200 ms. The emitted potential was not observed during natural boundaries behaviorally marked with either medium or high probability. The present results suggest that the emitted potential may be considered a reliable marker of unexpected interruptions of natural speech flow.

D82 The use of speech temporal cues in phonetic processing: an electrophysiological study with infants and adults Monica Hedge1, Laurianne Cabrera1; 1CNRS-Université Paris Descartes The current project explores the interaction between auditory and speech perception abilities during early development. Before 10 months of age, infants are not yet attuned to the consonant contrasts of their native language, meaning that, as compared to adults, they are sensitive to certain non-native phonological contrasts. We know, however, that the auditory system continues to develop until late adolescence. How, then, are young infants able to detect such varied phonological contrasts? Psychoacoustic models suggest that the auditory system decomposes a complex speech signal into a series of narrowband signals modulated over time. According to these models, speech information is mainly conveyed by the temporal modulations at the output of cochlear filters. These modulations can be described at two different time scales: relatively fast frequency modulations (FM) and relatively slow amplitude modulations (AM). Speech analysis-synthesis tools called "vocoders" are used to generate a continuum of speech sounds with increasing/decreasing spectro-temporal complexity in order to assess the role of these modulations in speech perception. A myriad of studies have shown that adults are able to rely only on the slowest AM (< 8 Hz) of speech to discriminate syllables. Recent behavioural studies have shown that 6-month-old infants may require faster fluctuations of AM cues in order to discriminate consonants. These prior results suggest that infants weight fast AM cues more heavily than adults do for speech perception. Yet the neural underpinnings of how such modulation cue weighting changes over the course of development are still unknown. To tackle this question, we used an electroencephalography (EEG) paradigm to measure the Acoustic Change Complex (ACC) underlying auditory detection of native and non-native consonants in French-learning 6-month-old and native-French adult listeners.
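The vocoder logic referred to above can be sketched as follows: the signal is split into analysis bands, each band's slow amplitude envelope is extracted and re-imposed on band-limited noise, discarding FM. Band edges, filter orders, and the 8 Hz envelope cutoff below are illustrative assumptions, not the authors' parameters:

# Hypothetical sketch of a simple noise vocoder preserving slow AM only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.random.randn(t.size)          # stand-in for a recorded VCV

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def lowpass(x, cut):
    b, a = butter(4, cut / (fs / 2))
    return filtfilt(b, a, x)

edges = [100, 500, 1500, 4000, 7900]      # assumed analysis bands (Hz)
out = np.zeros_like(speech)
for lo, hi in zip(edges[:-1], edges[1:]):
    band = bandpass(speech, lo, hi)
    env = lowpass(np.abs(hilbert(band)), 8.0)   # slow AM only (< 8 Hz)
    carrier = bandpass(np.random.randn(t.size), lo, hi)
    out += env * carrier                  # envelope re-imposed on noise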
We used vocoders to process three Vowel-Consonant-Vowel (VCV) syllables: French voiced /aba/, French unvoiced unaspirated /apa/, and English aspirated /apʰa/. Three vocoder conditions were designed to: i) preserve the original FM and AM ("Intact" condition), ii) reduce FM and preserve the original AM ("Full AM" condition), and iii) reduce both FM and fast AM ("Slow AM" condition). We hypothesize that, with age, listeners rely more on the modulation cues important for their native language as exposure to native phonological contrasts increases and their auditory system develops. Preliminary analyses with 8 infants show different ACC patterns for the different VCVs (e.g., a later positive wave for "apa" compared to "aba"). Moreover, the vocoder manipulation seems to affect the ACC responses in more frontal regions (near F3, F4). Further analyses will be carried out on EEG data collected with 20 adults to investigate whether modulation cue weighting changes with age.

D83 The environment makes a difference: Disrupted processing of speech sounds within a natural auditory environment is reflected in left auditory cortical activity Hanna Renvall1, Sebastian Silfverberg1, Lauri Jahkola1, Riitta Salmelin1; 1Aalto University Introduction: Humans can automatically attend and react to behaviorally relevant perceptual features, such as speech, in our natural environment. The special nature of attended speech stimuli has been under intensive scrutiny in several recent neuroimaging studies. These studies have shown that higher-order auditory areas track the spectrotemporal features of attended speech signals (Kerlin et al., 2010; Ding and Simon, 2012; Mesgarani and Chang, 2012; Vander Ghinst et al., 2016) while suppressing those of unattended speech (Puvvada and Simon, 2017). Here we addressed whether speech-specific cortical processes are affected by simultaneously presented, unattended non-speech natural sounds. For this, we used superimposed speech and environmental sound excerpts. We hypothesized that the cortical activity to speech would be modified by the simultaneously presented environmental sounds, and that such modulation would depend on the semantic agreement between the attended sounds and the auditory surroundings. Methods: In our magnetoencephalography (MEG) study, 18 native Finnish-speaking subjects were presented with auditory "mini-scenes" that consisted of superimposed 4- to 6-word sentences and short fragments of auditory environments. The sentences were modified so that the last word of the sentence was, on the basis of extensive behavioral testing, either i) highly expected and semantically appropriate, ii) improbable but semantically appropriate, or iii) semantically inappropriate. The superimposed environmental sounds were selected to match the expected last word or not, and they consisted of, e.g., sounds of traffic, household, animal calls, and non-speech human sounds. The combined sounds were presented with a +10 dB speech-to-environmental sound intensity ratio. At this intensity ratio, both sound excerpts within the stimuli were clearly distinguishable. The subjects were instructed to attend to the sentences and respond with a finger lift when they heard a predefined target word at any position in a sentence (7% of the stimuli). Brain activity was recorded with a 306-channel neuromagnetometer (Vectorview, Neuromag Ltd).
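Superimposing two recordings at a fixed level difference, as in the +10 dB speech-to-environment ratio reported above, reduces to an RMS-based gain computation; a minimal sketch with simulated stand-in signals:

# Hypothetical sketch: mix a sentence and an environmental sound at a
# +10 dB speech-to-environment intensity ratio (the ratio reported in
# the abstract). Signals here are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(2)
speech = rng.standard_normal(48000)       # stand-in for a sentence
environment = rng.standard_normal(48000)  # stand-in for a scene excerpt

def rms(x):
    return np.sqrt(np.mean(x ** 2))

target_db = 10.0                          # speech level minus scene level
gain = rms(speech) / (rms(environment) * 10 ** (target_db / 20.0))
mixture = speech + gain * environment

# Check the achieved ratio.
print(20 * np.log10(rms(speech) / rms(gain * environment)))  # ~10.0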
Results: The inappropriate sentence-final words evoked a significantly stronger sustained response (> 400 ms) than expected final words, especially in the left temporal areas. At 550-700 ms after the final word onset, the size of this effect was influenced by whether the auditory surroundings matched the expected or the actual (inappropriate) final word: when the auditory environment matched the presented inappropriate sentence ending, the amplitude of the sustained MEG response was smaller than when the auditory environment agreed with the expected (but not presented) word (p = 0.001). Conclusions: The present results suggest strong top-down modulation from the natural, unattended auditory surroundings on the processing of attended speech sounds. Such processes are likely to play an important role in real-life-like auditory environments, and they appear to rely on the activation of the predominantly left auditory cortex at around 550-700 ms after stimulus onset.

D84 The Neurobiology of Speech Perception in Noise in Amateur Singers and Non-Singers Maxime Perron1,2, Valérie Brisson1,2, Émilie Belley1,2, Josée Vaillancourt1, Johanna-Pascale Roy1, Philip L. Jackson1,2, Pascale Tremblay1,2; 1Université Laval, Quebec City, 2CERVO Brain Research Centre, Quebec City INTRODUCTION. Compared to non-musicians, professional musicians show evidence of functional and structural neuroplasticity (Fauvel et al., 2014; Sluming et al., 2007) that has been associated with auditory and cognitive benefits (Alain et al., 2014; Parbery-Clark et al., 2011), including better sensitivity to speech in noise. In contrast to professional musicians, amateur musicians, such as choral singers, have been less extensively studied. In addition to being cognitively demanding, singing relies on language and speech functions and is likely to be associated with plasticity within and beyond the neural system supporting language. The aim of this study was to investigate speech perception in noise in younger and older adult choral singers and non-singers in relation to brain structure. METHODS. 41 choral singers and 41 non-singers aged 20 to 87 years underwent cognitive (MOCA) and hearing (pure tone thresholds) evaluations and completed an auditory syllable discrimination task under three conditions of babble noise (no noise, high intelligibility, low intelligibility). For each condition, sensitivity (d'), response bias (c), reaction time (RT), and overall accuracy were calculated. MPRAGE (1 mm³) sequences were acquired on a Philips 3.0T MRI scanner and processed with Freesurfer 6. Volume, thickness, and surface area were calculated for each region and corrected for head size (region value/total hemisphere value). ANALYSES. Preliminary analyses focused on a subgroup of 10 singers and 10 non-singers who were matched for age (t = -.339, p = .739), sex (χ² = .202, p = .653), and MOCA score (t = -.290, p = .775). A series of moderation analyses was conducted (one for each dependent variable: d', c, RT, and overall accuracy) with group (singers, non-singers) as the independent variable, the ratio of gray matter measures as the moderator, and hearing as a covariate. FINDINGS. Main effects of group were found for overall accuracy (β = -111.24, t = -2.88, p = .012) and d' (β = -2.20, t = -2.82, p = .013) at low intelligibility, with better performance for the singers.
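The sensitivity (d') and response bias (c) measures named in the methods above are standard signal-detection quantities; a minimal sketch of their computation from hit and false-alarm counts (the counts below are invented):

# Hypothetical sketch: d' and criterion c for one discrimination condition.
from scipy.stats import norm

hits, misses = 45, 5          # invented counts
fas, crs = 10, 40

hit_rate = hits / (hits + misses)
fa_rate = fas / (fas + crs)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")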
Interactions between group and thickness were found at high and low intelligibility for overall accuracy (high: β = 68.12, t = 2.18, p = .046; low: β = 97.61, t = 2.93, p = .010) and d’ (high: β = 1.35, t = 2.16, p = .048; low: β = 1.93, t = 2.86, p = .01) in the left premotor (LPM) cortex. These interactions were characterized by a positive relationship between thickness and accuracy for singers, and a negative relationship (low) or no relationship (high) between thickness and accuracy for non-singers. DISCUSSION. Our results suggest that amateur singing is associated with better speech perception in noise due to structural changes within the neural speech system, in particular within the LPM cortex. It has been proposed that the LPM cortex contains sublexical speech representations (Guenther & Vladusich, 2012). Amateur singers may have access to richer top-down information from the LPM cortex, which could facilitate speech perception in degraded listening conditions, supporting modern versions of the motor theory of speech perception (for a recent review, see e.g. McGettigan & Tremblay, 2018).

D85 Similarities and differences in the cortical processing of melodies in speech and music Mathias Scharinger1,2, Valentin Wagner2, Christine A. Knoop2, Daniela van Hinsberg2, Winfried Menninghaus2; 1Philipps University Marburg, 2Max Planck Institute for Empirical Aesthetics Recent work in the neurosciences suggests that music and speech share neural bases for certain levels of processing. Speech prosody, that is, stress and intonation patterns, has been found to recruit right-temporal processing areas in close vicinity to primary auditory regions. Building on our previous study on speech melody in poems, we here directly compare spoken poems and their musical settings from a neurobiological perspective. We hypothesize that the recurrence structure of syllable pitches and durations, as determined by autocorrelation analyses, will modulate brain activity in areas dedicated to musical processing. We are furthermore interested in brain regions that support the aesthetic evaluation of speech and music. For this purpose, 42 participants listened to randomly presented spoken poems and their sung musical settings while they were lying in a 3-T Magnetic Resonance Tomography (MRT) scanner. Importantly, in order to exclude voice-specific effects, the poems and their musical settings were recorded by the same professional speaker and singer. During the echo-planar imaging (EPI) sequence, participants provided continuous liking ratings on MRT-compatible pressure sensors with their left and right index fingers. Our functional magnetic resonance imaging (fMRI) analyses first compared overall activations for speech (poems) vs. music (musical settings) and included parametric modulations of the blood oxygenation level dependent (BOLD) response by the continuous liking ratings and by the autocorrelations of syllable pitches and durations derived from both the speech and music recordings. Our results show that poems (compared to musical settings) elicited activity in posterior parts of the middle temporal gyrus, extending into the superior temporal sulcus in the left hemisphere, whereas musical settings elicited activity in the middle part of the superior temporal gyrus, involving Heschl’s gyrus, in the right hemisphere. Second, continuous liking ratings modulated brain activity in posterior parts of the right middle temporal gyrus for poems and in the right supramarginal gyrus for musical settings.
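The syllable-level autocorrelation measure that D85 uses to quantify melodic recurrence can be sketched as a generic normalized autocorrelation over a pitch (or duration) sequence; the Python function and toy values below are illustrative, not the authors' implementation.

import numpy as np

def autocorrelation(x):
    # Normalized autocorrelation of a syllable pitch (or duration) sequence;
    # the lag-by-lag profile indexes recurrence structure.
    x = np.asarray(x, dtype=float) - np.mean(x)
    full = np.correlate(x, x, mode="full")
    ac = full[full.size // 2:]        # keep non-negative lags only
    return ac / ac[0]                 # normalize so lag 0 equals 1

pitches = [220.0, 247.0, 220.0, 247.0, 262.0, 220.0]  # Hz, hypothetical contour
print(autocorrelation(pitches))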
Finally, autocorrelations of syllable pitches and durations for poems correlated with brain activity neighboring on, and overlapping with, regions of musical processing in middle parts of the right superior temporal gyrus, while autocorrelations of syllable duration for musical settings covaried with activation in the left superior temporal gyrus, in the vicinity of speech-dedicated processing areas. These results imply several similarities and differences in the processing of speech and music: First, speech melody, as analyzed in terms of the recurrence of pitch contours, recruits areas that support music perception, while rhythmic aspects of music, as approximated by duration recurrences, recruit areas that support speech perception. These findings are in line with hemispheric specializations for acoustic information on fast (left) and slower (right) time scales. Differences between speech and music were seen in the regions modulated by aesthetic evaluations: For speech, these regions were in the vicinity of those modulated by pitch structure, whereas for music, they were further apart and involved regions commonly associated with pitch memory.

D86 Investigating the neural bases of self-monitoring of speech in people with aphasia: an ERP study Heather Ouellette1, Francesco Usai1, Aaron J. Newman1; 1Dalhousie University Aphasia often impairs the ability to detect errors in incoming speech, limiting the effectiveness of communication and the chances to benefit from treatment. While self-monitoring of speech production is important in aphasia recovery, its cognitive and neural underpinnings are not well understood, in either healthy people or older adults with stroke. A better understanding of such processes would benefit the design of treatments aiming to remediate this deficit. In healthy people, error detection has been associated with activity in the mid-prefrontal cortex (PFC), as typically indexed by the error-related negativity (ERN), an event-related potential (ERP) component that is sensitive to response accuracy. It is not clear whether in people with aphasia—who typically have higher error rates and low detection rates—the relationship between the ERN and self-monitoring abilities is consistent with what is observed in healthy subjects. To address this question, we conducted an ERP study to assess whether the presence, or lack thereof, of the ERN was contingent on participants’ ability to detect their mistakes. Five people with aphasia (2 females) participated in this study (with recruitment ongoing). We recorded EEG while participants performed a picture-naming task, and observed the effect of response accuracy on the ERN. To do so, the time courses and scalp topographies of ERPs (locked to voice-onset time) were contrasted for correct and incorrect responses. Vocal responses were recorded and later categorized based on naming accuracy, wherein errors were defined as any type of paraphasia or hesitation, even when subjects corrected an initially erroneous response. To gauge self-monitoring abilities, participants were prompted to signal whether they thought they had made a mistake by button press. In addition, we analyzed vocal responses categorized as erroneous, distinguishing those showing evidence of error detection (i.e., correcting or rejecting a response) from others.
ERP responses – defined as the mean amplitude of the ERP in the time window from 0 to 300 ms after response onset – were analysed using linear mixed-effects models, with channel location and response accuracy as fixed effects (expressed as an interaction between the two terms). Analysis of behavioral responses showed that people with anomia (n=3) made the fewest errors, in both absolute and relative terms, compared to the other two participants. Participants also showed little awareness of their mistakes, with detection rates lower than 20%, except for one participant who detected 17/36 errors. Statistical analysis showed no effect of response accuracy on ERP amplitude, suggesting that the ERN was not elicited by speech errors in this group and, more generally, that the neural responses associated with correct and incorrect naming were not distinguishable. The poor self-monitoring abilities of people with aphasia and the absence of the ERN component observed in the data acquired to date confirm prior reports of poor self-monitoring in aphasia, but provide insufficient evidence to draw conclusions about the role of the ERN in this population. We are currently recruiting people with more severe aphasia and increasing task difficulty in an effort to increase error rates.

D87 What can machine learning tell us about human categorical perception? Sara Beach1,2, Dimitrios Pantazis2, Ola Ozernov-Palchik2, Sidney May2,3, Tracy Centanni2,4, John Gabrieli2; 1Harvard University, 2Massachusetts Institute of Technology, 3Boston College, 4Texas Christian University Categorical perception, the phenomenon by which stimuli that vary continuously on any number of physical dimensions are nevertheless perceived as members of discrete classes, has organized thinking about phonemic processing for decades. In this study, we asked how faithfully a linear support vector machine classifier trained on high-temporal-resolution human neural data would reproduce human psychophysical performance, and whether it would reveal the emergence of phonemic representations over time as a function of task demands. We recorded magnetoencephalography (MEG) from 48 adult volunteers who were exposed to 40 tokens each of 10 steps of an acoustic continuum ranging from ‘ba’ to ‘da’, presented via earphones, in pseudorandom order, in each of two conditions. During the Passive listening condition, participants performed a visual target detection task to maintain arousal but were told they could ignore the sounds. During the Active listening condition, participants labeled each stimulus as either ‘ba’ or ‘da’ via counterbalanced and delayed button-press. A 10x10 perceptual dissimilarity matrix was constructed for each participant from the differences between each pair of stimuli in the percent of each labeled ‘ba’. A 10x10 neural dissimilarity matrix was constructed for each participant and at each timepoint by performing five-fold cross-validated binary classification of the MEG sensor-level data for each pair of stimuli in the Passive and Active conditions separately. First, focusing on the time window (229-253 ms after sound onset) in which ‘ba’ and ‘da’ prototypes were robustly decoded in both Passive and Active conditions, we observed greater overall neural dissimilarity (i.e., better decoding) in the Active condition, but no significant difference in the correlation of individuals’ Passive vs. Active neural matrices with the perceptual matrix. On the other hand, the average Active neural matrix had a significantly higher correlation with perception than did the average Passive neural matrix. This suggests that while neural decoding of MEG responses to an acoustic continuum may be too noisy for individual-difference analyses, the effect on decoding of attention to a categorical judgment, previously reported in fMRI, is also evident in MEG. Second, examining the entire 1-s trial window in the Active condition, decoding participants’ perception of ‘ba’ or ‘da’, regardless of stimulus identity, was significantly above chance for sustained periods, corresponding to an early window and a late window. Temporal generalization analysis revealed that phoneme representations in the late window (~400-600 ms) were sustained and stable, perhaps due to the task requirements of categorical judgment and delayed response. Preliminary analysis of sensor patterns contributing to the phonemic representations suggests dynamic involvement of different cortical regions in supporting categorical decision-making about an acoustic continuum.

D88 Data-driven meta-analysis of the structural-functional parcellation of temporal and temporoparietal cortex Alex Teghipco1, Gregory Hickok1; 1University of California, Irvine Speech processing occurs along a hierarchy, with each computational step engaging different yet interacting regions of the brain—from acoustic analysis in primary auditory cortex to phonetic processing, word recognition, and semantic analysis in neighboring higher-order areas. What remains uncertain, however, are the precise locations of these functional subregions. For instance, models of auditory processing diverge on whether phonemes are analyzed in mid-posterior STS or mid-anterior STG. Here, we sought to clarify the areas of the brain dedicated to different aspects of speech processing by taking a data-driven meta-analytic approach that compared patterns of activity in temporal and temporoparietal cortex associated with the 3,107 features (i.e., ~cognitive processes) contained in Neurosynth. By gauging the similarity between activity associated with each pair of features, we extracted a set of abstract, lower-dimensional networks that represent the structure of unique brain-behavior relations embedded in this database. We then compared these gross networks to each other by using them to functionally parcellate the temporal and temporoparietal cortex, and evaluated whether any of them delineated components of speech processing. To facilitate interpretation of these networks, and to validate their differences, we independently parcellated our ROI using its structural connectivity. Lower-dimensional functional networks were extracted by clustering features with affinity propagation, which recursively sends messages between pairs of data points to identify exemplars. Notably, this analysis distinguished activity associated with auditory processes from speech perception, sentence-level comprehension, semantics, linguistics, orthography, word class, and word processing.
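For readers unfamiliar with affinity propagation as used in D88, the scikit-learn sketch below clusters items from a precomputed similarity matrix; the toy matrix and its dimensions are illustrative stand-ins (the real input would be a 3,107 x 3,107 feature-similarity matrix), not the authors' pipeline.

import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
features = rng.normal(size=(30, 50))     # stand-in: 30 features x 50 voxels
similarity = np.corrcoef(features)       # pairwise feature-feature similarity

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(similarity)      # cluster assignment per feature
print(labels)
print(ap.cluster_centers_indices_)       # exemplar feature per cluster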
The main pattern of activity for each cluster was extracted with PCA, and similarity between voxels with respect to these core functional networks was used to cluster voxels. Structural connectivity was used to parcellate the same ROI by performing probabilistic tractography for each voxel in 40 subjects from the Human Connectome Project. Tractograms for voxels were averaged across subjects, and overlapping tracts that distinguished portions of the ROI were identified with ICA. Functional networks revealed a number of graded interactions, including a gradual shift from speech and auditory processing in Heschl’s gyrus (HG) to comprehension and speech processing along the ventral STG posterior to HG. Comprehension loaded exclusively on areas anterior to this ventral STG cluster, while posterior areas along the STS, STG and MTG loaded strongly on comprehension and moderately on speech. Remarkably, structural connectivity also dissociated these areas: anterior areas associated purely with comprehension were connected by a pathway spanning the STS/MTG; posterior areas loading mostly on comprehension but also speech were connected to the MTG and precentral gyrus; areas loading on speech and comprehension were part of a pathway connecting mid-STS/ventral STG to most of posterodorsal STG; and areas loading on speech and auditory processing connected to the insula and precentral gyrus. These results indicate that speech perception is associated with patterns of activity spanning mid-posterior ventral STG and mid-posterior STS/MTG. Further, this structural-functional parcellation illustrates that although a large number of subtle differences in the patterns of activity reported in the neuroimaging literature are difficult to interpret, they are grounded in tangible differences between structural networks.

Poster Session E Thursday, August 22, 2019, 3:45 – 5:30 pm, Restaurant Hall

Control, Selection, and Executive Processes

E1 The role of the left frontal aslant tract in lexical selection: data from picture-word interference and sentence completion tasks Andrey Zyryanov1, Svetlana Malyutina1,2, Olga Dragoy1,2; 1National Research University Higher School of Economics, Moscow, 2Federal Center for Cerebrovascular Pathology and Stroke, Moscow Interference resolution (IR) between the target lemma and co-activated semantically related lemmas during lexical selection involves the left inferior frontal gyrus (IFG), as suggested by fMRI studies of picture-word interference (PWI) contrasting naming with a semantically related distractor (high IR demands) against naming with an unrelated distractor or without a distractor (low IR demands). However, evidence from lesion studies is somewhat mixed: although damage to the IFG selectively impairs sentence completion (SC) task performance in low-constraint sentences, i.e. under high lexical selection demands (Robinson et al., 1998), it does not result in selectively longer latencies of naming with semantically related distractors compared to unrelated distractors in the PWI task (Piai & Knight, 2018), which questions its critical contribution to IR during lexical selection. Based on studies showing a role of the pre-supplementary motor area (pre-SMA) in lexical selection (Alario et al., 2006; Piai et al., 2014), we hypothesized that the connectivity between the IFG and pre-SMA via the frontal aslant tract (FAT) underlies lexical selection. To test our hypothesis, we investigated the effect of FAT disconnection on SC and PWI performance.
Twenty patients with a single left-hemisphere stroke involving the frontal lobe (age: mean 57.4 years, range 42-70; months post-onset: mean 24.6, range 1.9-88) performed the PWI task with semantic, unrelated and congruent distractors and the SC task with high versus low semantic constraint. Based on diffusion MRI (1.5T scanner, 64 diffusion directions, b-value 1000 s/mm2), we performed tensor-based whole-brain tractography, manually reconstructed the left FAT and extracted its volume in individual participants, corrected for whole-brain volume. T1-, T2- and FLAIR-weighted images were used to delineate lesions and to calculate the total lesion volume and the lesion load in the pars triangularis and pars opercularis of the IFG. PWI naming latencies, PWI accuracy and SC accuracy were analysed in linear mixed-effects models including condition, anatomical predictor (FAT volume or IFG load), their interaction and total lesion volume as fixed effects, and participant and item as random effects. As expected, in the PWI task, the semantically related condition was associated with slower (p<0.001) and less accurate (p<0.001) naming compared to the congruent condition, with no difference between the semantically related and unrelated conditions. We found significantly lower PWI accuracy (p=0.007) and a trend toward slower PWI naming latencies (p=0.072) in participants with lower FAT volume, but no interaction between FAT volume and condition, suggesting that the FAT mediates PWI performance regardless of IR demands. No effects of IFG damage on PWI performance were observed. By contrast, lower SC accuracy was predicted by greater IFG damage (p=0.002), low semantic constraint (p<0.001) and their interaction (p=0.002), but not by lower FAT volume (p=0.293), consistent with the established role of the IFG in lexical selection. A post-hoc error-type analysis of the PWI data revealed that lower FAT volume was associated with an increased likelihood of suppression (p=0.003), but not semantic (p=0.091), errors. Our findings do not support a critical contribution of the FAT to IR during lexical selection, but rather point to its role in PWI-specific distractor processing.

Writing and Spelling

E2 White matter tracts underlying orthographic processing: Evidence for the role of the left vertical occipital fasciculus Celia Litovsky1, Nomongo Dorjsuren1, Kerry Qualter1, Brenda Rapp1; 1Johns Hopkins University Introduction. Orthographic processing has been shown to recruit multiple cortical regions, including the left fusiform gyrus, inferior frontal gyrus and posterior parietal cortex (Rapp et al., 2016). However, it is still unclear which white matter structures underlie orthographic comprehension (reading) and production (spelling/writing). Previous studies have shown that the left arcuate fasciculus (AF), inferior fronto-occipital fasciculus (IFOF), and inferior longitudinal fasciculus (ILF) play a role in orthographic processing (Vandermosten et al., 2012a; Epelbaum et al., 2008), although these results are not consistently found across studies (see Vandermosten et al., 2012b, for a review). Of additional interest is the vertical occipital fasciculus (VOF), a newly (re)discovered tract connecting the orthographic processing areas of the posterior parietal cortex and the fusiform gyrus (Yeatman et al., 2014).
In order to identify the white matter tracts that specifically support orthographic processing but not spoken language processing, we evaluated whether the white matter integrity (measured by tract volume) of these tracts predicted the degree of impairment on measures of spelling, reading, naming, and sentence comprehension in participants with chronic post-stroke language deficits. Methods. Twenty-one participants (7 females, age 60 +/- 2.3 years, 81 +/- 12.3 months post-stroke) with a single left-hemisphere stroke underwent T1-weighted and diffusion-weighted imaging (b=0, b=1500 s/mm2) and completed behavioral assessments of reading, spelling, spoken naming, and auditory sentence comprehension. Whole-brain probabilistic tractography was performed in ExploreDTI (Leemans et al., 2009) using constrained spherical deconvolution (Jeurissen et al., 2011), and 10 white matter tracts (right and left: long and posterior segments of the AF, VOF, IFOF, ILF) were segmented according to the Catani and Thiebaut de Schotten atlas (2008) and the segmentation protocol of Takemura et al. (2016). Each tract’s volume was entered into four stepwise regression analyses to predict each of the four language domain scores. Results. Regression models significantly predicted each of the four language domain scores [spelling: F(3,10) = 8.28, p=0.005, adjusted R2=0.627; reading: F(7,6)=7.75, p=0.01, adjusted R2=0.784; naming: F(4,9)=6.83, p=0.008, adjusted R2=0.642; sentence comprehension: F(2,11)=3.47, p=0.07, adjusted R2=0.276]. With regard to the specific tracts, spelling and reading scores were significantly predicted by a bilateral white matter network that largely overlapped with the network supporting spoken naming, including the left IFOF and the right posterior segment of the AF. However, only the reading and spelling scores were significantly predicted by the volumes of the left and right vertical occipital fasciculus (VOF). Conclusions. The white matter structures that underlie orthographic processing largely overlap with those supporting other language skills; however, we report novel evidence for a unique role of the VOF in orthographic processing. These findings are generally consistent with previous research identifying differences in temporoparietal anisotropy in individuals with dyslexia compared to typical readers (Klingberg et al., 2000).

Development

E3 Language Abilities as a Function of Resting-State EEG Trajectories: an 11-Year Longitudinal Study Lars Meyer1, Xenia Dmitrieva2, Caroline Beese2,3, Vadim Nikulin4,5,6, Claudia Männel2,4,7, Angela D. Friederici2, Gesa Schaadt2,4,7; 1Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, 2Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 3Center for Lifespan Psychology, Max Planck Institute for Human Development, 4Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 5Centre for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, 6Neurophysics Group, Department of Neurology, Charité-University Medicine Berlin, Campus Benjamin Franklin, 7Day Clinic for Cognitive Neurology, University of Leipzig Language development rests on complex neurophysiological development. Here, we sought to predict language outcome from the 11-year trajectory of children’s resting-state electroencephalogram (RS EEG), indexing the state of an individual’s brain networks.
Forty-six children were followed across three recording time points (i.e., 2 months, 4.5 years, and 11 years of age). We focused on alpha-band oscillations, which have previously been implicated in early cognitive development. We first identified individual alpha-band peak frequency (IAF) and power (IAP), using an automatic frequency-domain search within age-adapted frequency ranges (2-months range = 4-9 Hz, 4.5-years range = 6-12 Hz, 11-years range = 7-13 Hz). We then modeled individual longitudinal trajectories of IAF and IAP with latent growth curves. Based on the linear appearance of the IAF trajectory, we fitted a linear model to the IAF data; in contrast, based on the inverted-u-shaped trajectory of the IAP data, a quadratic model was fitted to the IAP data. Latent scores from the IAF and IAP growth curves, as well as their interaction, were then entered as predictors into a multivariate multiple regression analysis on the data from each recording electrode. Dependent outcome measures were general cognitive abilities (i.e., non-verbal intelligence) and language abilities (i.e., phonological awareness, age of writing onset, writing skills, reading speed and comprehension, and vocabulary size), which had been acquired at 10-13 years of age. After correcting for multiple comparisons across electrodes, we found the quadratic IAP trajectory to predict non-verbal intelligence, phonological awareness, writing skills, and vocabulary size. Specifically, high behavioral performance was associated with a continuing IAP increase into late childhood, rather than a downward slope of the quadratic function. To characterize the structure amongst these correlations, we performed a systematic analysis of the correlation matrix. We found that non-verbal intelligence and phonological awareness correlated more strongly with the IAP trajectory than with the other behavioral measures. In contrast, correlations between the IAP trajectory and the other behavioral measures were weaker than the correlations amongst the behavioral measures themselves. In particular, writing skills were most strongly correlated with phonological awareness, and vocabulary size was most strongly correlated with writing skills. Our results suggest that a u-shaped trajectory of IAP from birth to late childhood is associated with general cognitive outcome and basic linguistic outcome (i.e., phonological awareness), cascading into more specific linguistic abilities. While we replicated the inverted-u-shaped IAP trajectory across the first decade of life, we also note that a good cognitive outcome in late childhood may require a continuing IAP increase, rather than a downward slope.

E4 Rule learning and sleep: An fNIRS study of non-adjacent rule learning by 6-month-olds Anna Martinez-Alvarez1, Judit Gervain1; 1CNRS-Université Paris Descartes What is the role of sleep in infant language learning? At 6 months, infants sleep between 12 and 15 hours a day. Although the effects of sleep on adult learning and memory are starting to be understood (Diekelmann & Born, 2010), little is known about how sleep impacts learning in infancy. To address this issue, we test how infants learn non-adjacent linguistic regularities before and after sleep. Previous research suggests that infants can learn such regularities at 10 months (Martinez-Alvarez et al. 2018). In the current study, we test the hypothesis that sleep may help infants consolidate learning, and that when tested after sleep, infants younger than 10 months may also be able to learn.
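Returning briefly to the spectral step in E3: the age-adapted alpha-peak search amounts to locating the spectral maximum within an age-specific band. A minimal Python sketch, assuming Welch spectra and hypothetical band labels (not the authors' pipeline):

import numpy as np
from scipy.signal import welch

AGE_BANDS = {"2mo": (4, 9), "4.5y": (6, 12), "11y": (7, 13)}  # Hz, from E3

def alpha_peak(eeg, sfreq, age):
    # IAF/IAP: frequency and power of the spectral maximum
    # within the age-adapted alpha band.
    freqs, psd = welch(eeg, fs=sfreq, nperseg=int(4 * sfreq))
    lo, hi = AGE_BANDS[age]
    band = (freqs >= lo) & (freqs <= hi)
    i = np.argmax(psd[band])
    return freqs[band][i], psd[band][i]   # (IAF in Hz, IAP)

iaf, iap = alpha_peak(np.random.randn(30 * 250), sfreq=250, age="11y")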
Three groups of 6-month-olds are presented with sequences following an AXB rule (e.g. “pedibu”, “pegabu”) and otherwise similar random controls (e.g. “dibupe”, “bugape”). Infants’ brain activity (hemodynamic response) is measured over the temporal, parietal, and frontal cortices using functional near-infrared spectroscopy (fNIRS). All participants are tested twice: before and after overnight sleep (Group 1), a nap (Group 2), or in the absence of sleep (control, Group 3). Data collection is still ongoing, but preliminary analyses of the pre-sleep session of Group 1 (n=10) show larger activation (oxyHb) for the rule condition than for the random condition in bilateral temporal and frontal areas, similar to the bilateral fronto-temporal activation previously found in 8-10-month-olds (Martinez-Alvarez et al. 2018), suggesting that even before sleep, infants as young as 6 months of age may be able to learn non-adjacent dependencies. We are currently in the process of completing the overnight group and acquiring data in the other two groups. Direct comparisons between the pre- and post-sleep results in all groups, as well as comparisons across groups, will be conducted once the three groups are completed.

E5 Temporal and topographical changes in theta power between middle childhood and adulthood during sentence comprehension Mandy Maguire1, Julie M. Schneider2, Yvonne Ralph1, Sonali Poudel1, Tina Melamed1; 1University of Texas at Dallas, 2University of Delaware Introduction. Time-frequency analysis of the EEG is increasingly used to study the neural correlates of language comprehension. Although this method holds promise for developmental research (Maguire & Abel, 2013), most existing work focuses on adults. Theta power in particular is consistently found to correspond to semantic processing of individual words (Bastiaansen et al., 2008) and in ongoing text (Davidson & Indefrey, 2007). Developmental differences in theta topography have been reported in response to individual words (Spironelli & Angrilli, 2010). It is unclear, however, how these differences in theta engagement manifest during sentence comprehension. Here we study the timing and topography of theta engagement during word-by-word sentence comprehension in middle childhood, adolescence, and adulthood. Participants and procedure. Fifty-seven children ages 8-10 years, 31 children ages 13-15 years, and 25 adults participated in a word-learning-from-context task using EEG. Participants were right-handed monolinguals with no history of developmental delay or deficit. The task included 50 visually presented sentence triplets, each ending with a nonsense word to be learned. Only the words preceding the unknown target word are included in the analysis. EEG Processing and Analysis. EEG data were analyzed using the EEGLAB toolbox for Matlab. Continuous data were cleaned using the Clean Raw Data plug-in and Independent Component Analysis. Missing channels were spherically interpolated and data were average-referenced. Data were epoched from -500 to 4200 msec around the first word in the sentence. A Morlet wavelet analysis with a Hanning-tapered window was applied, power was averaged across all trials, and the mean baseline power was subtracted (Delorme & Makeig, 2004). We investigated changes in the theta (4-8 Hz) band over the course of the sentence, focusing on the time period around each word. A Monte Carlo cluster-correction permutation analysis was used to determine statistically significant differences (p = 0.05).
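The Morlet wavelet step in E5 (there implemented in EEGLAB/Matlab) can be sketched in Python as convolution with complex Morlet wavelets; the normalization and parameter values below are illustrative assumptions, not the authors' settings.

import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=7):
    # Time-frequency power via complex Morlet wavelet convolution.
    # Returns an array of shape (len(freqs), len(signal)).
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)            # wavelet width in s
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # energy-normalize
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

theta = morlet_power(np.random.randn(5 * 500), sfreq=500,
                     freqs=np.arange(4, 9))             # 4-8 Hz theta band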
Results. We observed significant topographical and temporal differences in theta engagement between age groups. Topographically, the response became more localized with age. Theta power was broadly distributed in 8-10-year-olds, more localized to left central and posterior areas in 13-15-year-olds, and maximal over left central electrodes in adults. Temporally, in adults, theta power increased most clearly between 200-400 msec after each word’s onset. In both groups of children theta engagement peaked between 200-400 msec after each word’s onset but continued throughout the presentation of each word and between words. This response was most temporally prolonged in the youngest age group. Conclusions. These findings support previous studies showing increasing theta localization during word retrieval through early adolescence and into adulthood. We expand on past work by showing that theta in adults seems to be specific to word retrieval. In children, the prolonged theta engagement between words may support unification processes necessary to comprehend the sentence, via either semantic integration (Maguire et al., 2010) or general task processing demands (Meyer et al., 2019). Such a finding may support theories that in late childhood and early adolescence children rely more heavily on semantics to aid in sentence comprehension than adults do (Schneider et al., 2016; 2018; Schneider & Maguire, 2018).

E6 Association of speech perception and production in 2-month-olds: Relating event-related-potential and vocal reactivity measures Gesa Schaadt1,2,3, Angela D. Friederici2, Hellmuth Obrig1,3, Claudia Männel1,2,3; 1Medical Faculty, University of Leipzig, 2Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 3Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences Infants’ phonological abilities are key features of successful language development, with a functional connection between speech perception and production. For 10-month-olds, it has been shown that babbling – a form of vocalization – shapes infants’ speech processing. However, precursors of babbling (e.g., imitation of mouth movements and vocalization) already develop around the second month of life, yet the association between speech perception and production has not been investigated for this early developmental period. In the present study, we investigated speech perception and production in 2-month-olds. For speech perception, we evaluated infants’ brain responses in a multi-feature paradigm with four deviant stimulus categories, namely consonant (/ga/), vowel (/bu/), pitch (F0; /ba+/), and vowel length changes (/ba:/), that were contrasted against the standard stimulus /ba/. For speech production, we used the subscale Vocal Reactivity of the parental Infant Behavior Questionnaire, defined as the amount of infants’ vocalization exhibited in daily activities along a 7-point scale. Analyses of brain responses revealed infants’ (N = 25) significant positive Mismatch Responses (MMR; i.e., brain responses to deviants minus responses to standards) for all deviant categories, with the positive polarity of the MMR being typical for this young age. Moreover, we found a negative correlation (r = –0.38, p < 0.03) between the MMR to vowel changes and vocal reactivity, but no correlation between the MMR to any other deviant category and vocal reactivity.
Specifically, a more negative MMR to vowel changes was associated with a higher amount of infant vocalization. The fact that specifically the MMR to vowel changes was associated with vocal reactivity might be explained by findings showing that the perception and production of vowels emerge earlier in development than the perception and production of consonants. Our results suggest that the developmental transition from a positive to a negative polarity of the MMR, with negative MMRs indicating more mature responses, might be influenced by infants’ expressive abilities. In conclusion, our study indicates that speech perception and production shape each other already at an early age.

E7 Perceptual anchoring: Infants benefit from auditory predictive coding Claudia Männel1,2,3, Hellmuth Obrig1,2, Arno Villringer1,2, Merav Ahissar4, Gesa Schaadt1,2,3; 1Medical Faculty, University of Leipzig, 2Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 3Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 4Department of Psychology, The Hebrew University of Jerusalem Listeners predict upcoming events through experience. The mechanism of predictive coding is functional in infants and adults, as evidenced by prediction errors in the event-related brain potential (ERP). Moreover, adults show behavioral advantages in frequency discrimination in the context of repeated reference tones, which serve as anchors for the processing of subsequent sounds. Despite the evidence for this so-called perceptual anchor effect in adults, the immediate benefit of predictive coding for the processing of new information has not yet been tested in infants. We propose perceptual anchoring to be present in early infancy, potentially serving as a learning mechanism in language acquisition. We therefore presented 2-month-old infants (N = 28) in an ERP study with tone pairs in two anchor blocks and two no-anchor blocks, consisting of 40 tone pairs each. The first tone of the anchor blocks was always constant in frequency (i.e., 279 Hz), while it varied in the no-anchor blocks (i.e., 460 Hz, 358 Hz, 217 Hz, 169 Hz). Crucially, the second tones (i.e., 358 Hz, 316 Hz, 246 Hz, and 217 Hz) had the same frequencies across anchor and no-anchor blocks. This experimental design allowed for the evaluation of ERP responses to second tones that were identical across conditions, but preceded either by constant anchor or by random (no-anchor) first tones. ERP results for the first tones revealed attenuated obligatory ERP components in the anchor compared to the no-anchor condition, indicating that infants recognized tone repetitions across stimulus pairs. ERP results for the second tones revealed a modulation of infants’ obligatory ERP components, with more positive-going fronto-central responses in the anchor compared to the no-anchor condition. This finding implies that infants processed physically identical stimuli differently depending on the given stimulus environment (i.e., constant vs. variable information). Moreover, the observed ERP effect resembled the adult P2 component, which has been shown to be modulated by selective attention and training in adults, resulting in faster auditory discrimination. Thus, the constant anchor might have acted as a signal guiding infants’ attention towards the processing of upcoming information.
In sum, our study demonstrates for the first time that infants not only apply predictive coding, but also show processing benefits from repeated information in their learning environment, suggesting perceptual anchoring as an essential learning mechanism in language acquisition.

E8 Hearing Parents’ Use of Multimodal Cues to Establish Joint Attention as a Function of Children’s Hearing Status Heather Bortfeld1, Allison Gabouer1; 1University of California, Merced The capacity to engage in joint visual attention lays the groundwork for language learning. Initially, researchers used gaze-following to measure engagement in joint attention (Scaife & Bruner, 1975). Since then, other visual cues have been tracked (e.g., point-following (Mundy, Hogan, & Doehring, 1996); hand-following (Yu & Smith, 2017)) to assess children’s ability to follow an adult’s lead. In all cases, auditory information is assumed to be primary in establishing joint attention. Parent-child dyads in which parents are hearing and children are deaf provide an interesting context in which to examine how parents accommodate their children when the dominant communication modality is not shared. Our previous work demonstrated that hearing parents of deaf children incorporate multimodal cues when engaging in joint attention with their child, and do so to a greater degree than hearing parents of hearing children (Depowski, Abaya, Oghalai, & Bortfeld, 2015). However, it is unclear how the parents and children enter the engaged state. Here we examine parents’ patterns of modality use when initiating joint attention with their deaf children (all of whom were candidates for cochlear implantation), and compare these to those produced by hearing parents engaging with their hearing children. We focused on multimodal communication patterns produced by hearing parents while they engaged in free play with their children. Participants were nine severely to profoundly deaf children (females = 3) aged 22 months (M = 22.2, SD = 9.4) and their hearing parents (females = 9). Nine typically developing, age-matched children (females = 5) aged 24 months (M = 24.2, SD = 11.3) and their hearing parents (females = 5) were included as a comparison group. The videos were coded for initial instances of parent-initiated joint attention using ELAN, a software tool for the creation of complex annotations on video. ELAN allows for multimodal, second-by-second behavioral coding of videos of parent-child interactions. Parent-initiated bids for joint attention – both successful and failed – were coded and quantified, as was the cue or combination of cues that parents used (e.g., using their hand or a toy to tap or touch the child, deliberately waving their hand or a toy within the child’s visual field, and vocalizing) for both types of bids. We tallied the total raw number of bid occurrences (both successful and failed) across dyad types. These were used to calculate proportional data for successful and failed bids for joint attention. First, we found no differences by dyad type in overall attempts to establish joint attention, nor in successful or failed bids for joint attention. Overall, hearing parents of hearing children provided cues in the auditory domain alone (unimodally) or paired with a visual cue. Likewise, the most used successful cue in the hearing parent-deaf child dyads was a multimodal one that included both auditory and visual information.
Overall, our finding that multimodal (auditory-visual) cuing was used more frequently than unimodal cuing shows that an important component of what goes into establishing joint attention appears to be missing from currently accepted measures of joint attention.

Disorders: Developmental

E9 Disentangling the contribution of family risk on reading precursors in pre-readers Lauren Blockmans1, Fumiko Hoeft2,3,4, Jan Wouters1, Pol Ghesquière1, Maaike Vandermosten1; 1KU Leuven, 2University of Connecticut, 3University of California, San Francisco, 4Haskins Laboratories Developmental dyslexia has a prevalence of 7% and a heritability rate of 40%. The chance of developing dyslexia is thus larger in children with a family risk (FR). Previous longitudinal studies identified specific predictors in pre-readers for developing dyslexia. Phonological skills such as phonological awareness (PA) and rapid automatized naming (RAN), combined with letter knowledge (LK), are considered robust predictors (Snowling and Melby-Lervåg, 2016). Additionally, though less consistently, pre-reading deficits have been observed in auditory processing and speech perception skills, more specifically in discriminating amplitude changes at speech sound onset, i.e. rise time (RT), and in speech perception in noise (SPIN). However, these studies relied on the heritability rate and selected mostly FR pre-readers to obtain a sufficiently large number of dyslexics, yet these do not represent the entire dyslexic population. Due to this bias towards FR children, it remains unknown whether the same predictors are applicable to children without a FR. Neuroimaging studies have suggested that decreased responses to auditory stimuli (Hakvoort et al. 2015) and speech (Vandermosten et al. under revision) are related to FR rather than to later reading outcomes. This is supported by a structural neuroimaging study demonstrating that differences in the planum temporale, known to be involved in speech processing, are related to FR of dyslexia (Vanderauwera et al. 2018). Therefore, we aim to disentangle the contribution of FR to speech perception and auditory precursors. First, we selected pre-readers based on FR, defined as having a first-degree relative with dyslexia, and searched for individually matched pre-readers without a FR. Then we created groups based on their cognitive risk (CR), namely performance below the 30th percentile on two of the three predictors (PA, RAN, LK). This resulted in four groups of pre-readers: 18 with FR and CR (FR⁺CR⁺), 27 with FR but without CR (FR⁺CR⁻), 34 without FR or CR (FR⁻CR⁻) and 8 without FR but with CR (FR⁻CR⁺). Since the latter group was very small because of our selection procedure, we performed a second large-scale screening in 1224 children to recruit pre-readers with a CR. Based on the same criteria as in the first screening, we were able to increase the size of our groups with a CR. This resulted in a sample of 157 five-year-olds (81♂/76♀): 34 FR⁺CR⁺, 27 FR⁺CR⁻, 34 FR⁻CR⁻ and 62 FR⁻CR⁺. Once selected, they performed an RT and a SPIN task. The fitted GLM revealed a significant main effect of FR in the SPIN task (p = 0.007), showing poorer speech perception skills in FR+ than FR-, and a significant main effect of CR in the RT task (p = 0.007), showing poorer auditory processing skills in CR+ than CR-. No other significant main or interaction effects were observed.
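The FR-by-CR model just reported has the structure of a two-factor model with an interaction term. A minimal Python sketch with synthetic data and hypothetical variable names (here an ordinary least squares special case, not necessarily the authors' exact GLM):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "FR": rng.choice([0, 1], size=157),      # family risk (0/1)
    "CR": rng.choice([0, 1], size=157),      # cognitive risk (0/1)
    "spin": rng.normal(0.0, 1.0, size=157),  # SPIN score (synthetic)
})

# Main effects of FR and CR plus their interaction, as in E9.
fit = smf.ols("spin ~ FR * CR", data=df).fit()
print(fit.summary())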
These results suggest that impaired auditory processing at the pre-reading stage does not depend on FR. In contrast, and more in line with previous neuroimaging studies, FR does influence the speech perception performance of pre-readers. This study is part of a longitudinal project that will allow us to further disentangle FR contributions at the neural level in the near future.

E10 Neural Stability and Language in Autism Spectrum Disorder Lisa Tecoulesco1, Erika Skoe1, Letitia R. Naigles1; 1University of Connecticut Auditory brainstem responses (ABRs) are auditory evoked potentials reflecting neural activity in the auditory nerve and brainstem. Previous studies associated low ABR stability with poor reading ability, suggesting stability (i.e., the extent to which the same signal produces the same neural response) might be predictive of language ability. Variability in language performance is a well-known characteristic of Autism Spectrum Disorder (ASD), and recent work also reports low ABR stability in this population. Work from our lab, however, revealed no group-level differences in ABR stability between typically developing (TD) children and children with ASD, but found that at the level of individual differences, stability was correlated with phonological and syntactic ability. This suggests ABR stability is an index of language ability but not of ASD. The literature linking ABR stability and language ability has focused on the repeatability of speech-evoked ABRs by analyzing the frequency-following response (FFR) component of the ABR in the time domain. FFRs can also be analyzed in the frequency domain, revealing the fidelity with which specific speech frequencies, such as the fundamental frequency (F0), are encoded by the auditory system. The F0 imparts information about affective intent and speaker identity, two areas of documented differences in ASD. Building on evidence that neural tracking of the F0 contour of a speech stimulus is less faithful in children with ASD, the current study asks whether the stability of this spectral encoding is predictive of language ability in children with ASD. Twenty-four children participated in the study, twelve TD (M = 11.25 years) and twelve with ASD (M = 12.5 years), all with normal hearing thresholds. Speech-evoked ABRs were recorded to a “da” (40 ms) stimulus presented at 80 dB SPL at a rate of 10.9 per second. Two ABR sub-averages (3000 trials each) were computed and the FFR component of each sub-average was converted to the frequency domain via Fourier analysis. From the FFR, two measures of F0 encoding were derived: the average FFR amplitude over the F0 range (75-175 Hz), and the stability of spectral encoding, calculated as the absolute difference in amplitude between the two sub-averages (smaller values indicating greater response stability). Language ability was measured using two subtests of the Clinical Evaluation of Language Fundamentals (CELF-5), and phonology was assessed using a novel word discrimination task. No group differences were found for the amplitude or stability of F0 encoding. However, different relationships to language were observed for each group. In the ASD group, Spearman’s rank correlations revealed that greater F0 stability was related to better phonological discrimination (rs = -.705, p = .01), semantics (rs = -.729, p = .007), and syntax (rs = -.718, p = .009).
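The two frequency-domain measures used in E10 — mean FFR amplitude over the F0 range and its stability across sub-averages — reduce to a few lines of spectral arithmetic. A sketch under those definitions (function and argument names are ours):

import numpy as np

def f0_encoding(sub_avg1, sub_avg2, sfreq, f0_range=(75, 175)):
    # Mean FFR spectral amplitude over the F0 range for each sub-average;
    # stability = absolute difference between the two (smaller = more stable).
    def band_amp(x):
        amp = np.abs(np.fft.rfft(x)) / len(x)
        freqs = np.fft.rfftfreq(len(x), d=1 / sfreq)
        band = (freqs >= f0_range[0]) & (freqs <= f0_range[1])
        return amp[band].mean()
    a1, a2 = band_amp(sub_avg1), band_amp(sub_avg2)
    return (a1 + a2) / 2, abs(a1 - a2)

amp, stability = f0_encoding(np.random.randn(1024), np.random.randn(1024),
                             sfreq=20000)   # placeholder sub-averages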
By contrast, F0 stability did not relate to any language measure in the TD group. This suggests that for children with ASD, language acquisition may depend on the stability with which the subcortical auditory system represents particular stimulus features, especially speech frequencies that carry information about who is speaking and about affective intent. Future work will attempt to replicate this finding while expanding the stimuli (to include multiple speakers and multiple syllables) to dissociate F0 effects of speaker vs. affect.

E11 Networks of attention and general processing speed as potential predictors of late talker outcomes Anna Kautto1, Elina Mainela-Arnold1,2; 1University of Turku, 2University of Toronto The term “late talker” is used to refer to toddlers with late onset of spoken words and small early vocabulary size in the absence of other developmental or hearing impairments that would otherwise explain the delay. While some of these children appear to catch up by 5 years of age, some go on to develop long-term restrictions in language comprehension and production. It is commonly agreed that in late talkers under 4 years of age, it is difficult to predict which children will continue to present long-term restrictions, requiring the use of the label developmental language disorder (DLD, formerly specific language impairment) (Bishop et al. 2016; 2017). While the cause(s) of DLD are unknown, it has been suggested that capacity limitations, perhaps in the form of limitations in aspects of attention, constrain language development (Leonard, 2014). Attention deficits are common in children with DLD, which has been suggested to arise from these areas of development sharing a common neurobiological basis (Tomblin & Mueller, 2012). However, we know very little about capacity limitations in children with a history of late talking. In this study, we hypothesized that differences in attentional skills would modulate late talker outcomes, such that small expressive vocabulary at 24 months combined with attentional difficulties would increase the risk for DLD. We used a visual Attention Network Test (ANT; Fan, McCandliss, Sommer, Raz & Posner, 2002) to compare subcomponents of attention in school-aged children with and without a history of late talking. The ANT combines cued reaction time and flanker tasks to evaluate alerting, orienting, and executive attention within a single task. These three subcomponents of attention have been linked with neurobiological processes (Posner & Rothbart, 2007). Alerting is linked with thalamic, frontal, and parietal cortices as well as the neurotransmitter norepinephrine. Orienting involves posterior brain regions (superior parietal cortex and the temporo-parietal junction), the frontal eye fields, and the neurotransmitter acetylcholine. Executive attention is associated with the anterior cingulate, lateral prefrontal cortex, basal ganglia, and the neurotransmitter dopamine. 68 children (7;5–10;5) participated in this study, of whom half had had expressive vocabulary below age expectations at 24 months of age. Performance on the ANT was examined as a function of vocabulary size at 24 months and performance on linguistic tests at school age. Counter to our hypothesis, generalized linear mixed model analyses suggested that alerting, orienting and executive attention were not meaningfully associated with late talking or school-age language impairment status.
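The three ANT subcomponents used in E11 are conventionally computed as reaction-time subtractions between cue and flanker conditions (Fan et al., 2002). A sketch with hypothetical mean RTs in milliseconds:

def ant_scores(mean_rt):
    # Attention Network Test subcomponents from per-condition mean RTs
    # (Fan et al., 2002): each score is a difference between two conditions.
    return {
        "alerting": mean_rt["no_cue"] - mean_rt["double_cue"],
        "orienting": mean_rt["center_cue"] - mean_rt["spatial_cue"],
        "executive": mean_rt["incongruent"] - mean_rt["congruent"],
    }

scores = ant_scores({"no_cue": 620, "double_cue": 585, "center_cue": 590,
                     "spatial_cue": 560, "incongruent": 680, "congruent": 610})
print(scores)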
However, general processing speed across all trials was significantly associated with school-age language impairment status, but not with late talker status. These results indicate that poor school-age language outcomes in late talkers are associated not with limitations in attention, but with slow general processing speed. Following these behavioural results, we are currently examining the extent to which global white matter volumes are associated with the persistence of language difficulties, to better understand the neural processes in DLD.

E12 Investigating white matter tissue properties in dyslexia: A combined analysis of DTI and myelin water imaging Maria Economou1, Thanh Vân Phan1,2, Thibo Billiet2, Jolijn Vanderauwera3, Jan Wouters1, Pol Ghesquière3, Maaike Vandermosten1; 1Experimental Oto-rhino-laryngology, Department of Neurosciences, KU Leuven, 2icometrix, Research and Development, Leuven, 3Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven Structural organization of white matter (WM) plays a crucial role in the development of the reading network. Previous research has revealed WM differences in dyslexic individuals, mainly in left temporo-parietal connections. These findings were recently extended to pre-reading children, emphasizing the role of early white matter organization in the development of reading trajectories. Although the use of diffusion MRI (dMRI) has been invaluable in this domain, it does not provide sufficient information to characterize tissue-specific properties. This is primarily because dMRI indices such as fractional anisotropy (FA) are sensitive, but not specific, to several tissue changes, including myelination and axonal growth. In this study, we aim to address this limitation by combining information from dMRI and myelin water imaging (MWI) to elucidate the potential specific contribution of myelin. MWI allows the quantification of the myelin water fraction (MWF) in the brain, an index that can be used as a proxy for myelin content. We hypothesize that combining MWF with typical diffusion indices such as FA will be more informative than FA alone. We tested this in a group of 69 children (9-10 years old), of whom 27 had dyslexia. All children underwent an MRI session (at 3T using a 32-channel coil) as well as a behavioral assessment in which various reading-related and perceptual skills were tested. The MWI dataset was acquired using a 3D GraSE sequence. Using the non-negative least squares algorithm to derive values per voxel, MWF parameter maps were calculated. The DWI dataset was acquired using a b-value of 1300 s/mm2, 60 non-collinear directions, and 6 non-diffusion-weighted images. Following appropriate pre-processing and corrections, the tensor model was fitted to the data and whole-brain tractography was conducted, followed by manual delineation of bilateral white matter tracts. We chose to focus on the direct segment of the arcuate fasciculus (AF) and the inferior fronto-occipital fasciculus (IFOF), given their previous implication in the reading network. Preliminary results revealed no significant group differences in either FA or MWF in dyslexic readers compared to their peers. Post-hoc correlations showed some weak associations between vocabulary scores and WM metrics in the left IFOF of typical readers (FA: rs=0.453, p=0.018; MWF: rs=0.395, p=0.041), which, however, did not survive correction for multiple comparisons.
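The non-negative least squares step mentioned in E12 fits each voxel's multi-echo decay as a non-negative mixture of exponentials over a T2 grid; MWF is then the weight fraction in the short-T2 (myelin water) pool. A minimal sketch, assuming a conventional ~40 ms myelin cutoff (the grid, cutoff and names are illustrative, not the authors' parameters):

import numpy as np
from scipy.optimize import nnls

def myelin_water_fraction(echo_signal, echo_times, myelin_cutoff=0.04):
    # Fit the multi-echo T2 decay as a non-negative sum of exponentials
    # over a log-spaced T2 grid; MWF = weight fraction below the cutoff.
    t2_grid = np.logspace(np.log10(0.01), np.log10(2.0), 60)   # seconds
    A = np.exp(-np.outer(echo_times, 1.0 / t2_grid))           # decay dictionary
    weights, _ = nnls(A, echo_signal)
    return weights[t2_grid < myelin_cutoff].sum() / weights.sum()

tes = np.arange(1, 33) * 0.01                                   # 32 echoes, 10 ms apart
signal = 0.2 * np.exp(-tes / 0.02) + 0.8 * np.exp(-tes / 0.08)  # toy two-pool decay
print(myelin_water_fraction(signal, tes))                       # ~0.2 expected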
This lack of significant findings could suggest that if group differences in white matter FA are observed in dyslexics, they are less likely to be driven by myelination. Nevertheless, given the dynamic changes in myelination in early development, this conclusion cannot be drawn with certainty from the present sample. Overall, we describe a novel multi-modal approach to characterizing WM in children with (a)typical reading ability. Although we were unable to replicate previous WM findings in dyslexics, the analysis employed here and the use of MWI are an important contribution to understanding the neurobiological basis of typical and atypical reading. We aim to extend our analyses by looking at how different reading growth profiles might relate to MWF, and how this integrated approach can be used to study WM organization prior to reading onset.

Disorders: Acquired

E13 Language network re-organization associated with word- and sentence-level language interventions in chronic aphasia Elena Barbieri1,2, James Higgins1,3, Kaitlyn Litcofsky1,2, Kathy Xie1,2, David Caplan1,4, Brenda Rapp1,5, Swathi Kiran1,6, Todd Parrish1,3, Cynthia Thompson1,2; 1Center for the Neurobiology of Language Recovery, Northwestern University, Evanston, 2Aphasia and Neurolinguistics Research Laboratory, Northwestern University, Evanston, 3Parrish Neuroimaging Laboratory, Northwestern University, Chicago, 4Neuropsychology Laboratory, Massachusetts General Hospital, Harvard Medical School, 5Cognitive and Brain Sciences Laboratory, Johns Hopkins University, 6Aphasia Research Laboratory, Boston University Studies investigating the effects of language intervention on the re-organization of language networks in chronic aphasia have produced mixed findings, likely related to – among other factors – the language function targeted during treatment and the language task used to elicit brain activation [1]. Most studies have focused on naming, reporting greater activation (post- vs. pre-treatment) within intact regions in the left hemisphere (LH); however, studies have also found recruitment of regions in the right hemisphere (RH), positively correlated with treatment outcome [2]. The present study investigated how the type of treatment provided to patients with chronic aphasia affects neural activation, using an auditory story comprehension task that in healthy participants reliably recruits a left fronto-temporo-parietal network [3]. We hypothesized that behavioral improvements associated with sentence-processing treatment, but not with naming or spelling treatments, would result in shifts in activation on this task. Eighty-five individuals with chronic LH stroke-induced aphasia, recruited from three research laboratories (Northwestern University, NU; Boston University, BU; Johns Hopkins University, JHU), were assigned to either a language treatment (N=61) or a control group (N=24). Participants in the treatment group received approximately 12 weeks of language treatment targeting one of three language domains: sentence comprehension/production (NU), naming (BU) or spelling (JHU). At baseline and post-testing, participants in both groups underwent language testing and performed an fMRI story comprehension task, which included alternating blocks of auditorily presented short stories and a control condition (reversed speech) and required participants to listen to the stories and answer comprehension questions.
At all recruitment sites, participants in the treatment, but not control, group evinced significant behavioral gains in the treated language domains. Region-of-interest analyses of the fMRI story comprehension data revealed a significant increase in activation from pre- to post-treatment for the NU treatment group, but no changes in activation were found in the BU or JHU treatment groups, or in the control group. Results for the NU treatment group showed post-treatment upregulation of regions within the language network and were restricted to the RH. When overlaid onto activation maps derived from a group of healthy individuals performing the same task, post-treatment activation maps for the NU treatment group showed post- (vs. pre-treatment) recruitment of RH regions that were either active (i.e., posterior middle temporal gyrus, inferior frontal gyrus) or homologous to active LH regions (i.e., superior frontal, precentral gyri) in healthy individuals. Increased activation was positively correlated with behavioral change (i.e., verb comprehension), derived from a Principal Component Analysis of the behavioral data. These findings indicate that the language domain targeted for treatment affects re-organization of the language network. Sentence-level language intervention impacted comprehension of naturalistic narrative language and recruited regions within the normal language network in the RH, whereas (spoken or written) word-level treatments did not, indicating that treatment that exploits specific language processes impacts brain mechanisms associated with those processes. 1. Kiran, S., & Thompson, C.K. (2019). Frontiers in Neurology, 10. 2. Barbieri, E., et al. (under revision). Cortex. 3. Wilson, S. M., et al. (2007). Cerebral Cortex, 18(1), 230-242. E15 Clinical Implementation of Transcranial Direct Current Stimulation (tDCS): Speech-Language Pathologists' Opinions Regarding the Translation of tDCS into Clinical Practice Lynsey Keator1, Alexandra Basilakos1, Christopher Rorden2,3, Jordan Elm4, Leonardo Bonilha5, Julius Fridriksson1,2; 1Department of Communication Sciences and Disorders, University of South Carolina, 2McCausland Center for Brain Imaging, University of South Carolina, 3Department of Psychology, University of South Carolina, 4Department of Public Health Services, Medical University of South Carolina, 5Department of Neurology, Medical University of South Carolina Introduction: Following a stroke, 20-30% of survivors suffer from aphasia (Engelter et al., 2018; Laska et al., 2001) and for 15% aphasia continues into the chronic stages of recovery (Wade et al., 1994). Many studies have investigated tDCS in aphasia rehabilitation and results suggest that A-tDCS may be a promising adjuvant to behavioral aphasia therapy (Baker et al., 2010; Fridriksson et al., 2011; Cherney et al., 2010; Meinzer, 2016; Marangolo et al., 2011; 2013; 2014). To consider implementation of tDCS into clinical practice, practicing SLPs working directly with patients with aphasia (PWA) across a variety of work settings were surveyed about their familiarity with tDCS, concerns about its use, and the amount of tDCS-related improvement (or "tDCS boost") that would convince them to use tDCS. Methods: Two hundred and twenty-one SLPs returned a survey, with 155 valid responses retained for analysis. The survey polled SLPs about their familiarity with tDCS, concerns about adopting tDCS, and, importantly, the "tDCS boost" in aphasia therapy outcome (measured as an increase in the Western Aphasia Battery-Revised Aphasia Quotient, WAB-R AQ; Kertesz, 2007) needed to incorporate tDCS into clinical practice.
Surveys were distributed online via email and social networking platforms using REDCap (Harris et al., 2009). Results: 71% of respondents reported being familiar with tDCS prior to completing the survey and, importantly, 94.2% reported concerns related to at least one of five broad categories: training/continuing education (68.4%), administrative approval (60%), cost (47.1%), safety (45.8%) and insurance reimbursement (41.9%); 30.3% reported at least three concerns. With respect to the degree of "tDCS boost," respondents reported a mean desired boost of 22.9% (SD=20.1, range=0-100) in treatment-related increase in AQ in order to consider adopting tDCS into practice. The 90th percentile corresponded to a 50% "tDCS boost". There was a significant main effect of years of experience for the "tDCS boost" question (χ2(3)=9.3, p=0.025), and a negative correlation between "tDCS boost" and percent of PWA caseload (rs=-0.17, p=0.043). Conclusions: This is the first study to identify clinician familiarity with tDCS and quantify the behavioral change necessary to adopt tDCS as a part of post-stroke aphasia treatment in the rehabilitation setting. SLPs reported that an average "tDCS boost" of 22.9% (equivalent to a 2-point increase in WAB-R AQ; 90th percentile = 50% "tDCS boost" and a 5-point increase) in treatment outcome would be necessary for consideration of clinical implementation. To illustrate: for a patient who improves by 10 WAB-R AQ points following conventional behavioral aphasia therapy, SLPs would be likely to adopt tDCS if the patient improved by an additional 2-5 points (12-15 points in total). Data trends suggest clinicians in academic settings, with more experience, or with a greater number of PWA on their caseload report a lower "tDCS boost" threshold. SLPs' concerns related to the clinical adoption of tDCS can and should be used to guide translational research studies aimed at meeting clinical needs. Furthermore, such reports will be crucial in informing policy decisions that can facilitate the adoption of tDCS into practice. E16 Do different language impairments have distinctive patterns of RS-fMRI as indexed by fALFF? Nicole Dickerson1, Robert Wiley1, James Higgins2, David Caplan3, Swathi Kiran4, Todd Parrish2, Cynthia Thompson2, Brenda Rapp1; 1Johns Hopkins University, 2Northwestern University, 3Harvard Medical School, 4Boston University Although most RS-fMRI studies examine functional connectivity, local activation strength in specific brain areas can also be investigated with measurements of the local fractional amplitude of low frequency fluctuations (fALFF): the proportion of the total BOLD signal in a given brain region that falls within the low-frequency range of 0.01-0.08 Hz (Zou et al., 2008). Relatively little research has been directed at understanding these local properties of the RS-fMRI signal, although recent work (DeMarco & Turkeltaub, 2018) indicates this information may be used to distinguish lesioned from healthy tissue and index language deficits. The current investigation aimed to determine whether fALFF values within the functional language networks used for spoken naming, syntactic processing and spelling reflect the severity of language deficits affecting these specific functions.
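A minimal per-voxel sketch of the fALFF index as defined above may be useful here: the amplitude spectrum of the voxel time series is computed, and the summed amplitude in the 0.01-0.08 Hz band is divided by the summed amplitude over the whole detectable range. The function name and the simple mean-removal step are illustrative assumptions, not the study's exact pipeline.

import numpy as np

def falff(timeseries, tr, band=(0.01, 0.08)):
    """Fractional ALFF for one voxel (Zou et al., 2008).
    timeseries: (n_volumes,) BOLD signal; tr: repetition time in seconds."""
    ts = timeseries - timeseries.mean()        # remove the mean (DC) level
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    amp = np.abs(np.fft.rfft(ts))              # amplitude spectrum
    low = (freqs >= band[0]) & (freqs <= band[1])
    total = amp[1:].sum()                      # exclude the 0 Hz bin
    return amp[low].sum() / total if total > 0 else np.nan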
Methods: Participants were 68 individuals (21 females) who suffered language impairment subsequent to a single left-hemisphere stroke. Participants were recruited from three sites (Johns Hopkins, Northwestern and Boston University). Language-domain severity was measured for syntactic processing (Northwestern Assessment of Verbs and Sentences; Thompson, 2011), spoken naming (Northwestern Naming Battery; Thompson & Weintraub, 2014), and spelling (PALPA 40; Lesser & Coltheart, 1992). For RS-fMRI, 210 or 175 3D image volumes were collected, consisting of 41 slices (voxel size 1.7x1.7x3 mm), with a TR of 2.4 s. Preprocessing of the images was performed using the NUNDA "Robust fMRI preprocessing pipeline". fALFF values were calculated for each voxel (all lesioned voxels and voxels with fALFF values under .098 were excluded from analysis). Voxels were grouped into ROIs (Harvard-Oxford atlas; Desikan et al., 2006) constituting the three functional language networks (ROIs: spoken naming=13, syntactic processing=12, spelling=13). Using linear modeling (RStudio software), 3 models were evaluated. For each, the dependent variable corresponded to the language-domain severity scores; fixed effects were: average fALFF for each ROI of the respective functional language network, months post-stroke, and lesion volume. Results: First, with regard to overall model fits, the three language networks yielded the following: Syntax: R2=0.51, Naming: R2=0.42, Spelling: R2=0.37. Second, the following specific ROIs predicted language deficit severity: 1) Syntactic processing: left inferior frontal gyrus pars opercularis (t=-2.1, p<0.04); 2) Spoken naming: left angular gyrus / left supramarginal gyrus (posterior division) (t=-1.9, p<0.07), right inferior frontal gyrus pars opercularis / middle frontal gyrus (t=1.9, p<0.07), and the right frontal orbital cortex (t=-1.7, p<0.1); 3) Spelling: left supramarginal gyrus (posterior division) (t=-2.7, p<0.02). Conclusions: This study identified specific brain regions in which the BOLD response at rest shows sensitivity to language deficit severity. Overall, we found that lower fALFF values were associated with greater deficit severity. These findings help to advance our understanding of the consequences of lesions to the networks that support language processing and provide foundations for future research using the properties of the brain's activity at rest to predict treatment outcomes and evaluate neural changes that support recovery of function. E17 Primary Progressive Aphasia Presenting as a Functional Neurological Disorder for More Than a Decade Aaron Hauptman1,2, Gretchen Reynolds1,2, Kim Willment1,2, Kirk Daffner1,2; 1Brigham and Women's Hospital, 2Harvard Medical School In 2007, Delis and Wetter proposed the term "Cogniform Disorder" to describe a conversion disorder-like condition within the neurocognitive domain. Broadly, 16% of general neurology outpatients demonstrate symptoms consistent with a DSM-5 diagnosis of functional neurological disorder (FND), previously labelled conversion disorder. There is little evidence in the literature to guide diagnosis and management of possible FND within cognitive neurology, particularly for patients with insidious or atypical language symptoms. Furthermore, given the heterogeneity in presentation of neurodegenerative language disorders, misdiagnosis is a serious risk, particularly given the possibility of early, atypical or insidious onset.
It is important to rule out possible neurodegenerative etiologies of language abnormalities and, even when findings are negative or inconclusive, to monitor closely, as subjective symptoms may predate objective or biomarker findings of an underlying neurodegenerative disorder. We present the case of an ambidextrous man first seen at the age of 43 for subjective memory complaints who was subsequently monitored for over 15 years. Symptoms were suggestive of a non-neurodegenerative etiology and he was conceptualized as having underlying language and frontal-executive weaknesses exacerbated in the setting of identified psychosocial stressors, anxiety and a cluster of symptoms most consistent with a diagnosis of FND. Due to persistent complaints of worsening cognitive difficulties, extensive neurological work-up was done for possible contributory conditions or factors, including brain MRI, FDG-PET and lumbar puncture for biomarkers of Alzheimer's disease, inflammatory etiologies and other causes. All of these were negative. Neuropsychological testing was completed 4 times during his 15 years of follow-up and demonstrated mild language weaknesses and moderate frontal/executive dysfunction, with the language difficulties presumed to be developmental and consistent with this formulation. In the last two years, he complained of worsening language difficulties. His speech was characterized by long response latencies, but otherwise normal language until his final neuropsychological examination in 2018. This neuropsychological testing was most notable for moderate deficits in processing speed and executive functioning. Regarding language, he made atypical spelling errors, mild grammatical errors in written and spoken spontaneous language and errors on phrase repetition. Testing demonstrated intact confrontation naming, comprehension, and low-average category fluency. There was no appreciable speech apraxia or articulation deficit. He also demonstrated variable attention and working memory and performed inconsistently across several measures of performance validity. A second brain FDG-PET, obtained 2 years after his initial normal PET study, demonstrated abnormal hypometabolism of the left frontal lobe and insula. A repeat brain MRI was then obtained that demonstrated asymmetric atrophic changes of the left inferior frontal gyrus, widening of the left Sylvian fissure and left insular volume loss. These imaging and clinical findings are consistent with a diagnosis of agrammatic/non-fluent variant primary progressive aphasia. This case typifies the challenge of diagnosing FND within the neurocognitive domain of language. Given the heterogeneity of patient presentations and the frequency of atypical presentations of neurodegenerative language disorders, this case emphasizes the need for close follow-up and ongoing monitoring of patients with neurocognitive complaints. E18 The effects of real and simulated lesions on the modular organization of the brain Brenda Rapp1, Yuan Tao1; 1Johns Hopkins University Previous simulation studies directed at understanding the effects of lesions on functional organization have shown that damage to global hubs (nodes supporting cross-module integration) or local hubs (supporting within-module integration) has different effects on whole-brain modular organization (Sporns et al., 2007; Honey & Sporns, 2008).
Specifically, these studies found that greater damage to global hubs increases modularity, while greater local hub damage decreases it. However, the consequences of actual lesions have scarcely been studied. To better understand the basis of post-stroke functional re-organization, we examined the consequences of brain lesions in chronic stroke (n=18) and the impact of comparable pseudo-lesions in healthy individuals (n=23). For both participant groups, fMRI data were collected during performance of a spelling task and pairwise correlations of the residual time-courses between 235 nodes distributed throughout cortex (Power et al., 2013) were calculated. A reference modular structure was computed from the healthy control data and, on this basis, global (participation coefficient, or PC) and local (within-module degree, or WD) integration coefficients (Guimera & Amaral, 2005) were calculated for each node. For each lesion mask, overall PC and WD damage scores were computed by averaging the respective coefficients of the lesioned nodes. Pseudo-lesions were created by applying every lesion mask to each control participant's dataset, and modularity (Newman's Q) was calculated for all data sets (lesioned and healthy participants). Finally, for both participant groups the lesion-mask PC and WD damage scores were correlated with the modularity values. We found that comparable lesions across the two participant groups had very different consequences for modularity. Consistent with previous simulation studies, for the healthy participants with pseudo-lesions, greater WD (local hub) damage scores resulted in significantly lower overall modularity (r=0.38, p<0.05), although we did not find that the magnitude of PC (global hub) damage within the lesion mask was correlated with modularity values (r=0.08, n.s.). In contrast, for actual lesions, WD damage was uncorrelated with modularity (r=0.01, n.s.) and, unlike previous simulation studies, greater PC damage resulted in significantly reduced modularity (r=0.61, p<0.001). Moreover, the negative correlation between PC damage and modularity was present in both ipsi- and contra-lesional hemispheres (LH: r=-0.43, p<0.05; RH: r=-0.64, p<0.001). The two groups also differed in that modularity values in the pseudo-lesion data sets were driven by lesion volume (r=0.78, p<0.001), whereas there was no such relationship in the real lesion data set (r=0.23, n.s.). Our findings from pseudo-lesions in healthy controls generally align with previous simulation results indicating that damage to global hubs has widespread consequences for global modularity. Most significantly, the discrepancies between pseudo- and real lesions indicate that lesion-driven functional re-organization cannot be explained as a simple subtraction of nodes from the healthy brain. In this way, the discrepancies in global modular organization between pseudo- and real-lesion conditions provide clear evidence of widespread and complex post-stroke functional reorganization that affects both the lesioned and unlesioned hemispheres.
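For readers unfamiliar with the graph measures used above, the following is a compact sketch of Newman's Q, the participation coefficient (PC), and the within-module degree z-score (WD) for a weighted, undirected network, given a precomputed module partition. Variable names are illustrative and the study's exact implementation may differ.

import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

def hub_coefficients(adj, modules):
    """PC and WD per node (Guimera & Amaral, 2005).
    adj: (n, n) symmetric connectivity matrix; modules: (n,) integer labels."""
    k = adj.sum(axis=1) + 1e-12          # node strength (guard against zeros)
    pc = np.ones_like(k)
    wd = np.zeros_like(k)
    for m in np.unique(modules):
        mask = modules == m
        k_im = adj[:, mask].sum(axis=1)  # strength of each node into module m
        pc -= (k_im / k) ** 2            # PC_i = 1 - sum_m (k_im / k_i)^2
        within = k_im[mask]
        wd[mask] = (within - within.mean()) / (within.std() + 1e-12)
    return pc, wd

def newman_q(adj, modules):
    """Newman's modularity Q for the same partition."""
    g = nx.from_numpy_array(adj)
    comms = [set(np.flatnonzero(modules == m)) for m in np.unique(modules)]
    return modularity(g, comms, weight="weight")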
E19 Transcranial direct current stimulation as an adjuvant to aphasia treatment may provide greater benefit to individuals with longer-term chronic aphasia Lorelei Phillip Johnson1, Alexandra Basilakos1, Leonardo Bonilha2, Chris Rorden1, Julius Fridriksson1; 1University of South Carolina, 2Medical University of South Carolina Over 2 million people in North America are living with aphasia (Simmons-Mackie, 2018). Recent studies indicate that long-term recovery in aphasia is possible (Holland et al., 2017; Johnson et al., 2018), even in individuals who have been living with aphasia for at least 5 years. However, very little is known about the nature of aphasia recovery occurring many years after a stroke has taken place. The goal of the current study was to examine differences in treated recovery for individuals five years or longer post-stroke (described here as 'long-term' chronic aphasia) as compared to those who are earlier in the recovery process (i.e., <5 years or 'shorter-term' chronic aphasia). The data included here were collected as part of a previously published phase II randomized controlled futility trial of anodal transcranial direct current stimulation (A-tDCS) as an adjuvant to aphasia treatment (Fridriksson et al., 2018). The results of the trial suggested that further study of tDCS in aphasia is not futile, and a follow-up analysis indicated that those who received A-tDCS showed significantly greater improvement on both treated and untreated stimuli compared to individuals who received sham tDCS (S-tDCS; Fridriksson et al., 2019). In a retrospective analysis, participants were grouped by time post-stroke. Of the 74 individuals in the original clinical trial, 73 (21 female, mean age=59.3) were included in the analyses detailed here. There were 21 individuals with long-term aphasia (9 received A-tDCS) and 52 individuals with shorter-term aphasia (24 received A-tDCS). For each participant, the amount of proportional change from baseline to 1-week post-treatment was calculated for performance on the Philadelphia Naming Test (untreated items) and a subset of treated items ("Naming 80"). An ANCOVA was conducted to determine main effects of group (shorter-term vs. long-term) and stimulation type (active vs. sham) while controlling for differences in baseline severity as quantified by the aphasia quotient (AQ) of the Western Aphasia Battery-Revised (WAB-R). Because of the relatively small group size and exploratory nature of the analysis, the alpha level was set at .10. For the treated items, there were significant effects of WAB AQ (p<.001, partial η2=.296) and stimulation type (p=.011, partial η2=.092), and a group by stimulation type interaction (p=.078, partial η2=.045). The interaction revealed that the difference in proportional change for those receiving active versus sham treatment was larger in the long-term chronic aphasia group. For the untreated items, there was only a significant effect of WAB AQ (p=.021, partial η2=.076), though the effect of stimulation type approached significance (p=.144, partial η2=.031). An expected effect of stimulation was shown for naming of treated items. Though this same effect was not shown for untreated items, it is possible that with a larger sample of individuals with long-term aphasia, these differences may become clearer.
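A minimal sketch of the ANCOVA just described, using synthetic stand-in data (the column names and values are hypothetical, not the trial's): proportional naming change is modeled as a function of group, stimulation type, their interaction, and baseline WAB-R AQ as a covariate.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 73
df = pd.DataFrame({
    "prop_change": rng.normal(0.1, 0.05, n),            # synthetic outcomes
    "group": rng.choice(["short_term", "long_term"], n),
    "stim": rng.choice(["active", "sham"], n),
    "wab_aq": rng.uniform(20, 90, n),                   # baseline severity
})
model = smf.ols("prop_change ~ C(group) * C(stim) + wab_aq", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                  # ANCOVA table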
Though only a first step in understanding long-term treated recovery in aphasia, these data are promising and warrant further investigation to understand how treatment response and underlying neural mechanisms may evolve years and even decades post-stroke. Meaning: Lexical Semantics E20 Using Concept Typicality to Explore the Semantic Neural Network in Healthy Ageing Mara Alves1, Patrícia Figueiredo2, Ana Raposo1; 1Faculdade de Psicologia, Universidade de Lisboa, 2Instituto Superior Técnico, Universidade de Lisboa Richer semantic repositories foster more interconnected networks by increasing the links among dispersed information. Therefore, the growth of semantic knowledge with ageing may support the processing of concepts that are less well integrated in the semantic network, as is the case for atypical objects. However, progressive decline in executive functions at older ages may affect the semantic control necessary for the appropriate processing of atypical concepts. In a previous study, we found that optimal ageing (indexed by age- and education-adjusted MoCA scores) supports the successful categorization of atypical concepts, whereas in suboptimal ageing the difficulties in categorizing atypical concepts re-emerged. Here, we investigate functional neuroimaging changes in the semantic network associated with age and explore differences between optimal and suboptimal ageing trajectories. In an event-related fMRI study, healthy young (n=14; age range=19-28) and older participants (n=15; age range=58-76) were presented with typical (e.g., apple) and atypical (e.g., avocado) objects of various categories (e.g., fruit) and were asked to silently name each object and then press a button. In the behavioural analyses a threshold of p<.05 was used. The two groups were matched in years of education, verbal semantic knowledge (tested using the WAIS Vocabulary subtest), and semantic association abilities (assessed by the Camel & Cactus test). General cognitive abilities of older adults were assessed through the MoCA. Outside the scanner, all participants showed high accuracy in naming atypical and typical objects, with no differences by typicality or between groups. Inside the scanner, the button-press data revealed a typicality effect, with longer response times for atypical objects relative to typical ones, but no age-related differences. An FWE-corrected cluster-level threshold of p<.05 was used in the neuroimaging analysis. In young participants, object naming recruited a widespread network, including bilateral lingual and fusiform gyri, middle frontal gyrus, middle cingulate cortex and right precentral gyrus. In older adults, activation was restricted to object recognition areas, notably left inferior occipital gyrus and bilateral fusiform gyrus, extending more anteriorly in the right hemisphere to the inferior temporal cortex. The positive association between Vocabulary scores and activation in inferior temporal cortex indicates greater recruitment of semantic processes by older participants with richer semantic repositories. Moreover, suboptimal ageing (indexed by MoCA) was associated with a predominantly right-lateralized network, including fusiform gyrus, inferior temporal cortex and precentral gyrus. Importantly, the processing of atypical concepts involved different brain regions depending on age and cognitive ability. Young participants engaged bilateral middle occipital gyrus.
Such activation was negatively associated with Vocabulary scores, suggesting that young adults rely on visual search to support increased demands on object naming, especially in the absence of richer semantic repositories. In contrast, in suboptimal ageing there was increased recruitment of orbitofrontal cortex. Overall, these findings suggest that, despite similar performance, the neural network supporting semantic abilities changes with ageing. Throughout the lifespan, object naming seems to rely more on the activation of semantic representations, but also becomes more dependent on the recruitment of frontal executive control processes when object identification is more demanding and general cognitive abilities decrease. E21 Exploring the mechanisms of adult word learning by modulating temporal congruency and object modality Samuel H. Cosper1, Claudia Männel2,3, Jutta L. Mueller1; 1University of Osnabrück, 2Max Planck Institute for Human Cognitive and Brain Sciences, 3University of Leipzig Everyday life is filled with objects and events occurring in different modalities. Recently, it has been shown that infants are capable of learning labels for auditory objects (e.g., thunder), similar to the way that infants associate labels with visual objects (e.g., bottle). Unlike infants, typically developed adults have a strong bias towards the visual modality. This preference for visual input leads to the question of whether the modality of an object influences the learning mechanisms behind mapping novel labels onto objects in adulthood. Furthermore, does the timing with which the object-word pairs are presented also play a role in how labels are mapped onto novel objects within the different modalities? The current event-related potential (ERP) study investigates the mechanisms behind adult word learning in four experiments, applying a 2x2 design modulating the modality of the object and the temporal congruency of the object-word pairs. The four experiments consist of auditory object (environmental sounds)-auditory word (spoken word) stimuli presented with a 600 ms within-pair pause, visual object (pictures of novel objects)-auditory word stimuli with a 600 ms within-pair pause, visual-auditory stimuli with a 500 ms overlap of stimulus presentation, and auditory-auditory stimuli with a 500 ms presentation overlap. Each experiment was divided into a training phase, where sets of consistently and inconsistently paired objects and labels were presented, and a testing phase, where the consistent object-label pairs of the training phase were presented in matching or mismatching combinations. The testing-phase ERP results exhibited an N400 violated-expectation effect for mismatching over matching pairs only for visual objects, regardless of temporal congruency. Adult ERPs did not yield any increased negativity for violated pairs involving auditory objects in either temporal condition. These results provide evidence that the modality of the object influences the mechanisms behind adult word learning; however, temporal congruency does not affect learning in the same way. The data suggest that the dominance of the visual modality in adulthood modulates associative word learning and thus learning is not as flexible as in infancy.
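A hedged sketch of the kind of mean-amplitude test behind the N400 effect reported above, assuming per-subject ERPs at a centro-parietal electrode cluster and a conventional 300-500 ms window; the window, array names, and paired test are illustrative assumptions rather than the study's exact analysis.

import numpy as np
from scipy import stats

def n400_effect(match_erp, mismatch_erp, times, window=(0.3, 0.5)):
    """Paired test on mean amplitude in an assumed N400 window.
    match_erp, mismatch_erp: (n_subjects, n_times) voltages; times in seconds."""
    sel = (times >= window[0]) & (times <= window[1])
    diff = mismatch_erp[:, sel].mean(axis=1) - match_erp[:, sel].mean(axis=1)
    return stats.ttest_1samp(diff, 0.0)   # mismatch vs. match across subjects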
E22 Lexical pre-activation and post-lexical integration accounts of the N400 ERP effect: When words' syntactic categories interact with semantic relational priming Alexandre Herbay1,2, Phaedra Royle2,3, Karsten Steinhauer1,2; 1McGill University, 2Center for Research on Brain, Language, and Music, 3Université de Montréal Introduction: Neurocognitive processes underlying the N400 ERP component, typically associated with semantic processing, are still controversial. Several mechanisms have been proposed to account for N400 amplitude reductions associated with semantic priming: (1) automatic spreading activation (ASA), (2) prediction-based priming, and (3) post-lexical integration processes in working memory (WM). Recent proposals support (1) and (2) but question (3) (e.g., Lau et al., 2013). However, Steinhauer et al.'s (2017) priming experiment in French showed that related word pairs of a given semantic relationship (e.g., part-whole) showed stronger priming effects when embedded in a list with other prime-target pairs of the same (part-whole) relationship (consistent pair, CON) than a different relationship (e.g., antonyms; inconsistent pair, INC). Moreover, the consistency N400 difference (INC-CON) had a later onset (after 400 ms) than the traditional relatedness effect (unrelated word pairs vs. INC) starting at 300 ms. The consistency effect is compatible with WM-based post-lexical integration ('relational priming') but not with ASA, as semantic networks are arguably not organized according to abstract types of semantic relationships. Alternatively, the short 250 ms stimulus onset asynchrony (SOA) used by these authors may have led to delayed prediction-based priming effects. Methodology: Using a similar experimental design to Steinhauer et al. (2017), but with a longer 450 ms SOA and nine lists of 80 prime-target pairs promoting five distinct semantic relationships (SR; antonyms, synonyms, part-whole, etc.), we hypothesized that N400 consistency effects tied to prediction-based priming (Lau et al., 2013) should now occur earlier (N250 and early N400 effects, respectively reflecting prediction of orthographic and semantic features) and eliminate late-onset effects. In contrast, WM-based post-lexical effects (Steinhauer et al., 2017) should still occur after 400 ms. We also sought to investigate the influence of the words' syntactic category on semantic relational priming by having three lists for each of our three syntactic categories (SC; adjectives, nouns, verbs). Therefore, in each list, prime-target pairs could be consistent (promoted SC and SR), semantically inconsistent (different SR, SemInc), syntactically inconsistent (different SC, SynInc) or both (different SR and SC, SemSynInc). Results: ERP data from 36 participants were analyzed using mixed-effects linear models. When collapsing all types of inconsistent pairs, N400 relational priming effects still started late (after 390 ms), as in Steinhauer et al. (2017). However, SemInc pairs produced an early fronto-central negativity between 250 and 300 ms (p<0.05) and a larger N400 between 400 and 450 ms at all sites (p<0.001). SynInc pairs only produced a larger N400 after 400 ms in posterior regions (p<0.05). Finally, SemSynInc pairs produced a larger N400 between 380 and 500 ms at central and posterior sites (p<0.001). Conclusion: Words' SC consistency significantly modulated the onset of relational priming effects.
Violation of lexical pre-activations in WM for different SRs but consistent SCs (SemInc) elicits an early frontal negativity. In contrast, an inconsistent SC seems to disrupt predictive mechanisms and delay processing: only late integration effects for related pairs are found in the SynInc and SemSynInc conditions. The data suggest a combination of prediction and post-lexical effects. E23 First language matters: An auditory ERP study of crosslinguistic influence effects on semantic processing Frida Blomberg1, Marianne Gullberg2, Annika Andersson1; 1Linnaeus University, 2Lund University Second language (L2) learners experience challenges when semantics differ across source and target languages, and often display crosslinguistic influence (CLI) in speech production and behavioral comprehension studies (e.g., Jarvis & Pavlenko, 2008). However, in studies using event-related potentials (ERPs), CLI has rarely been reported, probably because these studies typically examine the processing of gross semantic violations (e.g., Kutas & Hillyard, 1980). If more fine-grained semantics are considered, semantic processing may well be subject to CLI. We explored how L2 learners of Swedish process fine-grained L2 verb semantics that are either shared or not shared with their first language (L1). We examined three Swedish placement verbs (sätta 'set', ställa 'stand', lägga 'lay'), obligatory for describing placement on a surface with support from below (Viberg, 1998). Verb choice depends on the shape, orientation and presence/absence of a base of the located object (Gullberg & Burenhult, 2012). In contrast, English has one general placement verb (put), whereas German has specific verbs similar to Swedish (stellen, legen; Narasimhan et al., 2012). In an auditory ERP study we compared English (n=11) and German (n=13) learners of L2 Swedish to native Swedish speakers (n=12). We predicted that adult learners (~7 years of exposure) would differ in their processing of Swedish placement verbs depending on whether their L1 verbs were similar to (German) or differed from (English) Swedish in semantic granularity. German learners were thus expected to display more Swedish-like processing than English learners. Participants watched still images of objects being placed on a table while listening to sentences that were congruent/incongruent with the placement event. Participants performed offline appropriateness ratings of the picture-sentence pairs (1-6 Likert scale) after the ERP session. In both tasks, object shape (symmetrical/asymmetrical), orientation (horizontal/vertical), and presence of a base (with/without) were manipulated. While offline ratings were similar across groups, we found differences in the ERP effects. Native Swedish speakers displayed a biphasic response consisting of a larger anterior negativity and P600 when placement verbs were incongruent with object orientation (e.g., ställa 'stand' with a candle placed horizontally on a table). German learners processed the placement verbs similarly to Swedish native speakers, showing a biphasic ERP response when placement verbs were incongruent with object orientation relative to the ground. However, their responses consisted of an anterior positivity accompanying the P600 rather than an anterior negativity. This anterior positivity has previously been related to learners allocating more attentional resources to an unexpected word.
In contrast, English learners did not show an effect of verb congruency, but rather an anterior positivity and P600 for vertically placed objects regardless of the verb. This suggests a difficulty in processing verb semantics not present in the L1 during online comprehension. The results overall suggest CLI in the online processing of fine-grained verb semantics, although no effects were detected in offline metalinguistic judgments. The findings are commensurate with results in the domain of L2 morphosyntax similarly suggesting that CLI is not a simple matter of presence or absence. Instead, different measures highlight different aspects. E24 Online build-up of neocortical memory traces for spoken words is facilitated by novel semantic associations: MEG data Alina Leminen1,2,3, Eino Partanen2,3, Andreas Højlund3, Mikkel Wallentin3, Yury Shtyrov3,4; 1Cognitive Science, University of Helsinki, 2Cognitive Brain Research Unit, University of Helsinki, 3Center of Functionally Integrative Neuroscience, Aarhus University, 4Laboratory of Behavioural Neurodynamics, St. Petersburg University Recent research has shown that the brain is capable of a rapid build-up of novel cortical memory traces for words during mere perceptual exposure to new lexical items. This has been shown as an online (within minutes) increase in the amplitude of electrophysiological responses to new word forms, even when they have no specific meaning attached and are not attended to or actively rehearsed by the learners. However, the operation of this fast cortical language-learning mechanism in the online acquisition of word meaning has not been sufficiently investigated yet. To address immediate plasticity caused by rapid learning of new words, we presented our participants with novel word forms in a word-learning task taking place during a short (10 minutes, 20 presentations of each item) magnetoencephalography (MEG) recording session. Novel word forms were either learned perceptually through auditory exposure only or were assigned a clear semantic reference using a word-picture association task, in which they were presented in conjunction with images of novel objects. Real familiar words were used as control stimuli. MEG responses were examined as a moving average of three trials for each stimulus type (i.e., real words, perceptually learned novel word forms and semantically learned novel word forms). The results show that, already after approximately five presentations of each stimulus, novel stimuli learnt through semantic association demonstrated stronger activation over the left perisylvian cortices than perceptually acquired word forms that lacked semantic reference. Perceptual items also demonstrated a linear learning-related amplitude increase, but at a much slower pace, spread across the 10-minute recording session. This result suggests a more efficient process of online novel-word memory-trace build-up in the presence of semantic reference. This could be due to more widespread concurrent brain activations resulting in more robust associative learning, ultimately creating novel memory circuits. Our results confirm rapid formation of memory traces for novel words over the course of a short exposure and suggest facilitatory effects of the acquisition of novel semantics on neocortical memory trace formation. E25 Individual differences in the neural organization of language, and their relationship to language abilities Karla Rivera-Figueroa1, Michael C.
Stevens2,3, Inge-Marie Eigsti1; 1University of Connecticut, 2Olin Neuropsychiatry Research Center, The Institute of Living/Hartford Hospital, 3Yale University School of Medicine Left hemisphere dominance for language functions has been well established. However, less is known about the impact of degree of lateralization on the fluency and efficiency of language processing. Typically developing adults are assumed to be fluent speakers of their native language, and effectively similar in their abilities. The current study takes an individual differences approach, examining the functional lateralization of language-specific regions in the brain, and testing whether degree of lateralization is associated with behavioral language skills in 19 healthy adults ages 18-21 years. Method. This study employed an fMRI adaptive semantic matching paradigm (Wilson, Yen, & Eriksson, 2018). 3T fMRI data were collected on a Prisma scanner and processed using the Human Connectome Project (HCP) pipeline. Language-related parcellations were selected based on prior work (Glasser et al., 2016) and divided into frontal and temporoparietal clusters. Participants completed behavioral assessments of vocabulary (PPVT and Stanford-Binet Verbal Knowledge; VKN), syntax (grammatical judgment task; GJ), and a non-verbal fluid reasoning task (Stanford-Binet Object Series/Matrices; NVIQ). Functional degree-of-lateralization indices (LIs) were computed using the formula LI = |Aleft - Aright| / (|Aleft| + |Aright|), where Aleft was the average activation from the fMRI general linear model, collapsed across vertices within each pre-defined HCP parcel. A multiple regression analysis tested this relationship, including PPVT, VKN, GJ accuracy, and NVIQ as predictors. Results. The temporoparietal model was significant (F=4.925, p=0.014), such that higher behavioral task scores were associated with increased LI. The GJ task was a significant individual contributor to the model, p=0.005; PPVT approached significance, p=0.06. Adding non-verbal IQ (fluid reasoning) scores did not improve the model, p=0.034, and IQ was not a significant individual predictor of laterality in this region. The frontal regression model was not significant, p=0.06, although PPVT and VKN scores were significant predictors in this model, p=0.021 and p=0.039, respectively; non-verbal IQ was again not a significant predictor. Discussion. Taken together, these scores suggest that a greater degree of language specialization relative to homologous regions of the contralateral hemisphere in the temporoparietal area maps onto better behavioral language skills in healthy adults with entirely normative language abilities. Both lexical/semantic knowledge (PPVT score) and syntactic processing (grammaticality judgment) contributed unique variance to the fMRI model, suggesting that this functional organization reflects enhanced language functioning across multiple linguistic domains; further, results were not driven by general cognitive ability. Results are consistent with findings of reduced lateralization (specialization) in individuals with developmental language disorders (Whitehouse & Bishop, 2008; Badcock et al., 2012; De Guibert et al., 2011; Herbert et al., 2002; Illingworth et al., 2009; Sun et al., 2010), suggesting that reduced language laterality may act as a risk factor that results in language impairments.
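The lateralization index above is simple enough to state directly in code; this one-function sketch (the function name is illustrative) returns a value bounded between 0 (fully bilateral) and 1 (fully unilateral):

import numpy as np

def laterality_index(a_left, a_right):
    """LI = |A_left - A_right| / (|A_left| + |A_right|), per HCP parcel."""
    denom = abs(a_left) + abs(a_right)
    return abs(a_left - a_right) / denom if denom > 0 else np.nan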
Findings also provide a foundation for interpreting the neural reorganization observed in acquired language disorders. Meaning: Combinatorial Semantics E26 Catecholaminergic modulation of evoked power related to semantic processing Yingying Tan1, Ashley Lewis2, Peter Hagoort1,2; 1Max Planck Institute for Psycholinguistics, 2Radboud University Introduction: Catecholamine neurotransmitters have been shown to play a critical role in many cognitive functions, including language processing [1, 2]. However, the neural underpinnings of the link between catecholamines and language processing remain unclear. In this study, combining electroencephalographic and pharmacological methods, we examined the modulatory effect of a catecholamine agonist (i.e., methylphenidate) on neural oscillations previously linked to semantic processing. Based on previous results [3, 4, 5], our analyses focused on four frequency bands: theta (3-7 Hz), alpha (8-12 Hz), low beta (13-19 Hz), and high beta (20-30 Hz). Methods: Forty-eight healthy participants were tested in two pharmacological conditions (20 mg methylphenidate vs. placebo), using a within-subject, double-blind, randomized design. In each condition, participants read 180 sentences, where for half of the sentences the target word (TW) was semantically congruent and for the other half the TW was incongruent. To further probe whether the catecholamine effect on language comprehension is task-dependent, in one block (90 sentences) participants had to judge after reading each sentence whether it was semantically congruent (Semantic-task). In the other block, participants only had to judge whether a probe word presented after the sentence was of the same font size as the words comprising the sentence (Font-task). Participants' brain responses were recorded from 28 EEG electrodes. A sliding-window approach was used to compute time-resolved spectral power from 2 to 30 Hz. Statistical comparisons were conducted between 0-1000 ms after TW onset using cluster-based permutation statistics. Results: In the Semantic-task, alpha/beta power was lower (600-1000 ms) and theta power was higher (410-730 ms) in the semantically incongruent condition. In the Font-task, only higher theta power in the semantically incongruent condition was evident (290-610 ms). Both effects have previously been linked to semantic congruency manipulations [3, 4], but interestingly the beta effect appears to depend on the task. Moreover, a modulatory effect of methylphenidate was only observed in the Semantic-task. Alpha power was lower (770-1000 ms), while high beta power was higher (570-970 ms), in the methylphenidate condition. No interaction between methylphenidate and semantic congruency was observed. Discussion: Our results demonstrate a task-dependent effect of catecholamines on language processing. When semantic processing was task-relevant, a higher level of catecholamine led to an overall suppression of alpha/beta across the entire task, possibly reflecting increased attention to semantic information [6, 7]. Importantly, a higher level of catecholamine did not influence semantic processing in the same way when semantic information was task-irrelevant. Speculatively, the striatum-PFC projections, which contain a large number of catecholamine receptors, may be responsible for these modulatory effects. References: [1] Grossman, M., et al. (2001). JoNS. [2] Copland, D. A., et al. (2009). Cortex. [3] Lewis, A. G., et al. (2015). Brain & Language. [4] Bastiaansen, M. C. M., et al. (2006). Prog Brain Res. [5] Dockree, P., et al. (2017). Biological Psychiatry. [6] Jensen, O., et al. (2010). Front Hum Neurosci. [7] Loo, S., et al. (2004). J Clinical Neurophysiology.
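As a hedged illustration of the analysis pipeline described in the Methods above: time-resolved spectral power can be computed with a sliding window, and the congruent/incongruent comparison run with a cluster-based permutation test. The window length and function names are assumptions; MNE-Python's permutation_cluster_test is one existing implementation of the latter step, not necessarily the one the authors used.

import numpy as np
from scipy.signal import spectrogram
from mne.stats import permutation_cluster_test

def trial_power(epoch, sfreq, fmin=2.0, fmax=30.0):
    """Sliding-window power for one epoch (channels x samples).
    Returns frequencies, window times, and (channels, freqs, times) power."""
    f, t, sxx = spectrogram(epoch, fs=sfreq,
                            nperseg=int(sfreq * 0.5),      # assumed 500 ms window
                            noverlap=int(sfreq * 0.45))    # assumed 50 ms step
    sel = (f >= fmin) & (f <= fmax)
    return f[sel], t, sxx[:, sel, :]

# With X_incon, X_con as (n_subjects, n_freqs, n_times) power arrays
# averaged over an electrode cluster, the condition contrast would be:
# t_obs, clusters, cluster_pv, _ = permutation_cluster_test(
#     [X_incon, X_con], n_permutations=1000, tail=0)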
E27 The dynamic cognitive process and brain networks by which predictive processing facilitates language comprehension Xiaoqing Li1, Ximing Shao1, Zaizhu Han2; 1CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 2State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University Both language processing theories and experimental studies suggest that, during language comprehension, the human brain can use prior knowledge and contextual information to predict upcoming content, and these internally generated predictions are potentially able to provide constraints on the representation of new bottom-up input, thereby facilitating language comprehension. The present fMRI (functional magnetic resonance imaging) study examined the dynamic cognitive process and brain networks by which predictive processing is conducted to facilitate language comprehension. Participants read Mandarin Chinese sentences for comprehension during fMRI data acquisition. Three versions of sentences were constructed (StrongTool, StrongBuilding, Weak): the sentences had either a strong- or weak-constraint semantic context, and the strong-constraint context set a high expectation for either a building- or tool-noun. The critical nouns (in sentence-final position) were all the best completion of their context, and the critical verbs (immediately preceding these nouns) were exactly the same in the three versions. The effects of semantic prediction were examined by measuring both the anticipatory processing of the critical nouns prior to their onset and the integration processing of these nouns after their appearance. The results showed that, firstly, at both the anticipatory and integration stages of semantic prediction, left parahippocampal gyrus, which is considered to be correlated with the representation of buildings, displayed increased activation in the StrongBuilding condition (for both StrongBuilding-versus-StrongTool and StrongBuilding-versus-Weak contrasts); left pMTG (posterior middle temporal gyrus) and left ITG (inferior temporal gyrus), which have been considered to be associated with tool-related representations, showed increased activity in the StrongTool condition (for both StrongTool-versus-StrongBuilding and StrongTool-versus-Weak contrasts). These results indicate that, during the processing of strongly constraining sentences, brain areas associated with specific semantic representations displayed not only increased activity but pre-activation as well. Secondly, left MTG (middle temporal gyrus), which is considered to be correlated with lexical/semantic retrieval, displayed decreased activation in the strong-constraint conditions (StrongTool+StrongBuilding versus Weak) at both the anticipatory and integration stages of processing, suggesting facilitated lexical-semantic retrieval in a strongly constraining context.
Thirdly, left IFG (inferior frontal gyrus), which has been considered to be related to semantic unification/binding processes, demonstrated increased activation in the strong-constraint conditions (StrongTool+StrongBuilding versus Weak) at the anticipatory stage of processing, but decreased activation at the integration stage of processing. These two stages of left IFG activity indicate that, while reading a strong-constraint sentence for comprehension, the human brain consumed more neural/cognitive resources to bind the currently available information and generate hypothesized representations of incoming words, and this neural/cognitive cost yielded a benefit at a later stage, as indicated by facilitated integration of the actually presented new language input. In addition, bilateral middle frontal gyrus, thalamus, and supplementary motor areas also showed decreased activation in the strong-constraint conditions at the integration stage. Finally, we discuss how the different brain areas (e.g., areas associated with semantic representations, semantic retrieval/unification, and general predictive coding) work cooperatively and dynamically to support predictive processing that facilitates language comprehension. E28 How are metonymy and metaphor different: neural processes and contextual effects Fan Pei Yang1, Maria Pinango2, Andy Zhang2, Yi-hsuan Chen1; 1Center for Cognition and Mind Sciences, National Tsing Hua University, 2Department of Linguistics and Interdepartmental Neuroscience Program, Yale University Metonymy is the use of a word that describes one of a term's features or qualities to refer to that term. A metaphor is a figure of speech that refers to a term by means of verbal analogy with another word. Although the linguistic comparisons involved in creating metonymy and metaphors are different, there is a lack of consistent findings on neural processing differences between these two types of figurative speech. Previous research has reported that metonymy comprehension involves BA 8, 10 and 47, while metaphors activate more extensive regions in the bilateral frontal and temporal gyri in addition to the aforementioned areas. Moreover, metonymy, depending on whether it arises from circumstantial need or systematic (conventional) usage, might elicit variation in the regional involvement of language processing areas. The present study used event-related fMRI to investigate differences in the processing of circumstantial and systematic Chinese metonymy and metaphors, together with connectivity analysis using the CONN toolbox (https://web.conn-toolbox.org). Fifteen participants (8 males, 7 females, mean age=22.37, SD=1.31) read pairs of sentences, with the first of each pair being a contextual prime and the second being either a circumstantial metonymy, systematic metonymy, metaphor or literal sentence. The main effect of circumstantial usage was shown in the left inferior frontal gyrus (pars opercularis and triangularis) and the middle and inferior temporal gyri. The main effect of figurative speech was found in the bilateral inferior frontal gyri and medial frontal gyrus. The main effect of metaphor was represented in the left middle and inferior temporal gyri. The interaction of circumstantial usage and figurative speech was found in the left frontal gyrus and inferior temporal gyrus. The connectivity analysis revealed significant connectivity between the medial and inferior frontal gyri in both hemispheres in the metonymy conditions.
The metaphor condition showed significant connectivity of regions in the typical language network, including the left inferior frontal gyrus and the left superior, middle, and inferior temporal gyri. The results support the view that figurative speech, depending on contextual usage, may involve distinct regions for comprehension. The connectivity results suggest that metaphors might recruit more interactions between subregions in the language network for analogy formation, whereas metonymy might require contextual or systematic inference. This research not only provides a more refined analysis of metonymy and metaphor but also reveals the influence of context on figurative speech processing. E29 Visual and auditory semantic processing converges in the anterior temporal lobe Akihiro Shimotake1, Riki Matsumoto1,2, Kiyohide Usami1, Takayuki Kikuchi1, Kazumichi Yoshida1, Masao Matsuhashi1, Takeharu Kunieda1,3, Ryosuke Takahashi1, Matthew Lambon-Ralph4, Akio Ikeda1; 1Kyoto University, 2Kobe University, 3Ehime University, 4University of Cambridge Introduction: There is growing evidence that the anterior temporal lobe (ATL) plays a critical role in semantic processing, especially as a multimodal semantic hub. The implantation of intracranial electrodes provided us with a rare opportunity to explore cortical function directly and to investigate multimodal semantic processing in the temporal lobe. We aimed to test whether the ATL underpins a modality-invariant representational semantic hub by using semantic judgement tasks with intracranial electrodes. Methods: We studied 4 patients with intractable epilepsy, who underwent presurgical evaluation with subdural grid implantation over the lateral and ventral ATL (language dominant: 3, non-dominant: 1). Visual and auditory semantic judgement tasks were performed. The stimuli were presented visually (color photos) or auditorily (sounds) every 2 seconds. The stimuli included 48 items: half living and half nonliving objects. They were randomly presented, with 96 trials per session; 3 sessions were performed for the visual and auditory stimuli, respectively. Patients were asked to indicate whether the stimulus was living or nonliving by pressing a button. As a control, a high-low judgement task was also performed: patients judged whether the location of a scrambled image was higher or lower (visual control) and whether the tone of a noise sound was higher or lower (auditory control). The sampling rate of the ECoG was 1000 or 2000 Hz, and the time-frequency representation of ECoG power was calculated using a short-time Fourier transform (STFT) with a Hanning window of 100 points (frequency resolution of 10 or 20 Hz). ECoG data were averaged across trials, time-locked to stimulus onset from -200 ms to 1800 ms, and converted to a logarithmic scale. Results: Compared with the control tasks, robust gamma activity (40-50 Hz) was observed in the electrodes along the anterior to middle fusiform gyrus and inferior temporal gyrus for the visual semantic judgement task. For the auditory semantic judgement, gamma activity was observed in the anterior to middle part of the middle and inferior temporal gyri, and high gamma activity (80-100 Hz) was also observed in the anterior to middle fusiform gyrus in 2 patients. Gamma activity partly overlapped between visual and auditory semantic judgement in the anterior to middle part of the inferior temporal gyrus.
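A minimal sketch of the trial-averaged, log-scaled STFT power computation described in the Methods, reduced to the 40-50 Hz gamma band; the 100-point Hann window follows the description above, while the function name and epoch layout are assumptions.

import numpy as np
from scipy.signal import stft

def ecog_gamma_power(epochs, sfreq, band=(40.0, 50.0), win_points=100):
    """Log-power gamma time course, averaged across trials.
    epochs: (n_trials, n_samples) single-channel ECoG epochs."""
    f, t, z = stft(epochs, fs=sfreq, window="hann", nperseg=win_points)
    power = np.log10(np.abs(z) ** 2 + 1e-20)   # (trials, freqs, times), log scale
    mean_power = power.mean(axis=0)            # average across trials
    sel = (f >= band[0]) & (f <= band[1])
    return t, mean_power[sel].mean(axis=0)     # gamma-band time course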
Conclusions: Visual and auditory semantic processing converged in the ATL, especially in the inferior temporal gyrus. The ATL may thus serve as a modality-invariant representational hub within the semantic system. For further analysis, comparison with gamma activity related to other semantic tasks (expressive, receptive) will be needed. Meaning: Discourse and Pragmatics E30 Discourse management during speech perception – towards a cognitive-neuroanatomical model Susanne Dietrich1, Ingo Hertich1, Verena Seibold1, Bettina Rolke1; 1University of Tübingen The present functional magnetic resonance imaging (fMRI) study addresses context-dependent auditory speech comprehension. Specifically, we manipulated discourse coherence by using presupposition (PSP) phrases triggered by an indefinite or definite determiner that either corresponded or failed to correspond to items in a preceding context sentence. Our aim was to investigate how the cognitive operations required for PSP processing, such as reference processing and the eventual handling of violations, can be assigned to neuroanatomical structures of the language network in the brain. The stimulus materials comprised pairs of context and test sentences including definite determiners ("the"), presupposing uniqueness and existence, or indefinite determiners ("a"), suggesting non-uniqueness or novelty of an item. Each subject performed an acceptability rating during fMRI scanning. Hemodynamic responses were modelled at PSP onset. Discourse violations yielded bilateral hemodynamic activation within angular gyrus (AG), inferior frontal gyrus (IFG), pre-supplementary motor area (pre-SMA), and basal ganglia (BG). These findings suggest different cognitive aspects of PSP processing: (i) a reference process requiring working memory (IFG), retrieval, and integration of semantic/pragmatic information (AG), and (ii) cognitive control for inconsistency management (pre-SMA/BG) in terms of "successful" comprehension despite PSP violations at the surface. These results provide the first fMRI evidence toward a functional neuroanatomical model of context-dependent speech comprehension, based on the example of PSPs. E31 How working memory capacity modulates the time course of semantic integration at sentence and discourse level Xiaohong Yang1,2, Xiaoqing Li1,2, Jinfeng Ding1,2, Ying Zhang1,2, Qian Zhang1,2; 1CAS Key Laboratory of Behavioral Science, Institute of Psychology, 2Department of Psychology, University of Chinese Academy of Sciences Language comprehension requires language users to integrate information from prior context. The one-step model of language comprehension argues that during comprehension, language users not only immediately integrate information from the local sentence context, but also information from the global discourse context. Underlying the one-step model is the immediacy assumption, according to which both sentence and discourse context will be immediately used to co-determine the interpretation of the message (Hagoort & van Berkum, 2007). In the present study, we test the immediacy assumption by examining whether the time course of these integration processes is constrained by language users' working memory capacity. Sentence and discourse stimuli were constructed. For the sentence stimuli, each sentence contained a critical word that was either congruent or incongruent with its preceding sentence context.
For the discourse stimuli, each discourse contained four sentences, with a target word embedded in the final sentence; the target word was either congruent or incongruent with the information provided in the first sentence of the discourse. Participants of high and low working memory span (N=20 for each span group) were instructed to read for comprehension. Our results showed that while the high span readers showed the N400 and P600 effects to semantically incongruent words, the low span readers only showed the P600 effect. This pattern was found regardless of whether the incongruent words were placed in sentence or discourse context. These results suggest that the low span comprehenders were relatively slower than their high span counterparts in performing semantic integration at either the sentence or the discourse level. Thus, our findings support and also extend the one-step model by showing that while both sentence and discourse context are integrated in one step, whether this step takes place immediately depends on language users’ working memory resources. E32 Discourse belief-updating in the right hemisphere Maxime Tulling1, Ryan Law2, Ailís Cournane1, Liina Pylkkänen1,2; 1New York University, 2NYU Abu Dhabi Research Institute Introduction: With language, we can describe either the actual world or possible worlds. Little is known about the neurobiological basis of this contrast. Here we studied it by comparing the processing of factual assertions, which are claims about the world under discussion that allow you to update your beliefs about this world accordingly (e.g., ‘John loves Mary’), and expressions involving the modal verbs ‘may’ and ‘must’, which refer to possible states of affairs that are not actual or known (e.g., ‘John must love Mary’). Methods: A magnetoencephalography (MEG) study (N=25) compared visually presented sentences (Rapid Serial Visual Presentation) containing the modals ‘may’ and ‘must’ against sentences containing the factual verb ‘do’. In order to have ‘do’ naturally appear in the same position as ‘may’ and ‘must’, our sentences contained VP ellipsis (…and the king knows that the squires do/may/must too), controlled for elided-VP length and complexity. The interpretation of the modals was further dependent on prior (pre-normed) contexts biasing towards either an inferential or a permission/obligation reading, which served as another independent factor. Target sentences (N=240) were followed by a task sentence, where participants indicated whether these were natural continuations or not. Results: A spatiotemporal cluster-based permutation test on the full brain in the time window 100-900 ms after target word onset revealed a significant cluster reflecting a robust increase for the factual conditions over the modal ones at 210-350 ms, starting around the right temporoparietal junction (rTPJ) and spreading up to the right inferior parietal sulcus (rIPS) and right medial surfaces (cuneus - posterior cingulate cortex). We did not observe any significant differences between the different types of modal verbs, nor any reliable activity increases for modal verbs over factual verbs. Discussion: We hypothesize that this increased activation for the factual condition may reflect computations involved in the evaluation and integration of claims made about the world of evaluation, a process absent from the modal condition, as those sentences only contribute possible compatibilities with the evaluated world.
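As a rough sketch of the spatiotemporal cluster-based permutation test reported in the Results above, the following uses MNE-Python with synthetic data; the dimensions are hypothetical, and a real analysis would pass a source-space adjacency matrix rather than the lattice default.

```python
import numpy as np
from mne.stats import spatio_temporal_cluster_1samp_test

# Synthetic stand-in: 15 subjects x 80 time samples (100-900 ms) x 500 sources,
# each entry the paired difference factual-minus-modal at one source/time.
X = np.random.randn(15, 80, 500)

# adjacency=None makes MNE assume a regular lattice; real source estimates
# would use an adjacency matrix derived from the cortical source space.
t_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
    X, adjacency=None, n_permutations=256, tail=0)
significant = [c for c, p in zip(clusters, cluster_pv) if p < 0.05]
```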
This belief-updating function is in line with suggestions that the rTPJ plays a role in theory revision and conceptual change (Martin & McDonald, 2003; Corbetta et al., 2008; Mahy et al., 2014) and supports the view that the right hemisphere is involved in pragmatic processing and contextual coherence (Kuperberg et al., 2006; Virtue et al., 2006). Follow-up experiment: In a follow-up experiment, we are contrasting the belief-updating hypothesis with several others. We presented stimuli without a third person character (…so the squires do/may/might too). Our results (n=15) show no effect of our manipulation in right temporal and parietal areas ~100-500 ms. For comparison, a random sample of 15 participants from the original study with the same test criteria showed a strongly trending spatiotemporal cluster. This suggests that the activity observed in the rTPJ might reflect updating the representation of someone else’s beliefs. This is consistent with studies relating rTPJ activity to theory of mind and reasoning about other minds (Saxe & Wexler, 2005). E33 Task effects on pragmatic inference calculation Tal Tehan1, Einat Shetreet1; 1Tel Aviv University Statements such as ‘Some elephants have trunks’ are ambiguous: some speakers accept them as true, adopting a logical interpretation (they understand ‘some’ as ‘some and possibly all’), while most speakers reject these statements as false, adopting a pragmatic interpretation through the calculation of a scalar implicature (they interpret ‘some’ to mean ‘some but not all’; e.g., Huang & Snedeker, 2009; Noveck, 2001; Papafragou & Musolino, 2003). It has been suggested that the calculation of scalar implicatures depends on extra-linguistic higher cognitive functions (e.g., Foppolo et al., 2012; Shetreet et al., 2014). This study examines the role of task in scalar implicature calculation, focusing on these extra-linguistic functions, in two fMRI experiments with Hebrew speakers. In Experiment 1, participants had to judge whether sentences including the equivalent of ‘some’ (‘xelek’) or ‘all’ (‘kol’) matched a picture in which all, some, or none of the items shared the trait included in the sentence. In our critical condition, ‘some’ was presented with “all-pictures” (e.g., ‘some of the squares are red’ for a picture of five red squares), so that it involved scalar implicature calculation. Preliminary results show that this condition is associated with increased activations in a network of frontal and prefrontal regions, including the ACC and BA 10 & 11, which were also observed with scalar implicature calculation in English speakers (Shetreet et al., 2014). Similar activations were observed in a subset of participants who accepted the critical condition as true (i.e., assigning a logical meaning to “some”). In Experiment 2 (which was always performed after Experiment 1), we used a picture-selection task (similar to Horowitz & Frank, 2015). In this task, participants listened to sentences including ‘some’ or ‘all’ and had to select one of three pictures (presented simultaneously), in which all, some, or none of the objects shared the trait included in the sentence. Here, “some-pictures” were exclusively selected following “some-sentences” by all the participants, including those who responded logically in Experiment 1. An ROI analysis in BA 10 & 11 revealed that, as opposed to Experiment 1, “some-sentences” and “all-sentences” showed no significant differences.
The results from Experiment 1 confirm that in Hebrew, as in English, certain pragmatic inferences recruit extra-linguistic processes, given that some of the observed activations have been implicated in studies of higher cognitive functions, such as decision making. Differential activations even in speakers who did not adopt the pragmatic interpretation indicate that they too had to engage in a decision-making process, which suggests that they considered both logical and pragmatic interpretations (that is, they too calculated the implicature). Results from Experiment 2, with no differential activations in regions implicated in decision making, could suggest that only the pragmatic interpretation of ‘some’ was considered, possibly due to the contrast between “all-pictures” and “some-pictures”. Thus, in certain contexts, no competition between the logical and pragmatic interpretations is present, either for participants who initially adopted the pragmatic interpretation or for those who adopted the logical one. Syntax E34 Paragrammatism, agrammatism and the cortical organization of syntax: a lesion-symptom mapping study William Matchin1, Alexandra Basilakos1, Dirk den Ouden1, Brielle Stark2, Julius Fridriksson1, Gregory Hickok3; 1University of South Carolina, 2Indiana University, 3University of California, Irvine INTRODUCTION: The cortical organization of syntax has been difficult to determine, in part because similar syntactic effects in functional neuroimaging studies are elicited in the inferior frontal gyrus (IFG) and the posterior middle temporal gyrus (pMTG). Agrammatic speech is associated with damage to frontal structures including the pars triangularis of the IFG (Wilson et al., 2010; den Ouden et al., 2019). However, unlike the pMTG, damage to the IFG does not reliably impair basic sentence and syntactic comprehension abilities (Dronkers et al., 2004; Wilson & Saygin, 2004; Pillay et al., 2017; Rogalsky et al., 2018). Kleist (1914) originally proposed two kinds of syntactic disturbances in the speech of people with aphasia: agrammatism (simplification of grammatical structure, omission of function words/morphemes) and paragrammatism (error-filled misuse of grammatical elements and structures, leading to “sentence monsters”, e.g., “two women is ugly”, “…wanted to make a trick her”). Matchin & Hickok (forthcoming) suggest that agrammatism results from damage to a morpho-syntactic sequencing system in the pars triangularis and paragrammatism results from damage to a hierarchical syntactic system in the pMTG. However, as the cortical locus of paragrammatism is largely unknown, we performed a voxel-based lesion-symptom mapping (VLSM) study in 53 patients with chronic aphasia secondary to a single-event left hemisphere stroke. METHODS: Four expert raters classified subjects’ spoken narrative discourse samples as agrammatic, paragrammatic, or showing no grammatical deficit, with consensus obtained through discussion. Subjects’ lesion maps were manually drawn and warped to MNI space, and VLSM analyses were performed. Lesion volume was always included as a covariate.
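A minimal sketch of the mass-univariate logic behind VLSM with a lesion-volume covariate; the data here are random placeholders, and thresholding (e.g., by permutation) is omitted.

```python
import numpy as np

def vlsm_t_map(lesions, symptom, lesion_volume, min_lesioned=5):
    """At each voxel, the t-statistic for lesion status predicting the
    symptom score, with lesion volume as a covariate (OLS)."""
    n, n_vox = lesions.shape
    base = np.column_stack([np.ones(n), lesion_volume])
    t_map = np.full(n_vox, np.nan)
    for v in range(n_vox):
        les = lesions[:, v]
        if les.sum() < min_lesioned:          # skip rarely lesioned voxels
            continue
        X = np.column_stack([base, les])
        beta, *_ = np.linalg.lstsq(X, symptom, rcond=None)
        resid = symptom - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_map[v] = beta[-1] / np.sqrt(cov[-1, -1])  # t for the lesion term
    return t_map

# Toy usage: 53 patients, binary lesion maps, 0/1 syndrome classification.
rng = np.random.default_rng(0)
lesions = rng.integers(0, 2, size=(53, 1000))
symptom = rng.integers(0, 2, size=53).astype(float)
t_map = vlsm_t_map(lesions, symptom, lesions.sum(axis=1).astype(float))
```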
RESULTS: Region of interest analyses identified a clear double dissociation: damage to the left pars triangularis of the IFG was significantly associated with agrammatism, t(51) = 2.959, p = 0.002, but not paragrammatism, t(51) = -1.542, p = 0.935, while damage to the left pMTG was significantly associated with paragrammatism, t(51) = 3.087, p = 0.002, but not agrammatism, t(51) = -1.429, p = 0.920. Neither syndrome was significantly associated with damage to the anterior MTG. Whole brain analyses revealed non-overlapping effects of agrammatism in inferior and middle frontal brain regions and paragrammatism in posterior temporal and parietal brain regions. Secondary analyses adding speech fluency (words per minute) as an additional covariate revealed the same significant double dissociation, albeit weaker, with the same spatial distribution. DISCUSSION: We showed that these qualitatively distinct grammatical deficits, agrammatism and paragrammatism, correspond to inferior frontal and posterior temporal damage (respectively), as proposed by Matchin & Hickok (forthcoming). While both brain regions appear to support syntactic processing, broadly construed, their patterns strongly diverge with respect to grammatical deficits in aphasia, consistent with distinct roles in linear morphosyntactic processing in the IFG and hierarchical lexical-syntactic processing in the pMTG. E35 Is the VSO word order canonical in Arabic? Evidence from ERPs Ali Idrissi1, Eiman Mustafawi1, Tariq Khwaileh1, R. Muralikrishnan2; 1Qatar University, 2Max Planck Institute for Empirical Aesthetics Introduction. Among the permissible word orders in Standard Arabic, verb-initial VSO and subject-initial SVO orders are predominant. Traditionally, however, VSO is considered the unmarked/canonical order, with SVO considered its ‘marked’ variant. In a visual ERP study, we investigated the neural signatures of word-order differences in Standard Arabic, and hypothesized that additional processing costs should ensue at the position of the object in the SVO order as opposed to the VSO order, if VSO is indeed processed as the canonical order. Alternatively, if the two orders have the same status in the grammar, no additional processing costs should be observable in processing SVO. Methods. We employed transitive sentences in three word orders: VSO, SVO, and Adverb-VO (AVO). The adverb in the subject-dropped sentences in the AVO order was identical (‘yesterday’) in all sentences. All critical stimuli and fillers in the experiment were grammatical, well-formed sentences. Further, subjects and objects were all human singular nouns; the subject was always feminine, with which the verb agreed in person, number, and gender; and the object was always masculine. Thus, there was no ambiguity at the position of the object (the critical position) as to its objecthood. Thirty right-handed female native speakers of Arabic, all of them students at Qatar University, participated in the study. EEG was recorded using 25 Ag-AgCl active scalp electrodes fixed on an elastic cap (Easycap GmbH, Germany). AFZ served as the ground electrode; recordings were referenced online to the left mastoid, but re-referenced offline to the average of the linked mastoids. Results. The ERP results at the position of the object revealed a negativity effect in the 400 to 600 ms time-window for the SVO and AVO conditions, as opposed to the VSO condition.
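Window effects of this kind are typically quantified as mean amplitudes per condition; a minimal sketch with hypothetical array shapes (not the authors' pipeline):

```python
import numpy as np

def window_mean_amplitude(erp, times, tmin=0.400, tmax=0.600):
    """Mean ERP amplitude in a latency window (e.g., the 400-600 ms
    negativity at the object position), per subject and channel."""
    mask = (times >= tmin) & (times <= tmax)
    return erp[..., mask].mean(axis=-1)

# Hypothetical usage: 30 subjects x 25 channels x 601 samples per condition.
times = np.linspace(-0.2, 1.0, 601)
erps = {c: np.random.randn(30, 25, times.size) for c in ('VSO', 'SVO', 'AVO')}
n400 = {c: window_mean_amplitude(e, times) for c, e in erps.items()}
effect = n400['SVO'].mean() - n400['VSO'].mean()  # negativity for SVO vs. VSO
```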
Further, there was a late-positivity effect in the 600 to 800 ms time-window for the AVO condition, as opposed to the VSO condition. Based on their topography and latency, these effects can be plausibly interpreted as instances of the N400 and P600. Given that the pre-critical words and their categories were necessarily different in the three word orders, we checked for possible upstream effects that might have played a role at the critical position. However, ERPs time-locked to the sentence onset for the entire epoch of the sentence, and sentence-wide difference waves between the critical conditions, showed that the critical effects at the position of the object are independent of effects from the pre-critical positions. Discussion. The pattern of results we found suggests additional processing costs ensuing from integrating the object in the SVO and AVO orders compared to the VSO order. We interpret the late positivity for the AVO condition as a consequence of enriched composition (Schumacher, 2011) of the inferred dropped subject. Taken together, these findings support the hypothesis that additional processing costs ensue during the integration of the object when the order is SVO as opposed to VSO. Thus, our results provide the first neurophysiological evidence for the canonical/central status of the VSO order in Arabic. Morphology E36 Automatic decomposition revisited with MEG evidence from visual processing of Tagalog circumfixes, infixes, and reduplication Samantha Wray1, Linnaea Stockall2, Alec Marantz1,3; 1New York University Abu Dhabi, 2Queen Mary University of London, 3New York University Morphologically complex words are decomposed in the visual system based on their formal properties, such that even words which orthographically imitate a morphological form (e.g., brother) are decomposed automatically in the visual word form area (VWFA) in the anterior fusiform gyrus (Rastle et al. 2004, Lewis et al. 2011). Decomposition effects have also been found for irregular complex words (e.g., bought) (Fruchter et al., 2013) and for words with otherwise unattested stems (e.g., excursion) (Gwilliams & Marantz 2018). However, the current inventory of demonstrated influences on this process is incomplete, which the current study aims to address by posing two questions. First, are words which are morphologically complex in ways beyond affixation automatically decomposed? Second, are words which phonologically conform to the appearance of morphological complexity, despite being morphologically simple, decomposed by the visual system as well? For example, the vowel preceding a word-final consonant in Tagalog is realized as [o] but is [u] in other positions. Reduplicated words can exhibit non-transparent application of this rule to retain identity between the base and reduplicant morphemes (e.g., boboto “shall vote”; stem boto “vote”). There are two types of pseudoreduplicants: one of them applies the rules of Tagalog phonology transparently (Wilbur 1973, McCarthy 1995); the pseudoreduplicant dubdob “feeding a fire” conforms to this phonological rule. The second type of pseudoreduplicant does not apply these rules transparently. The word gonggong “fish”, for example, does not exhibit transparent application of the o->u rule; it retains /o/ in both positions.
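The transparency contrast can be made concrete with a toy checker for the o->u rule, treating written vowels as a stand-in for the phonology (a simplification; real Tagalog syllabification is richer than this):

```python
VOWELS = set('aeiou')

def transparent_o_u(form):
    """True if /o/ surfaces only in the final vowel slot, i.e., the o->u
    rule is applied transparently everywhere else in the word."""
    positions = [i for i, ch in enumerate(form) if ch in VOWELS]
    return all(form[i] != 'o' for i in positions[:-1])

print(transparent_o_u('dubdob'))    # True:  [+i], rule applied transparently
print(transparent_o_u('gonggong'))  # False: [-i], retains /o/ in both positions
print(transparent_o_u('boboto'))    # False: opaque copying keeps base identity
```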
Participants (n=20) performed a visual lexical decision task in seven conditions: (1) reduplicated words, (2) pseudoreduplicated words which imitate reduplicated words [+i], (3) pseudoreduplicants which do not [-i], (4) circumfixed words, (5) infixed words with the infix /-in-/, (6) pseudoinfixed words with an orthophonemic string /in/ in a typologically appropriate position for an infix, and (7) morphologically simple words. Magnetoencephalography was recorded concurrently, and activity from the VWFA was visually inspected to identify the M170 (arguably the response from the VWFA shown to be sensitive to decomposition). Activity from 150-200 ms after presentation was analyzed; linear mixed-effects model (LMEM) results show that transition probability (TP) modulates activity (when word length and base frequency are constant) for reduplicated, circumfixed, and infixed words. Additional LMEMs were then built to predict the activity of the pseudocomplex forms, and predictions were compared to observed values. Results were consistent with the hypothesis that [+i] pseudoreduplicants which phonologically imitate reduplicated words are also automatically decomposed, but [-i] pseudoreduplicants are not. Additionally, pseudoinfixed words are not automatically decomposed, in contrast with [+i] pseudoreduplicated words. In sum, this study has several implications for the study of processing morphologically complex written words in isolation. First, reduplication, circumfixation, and infixation are comparable to more widely studied suffixation in that properties relating the words’ constituents modulate activity during word recognition, indicating they are automatically parsed by the visual system. Second, phono-orthographic cues aid this process even in the absence of an isolable stem or morphosyntactic rule. Furthermore, this study contributes to cross-linguistic evidence that underlying grammar, here morphophonological, influences early visual word processing. E37 Disentangling morphological decomposition and letter recognition: An MEG study of Japanese verbs Shinri Ohta1,2, Yohei Oseki2,3, Alec Marantz2,4,5; 1Department of Linguistics, Kyushu University, 2Department of Linguistics, New York University, 3Faculty of Science and Engineering, Waseda University, 4Department of Psychology, New York University, 5NYUAD Institute, New York University Abu Dhabi Previous magnetoencephalography (MEG) studies reported that morphologically complex words are decomposed into morphemes around 170 ms after the onset of visual stimuli (M170) in the left fusiform gyrus and inferior temporal gyrus (FG/ITG). Moreover, another MEG study found that transition probability between morphemes (morphTP) was negatively correlated with the amplitude of the M170. As these studies targeted English, in which morphological boundaries are a subset of letter boundaries, it is difficult to examine whether the M170 is modulated by morphTP or by TP between letters (letterTP). To disentangle morphological decomposition and letter recognition, we targeted Japanese, which uses logographic kanji and moraic kana writing systems. For example, in Japanese verbs, kanji and kana represent the verbal root and the remaining morphemes, respectively, leading to a mismatch between morpheme boundaries and letter boundaries. In the present MEG experiment, we compared the effects of morphTP (TP between verbal root and suffixes) and letterTP (TP between kanji and kana) on left FG/ITG activation.
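A minimal sketch of how between-morpheme transition probabilities of this kind can be estimated from corpus counts; the tokens below are illustrative, not the study's materials:

```python
from collections import Counter

def transition_probabilities(pairs):
    """pairs: (base, continuation) tokens, e.g., (root, suffix) for morphTP
    or (kanji string, kana string) for letterTP.
    Returns TP(continuation | base) = count(base, cont) / count(base)."""
    pair_counts = Counter(pairs)
    base_counts = Counter(base for base, _ in pairs)
    return {pair: n / base_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy usage: lower TP between morphemes should predict a larger M170.
tp = transition_probabilities([('tabe', 'ta'), ('tabe', 'ru'), ('tabe', 'ta')])
print(tp[('tabe', 'ta')])  # 2/3
```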
We recruited 22 right-handed native speakers of Japanese (nine males, 35.5±7.3 yrs.). We used 112 Japanese verbs for each of intransitive verbs, transitive verbs, intransitive-causative verbs, and transitive-causative verbs, as well as the same number of nonwords (896 stimuli in total). The participants performed a visual lexical decision task. We used a 157-channel MEG system (Kanazawa Institute of Technology). For the MEG analyses, we used spatiotemporal cluster permutation tests as implemented in the MNE-Python and Eelbrain packages. As our primary target was the M170, the region of interest was anatomically defined as the left FG/ITG and the analysis time window was restricted to 50-250 ms after word onset. A two-way repeated-measures analysis of variance (transitivity * causativeness) on the accuracies showed significant main effects of transitivity and causativeness, as well as an interaction (transitivity: p=0.046, causativeness: p<0.0001, interaction: p=0.042). The reaction times also showed a significant effect of causativeness, but neither a main effect of transitivity nor an interaction was significant (transitivity: p=0.39, causativeness: p<0.0001, interaction: p=0.34), indicating that the causative conditions were more demanding. For the MEG data, we first examined whether transitivity and causativeness modulated activation in the left FG/ITG. We found significantly larger activation in the noncausative conditions in the anterior part of the left FG/ITG (corrected p=0.044), but the main effect of transitivity was not significant (corrected p>0.2). Because the TP from the verbal root to the causative suffix was lower than that from the verbal root to a tense suffix, this activation may reflect the difference in morphTP. We further examined whether morphTP modulated left FG/ITG activation using spatiotemporal cluster regression analyses. We found a significant negative correlation between morphTP and left FG/ITG activation (corrected p<0.03). In contrast, we did not find any significant correlation between letterTP and left FG/ITG activation (corrected p>0.2). These results demonstrate that morphologically complex verbs in Japanese are decomposed into morphemes, but not into letters, similar to the morphologically complex English words examined in previous MEG studies. Multilingualism E38 Where do code-switching constraints apply? An ERP study of code-switching in Mandarin-Taiwanese sentences Chia-Hsuan Liao1, Macie McKitrick1, Maria Polinsky1; 1University of Maryland, College Park Code-switching (CS below) is a pervasive phenomenon of bilingual language use. Existing linguistic analyses show that CS is subject to a well-defined set of principles (Toribio, 2001); however, the place of these principles in grammar remains unclear; in particular, are these principles a reflection of grammatical constraints or rather conditions on felicity? One of the better-studied principles is the Functional Head Constraint (FHC; Belazi, Rubin & Toribio, 1994). According to the FHC, a switch cannot occur between a functional head and its complement. The current study aims to test the validity of the FHC in a novel empirical domain (numeral classifier phrases, consisting of a numeral, a classifier, and a noun) and to use the results to address the question raised above: is the FHC a syntactic principle or a felicity constraint?
Although existing studies do not manipulate the grammaticality of CS, they report that processing costs associated with CS include an N400 and a frontal negativity; these effects point to lexical-access difficulty and the inhibition of pre-activated representations (Liao & Chan, 2016; Proverbio et al., 2004). In our experiment, we tested CS between Mandarin Chinese and Taiwanese, using numeral-classifier phrases (e.g., two-classifier apples) and verb phrases (e.g., ride-aspect bike) embedded in full-sentence contexts. In the numeral-classifier phrases, the grammatical condition involved a switch between a classifier and a classified noun; the ungrammatical condition had a switch between a numeral and a classifier. In the verb phrases, the grammatical condition involved a switch between a lexical verb and its object, and the ungrammatical condition had a switch between the verb and the aspectual marker. Each condition had a non-switched version as its baseline. We expected to obtain a grammatical violation response to the ungrammatical switch conditions, such as a LAN or a P600, on top of the aforementioned processing costs. Participants (N=31) were Mandarin-Taiwanese native speakers, proficient in both languages but more dominant in Mandarin. Sentences were presented with RSVP. Results replicated the findings of Liao & Chan (2016), showing that sentences with CS elicited a larger N400 and a late negativity. However, we did not find an interaction between Grammaticality and Switching in the LAN or P600 time windows. This result indicates that the ‘grammaticality’ of CS predicted by the FHC is too restrictive. We interpret this lack of LAN/P600 effects as an indication that the FHC is a felicity constraint rather than a syntactic principle. In addition, we showed that switching costs can be modulated by lexical categories. While the switch between a verb and a noun evoked a larger N400 (as compared to the non-switched baseline), the N400 effect was considerably smaller for the switch between a numeral and a classifier (as compared to the non-switched baseline). The smaller N400 effect in the numeral-classifier phrase as compared to the verb phrase could be due to a lower frequency of classifier constructions as opposed to verb-object constructions in both languages. E39 Examining prediction at the level of the discourse: An ERP study José Alemán Bañón1, Clara Martin2; 1Centre for Research on Bilingualism, Stockholm University, 2Basque Center on Cognition, Brain and Language INTRODUCTION: An ongoing debate in psycholinguistics concerns whether adult L2ers can generate predictions online. Grüter et al. (2016) propose that adult L2ers have a Reduced Ability to Generate Expectations. Alternatively, Kaan (2014) argues that prediction is similar in the L1 and L2, but is impacted by individual differences in cognitive factors. We used ERPs to investigate the role of prediction in Focus assignment (a property of the discourse) via the it-cleft. Example: What did Ann buy, a book or a calculator? It is [a book]FOCUS that Ann bought. We examine whether comprehenders use the it-cleft to anticipate the Noun Phrase (NP) with Focus status. METHODS: Participants read wh-questions like (1). STIMULI: (1) Either an agent or an adviser could work for a banker. In your opinion, which of the two should a banker hire?
(1a) In my opinion… it is an agent that a banker should hire; (1b) …it is a banker that should hire an agent; (1c) …an agent should be hired; (1d) …a banker should hire an agent. Then their EEG was recorded while they read the responses (RSVP: 450/300 ms). In (1a-b) the it-cleft provided a cue for Focus assignment. Half of the time, Focus was assigned to an accessible NP (1a), and the other half, to the Topic (1b), thus violating information structure. In (1c-d) the absence of the it-cleft made Focus assignment less constrained/predictable. As shown in (1), the two Focus NPs (an agent/an adviser) and the Topic (a banker) were preceded by different allomorphs of the indefinite article (counterbalanced), allowing us to examine prediction effects at the article, before semantic integration occurred (DeLong et al., 2005). We used 30 items/condition. RESULTS/DISCUSSION: L1-English speakers (n=23) showed an N400 (250-400 ms) for unexpected (1b) relative to expected articles (1a) in the conditions with the it-cleft (Cleft by Expectedness by Anterior by Hemisphere, F(1,22)=4.38, p<.05), suggesting that the cleft allows the parser to anticipate upcoming (Focus) nouns. Topic nouns (1b/1d, banker) yielded a P600 relative to Focus nouns (1a/1c, agent) overall. Since the P600 is argued to reflect the reanalysis processes triggered by violations of top-down expectations, L1 speakers might have expected the first NP in the response to fill the slot opened by the wh-question. L1-Spanish L2-English learners (n=22, intermediate/advanced) showed an Anterior Positivity for unexpected (1b) relative to expected articles (1a). This effect, linked to prediction disconfirmation (DeLong et al., 2014), emerged in the conditions with the it-cleft (Cleft by Expectedness, F(1,21)=6.89, p<.05). Topic nouns (1b/1d, banker) yielded an N400 effect relative to Focus nouns (1a/1c, agent), suggesting that the L2ers processed infelicitous Focus assignment lexically. Finally, the size of the prediction effect on the article correlated with processing speed in both L1 speakers and L2ers (Huettig & Janse, 2016). Our results are not fully consistent with either Grüter et al.’s or Kaan’s proposals, but they show that L2ers can predict at the level of the discourse, although differently from L1 speakers, and that L2 prediction is impacted by similar cognitive factors as in L1 speakers (processing speed). E40 Phonological aspects of lexical retrieval in fluent bilingual aphasia Marco Calabria1, Federica Iaia1, Nicholas Grunden1,2, Carmen García Sánchez2; 1Center for Brain and Cognition, Pompeu Fabra University, Barcelona, 2Hospital de la Santa Creu i Sant Pau, Barcelona Introduction. In a previous study, we showed that lexical retrieval may be selectively impaired in bilinguals with aphasia when they need to resolve semantic competition in their non-dominant language. Additionally, our data suggested that this impairment could possibly be explained by an excessive amount of inhibition that makes target words less accessible during lexical retrieval. In this study, we further investigate the origin of lexical retrieval deficits in bilingual patients with aphasia by focusing on the role of phonological competition. Participants and Methods. We explored the naming performance of bilinguals with fluent aphasia (n=8) and age-matched healthy controls (n=15) on a phonologically blocked cyclic naming task in both their languages.
All participants were early, highly proficient bilinguals in Catalan and Spanish, with balanced use of the two languages. During the task, participants were asked to name pictures in two conditions: a) homogeneous, where picture names shared their initial syllable, and b) heterogeneous, where picture names began with different syllables. Results. Healthy controls showed a small but consistent effect of phonological facilitation, as indicated by shorter naming latencies when pictures were presented in the homogeneous compared to the heterogeneous condition. Conversely, bilingual patients exhibited an interference effect, with longer naming latencies in the homogeneous versus the heterogeneous condition. Patients also showed phonological interference in naming accuracy, to the same degree in both languages, along with a similar distribution of error types across languages. Conclusions. Taken together, these results suggest that competitive processes of phonological encoding are language-independent in bilingual speech production. Also, the blocking effects on naming performance in patients are compatible with reduced inhibitory control over phonological competitors during lexical retrieval. E41 The Foreign Language Effect in Moral Decision Making and Social Controversies: The Role of Emotion in Chinese-English Bilinguals’ Decision Angela Tzeng1, Yi Lin Chen1; 1Chung Yuan Christian University In moral decision making, utilitarian choices refer to behavior that produces the greatest good, whereas deontological choices refer to behavior that adheres to moral rules and principles. In some studies, bilinguals have been recruited as participants to investigate whether there is any difference when moral dilemmas are presented in the native language (NL) or a foreign language (FL). The Foreign Language Effect (FLe) refers to the phenomenon that more utilitarian choices are made when moral dilemmas are presented in the FL rather than in the NL. The FLe has repeatedly been found in bilingual studies when the NL and FL are both Indo-European languages. Moreover, emotion has been proposed as one of the key factors producing the FLe, because emotion is automatically activated while using the NL; in other words, less emotional involvement in the FL helps to produce more utilitarian choices. There are two purposes for the present study. The first aim is to replicate the FLe using Chinese-English bilinguals. The second aim is to investigate the role emotion plays in the FLe using social controversies. Two experiments were conducted. Six moral dilemmas were presented in both Chinese and English in the first experiment. The participants were 118 Chinese-English bilinguals. A 2 (NL vs. FL) x 3 (dilemma type: personal, impersonal, and non-moral) design was employed. The FLe was replicated: more utilitarian judgments were made when materials were presented in English, F(1, 694) = 4.139, p < .05. In the second experiment, emotion was manipulated in an implicit priming paradigm. Both behavioral and brainwave data were collected. A 2 (NL vs. FL) x 3 (prime type: positive, negative, neutral) design was used. In each trial, a fixation point was presented, followed by the prime, and then a controversial social issue (e.g., the death penalty). Thirty-four social issues were chosen, with an average controversy rating of 6.75 on a 7-point scale. All materials were presented in both Chinese and English. Participants were 38 Chinese-English bilinguals.
They were asked to report their own attitude toward each issue as well as what they thought the public’s attitude would be. Attitudes, reaction times, and ERPs were recorded. Behavioral results showed that participants were less affected by emotion primes in English. In addition, social and personal attitudes were more consistent with each other when issues were presented in English. These findings are consistent with the FLe literature in indicating less emotional involvement in the FL. For the ERP data, the N400 and LPP were chosen as indicators of semantic and emotional processes, respectively. The results showed that N400 amplitudes were more negative-going in Chinese than in English. Furthermore, the LPP was larger for positive primes than for negative primes. Both the N400 and the LPP showed more emotional influence in the NL than in the FL. We therefore conclude that the FLe is found in Chinese-English bilinguals and that, in both moral dilemmas and social controversies, more emotional influence was found in the NL. Future studies can be conducted to investigate the mechanism of how emotion works in the FLe. E42 Phonological non-selective access in cross-script bilinguals: a masked priming event-related potentials (ERP) study I-Fan Su1, Hyun Kyung Lee2; 1The University of Hong Kong, 2The Education University of Hong Kong The issue of bilingual lexical non-selectivity (whether a bilingual’s two languages are automatically co-activated when input from only one language is present) has been contested by various bilingual word recognition theories. This study examined how bilingual lexical representations are organized, and more specifically whether sublexical and lexical phonological representations of two languages differing in orthographic script are automatically activated and integrated in one lexicon or in separate lexicons. Prime-target pairs varying factorially in phonological and semantic similarity were used to compare phonological and semantic priming effects between Korean-English bilinguals and English monolinguals in a masked priming lexical decision task. For bilinguals, behavioural mixed-effects modelling analysis indicated that, compared to monolinguals, shared phonology between the two languages showed inhibitory effects while overlap in meaning facilitated target word recognition. Phonologically similar prime-target pairs also evoked a greater positivity at the early central-right P2 component, but reduced negativity at the lexical N400, relative to phonologically dissimilar pairs in bilinguals, whilst words with semantically similar primes led to an earlier peak latency and reduced activation at the late central N400, suggesting that shared phonology and meaning between Korean and English facilitate lexical-semantic retrieval and post-lexical processing during L2 word recognition. More effortful processing for phonologically similar pairs at the P2 suggests that automatic and non-selective activation of shared L1 and L2 phonology is inhibitory at the sub-lexical level, and becomes facilitatory as competition is resolved at the lexical N400 level. In general, the findings lend support to the Bilingual Interactive Activation Model (BIA+), which argues for co-activated and shared L1 and L2 sub-lexical and lexical phonological representations.
However, to account for the phonological interference effects from orthographically distinct scripts, additional links to the language node from sub-lexical representations, and accommodation of the degree of overlap between orthography and phonology in L1 and L2, are proposed. E43 Individual differences in pronoun processing in heritage speakers of Spanish: data from ERPs and TFR Eleonora Rossi1, Beverly Cotter2; 1University of Florida, 2California State Polytechnic University The aim of the study was to investigate syntactic processing abilities in heritage speakers of Spanish. The term “heritage speaker” refers to individuals who are raised speaking the family language at home from infancy, but who have not received formal education in that language, while growing up in an environment where another language is spoken. Critically, the literature on heritage speakers’ grammatical processing abilities is still mixed, but suggests that comprehension and production abilities are markedly different from those observed in native speakers (e.g., Montrul, 2010; Polinsky, 2011). Previous behavioral studies have proposed that heritage grammatical processing might bear similarities with late second language processing. However, fewer neurophysiological studies have assessed variability in heritage language processing. METHODS: In this study, we took an individual differences approach to investigate grammatical processing in heritage speakers. We used electroencephalography (EEG) to assess the neural correlates of grammatical processing in heritage speakers of Spanish, using Spanish clitic pronouns as a testbed. Given their high grammatical complexity, clitic pronouns are among the structures processed differently by heritage speakers, especially because they mark both grammatical gender and number, which must agree with the antecedent. For example, in a sentence such as “Ana compró la manzana[fem/sing] y la[fem/sing] comió”, the clitic pronoun “la” has to agree with the antecedent “la manzana” (the apple) in both gender and number. A total of 60 heritage speakers of Spanish (age: 19-25) were tested while they processed sentences containing clitic pronouns, and completed a sentence acceptability task. We hypothesized that if heritage speakers are sensitive to clitic pronouns and to violations of grammatical gender and/or number, we should observe the emergence of a P600 when they are presented with clitic pronoun violations. Participants also completed a behavioral battery of tests consisting of a language history questionnaire, a Spanish grammar task, and a memory task to assess individual linguistic and cognitive indexes. RESULTS: When collapsing all the data together, no sensitivity to violations of clitic pronouns was observed. However, three major ERP patterns emerged. For a subset of participants, a P600 to gender and number violations was observed, suggesting more native-like sensitivity to the grammatical structure. For another subset of participants, an N400 component emerged instead, in line with what has previously been found for second language learners, suggesting that a subset of heritage speakers might process syntactic information through semantics. Time-frequency representation (TFR) analysis is currently being performed. If the variability observed in the ERP response is mirrored in the TFR analysis, we expect to observe different coupling of alpha and beta frequencies.
Overall, these data show a variable neural response during syntactic processing in heritage speakers. The data will be discussed in relation to current neurobiological models of syntactic processing, with emphasis on the role of variable syntactic input. E44 Language entropy: A novel quantification of language experience among bilinguals Jason Gullifer1,3, Shanna Kousaie1,3, Annie Gilbert1,3, Nathalie Giroud2,3, Angela Grant2,3, Kristina Coulter2,3, Denise Klein1,3, Shari Baum1,3, Natalie Phillips2,3, Debra Titone1,3; 1McGill University, 2Concordia University, 3Centre for Research on Brain, Language and Music Bilinguals vary in their language usage across communicative contexts, which holds consequences for language and executive control more broadly. Theoretical and empirical studies in the neurocognition of bilingualism capture some of this variability, but there remains a focus on static measures like age of acquisition and one-dimensional measures like current exposure to a second language. Recently, we developed a novel measure that captures additional variability in bilingual language experience: language diversity, formalized as entropy. Language entropy is computed from questionnaire data about language usage in various communicative contexts. It characterizes individuals’ language diversity on a continuum from compartmentalized (single language use; low entropy/diversity) to integrated (balanced, dual language use; high entropy/diversity). Previously, we have shown that, in the aggregate, language entropy relates to the resting-state organization of functional brain networks that underlie proactive executive control abilities (Gullifer et al., 2018). Crucially, principal component analysis indicates that language entropy in professional contexts is independent of language entropy in other communicative contexts (at least for samples of bilinguals from Montreal, Canada). These independent components both relate to self-reported L2 abilities over and above classic measures alone (Gullifer and Titone, in press). Our goal here is to replicate and extend previous findings using a new sample of bilingual participants (N = 62). Participants from Montreal who speak French and English completed an expanded language history questionnaire that probes information about language usage and exposure across 16 usage contexts. These contexts may place different constraints on language control to the extent that they emphasize comprehender-driven communication (e.g., writing for an audience, speaking in a professional context) vs. speaker-driven or internal language usage (e.g., language used for dreaming, counting). A subset of the participants also completed the AX continuous performance task (AX-CPT), which measures proactive executive control abilities. Results show that language entropy for the sample can be decomposed into three components: inner/social entropy, non-interactive entropy, and professional entropy. In turn, only the non-interactive and professional entropy components related to performance on the AX-CPT: higher component scores related to increased engagement of proactive executive control.
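The entropy measure itself is straightforward to compute from usage proportions; a minimal sketch (variable names and example proportions are illustrative):

```python
import numpy as np

def language_entropy(proportions):
    """Shannon entropy over language-usage proportions in one communicative
    context: 0 = fully compartmentalized single-language use; log2(k) =
    fully integrated, balanced use of k languages."""
    p = np.asarray(proportions, dtype=float)
    p = p[p > 0] / p.sum()           # drop unused languages, renormalize
    return float(-(p * np.log2(p)).sum())

print(language_entropy([1.0, 0.0]))  # 0.0 -> one language only
print(language_entropy([0.5, 0.5]))  # 1.0 -> balanced French-English use
```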
Thus, the findings are compatible with previous work showing that high language entropy, reflecting balanced bilingual language usage, relates to greater engagement of proactive executive control (Gullifer et al., 2018), perhaps resulting from increased competition between the two languages for bilinguals who exhibit integrated language usage (Green & Abutalebi, 2013). These results extend previous findings by showing that some types of language usage (i.e., non-interactive and professional usage) appear to drive these effects more than others. The next step is to assess how the different language entropy components relate to resting-state functional connectivity, as a subset of these participants also underwent resting-state MRI scans. E45 Arcuate fasciculus microstructure correlates with cross-linguistic measures of multilingual language experience and L1 skills Jocelyn Caballero1, Nikola Vukovic1, Olga Kepinska1, Fumiko Hoeft1,2,3; 1University of California, San Francisco, 2University of Connecticut, 3Haskins Laboratories The complexity of a child’s phonemic inventory fundamentally depends on their linguistic environment, such as the number and types of languages they are exposed to (Kuhl et al., 2003). Each of these languages is characterized by a set of discrete sounds, thus making up a unique phonemic inventory for each child. Over time, exposure to these sounds forms a crucial component of a child’s linguistic skills, such as phonological awareness, and impacts language developmental milestones. Such a potent environmental influence must be represented in the child’s brain. The arcuate fasciculus (AF), a white matter tract that has repeatedly been linked to phonological abilities, is a likely candidate. The goal of the current study was to examine the AF as it relates to a behavioral index of phonological awareness and naturalistic language input. We tested a unique sample of L1-English kindergarteners who had been exposed to multiple languages during their preschool years (N = 29, mean age = 5.75 years, SD = 0.33; between 1 and 5 languages, median = 3, SD = 0.92). We measured their L1 phonological awareness and L1 receptive vocabulary scores; the former we hypothesize to be associated with the child’s phonemic inventory, and the latter is a standard metric for language skill. Leveraging these individual differences in exposure, and using cross-linguistic data (obtained through https://phoible.org/), we calculated the richness of children’s phonemic inventories, representing the total number of distinct sound categories across all languages a given multilingual child has been exposed to (mean = 79, SD = 22.0). We hypothesized that the anatomical development of the AF would reflect the richness of children’s receptive vocabulary, phonemic inventories, and phonological awareness. We performed white matter tract segmentation using a novel convolutional neural network-based approach and probabilistic tractography based on multi-shell constrained spherical deconvolution. Mean fiber orientation distribution peaks were measured along the left and right AF, a measure which, unlike fractional anisotropy, can handle crossing fibers and thus provides a better proxy for white matter tissue integrity. We found that each of the scores representing children’s L1 skills (phonological awareness and receptive vocabulary) positively and significantly correlated with each other and with measures of left AF diffusivity (ps<0.001, FDR-corrected).
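The cross-linguistic richness index can be illustrated as the union of phoneme inventories across a child's languages; the inventories below are toy examples, not PHOIBLE data:

```python
def inventory_richness(inventories):
    """Total number of distinct sound categories across all languages a
    child has been exposed to (inventories are sets of phonemes)."""
    return len(set().union(*inventories))

# Two overlapping toy inventories -> 8 distinct phonemes in total.
print(inventory_richness([{'p', 't', 'k', 'a', 'i'}, {'p', 'b', 'a', 'o', 'u'}]))
```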
On the other hand, an index describing the richness of their phonemic inventories showed positive associations with both left and right AF (ps<0.001, FDR-corrected), while not being correlated with the L1 phonological skills or receptive vocabulary. Further analyses are ongoing to establish the association between time-weighted measures of children’s multilingual exposure and white matter structure, reflecting the strength of their English phonological representations. In sum, our results demonstrate that the linguistic indices tested here differentially predict left and right AF diffusivity measures, highlighting the importance of capturing not just L1 skills but also incorporating cross-linguistic information with regard to the multilingual environment. Kuhl et al. (2003). Foreign-language experience in infancy: Effects of short-term exposure and social interaction on phonetic learning. Proc. Natl. Acad. Sci. 100, 9096–9101. Signed Language and Gesture E46 Sensorimotor EEG activity during sign production in deaf signers and hearing non-signers Lorna Quandt1, Athena Willis1; 1Gallaudet University Background: Prior research suggests that the amount of experience an individual has with an action influences the degree to which the sensorimotor systems of their brain are involved in the subsequent perception of those actions. Less is known about how action experience and conceptual understanding impact sensorimotor involvement during imitation. We sought to explore this question by comparing a group of sign language users, who have a great deal of long-term linguistic experience with signs, to a group of non-signers, for whom producing signs is novel and lacks conceptual meaning. We pitted the following two hypotheses against each other: 1) deaf signers will show increased sensorimotor activity during sign imitation, and greater differentiation between sign types, due to greater prior experience with and conceptual understanding of the signs; versus 2) deaf signers will show less sensorimotor system activity and less differentiation of sign types in the sensorimotor system, because for those individuals sign imitation involves language systems of the brain more robustly than sensorimotor systems. Methods: We collected electroencephalograms (EEG) while fluent deaf American Sign Language (ASL) signers (N = 28) and hearing non-signers (N = 23) viewed videos of signs that had varied sensorimotor characteristics. Each of the two groups imitated videos presenting one-handed and two-handed ASL signs. Participants saw and then imitated 40 1-handed signs and 40 2-handed signs, which were matched on ASL frequency, iconicity, and flexion, and on frequency, number of phonemes, word length, and imageability for their English translations. Participants were given the task of imitating each presented sign after the demonstration was completed. Time-frequency analysis was performed on alpha (8-13 Hz) and beta (14-25 Hz) oscillations during 2000 ms of sign production. We focused our analyses on electrodes in the central region of the electrode array, reflecting activity in the underlying primary somatosensory and motor cortices. Results: Both the deaf signing and hearing non-signing groups exhibited sustained central alpha/mu desynchronization during the production of signs, and a central beta desynchronization that was of shorter duration.
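A minimal MNE-Python sketch of the kind of alpha/mu and beta time-frequency analysis described here, using synthetic epochs in place of the real recording (channel names, sampling rate, and trial counts are placeholders):

```python
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Synthetic stand-in: 20 trials x 8 central channels x 750 samples at 250 Hz.
info = mne.create_info([f'C{i}' for i in range(8)], sfreq=250.0, ch_types='eeg')
data = np.random.randn(20, 8, 750) * 1e-6
epochs = mne.EpochsArray(data, info, tmin=-0.5)  # starts 500 ms pre-onset

freqs = np.arange(8.0, 26.0)  # spans alpha/mu (8-13 Hz) and beta (14-25 Hz)
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0, return_itc=False)
power.apply_baseline(baseline=(-0.5, 0.0), mode='percent')  # desync < 0

mu_power = power.copy().crop(fmin=8, fmax=13)     # alpha/mu band
beta_power = power.copy().crop(fmin=14, fmax=25)  # beta band
```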
Both groups also showed significantly different patterns of alpha/mu and beta desynchronization while producing 1-handed ASL signs compared to 2-handed ASL signs (p < .05, FDR-corrected). The extent of alpha/mu desynchronization was significantly different between the Deaf and Hearing groups, with the Deaf group showing more sustained alpha/mu desynchronization compared to the Hearing group (p < .05, FDR-corrected), particularly ~1800 ms after the onset of sign production. Conclusion: We demonstrate that sensorimotor EEG rhythms in both Deaf and Hearing groups are sensitive to basic parameters of American Sign Language (number of hands used). The results suggest that knowledge of, and/or experience with, American Sign Language leads to different patterns of sensorimotor activity during imitation of signs. The greater alpha/mu desynchronization during sign production in the Deaf group supports the notion that fluent knowledge of sign language results in greater recruitment of sensorimotor systems during production. Language Production E48 The Neural Basis of Semantic Cognition in Mandarin Chinese: a combined fMRI and TMS study Qian Zhang1,2, Hui Wang1, Cimei Luo1, Junjun Zhang1, Zhenlan Jin1, Ling Li1; 1Key Laboratory for NeuroInformation of Ministry of Education, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 2Southwest Petroleum University, Chengdu While converging sources of evidence point to the possibility of a large-scale distributed network for semantic cognition, a consensus regarding the underlying subregions and their specific functions in this network has not been reached. In the current study, we combined functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) to investigate the neural basis of semantic cognition in Mandarin Chinese. In the fMRI experiment, strong activations were observed in the left inferior frontal gyrus (IFG) and left middle temporal gyrus (MTG) for the semantic judgment task, coupled with significant functional connectivity between these regions. Moreover, functional connectivity strength between the left IFG and left MTG correlated significantly with performance on the semantic judgment task. Subsequent TMS over the left IFG resulted in performance deficits on the semantic judgment task, in contrast to the other three sites: left MTG, right intraparietal sulcus (IPS), and a control site. We propose that the neural basis of semantic processing in Mandarin Chinese closely resembles that of alphabetic languages such as English, supporting a language-universal view of semantic cognition. E49 Left-handed musicians show a higher probability of atypical cerebral dominance for language: Overlap in the structural correlates of musicianship and lateralization Esteban Villar-Rodríguez1, Jesús Adrián-Ventura1, María Ángeles Palomar-García1, Mireia Hernández2, Gustau Olcina-Sempere1, Maria Antonia Parcet1, Cesar Avila1; 1Universitat Jaume I, 2University of Barcelona Atypical language lateralization in the right hemisphere has previously been related to a more gyrified left Heschl’s gyrus and a rightward gray matter asymmetry of the pars triangularis. In parallel, when comparing musicians to non-musicians, musicians present higher gyrification of both Heschl’s gyri and a thicker right pars triangularis.
Taking this into account, we wondered whether musicianship was associated with atypical dominance for language and with the brain correlates of atypical lateralization. METHODS: A healthy sample of 49 participants was included in this study (33 male; mean age ± SD = 20.7 ± 2.2). All participants were left-handed according to the Edinburgh Handedness Inventory. Thirty participants were musicians, with formal music school training and at least 9 years of instrumental experience; 19 participants were non-musicians, having no musical training beyond basic school education. 3D T1-weighted MPRAGE and T2*-weighted EPI sequences were acquired on a 3T Philips Achieva scanner. Two groups were formed according to fMRI assessment of language lateralization via laterality index calculation (Brodmann areas 9, 44, 45, and 46) in SPM12: left-lateralized (n=36, 12 musicians) and right-lateralized (n=12, 11 musicians). The differential incidence of atypical language lateralization between musicians and non-musicians was tested using Fisher’s exact test. Surface-based morphometry preprocessing and region of interest (ROI) analyses were carried out using the CAT12 toolbox for SPM12, following the default procedure described in the CAT12 manual. We used the HCP-MMP1 surface atlas to define a right pars triangularis ROI (right IFSa) and 3 left Heschl’s gyrus ROIs (left A1, LBelt, and MBelt). Finally, we tested via one-tailed two-sample t-tests whether right-lateralized participants presented greater cortical thickness in the right pars triangularis ROI and a greater gyrification index in any of the left Heschl’s gyrus ROIs. Age and musicianship (musician/non-musician) were included as covariates of no interest. All surface tests were corrected for multiple comparisons using an FDR threshold of p < 0.05. RESULTS: Atypical patterns (right and bilateral) of language processing were significantly more prevalent (p = 0.008) in musicians (40%) than in non-musicians (5.3%). When compared to the left-lateralized group, the right-lateralized group presented a thicker cortex in the right IFSa area (p = 0.021) and a higher gyrification index in the LBelt area (p < 0.017). CONCLUSIONS: In consonance with our hypotheses, musicianship is related to a higher incidence of atypical lateralization of language, and the brain correlates found in right-lateralized participants notably overlap with correlates described in the brains of musicians. In fact, recent findings have pinpointed a positive correlation between cortical thickness and pitch discrimination proficiency in the same portion of the right pars triangularis tested in our study, which further supports our view. Therefore, we propose that musicianship and atypical language lateralization are linked in the brain of left-handers, and we suggest the differential development of the right auditory-motor network as a potential structural basis for this relation. E50 The role of inhibition in inflectional encoding: Producing the past tense João Ferreira1, Ardi Roelofs1, Vitória Piai1,2; 1Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Donders Centre for Cognition, 2Radboudumc, Department of Medical Psychology According to a prominent account of inflectional encoding (Pinker, 1999; Pinker & Ullman, 2002), regular forms are encoded by a rule-governed combination of stems and affixes, whereas irregular forms are retrieved from memory while inhibiting rule application.
E50 The role of inhibition in inflectional encoding: Producing the past tense
João Ferreira1, Ardi Roelofs1, Vitória Piai1,2; 1Radboud University, Donders Centre for Cognition, 2Radboudumc, Donders Institute for Brain, Cognition, and Behaviour, Department of Medical Psychology

According to a prominent account of inflectional encoding (Pinker, 1999; Pinker & Ullman, 2002), regular forms are encoded by a rule-governed combination of stems and affixes, whereas irregular forms are retrieved from memory while rule application is inhibited. Previous research has shown that when speakers switch between tasks, languages, or phrase types, an asymmetrical switch cost is obtained, which has been attributed to overcoming previous inhibition of the predominant response. If generating an irregular form involves inhibition of rule application, then switching from an irregular to a regular form should require overcoming previous inhibition and delay responding. Using this rationale, in a first experiment participants alternated between producing regular and irregular past tenses in Dutch. The infinitive form of a verb was presented on a screen and participants were instructed to produce the first-person singular past-tense form of the verb as quickly and accurately as possible. A microphone recorded the onset of their response, and an experimenter outside the booth registered manually whether each response was correct. The order of trials was pseudorandomized, with two regular verbs followed by two irregular verbs, and so forth. In a second experiment, to verify that our stimuli were strong enough to yield an asymmetrical switch cost, participants alternated between inflecting and reading aloud. The procedure was similar to the first experiment, with the addition of a colored frame around the word cuing participants to the task to perform. As in the first experiment, the task changed on every second trial. Regulars and irregulars were presented in small "miniblocks" of 24 trials of the same regularity type; hence, switch costs in the second experiment concern task and not verb regularity. In the first experiment we found no difference in RT between producing regulars and irregulars, and also no switch costs: production of a verb of one regularity was not affected by the regularity of the previous trial. In the second experiment participants were slower on inflect than on read trials. Crucially, for the read task participants were slower on task-switch than on task-repeat trials, with no difference between regulars and irregulars. Thus, an asymmetrical switch cost was obtained for switching between tasks, but not for switching between irregulars and regulars. Our findings challenge the assumption that inhibition is the mechanism by which rule application is blocked when producing the past tense of irregular verbs. Importantly, we examined inhibition by looking at the effect of overcoming previous inhibition. That is, if producing an irregular on a trial involves inhibition of rule application, then switching from an irregular to a regular on the next trial would require overcoming the inhibition of the rule on the previous trial, which should delay responding. It might still be the case that inhibition is involved in irregular production but is not strong enough to affect the next trial. In an ongoing neuroimaging study we investigate the hypothesis of rule inhibition more directly by measuring the hemodynamic reflection of inhibition during the encoding of irregulars itself.
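The switch-cost logic in E50 reduces to a two-factor contrast over trial-level RTs (current regularity x switch/repeat). A minimal sketch with hypothetical column names and values (not the authors' code):

```python
# Asymmetrical switch cost: mean RT by current trial type x switch/repeat.
# Hypothetical trial-level data; 'regularity' is 'reg' or 'irreg'.
import pandas as pd

trials = pd.DataFrame({
    "rt": [612, 655, 701, 640, 689, 720, 598, 677],
    "regularity": ["reg", "reg", "irreg", "irreg", "reg", "irreg", "reg", "irreg"],
})
trials["switch"] = trials["regularity"].ne(trials["regularity"].shift())
trials = trials.iloc[1:]  # first trial has no previous-trial regularity

cell_means = trials.groupby(["regularity", "switch"])["rt"].mean()
switch_cost = cell_means.xs(True, level="switch") - cell_means.xs(False, level="switch")
print(switch_cost)  # an asymmetry = a larger cost for one regularity than the other
```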
E51 Anterior temporal lobe regions critical for picture naming: Voxel-based lesion-symptom mapping in patients undergoing left temporal lobe resections
Sara B. Pillay1, Jia-Qing Tong1, William L. Gross1, Wade M. Mueller1, Manoj Raghavan1, Sara J. Swanson1, Lisa L. Conant1, Linda Allen1, Christopher T. Anderson1, Chad Carlson1, Leonardo Fernandino1, Colin J. Humphries1, Lisa Schwartz1, Robyn M. Busch2, Mark Lowe2, John T. Langfitt3, Madalina Trivarus3, Daniel L. Drane4, David W. Loring4, Monica Jacobs5, Victoria Morgan5, Jerzy P. Szaflarski6, Leonardo Bonilha7, Jeffery R. Binder1; 1Medical College of Wisconsin, 2Cleveland Clinic, 3University of Rochester, 4Emory University, 5Vanderbilt University, 6University of Alabama at Birmingham, 7Medical University of South Carolina; for the FMRI in Anterior Temporal Epilepsy Surgery (FATES) study

Both picture naming (PN) and auditory description naming (ADN) are used during pre-surgical cortical stimulation mapping to identify regions critical for naming. Lateral anterior temporal lobe (ATL) stimulation (<4 cm from the temporal pole) typically impairs ADN more often than PN, yet paradoxically ATL resection mainly impairs PN. We sought to clarify the ATL regions critical for PN using voxel-based lesion-symptom mapping (VLSM) in a series of left temporal lobe epilepsy patients. Major advantages of this approach over lesion correlation in stroke patients are that pre-lesion performance level and language dominance are known, and that surgical lesions in this region more often include ventral temporal areas that may be especially critical for PN. Participants were 32 patients with left language dominance, confirmed by either fMRI or Wada testing, who underwent partial left temporal lobe resection for drug-resistant temporal lobe epilepsy. They completed pre-operative and 6-month postoperative neuropsychological testing. Surgical lesions were mapped manually using high-resolution postoperative MRI, then mapped to a common template using nonlinear morphing of nonlesioned structures. Lesions varied widely in location and extent, including standard ATL resections with variable caudal extension and STG sparing, focal lateral or ventral resections, selective temporal pole removals, and selective hippocampal ablations. VLSM analyses identified the lesion correlates of pre- to post-surgery change scores on PN (Boston Naming Test) and on an ADN task, first with no covariates, and then adding the change scores on the other task (ADN or PN, respectively) as a covariate to highlight differences in the critical areas associated with each task. The resulting maps were thresholded at voxel-wise p < .005 and cluster-corrected at FWE p < .05, as determined by randomization testing. Post-operative percent changes on the PN and ADN measures were significantly correlated with one another, p < .001. Patients experienced greater declines on PN (mean percent change = 17.4%) than on ADN (mean percent change = 7.9%), p < .001. Post-operative declines on PN were associated with resections in a focal region centered on the left anterior fusiform gyrus, including the adjacent collateral sulcus, perirhinal cortex, and inferior temporal gyrus, but completely sparing the hippocampus and temporal pole. Including ADN as a covariate restricted the region associated with PN decline to a smaller area situated primarily in the anterior fusiform gyrus. Regions associated with ADN decline partly overlapped those associated with PN decline but were less extensive. No regions were correlated with ADN decline independent of PN decline. ATL regions critically necessary for picture naming are thus located in the anterior-ventral part of the ATL, centered in the anterior fusiform gyrus. PN decline was not associated with resection of the hippocampus or the temporal pole. Damage in the anterior fusiform gyrus is more strongly correlated with PN decline than with ADN decline. We hypothesize that this region is critical for mapping between visual perceptual and abstract conceptual representations. Resection of this "basal temporal language area" at the anterior end of the ventral visual object recognition pathway is likely the main cause of post-operative naming decline in patients undergoing left temporal lobe surgery.
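At its core, the VLSM used in E51 is a mass-univariate test relating lesion status at each voxel to a behavioral change score. A minimal sketch of the voxel-wise step with hypothetical arrays (the randomization-based cluster correction the authors used is noted in a comment but not implemented):

```python
# Voxel-wise VLSM: at each voxel, compare naming change scores between
# patients whose resection includes the voxel and those whose does not.
import numpy as np
from scipy import stats

n_patients, n_voxels = 32, 10000          # hypothetical dimensions
lesion = np.random.default_rng(1).random((n_patients, n_voxels)) > 0.8  # binary lesion maps
pn_change = np.random.default_rng(2).normal(15, 10, n_patients)         # % decline on PN

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned, spared = pn_change[lesion[:, v]], pn_change[~lesion[:, v]]
    if lesioned.size >= 5 and spared.size >= 5:   # minimum-lesion-count criterion
        t_map[v] = stats.ttest_ind(lesioned, spared).statistic
# In practice, threshold at voxel-wise p < .005 and correct clusters at
# FWE p < .05 via permutation of the behavioral scores.
```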
E52 Part-Whole but not Whole-Part Relations Elicit Interference Effects in Object Naming
Julien Dirani1, Liina Pylkkänen1,2; 1New York University Abu Dhabi, 2New York University

INTRODUCTION. Categorically related primes slow down naming times for target images. This interference effect has been used extensively to inform models of lexical access during word production; however, its origin is not well understood. For instance, manipulating the type of semantic relation sometimes reverses the interference effect into facilitation, but it is not clear whether this facilitation stems from the same neural locus as the categorical interference. In this magnetoencephalography study we manipulated the type of semantic relation to assess the spatiotemporal localization of the semantic interference and facilitation effects. We compared semantic priming effects across part-whole relations (car-tire), whole-part relations (tire-car) and categorical relations (truck-car). We hypothesized that categorical relations would induce a late, post-lexical semantic interference effect, in line with previous MEG findings (Dirani & Pylkkänen, 2018). The same was hypothesized for whole-part relations, possibly because a larger representation (i.e., the whole) has to be inhibited in order to name the subset of that representation (i.e., the part). Part-whole relations were hypothesized to induce facilitation, possibly because of spreading activation from the subset of the representation to the larger representation. Evidence for these hypotheses would suggest a model of conceptual representations in which "parts" are represented as nested within "whole" representations. METHOD. Thirteen native English speakers named target pictures that were separated into two sets: a Whole set (e.g., car) and a Part set (e.g., tire). All targets were presented multiple times, each time preceded by repetition primes, unrelated primes, or categorically related primes. In addition, targets in the Part set were preceded by Whole primes, and targets in the Whole set were preceded by Part primes. The order of targets was randomized across sets (i.e., no blocking was used). Voice onset time was measured, as well as brain activity using MEG. RESULTS. Our behavioral results replicated the previously identified semantic interference effect using categorical primes with Whole targets. MEG localization showed a categorical interference pattern for the Part set only, at ~300-400 ms in the superior temporal gyrus. We found that part-whole relations elicited a behavioral semantic interference effect; however, whole-part relations did not induce any priming effects relative to the unrelated primes. Surprisingly, repetition priming was also absent in the Part set, suggesting that participants might have been using strategies to name the Part objects. CONCLUSION. Our results replicated the previous MEG spatiotemporal localization of categorical interference, with a slightly earlier locus at 300-400 ms in the STG. Contrary to what was hypothesized, we found that part-whole, but not whole-part, relations elicited an interference effect. With the current sample of subjects, we find no support for a model of conceptual representations in which parts are represented as nested within whole representations.
E53 The role of ventral fiber pathways in healthy and disordered language production
Margot Mangnus1, Ardi Roelofs1, Joanna Sierpowska1, Roy P.C. Kessels1,2, Vitória Piai1,2, Nikki Janssen1,2; 1Donders Institute for Brain, Cognition and Behavior, 2Department of Medical Psychology, Radboud University Medical Center

While neuroimaging research on language production has traditionally focused primarily on grey matter, several recent studies highlight the involvement of ventral and dorsal white matter pathways. A debated issue concerns the exact functional role of these pathways. The ventral pathway has been suggested to underlie top-down control in language production, but the functional roles of the specific white matter tracts within this pathway, such as the inferior fronto-occipital fasciculus and the uncinate fasciculus, have not yet been elucidated. To investigate the involvement of the inferior fronto-occipital and uncinate fasciculus in top-down control, 15 patients with primary progressive aphasia (PPA), an acquired language deficit due to neurodegenerative disease, and 22 age-matched healthy controls performed a picture-word interference (PWI) task including an incongruent (semantically related word, e.g., the word "cow" on the picture of a horse) and a neutral (e.g., "XXX" on the picture of a horse) condition. The stimuli consisted of 12 high-frequency words from two semantic categories (animals and fruits). The difference in reaction time and accuracy between semantically related and neutral picture-word pairs served as a behavioral measure of the participants' top-down interference control. Furthermore, the microstructural integrity of the inferior fronto-occipital and uncinate fasciculus was calculated as a neuroanatomical measure using diffusion tensor MRI and tractography, expressed in Fractional Anisotropy (FA) and Mean Diffusivity (MD) values. A linear mixed-effects model revealed that patients were more susceptible to the PWI effect in reaction time than controls. Moreover, the FA of the inferior fronto-occipital and uncinate fasciculus and the MD of the inferior fronto-occipital fasciculus were altered in patients compared to controls. Importantly, the FA and MD of the inferior fronto-occipital fasciculus were associated with the PWI effect in reaction times, whereas the FA of the uncinate fasciculus and the MD of both fasciculi were associated with the PWI effect in accuracy. These results indicate that PPA patients manifest impaired top-down control processes in language production and that these processes are mediated by the microstructural properties of the inferior fronto-occipital and uncinate fasciculus in both PPA and healthy individuals.
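The group-by-condition analysis in E53 is a linear mixed-effects model over trial-level RTs. A minimal sketch with statsmodels; the data file and column names are hypothetical, and this is not the authors' model specification:

```python
# Mixed-effects model of the picture-word interference (PWI) effect:
# RT ~ condition * group with random intercepts per participant.
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold one row per trial with columns:
# rt (ms), condition ('related'/'neutral'), group ('PPA'/'control'), subject
df = pd.read_csv("pwi_trials.csv")  # hypothetical file

model = smf.mixedlm("rt ~ condition * group", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # the condition:group term indexes the group difference in PWI
```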
E54 Microstate ERP analyses for detecting the articulation onset in speech production
Anne-Lise Jouen1, Monica Lancheros1, Marina Laganaro1; 1Faculty of Psychology and Educational Science, University of Geneva

Despite unresolved issues of brain-signal contamination by articulation-induced artifacts, the use of electroencephalography (EEG) to study overt speech production has increased substantially in the past 15 years (Ganushchak et al., 2011). The majority of EEG studies have used stimulus-aligned analyses to avoid possible artifacts during motor preparation/execution, hence targeting only the early encoding processes. As it is well known that production latencies can vary considerably from one participant/trial to another, alignment on the vocal response onset has recently become a useful tool for targeting later stages of word production (Riès et al., 2013; Laganaro, 2014). Yet response-aligned evoked potentials (ERPs) raise another methodological issue: where to place the point of alignment of the ERPs (Fargier et al., 2018)? Indeed, the point of alignment (ideally the onset of the articulatory movement) is generally measured by voice onset, but articulation may start up to several hundred milliseconds before voice onset, depending on the properties of the phonemes (Rastle et al., 2005). The purpose of the present study was to determine whether the articulatory onset can be detected from the EEG signal itself, and thus to identify neurally the gap between the vocal and articulatory onsets, in particular when their precise onset is unclear, as is the case when speech stimuli start with voiceless stops (/p, t, k/; Ouyang et al., 2016). To this end, we recorded high-density EEG in 25 healthy participants during a delayed production task of 224 monosyllabic pseudowords, consisting of complex consonant clusters (CCV/CCCV) beginning either with voiceless stop consonants (/p/, /t/, /k/) or with the voiceless fricative /s/, for which the vocal and articulatory onsets are closer. ERPs were aligned to the vocal onset, detected manually on the acoustic signal, and microstate analysis (spatiotemporal segmentation; Michel & Koenig, 2018) was performed on the 300 ms preceding the vocal onset. Behavioral analysis of production latencies, identified by the vocal onset, revealed that the /s/-pseudowords were initiated approximately 100 ms faster than the /p, t, k/-pseudowords. However, the microstate ERP analyses invalidated this interpretation, showing that the behavioral observations were due to the fact that articulatory and vocal onsets were not aligned similarly. Indeed, the spatio-temporal segmentation revealed a global topographic ERP (map) pattern similar in all onset conditions, with the specific microstate likely associated with articulation (corresponding to the global distribution of scalp topography) shifted by about 100 ms for the /s/-pseudowords relative to the /p/, /t/, /k/-pseudowords. The fitting in the single epochs also revealed earlier onsets of the same microstate for labial and alveolar stops relative to velar /k/. These results show that the articulatory onset had already begun about 100 ms before the vocal onset for the voiceless stops and suggest that ERP microstates can be used to target articulation onset, which may represent a complementary marker to electromyography (EMG), in particular for velar stops, which are difficult to detect with facial EMG.

E55 The cognitive cost of lexical alignment
Cristina Baus1, Alice Foucart1, Albert Costa1; 1Center for Brain and Cognition, University Pompeu Fabra, Barcelona

Verbal communication requires the tight coordination of two or more interlocutors working together to reach mutual understanding. Despite the frequency and simplicity with which we engage in dialogues, little is known about the neural underpinnings of language processing in interaction (Pickering & Garrod, 2004).
One important aspect of verbal communication is lexical alignment, the tendency of interlocutors to converge on the same lexical choices to refer to objects, favoring successful communication (e.g., Brennan & Clark, 1996). In an EEG experiment, we explored neural signatures of lexical alignment to determine its cognitive consequences for speakers' lexical choices. Name agreement was taken as an index of lexical alignment. From MultiPic (Duñabeitia et al., 2017), we selected pictures (e.g., COLOGNE) for which two names were used, one more frequently than the other (60-70% of participants, preferred name: "cologne"; 30-40% of participants, dispreferred name: "perfume"). Participants were asked to take turns with a "confederate" in a joint picture naming task. The confederate was introduced to the participant at the beginning of the experiment and both were instructed together. The confederate was instructed to ask some clarification questions, allowing participants to become familiarized with her voice. This was done to create an ecologically interactive setting while remaining experimentally controlled: the participant in fact performed the task alone, and the words the participant heard during the experiment had been pre-recorded by the confederate. Emulating turn-taking in dialogue, during the experiment participants had trials in which they had to speak and trials in which they heard the confederate speak. The confederate named the pictures first, using either preferred or dispreferred names (counterbalanced across participants). The same pictures were presented a second time and participants were asked to name them. Behavioral and EEG responses to the words corresponding to preferred and dispreferred picture names were obtained. Replicating previous studies, a lexical alignment effect was observed: participants aligned to their partner by using dispreferred names that were very rarely used otherwise. At the EEG level, ERPs locked to the onset of picture presentation showed that pictures previously named with a dispreferred name elicited a larger negativity (N2) than pictures named with a preferred name, an effect especially prominent at frontal-central electrodes. These results reveal that lexical alignment is an important feature of language in interaction. Interlocutors automatically align to the lexical choices of their interlocutors even when mutual understanding is not required. Importantly, our results reveal for the first time that lexical alignment entails a cost for interlocutors, who override their lexical preferences in favor of communicative success. References: Brennan, S. E., & Clark, H. H. (1996). Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 1482-1493. Duñabeitia, J. A., Crepaldi, D., Meyer, A. S., New, B., Pliatsikas, C., Smolka, E., & Brysbaert, M. (2017). The Quarterly Journal of Experimental Psychology, 1-24. Garrod, S., & Pickering, M. J. (2004). Trends in Cognitive Sciences, 8(1), 8-11.

Speech Motor Control

E56 Disruption of Speech Adaptation with Repetitive Transcranial Magnetic Stimulation of the Articulatory Representation in Primary Motor Cortex
Ding-Lan Tang1, Alexander McDaniel1, Kate E. Watkins1; 1University of Oxford

Sensorimotor learning has gained growing interest in the domain of speech motor control due to its importance in the development and maintenance of fluent speech production.
The primary motor cortex (M1), traditionally regarded as an executor of motor commands, merely responsible for producing or controlling movement, has recently been shown to be involved in motor learning. However, whether M1 causally contributes to speech motor learning remains unknown. Here, we aimed to determine whether temporary disruption of the articulatory representation in left M1 by repetitive transcranial magnetic stimulation (rTMS) impairs speech motor learning. Forty right-handed native English speakers between the ages of 18 and 36 years read words containing the /ɛ/ vowel (as in "head", "bed" and "dead"). To induce sensorimotor learning, the first formant (F1) of these productions was shifted up and played back to participants without a noticeable delay using Audapter, a MATLAB MEX-based program. Typically, participants compensate for the increase in F1 by altering their speech production to reduce F1 and increase the frequency of the second formant (F2), which results in productions closer to the vowel /i/ (as in "hid", "bid" and "did"). The changes to F1 and F2 provide a measure of sensorimotor learning. Two groups of 20 participants (10 male, 10 female) received low-frequency rTMS (0.6 Hz, subthreshold, 12 min) over either the hand or the tongue representation of primary motor cortex between the baseline phase with normal feedback and the learning phase when feedback was shifted. rTMS successfully inhibited motor cortex, as demonstrated by a significant reduction in the amplitude of motor-evoked potentials (MEPs) elicited by single pulses of TMS over the representation of the target muscle. Participants who received rTMS over the hand representation showed the expected compensatory response to the upward shift in F1, significantly reducing F1 and increasing F2 frequencies in their productions. In contrast, these compensatory changes in both F1 and F2 were abolished by rTMS applied over the tongue representation. This was confirmed by statistical tests indicating a significant difference between the hand and tongue groups during the learning phase for the changes in F1 (t(38) = 2.65, p = .012) as well as for the changes in the opposite direction for F2 (t(38) = -3.35, p = .002). Critically, subthreshold rTMS over the tongue representation did not affect vowel production itself, which was unchanged from baseline. As predicted, inhibitory rTMS over the tongue representation, but not the hand representation, significantly impaired compensatory changes in both the shifted formant (F1) and the unaltered formant (F2). These results provide direct evidence that the articulatory representation in primary motor cortex causally contributes to sensorimotor learning in speech. Furthermore, they suggest that M1 is critical to the network supporting a more global adaptation.
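The group contrast in E56 is an independent-samples t-test on per-participant adaptation scores (change in F1 during the learning phase). A minimal sketch with hypothetical values (not the authors' analysis; the means and spreads below are invented):

```python
# Compare F1 compensation (learning-phase change in Hz, negative = compensation)
# between the hand-rTMS and tongue-rTMS groups; df = 20 + 20 - 2 = 38.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
f1_change_hand = rng.normal(-40, 25, 20)    # expected compensation preserved
f1_change_tongue = rng.normal(0, 25, 20)    # compensation abolished

t, p = stats.ttest_ind(f1_change_hand, f1_change_tongue)
print(f"t(38) = {t:.2f}, p = {p:.3f}")
```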
E57 Lateralized control of spectral and temporal speech features in speech production
Mareike Flögel1, Susanne Fuchs2, Christian Kell1; 1Goethe University, 2Leibniz Center for General Linguistics (ZAS)

Despite the traditional view that speech processing is performed especially by the left cerebral hemisphere, the analysis of acoustic speech feedback to adapt speech production in the presence of disturbances seems to rely on the right hemisphere [Tourville, J. A., & Guenther, F. H. (2011). Language and Cognitive Processes, 26(7), 952-981]. While feedback control circuits for spectral speech features (e.g., pitch, formant structure) have been well characterized by altering speakers' acoustic feedback in near real time [Tourville, J. A., Reilly, K. J., & Guenther, F. H. (2008). NeuroImage, 39(3), 1429-1443; Toyomura et al. (2007). Neuroscience, 146(2), 499-503], comparable investigations of the control of temporal speech features (e.g., the length of phonemes and their transitions) are lacking. The current study investigated whether the proposed right-lateralization of feedback control reflects a right-hemisphere preference for sensory processing during speech production or a right-hemisphere preference for spectral processing [Zatorre, R. J., & Belin, P. (2001). Cerebral Cortex, 11(10), 946-953]. Healthy native German speakers' auditory speech feedback was manipulated spectrally or temporally [Tourville, J. A., Cai, S., & Guenther, F. H. (2013). Proceedings of Meetings on Acoustics, 9:060180] during the production of CVC monosyllabic pseudowords. Spectral manipulations increased either the first formant of vowels or the spectral center of gravity of fricatives by 20% relative to production. Time warping prolonged the vowel or fricative by 20% and thus altered phoneme timing. A dichotic speech feedback alteration experiment (n = 40) and a binaural altered-auditory-feedback functional magnetic resonance imaging experiment (n = 44) revealed a left ear/right hemisphere advantage for speaking with spectrally altered speech feedback compared to normal speaking, and a right ear/left hemisphere advantage for speaking with temporally altered auditory feedback compared to normal speaking. Furthermore, a resting-state functional connectivity analysis was performed to identify the consequences of adaptive learning. Learning to adapt to temporal feedback manipulations was associated with increased frontotemporal coupling in the left hemisphere, while learning to adapt to spectral feedback manipulations was associated with increased frontotemporal coupling in the right hemisphere. Our results suggest that the right hemisphere preferentially processes auditory speech feedback information to control spectral speech features, while the left hemisphere preferentially analyses the acoustic speech signal to control temporal speech features during speech production. Our results extend current models of speech motor control by showing that feedback control relies on contributions of both cerebral hemispheres, which can be differentiated along the spectro-temporal axis.

E58 Speech Network in Bulbar ALS: structural neuroimaging and post-mortem neuropathological examination
Yana Yunusova1,2,3, Sanjana Shellikeri1,2, Sandra E. Black2,4,5,6, Lorne Zinman2,4,5, Julia Keith4,7; 1Department of Speech-Language Pathology & Rehabilitation Sciences Institute, University of Toronto, 2Hurvitz Brain Sciences Program, Sunnybrook Research Institute, 3University Health Network: Toronto Rehabilitation Institute, 4Department of Medicine, Division of Neurology, Sunnybrook Health Sciences Centre, 5L.C. Campbell Cognitive Neurology Research Unit, Sunnybrook Research Institute, University of Toronto, 6Rotman Research Institute, Baycrest, 7Department of Laboratory Medicine and Pathobiology, University of Toronto

Amyotrophic Lateral Sclerosis (ALS) is a multisystem disorder with motor and extramotor changes, characterized by progressive motor degeneration and behavioural and cognitive-linguistic deficits.
Bulbar ALS is a subtype that affects speech production and may be associated with a greater burden of cognitive-linguistic deficits. The speech network (SpN) is a set of motor and extramotor structures involved in the production and processing of speech. The SpN encompasses the ventral portion of the primary motor cortex (oral PMC), the ventral premotor cortex, the posterior superior temporal gyrus (pSTG), the inferior frontal gyrus (IFG; both pars triangularis (ParsT) and pars opercularis (ParsO)), the primary auditory cortex (i.e., Heschl's gyrus of the transverse temporal cortex (TT)), the parietal-temporal junction, the insula, and the cingulate cortex, as well as subcortical structures. This project investigated neuroanatomical changes in regions of the SpN in bulbar ALS using in vivo structural neuroimaging and post-mortem neuropathology. It was hypothesized that the degree of bulbar motor impairment would be associated with changes in SpN regions beyond the PMC (Study 1), and that these differences would also be observed in the neuropathological presentation of the disease (Study 2). For Study 1, T1 and DTI images were obtained from 19 ALS participants and 13 controls. Surface-based, volumetric, and DTI metrics were obtained for 6 regions of the SpN, including the oral PMC, ParsT, ParsO, pSTG, and TT. Structural changes in ALS were observed in the grey matter (GM) of the right oral and limb PMC and left ParsT, as well as in the white matter (WM) underlying left TT and pSTG. Bulbar motor dysfunction was associated with WM abnormalities in the right oral PMC and left pSTG, and with GM changes in bilateral TT. In contrast, symptom progression rate predicted GM and WM changes in bilateral ParsO. Grip strength and disease duration models were nonsignificant. For Study 2, regions of the SpN within the left brain and brainstem were histologically assessed in 3 cases with bulbar-onset ALS (bALS), 4 cases with spinal-onset ALS and antemortem bulbar dysfunction (sALSwB), and 3 cases with spinal-onset ALS without bulbar dysfunction (sALSnoB). Histological examination revealed degenerative changes in the frontal and temporal regions of the SpN exclusively in the bulbar variants (bALS and sALSwB). TDP-43 staging revealed differences in the anatomic distribution and severity of pathology between subtypes: bALS presented with the most severe and widespread SpN changes, followed by sALSwB. SpN regions were spared in sALSnoB cases. These findings suggest that regions of the left-dominant SpN are affected in ALS and that degeneration of these areas is related to bulbar disease severity. Regions that overlap across multiple connectomes, such as the IFG, may degenerate in relation to the rate of disease progression. This work contributes to our understanding of the bulbar ALS subtype, which is crucial for predicting disease progression, delivering targeted clinical care, and appropriate recruitment into clinical trials.
E59 Motor-induced suppression of N1 and P2 is modulated by phonotactic probability and syllable stress
Alexandra Emmendorfer1,2,3, Milene Bonte1,2, Bernadette Jansma1,2, Sonja Kotz3; 1Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 2Maastricht Brain Imaging Center, Faculty of Psychology and Neuroscience, Maastricht University, 3Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University

When processing the environment, the brain uses prior knowledge to formulate predictions about the quality ('what') and timing ('when') of upcoming events, maximizing the efficiency of sensory processing and facilitating perception in noisy conditions. In language, such predictions may be made at the level of phonotactic probability and syllable stress. Predictions may also be employed to monitor our motor output, as is the case in speech production. Here, an internal copy of the motor command is sent to the sensory cortices to anticipate the consequences of an action, leading to a suppressed sensory response to self-generated sensations, known as motor-induced suppression (MIS), which can be observed in the N1 and P2 components [1]. Previous M/EEG studies on speech production have demonstrated that MIS is sensitive to variations in stimulus properties, such as the prototypicality of an utterance: more prototypical (more predictable) utterances show greater MIS than less prototypical utterances of the same vowel [2]. The current EEG study examines whether MIS is sensitive to statistical regularities of language, including phonotactic probability and syllable stress. We employ a motor-to-auditory paradigm comparing externally generated and self-generated stimuli. Previous studies employing this paradigm used stimuli ranging from tones [1] to single syllables [3], making this the first study to use more complex bisyllabic utterances. The paradigm consists of three conditions: motor-auditory (MA), where the auditory stimulus presentation is triggered by a button press; auditory only (AO), where the auditory stimulus is presented without a button press; and motor only (MO), where the button press does not trigger stimulus presentation, allowing the comparison of externally generated (AO) vs. self-generated (MA - MO) stimuli. Stimuli consist of Dutch pseudowords varying in phonotactic probability and syllable stress. In Dutch, the sound combinations '-ts-' and '-tf-' are considered to have high (HPP) and low (LPP) phonotactic probability, respectively, while first-syllable stress (SylS1) is the more probable stress pattern compared to second-syllable stress (SylS2). We predicted that both phonotactic probability and syllable stress modulate MIS in the N1 component. Furthermore, we examined whether P2 suppression is similarly modulated by stimulus properties. Preliminary results in 9 participants suggest a main effect of phonotactic probability on N1 suppression, with self-generated LPP stimuli eliciting greater N1 suppression than HPP stimuli. Furthermore, the data suggest an interaction between phonotactic probability and syllable stress in P2, with greater suppression for self-generated HPP stimuli with SylS1 compared to SylS2, and a reversed effect for LPP stimuli. The modulation of N1 suppression, with greater suppression for low-probability items, does not match previous findings of greater suppression for more predictable items [2]. The interaction between phonotactic probability and syllable stress in P2 suppression highlights that this component is sensitive to variations in stimulus predictability but reflects a different process than N1 suppression. Further analyses are warranted to disentangle these differential effects on MIS. [1] Knolle, F. et al., Cortex, 49(9), 2449-2461 (2013). [2] Niziolek, C. A. et al., J Neurosci, 33(41), 16110-16116 (2013). [3] Ott, C. G. M. et al., Front Hum Neurosci, 7, 41 (2013).
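The MA - MO correction and the suppression contrast in E59 can be expressed in a few lines. A minimal sketch over per-condition ERP arrays with hypothetical shapes and windows (not the authors' pipeline):

```python
# Motor-induced suppression: correct the motor-auditory ERP for motor
# activity (MA - MO), then compare against the externally generated ERP (AO).
import numpy as np

# Hypothetical averaged ERPs: (n_channels, n_times), e.g., 64 x 500
rng = np.random.default_rng(4)
erp_ma, erp_mo, erp_ao = (rng.normal(size=(64, 500)) for _ in range(3))

erp_self = erp_ma - erp_mo           # self-generated, motor activity removed
mis = erp_ao - erp_self              # positive N1-window values = suppression

n1_window = slice(90, 130)           # samples covering ~90-130 ms, assuming 1 kHz
n1_suppression = mis[:, n1_window].mean()
print(f"mean N1-window suppression: {n1_suppression:.3f} (a.u.)")
```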
Methods

E60 Imaging Language Functions Using Functional Near-Infrared Spectroscopy: A Combined Correction Method to Improve Signal Quality
Gongting Wang1, Miaomiao Zhu1, Xiaoqian Zhou1, Qing Cai1, Lily Tao1; 1East China Normal University

Functional near-infrared spectroscopy (fNIRS) is a noninvasive optical neuroimaging method that is increasingly popular for studying human brain functions. Similar to functional magnetic resonance imaging (fMRI), the method involves calculating task-related neural activations based on hemodynamic changes. Compared to fMRI, fNIRS is cheaper, quieter, more portable, has higher temporal resolution, and is less prone to artifacts arising from head motion. There are also disadvantages, however, particularly lower spatial resolution and a lower signal-to-noise ratio than fMRI, resulting in variable signal quality. In order for future studies to best utilize the advantages of fNIRS (e.g., portability, greater tolerance of head motion), it is important to find effective signal correction methods that improve signal quality to a level comparable to more established imaging techniques such as fMRI. The present study therefore quantitatively compared fNIRS and fMRI for studying neural language functions, employing different fNIRS signal correction methods. Participants performed a word generation task, first in an fMRI scanner and then in an fNIRS device one day later. The task involved covertly (i.e., without overt articulation) producing as many words as possible beginning with a displayed letter within a fixed time frame. For baseline, participants saw a non-meaningful symbol ("^") and covertly repeated a non-meaningful sound ("bou"). Each trial comprised two events: first a fixation cross ("+") for 15 s, during which participants were instructed to rest, followed by a cue stimulus for 15 s, during which participants performed the experimental or baseline task. There were 20 trials in total (10 experimental, 10 baseline), and the whole task lasted 10 min. Blocks alternated between experimental and baseline conditions. Within the experimental condition, letters were presented in a random order for each participant. SPM8 was used to analyze the imaging data. For improving fNIRS signal quality, the "correction based on signal improvement" (CBSI) and "temporal derivative distribution repair" (TDDR) methods were used. Correlation coefficients were then calculated between fMRI and corrected fNIRS results from the same participants. Lateralization indices (LIs) were also calculated, separately for the fMRI and fNIRS data. In the fMRI task, significant activation was seen in the left inferior frontal gyrus (BA 44, 45) and left superior temporal gyrus (BA 22, 42). Compared to the uncorrected fNIRS signal, applying signal correction resulted in stronger activation in the left inferior frontal gyrus (BA 44, 45). There was a significant correlation between the fMRI and fNIRS signals (p < .05), while the LIs obtained from the two methods did not differ significantly (p > .05).
In conclusion, by using combined signal correction methods, the fNIRS signal can be made more reliable and produces results comparable to those of fMRI when investigating the neural basis of language functions. Further work with different types of language tasks and different populations is needed to confirm the present findings. If shown to be consistently reliable and comparable to fMRI, researchers can then capitalize on the advantages of fNIRS to address research questions that may not be easily investigated using fMRI, such as language development in infants and children, interactive communication, and more naturalistic language processing settings.
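Of the two corrections named in E60, CBSI is simple enough to sketch directly: it assumes task-related HbO and HbR changes are negatively correlated and removes the shared noise component (following Cui et al., 2010, NeuroImage). The arrays below are hypothetical, not the authors' data or code:

```python
# Correction based on signal improvement (CBSI): combine oxy- (HbO) and
# deoxyhemoglobin (HbR) time series, assuming true signals satisfy HbO = -a*HbR.
import numpy as np

def cbsi(hbo: np.ndarray, hbr: np.ndarray) -> np.ndarray:
    """Return noise-corrected HbO for one channel (1-D time series)."""
    alpha = hbo.std() / hbr.std()        # scaling between the two chromophores
    return 0.5 * (hbo - alpha * hbr)     # corrected HbO; corrected HbR = -corrected/alpha

rng = np.random.default_rng(5)
hbo = rng.normal(size=2000)              # hypothetical 2000-sample channel
hbr = -0.5 * hbo + rng.normal(scale=0.3, size=2000)
hbo_corrected = cbsi(hbo, hbr)
```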
Perception: Auditory

E61 Is neural entrainment a basic mechanism for structure building?
Markus Ostarek1, Phillip Alday1, Olivia Gawel1, Johannes Wolfgruber2, Birgit Knudsen1, Francesco Mantegna1,3, Falk Huettig1,4; 1Max Planck Institute for Psycholinguistics, 2Graz University of Technology, 3University of Trento, 4Radboud University

Neural entrainment has been proposed as a mechanism for structure building in language and music (Ding et al., 2015; Nozaradan, 2014). In music, this idea is particularly appealing because of the intuitive mapping between perceptual and neural rhythms. The strongest evidence has come from studies in which participants listened to isochronous sequences of identical tones presented at a rate of 2.4 Hz and were asked to imagine hearing the sequence in binary (march) or ternary (waltz) meter (Fujioka et al., 2015; Nozaradan et al., 2011). The critical finding was that, in addition to increased signal at the frequency corresponding to the tone rate (2.4 Hz), there was increased signal at the imagined meter frequencies (1.2 Hz for binary and 0.8 Hz for ternary meter). While it is striking that meter tracking was observed without any acoustic cues in the input (and thus had to be endogenous), rhythm perception was confounded with rhythm imagery involving active attention to rhythmic structure. This opens up the possibility that meter-related entrainment reflects conscious attention to rhythmic structure rather than rhythm perception per se. To put this possibility to the test, we conducted two electroencephalography experiments with 19 musicians and 19 non-musicians, focusing on the frequency domain (frequency tagging). In Experiment 1, participants were asked to spot occasional irregularities in otherwise perfectly isochronous sequences (where one of the tones was played 30 ms early). Crucially, participants were cued with count-in beats that induced the perception of metric structure (as indicated by ratings collected after the experiment). Thus, participants perceived metric structure without consciously focusing their attention on it. In Experiment 2, participants listened to the same tone sequences but were now asked to actively imagine hearing them in a particular metric structure (block 1), or to tap their finger on the imagined downbeat (block 2). We observed evidence for meter-related neural entrainment only in situations where conscious attention was allocated to rhythmic structure, either implicitly (imagery) or in the form of overt behavior (tapping). Thus, our data suggest that mere rhythm perception is not sufficient for neural entrainment to the meter and that entrainment is instead driven by conscious attention to rhythmic structure. Regarding recent evidence linking neural entrainment to hierarchical structure building in language (Ding et al., 2015), it will be critical to determine whether entrainment has a more fundamental role in language, or whether conscious attention to rhythmically repeating structure can also account for previous results in the language domain.
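The frequency-tagging analysis in E61 amounts to reading out spectral amplitude at the stimulation and meter frequencies. A minimal single-channel sketch with hypothetical parameters (sampling rate, trial length, and signal strengths are assumptions):

```python
# Frequency tagging: estimate EEG amplitude at the tone rate (2.4 Hz) and the
# imagined-meter frequencies (1.2 Hz binary, 0.8 Hz ternary).
import numpy as np

fs = 512                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(6)
t = np.arange(60 * fs) / fs                # one hypothetical 60-s trial
eeg = np.sin(2 * np.pi * 2.4 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t) \
      + rng.normal(scale=1.0, size=t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
for f in (2.4, 1.2, 0.8):
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{f:.1f} Hz amplitude: {amp:.3f}")
```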
E62 Tracking the building blocks of domain-general processes supporting vowel perception
Ellie Abrams1,2, Laura Gwilliams1,2, Alec Marantz1,2; 1New York University, 2NYUAD Institute

INTRODUCTION: Correctly identifying meaningful categories from auditory input involves processing the relevant components of the acoustic signal. For vowel perception, at least two features drive the eventual percept: i) the relevant frequency components in the signal (e.g., F1, F2, F3) and ii) the ratios between those components. In order to fully orthogonalise these two features, and therefore independently track their contribution to neural responses and ultimately to the perceptual output, we used sinusoidal tones as a test case. METHOD: Two types of tones were created, pure and complex, with a perceived pitch ranging from 220-624 Hz (mapping onto musical notes within three diatonic scales). The acoustic signal either consisted of a sinusoid at a single frequency (F0), mapping directly onto the perceived pitch (pure tones), or of five harmonic frequencies: integer multiples of, but not including, F0 (complex tones). Critically, the ratio between the harmonic sinusoids generates the same pitch percept as the pure tones, even though the fundamental frequency is absent. Fourteen participants listened to sequences of these tones while magnetoencephalography (MEG) was recorded. RESULTS: Multivariate analyses were used to decode pitch and tone type from activity across MEG sensors. The goal was to assess when each feature becomes present in the neural signal. At 100 ms, we could decode pitch from both the pure F0 tones and the complex tones; responses to the two tone types were indistinguishable. This suggests that the ratio between the frequencies of the complex tones has been utilised at this latency to map onto a shared pitch category. At 200-300 ms, we could again decode the pitch of the tone, but the spatial pattern differed depending on tone type. This later response likely reflects processing of the spectral content that further characterises the acoustic percept. DISCUSSION: In line with previous work on the temporal unfolding of vowel categorisation, our results confirm that an early neural response is involved in encoding the ratio between frequency components. Spectral information, namely the precise collection of frequencies present in the signal, may later inform more complex categorisation, such as source identification. Given the importance of formant ratios for accurate vowel perception, and the modulation of F0 for speaker identification, we can relate this domain-general process to the processing of vowels, which may unfold along a similar time course.

E63 Can brain potentials reflect L2 learning potential?
Lisette Jager1,2, Jurriaan Witteman1,2, James McQueen3,4, Niels Schiller1,2; 1Leiden University Centre for Linguistics, 2Leiden Institute for Brain and Cognition, 3Max Planck Institute for Psycholinguistics, 4Donders Institute for Brain, Cognition and Behaviour

As part of a larger project, this study examines whether individual variation in ERP components during L2 perception predicts subsequent L2 pronunciation proficiency. We investigate this issue by measuring ERPs during L2 perception, once at the beginning of the first year of Dutch university students' classes in English, and once at the end of the first year, when progress in L2 acquisition is to be expected. Models of L2 speech acquisition [1, 2] propose that L1 can interfere with acquisition of L2 at two levels: the phonological and the phonetic level. Incompatibility of the abstract phonological systems of the two languages may hamper L2 speech perception, while differences in surface characteristics of parallel L1 and L2 phonemes may also interfere with accurate L2 perception. Dutch /ɛ/ has a slightly different realization than English /ɛ/ [3]. Thus, efficient L2 production by Dutch learners of English may depend on their ability to distinguish sub-phonemic differences in the realization of these parallel vowels. In the current study we used a passive oddball paradigm with simultaneous EEG recordings, presenting /ɛ/ as the standard. As deviants, we presented Dutch /ɪ/ (control contrast), English /æ/ (a contrast reflecting phonological interference between L1 and L2) and English /ɛ/ (a contrast reflecting sub-phonemic differences in surface realization between the languages). A significant mean MMN was found for both the control and the phonological contrast at both T1 (mean MMN -0.466 μV, p = 0.0097, and -0.257 μV, p = 0.00037, respectively) and T3 (mean MMN -0.012 μV, p = 0.0357, and -0.169 μV, p = 0.0024, respectively), whereas for the phonetic contrast it was found only at T3 (mean MMN -0.460 μV, p = 0.0050). From these preliminary analyses we can draw two conclusions. First, these relatively proficient L2 speakers were already able to perceive the L2 phonological contrast between English /æ/ and /ɛ/ at the start of their first semester, as indicated by the ERP analysis. Second, whereas there was no apparent neural discrimination of the phonetic contrast at T1, there was at T3. This indicates that L2 speech perception becomes more finely attuned over the course of the first academic year, but further statistical analyses are required to substantiate this claim. In particular, it will be necessary to compare the present data with those of a control group of participants who are not studying English at university, in order to establish whether the improvements observed here are a consequence of the course of study in English. Collection of these control data is ongoing. [1] Best, C. T. (1995). A direct realist view of cross-language speech perception. In W. Strange (Ed.), Speech Perception and Linguistic Experience: Theoretical and Methodological Issues (171-204). Baltimore: York Press. [2] Flege, J. E. (1995). Second language speech learning: theory, findings, and problems. In W. Strange (Ed.), Speech Perception and Linguistic Experience: Theoretical and Methodological Issues (233-277). Baltimore: York Press. [3] Williams, D., & Escudero, P. (2014). Distributional learning has immediate and long-lasting effects. Cognition, 133(2), 408-413.
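The MMN quantification in E63 is a deviant-minus-standard difference wave averaged over a latency window. A minimal sketch; the arrays, window, and electrode choice are hypothetical, not the authors' pipeline:

```python
# Mismatch negativity: deviant-minus-standard difference wave at one
# fronto-central electrode, mean amplitude in an assumed 100-250 ms window.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_times = 30, 600                 # epochs from -100 to 500 ms at 1 kHz
erp_standard = rng.normal(size=(n_subjects, n_times))
erp_deviant = erp_standard - 0.4 * np.exp(-((np.arange(n_times) - 280) / 50) ** 2)

mmn_window = slice(200, 350)                  # samples covering 100-250 ms post-onset
mmn = (erp_deviant - erp_standard)[:, mmn_window].mean(axis=1)
t, p = stats.ttest_1samp(mmn, 0.0)            # is the mean MMN reliably negative?
print(f"mean MMN = {mmn.mean():.3f} uV, t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```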
Perception: Speech Perception and Audiovisual Integration

E65 Evaluation of an auditory and visual fMRI language paradigm with reliability measures and dynamic causal modelling
Karsten Specht1,2,3, Kathrine Midgaard1, Erik Rødland1; 1Department of Biological and Medical Psychology, University of Bergen, 2Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, 3Department of Education, UiT/The Arctic University of Norway, Tromsø

The present study aimed to develop and evaluate a language paradigm for both speech perception and production that can be administered either visually or acoustically, so that patients with limited sight or hearing can both be examined. A secondary aim was to examine whether the underlying network is independent of the sensory modality of the initial stimuli. The original paradigm was used by Berntsen et al. (2006) and rests on a concept taken from the TV show Jeopardy. The stimuli are simple sentences, presented either visually or aurally, that require the subject not only to understand the sentence but also to formulate a question semantically related to the content of the presented sentence. This paradigm therefore tests not only perception of the stimuli but also sentence and semantic processing and covert speech production. Based on current network models, it was hypothesized that, apart from differences in primary sensory processing, both the activation of, and the effective connectivity within, the dorsal and ventral streams of the speech and language network (Hickok & Poeppel, 2007; Specht, 2013, 2014) would be identical and independent of the sensory modality. It was further expected that this paradigm would show high cross-modal reliability. Method: Twenty-one healthy, right-handed participants (10 men / 11 women) were recruited for this fMRI study. Participants were aged 21 to 50 years, with a mean age of 25 years. The study was conducted on a 3T GE MR scanner. Visual and auditory versions of the paradigm were developed, with separate fMRI runs for each. The structure of the paradigm was identical for both sensory modalities and consisted of eight active blocks, each containing six trials, and eight blocks with a sensory control condition (e.g., #### # ## or reversed speech, respectively). Irrespective of the sensory modality, subjects had to formulate an appropriate response to every trial covertly. Data were analysed with a general linear model and dynamic causal modelling (DCM) (Friston, Harrison, & Penny, 2003). The reliability of brain activations and network connections was explored with intraclass correlation coefficients (ICC) (Specht, Willmes, Shah, & Jäncke, 2003). Results & Discussion: The results demonstrate that, independent of the sensory modality, this paradigm reliably activated the same brain networks, namely the dorsal and ventral streams for speech processing (Hickok & Poeppel, 2007). Further, the ICC analysis revealed high reliability of brain activation across sensory modalities. This was supported by the DCM analysis, which showed that the underlying network structure and connectivity were the same across sensory modalities, although the strength of the effective connectivity appeared to vary with the sensory modality. In conclusion, the explored paradigm reliably activated the most central parts of the speech and language network, independently of whether the stimuli were administered acoustically or visually, and is therefore suitable as a clinical paradigm, since patients with either visual or auditory disabilities can be examined.
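The cross-modal reliability measure in E65, an intraclass correlation coefficient, is computed from a subjects-by-conditions matrix. The abstract does not state which ICC variant was used, so the consistency-type ICC(3,1) below is an assumption, as are the data:

```python
# ICC(3,1): consistency of activation estimates across k conditions
# (e.g., auditory vs. visual runs) for n subjects.
import numpy as np

def icc_3_1(data: np.ndarray) -> float:
    """data: (n_subjects, k_conditions) matrix of, e.g., ROI betas."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(8)
subject_effect = rng.normal(size=(21, 1))                     # stable subject differences
betas = subject_effect + rng.normal(scale=0.3, size=(21, 2))  # auditory, visual
print(f"ICC(3,1) = {icc_3_1(betas):.2f}")
```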
E66 A Lexicon in the Anterior Auditory Ventral Stream: Preliminary evidence from an fMRI-RA Study
Srikanth Damera1, James Mattei1, Laurie Glezer2, Patrick Cox3, Xiong Jiang1, Josef Rauschecker1, Maximilian Riesenhuber1; 1Georgetown University Medical Center, 2San Diego State University, 3George Washington University

The auditory system, like the visual system, is thought to be organized following a dual-stream architecture. Under this framework, the auditory ventral stream, known as the "what" pathway, is specialized for recognizing auditory "objects", including spoken words. Analogous work in the visual system has shown that visual word recognition proceeds along a simple-to-complex hierarchy in which words are first represented via simple visual features in early visual cortex, and then by increasingly complex features along the ventral stream. This process culminates in a visual lexicon thought to be located in the posterior fusiform cortex (Glezer et al., 2009, 2015, 2016). Despite growing evidence that the auditory system is similarly organized along a simple-to-complex hierarchy, it is still unknown whether, and if so where, a putative auditory lexicon might exist. We tested the hypothesis of an auditory lexicon in the anterior superior temporal gyrus (aSTG) using an fMRI rapid adaptation (fMRI-RA) experiment inspired by our aforementioned visual work. In fMRI-RA, two stimuli are presented in quick succession on each trial, and the BOLD-contrast response to the pair is taken to reflect the similarity of the neuronal activation patterns corresponding to the two individual stimuli, with the lowest response for two stimuli activating identical neuronal populations, and the maximum signal if the two stimuli activate disjoint groups of neurons. In the present study, subjects performed two such fMRI-RA experiments. In Experiment 1, subjects (N=8 so far) heard pairs of real English spoken words on every trial while performing a phoneme oddball detection task. The words in a pair were either identical (SAME), differed by a single phoneme (1PH), or shared no phonemes at all (DIFF). We investigated the adaptation effect in subject-specific left anterior and middle superior temporal gyrus ROIs (a/mSTG). These ROIs were identified in an independent localizer scan and chosen to be as close as possible to the putative aSTG and mSTG word- and phoneme-selective foci, respectively, reported in a recent meta-analysis (DeWitt and Rauschecker, 2012). We found that the left aSTG (but not the mSTG) ROI showed a significant adaptation effect when comparing SAME and 1PH (p=.0167) as well as SAME and DIFF (p=.0108), but not when comparing DIFF and 1PH (p > 0.05), compatible with an auditory lexicon in which neurons are tightly tuned to individual real words. In Experiment 2, subjects (N=7 so far) heard pairs of spoken pseudowords on every trial while performing a phoneme oddball detection task. Pseudowords in a pair were either identical, differed by only a single phoneme, or shared no phonemes at all. As these pseudowords were unfamiliar to the subjects, we hypothesized that aSTG regions corresponding to an auditory lexicon would not show a selective representation for these novel words, but would instead (analogous to our visual work) show a graded release from adaptation corresponding to the amount of phonemic overlap between the words. The results so far show a trend in this direction. In sum, the current study provides preliminary evidence for an auditory lexicon in the aSTG. FUNDING SOURCES: NSF grant (BCS-1756313)
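The fMRI-RA logic in E66 reduces to pairwise contrasts of per-condition ROI responses. A minimal sketch with hypothetical per-subject betas (not the authors' analysis):

```python
# fMRI rapid adaptation: compare aSTG ROI responses across pair conditions.
# Tight word tuning predicts SAME < 1PH, with 1PH comparable to DIFF (full
# release from adaptation whenever the two words differ at all).
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
same = rng.normal(0.4, 0.15, 8)     # per-subject ROI betas, hypothetical
one_ph = rng.normal(0.7, 0.15, 8)
diff = rng.normal(0.72, 0.15, 8)

for label, a, b in [("SAME vs 1PH", same, one_ph),
                    ("SAME vs DIFF", same, diff),
                    ("1PH vs DIFF", one_ph, diff)]:
    t, p = stats.ttest_rel(a, b)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```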
Phonology and Phonological Working Memory

E67 Relationships between phonological working memory and language processing in adults with dyslexia
Terri Scott1, Yaminah Carter1, Ja Young Choi2, Tyler Perrachione1; 1Boston University, 2Harvard University

Phonological working memory (PWM) is the ability to encode and maintain representations of speech sounds in short-term memory. PWM is believed to play an important role during language and reading development (Dufva et al. 2001), and deficits in PWM are found in a wide variety of developmental language disorders (DLD), including dyslexia (Larrivee et al. 1999; Peter et al. 2011). In order to better understand the relationship between PWM and language in DLD, this work explores the brain basis of these abilities in typically developing adults and adults with persistent PWM deficits associated with dyslexia. The most widely used theoretical framework for conceptualizing PWM is Baddeley's model of working memory (Baddeley & Hitch 1974; Baddeley 1986; 2003). This model operationalizes PWM as an ability separate from language processing; however, more recent studies have challenged this view by showing that established PWM assessments, such as nonword repetition, recruit brain areas associated with speech and language (Strand et al. 2008; McGettigan et al. 2011; Perrachione et al. 2017; Scott et al. 2018). Using functional magnetic resonance imaging (fMRI), we investigated the extent to which language and PWM overlap in the brains of typically developing adults and adults with dyslexia. Twenty typically developing adults (12 female; age 19-32, M=24.1 years) and twenty-two adults with dyslexia (18 female; age 19-29, M=23.5 years) underwent fMRI while completing a PWM task (nonword and real-word repetition) and a passive-listening language localizer (Scott et al. 2017). Nonword/real-word repetition activation was measured during a sparse-sampling block-design fMRI scan (TR=2.25s, TA=0.75s, 3mm isotropic, 45 slices, 5 simultaneous slices). Language localizer activation was measured during continuous-sampling fMRI (same acquisition parameters with no silent delay; TR=0.75s). Separately from the scanning session, participants completed a battery of cognitive tests including the TOWRE (Torgesen et al. 1999) and WRMT (Woodcock 1998), which were used to confirm reading difficulties in dyslexia. Group-constrained subject-specific analysis (GCSS; Fedorenko et al. 2010; Julian et al. 2012) was used to account for individual variability in the local organization of functional neuroanatomy among members of both the control and dyslexia groups. Surprisingly, we found no differences in the mean level of activation in core PWM areas (bilateral STG, LPT, LPreCG, and RCereb; identified in Scott et al., 2018) once we accounted for individual variation in functional neuroanatomy; however, we uncovered subtle differences in the degree to which language areas were recruited during PWM in adults with dyslexia. Most notably, left IFG was the only region to show a significant group-by-condition interaction during nonword repetition, and this overlapped with significant group differences during real-word repetition and language processing. These results suggest that the persistent PWM deficits in dyslexia may be related to the extent to which language regions are recruited to support PWM, rather than to disruption of domain-general working memory systems.
Most notably, left IFG was the only region to show a significant group-by-condition interaction during nonword repetition, and this overlapped with significant group differences during real-word repetition and language processing. These results suggest that the persistent PWM deficits in dyslexia may be related to the extent to which language regions are recruited to support PWM, rather than to disruption of domain-general working memory systems. Computational Approaches E68 Language ERPs reflect learning through prediction error propagation Hartmut Fitz1,2, Franklin Chang3,4; 1Donders Centre for Cognitive Neuroimaging, Radboud University, 2Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, 3Kobe City University of Foreign Studies, 4ESRC International Centre for Language and Communicative Development Event-related potentials (ERPs) in language processing have been linked, variously, to word retrieval or integration (N400) and syntactic repair or unification (P600), but it has been difficult to reach consensus on their functional interpretation. Here, we propose a novel theory arguing that ERPs in comprehension arise as side effects of an error-based learning mechanism which is driven by prediction in the production system. On this view, ERPs reflect learning signals that play an important role in language acquisition and adult linguistic adaptation. We instantiated this theory in a recurrent neural network model with interacting processing pathways for syntax and semantics. The model learns language in production by mapping meaning representations onto appropriate sentences that convey the intended message. In comprehension, the model covertly pre-activates the most likely continuation at each sentence position while incrementally processing an overheard utterance. Words that are unexpected within a given context trigger a prediction error, which generates a learning signal that is propagated through the network. Thus, the model continuously adapts its representations in response to linguistic experience in order to make future predictions more accurate. Since these representations are distributed across the network, distinct areas will be differentially sensitive to semantic and syntactic expectation mismatch. When these learning signals are interpreted as ERPs, the model is able to simulate data from three studies on the N400 in which amplitude is modulated by expectancy (Kutas & Hillyard, 1984) and sentence position (Van Petten & Kutas, 1991), but is insensitive to the strength of contextual constraints (Federmeier, Wlotko, De Ochoa-Dewald & Kutas, 2007). We also simulated effects from five studies on the P600 in response to violations of number agreement (Hagoort, Brown & Groothusen, 1993), tense inflection (Allen, Badecker & Osterhout, 2003), word category (Hagoort, Wassenaar & Brown, 2003), and verb subcategorization (Osterhout & Holcomb, 1992), as well as temporary ambiguity in garden-path sentences (Osterhout, Holcomb & Swinney, 1994). Furthermore, the model displays the semantic P600 when processing role-reversal anomalies (Kim & Osterhout, 2005). A unique prediction of this approach is that error-based learning will lead to adaptation of ERPs.
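The core mechanism proposed in E68, that the same prediction-error signal serves both as a simulated ERP and as the driver of learning, can be sketched in a few lines. The toy network below is a minimal stand-in, not the authors' dual-pathway model: only the output weights learn, the grammar is hypothetical, and the surprisal at each word plays the role of the ERP amplitude.

```python
import numpy as np

rng = np.random.default_rng(2)
vocab = ["the", "dog", "cat", "chased", "slept", "."]
V, H = len(vocab), 16
idx = {w: i for i, w in enumerate(vocab)}

# Hypothetical toy grammar, standing in for the model's training language.
sentences = [["the", "dog", "chased", "the", "cat", "."],
             ["the", "cat", "slept", "."]] * 200

Wxh = rng.normal(0, 0.1, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (recurrence)
Why = rng.normal(0, 0.1, (V, H))   # hidden -> output (next-word prediction)
lr = 0.05

def step(h, w):
    h = np.tanh(Wxh[:, idx[w]] + Whh @ h)      # update context
    p = np.exp(Why @ h); p /= p.sum()          # predicted next-word distribution
    return h, p

for sent in sentences:
    h = np.zeros(H)
    for w, nxt in zip(sent, sent[1:]):
        h, p = step(h, w)
        erp = -np.log(p[idx[nxt]])             # surprisal = simulated ERP amplitude
        # Error-based learning: the same prediction-error signal that indexes
        # the "ERP" drives the weight update (output layer only, for brevity).
        Why += lr * np.outer(np.eye(V)[idx[nxt]] - p, h)

# After training, an expected continuation should carry a small simulated
# ERP and an unexpected one a large one:
h = np.zeros(H)
for w in ["the", "dog"]:
    h, p = step(h, w)
print("P(chased|the dog) =", round(p[idx["chased"]], 2),
      "surprisal(slept) =", round(-np.log(p[idx["slept"]]), 2))
```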
Support for this account comes from experimentally observed developmental changes in ERPs (Clahsen, Lück, & Hahne, 2007), the adaptation of ERP amplitude to frequency manipulations within a block of trials (Coulson, King & Kutas, 1998), and the sensitivity of ERPs to word predictability in previous prime sentences (Rommers & Federmeier, 2018). The model can account for these findings because ERPs, as error propagation signals, are not merely indices of comprehension processing but have a functional role in learning and adaptation. This computational approach provides a unified account of the sensitivity of ERPs to expectation mismatch, the relative timing of the N400 and P600, the semantic nature of the N400, the syntactic nature of the P600, and learning-based changes in ERP components and amplitude. It demonstrates that prediction error propagation is not only a useful way to explain how humans learn and adapt their language representations, but also sheds light on the neural signatures measured in EEG during language processing. E69 Prediction of continuous speech from intracranial cortical recordings Gaël Le Godais1,2,4,5, Philémon Roussel1,2, Florent Bocquelet1,2, Marc Aubert1,2, Thomas Hueber4,5, Philippe Kahane3, Stéphan Chabardès3, Blaise Yvert1,2; 1Inserm, Braintech Lab, 2University Grenoble Alpes, Braintech Lab, 3CHU Grenoble Alpes, 4CNRS, Gipsa Lab, 5University Grenoble Alpes, Gipsa Lab Intracranial Brain-Computer Interfaces (BCIs) could allow the restoration of continuous speech in paralyzed patients who have lost the ability to speak. A straightforward design of a speech BCI is to implement a regression model that maps brain activity recordings to a mathematical representation of acoustic speech. An indirect design is to map brain activity to a model of the speech articulators and then to use an articulatory-to-speech synthesizer to infer the corresponding sound. Here, we compare both methods in an offline setting. We used electrocorticographic recordings in two French-speaking patients: one patient undergoing brain surgery (acute recordings) and one epileptic patient (subchronic recordings). In both cases, brain activity and speech were recorded while the participant read or repeated short sentences. These recordings were aligned using dynamic time warping to a previous acoustic and electromagnetic articulography corpus of a French speaker that includes the same sentences. We used linear models and neural networks to 1) directly predict MEL generalized coefficients from spectrogram features of brain activity; and 2) predict articulatory trajectories from these same features, which were then converted to MEL coefficients with a deep neural network trained on the acoustic-articulatory corpus. Correlations between 0.2 and 0.4 could be obtained between predicted and ground-truth MEL coefficients, which were compared between direct and indirect predictions (a minimal sketch of the direct mapping follows below). E70 A shallow neural network to predict naming recovery in chronic stroke Barbara Khalibinzwa Marebwa1, Julius Fridriksson2, Chris Rorden3, Leonardo Bonilha1; 1Department of Neurology, Medical University of South Carolina, 2Department of Communication Sciences and Disorders, University of South Carolina, 3Department of Psychology, University of South Carolina Language impairment after stroke persists in up to 40% of chronic stroke survivors.
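As flagged at the end of E69, here is a minimal sketch of the direct decoding design on synthetic data. All dimensions and the linear generative model are placeholders; the real pipeline aligns recordings by dynamic time warping and also evaluates the indirect, articulatory route.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# Hypothetical stand-ins: time-aligned ECoG spectrogram features and
# MEL coefficients of the simultaneously recorded speech.
T, n_feat, n_mel = 2000, 64, 25
X = rng.normal(size=(T, n_feat))                  # brain features per frame
W_true = rng.normal(size=(n_feat, n_mel)) * 0.3
Y = X @ W_true + rng.normal(size=(T, n_mel))      # "speech" with noise

# Direct design: one linear map from brain activity to MEL coefficients.
model = Ridge(alpha=10.0)
model.fit(X[:1500], Y[:1500])
pred = model.predict(X[1500:])

# Evaluate as in the abstract: correlation between predicted and
# ground-truth trajectories, per MEL coefficient.
r = [np.corrcoef(pred[:, k], Y[1500:, k])[0, 1] for k in range(n_mel)]
print("mean r =", round(float(np.mean(r)), 2))
```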
Several studies have demonstrated that anodal tDCS coupled with speech therapy can lead to significant language improvement, improving the quality of life for stroke survivors. While the mechanism of action explaining this observation is yet to be established, we aimed to test whether tDCS treatment and a combination of neuroanatomical and individual factors before treatment could predict the maintenance of language gains 6 months after treatment. Seventy-one chronic stroke individuals with aphasia (40 women, mean age 59 ± 10.5 years) were randomized into two groups and underwent 15 picture-word matching language therapy sessions over a period of 3 weeks. One group (36 participants) received 1 mA anodal tDCS while the other (35 participants) received sham tDCS to the intact left temporoparietal region for the first 20 min of each session. The primary outcome was object naming improvement. Using post-processing methods of diffusion tensor imaging optimized for lesioned brains, we reconstructed individual structural whole-brain connectomes and quantified the topological organization of each network using Newman's modularity algorithm. We then trained a shallow pattern-recognition neural net to dichotomize participants into two groups (those who performed better at 6 months compared to baseline vs. those who performed worse), and a fitting neural network to determine a non-linear combination of behavioral and neuroanatomical factors that best predicted naming scores 6 months after therapy. In accordance with previous literature, we included the organization of the domain-general and language-specific left and right hemisphere networks, lesion load, and the number of identified short-, mid- and long-range fiber connections as neuroanatomical factors, and age, time since stroke, and education as individual factors. We employed a 70:15:15 cross-validation scheme, meaning the training set contained 70% of the participants, the validation set 15%, and testing was done on an independent sample (the remaining 15%). A grid search determined that 2 neurons and an 8-point threshold (6 months minus baseline) produced the highest classification accuracy (93% higher than a random model). We were able to classify the subjects into two groups, those who maintained naming performance (at least an 8-point improvement from baseline) vs. those who performed worse 6 months after treatment, with up to 70% accuracy. For participants who performed better than the threshold (>8 points), right hemisphere modularity (93%), left hemisphere language network modularity (83%) and education (74%) were significant predictors associated with classification accuracy. For participants who performed worse than the threshold (<8 points), age (94%) and tDCS (81%) were significant predictors associated with classification accuracy. We were further able to predict PNT scores 6 months after treatment with 92% accuracy. We present simple neural networks that can not only predict participant language recovery up to 6 months after treatment, but also offer useful insight into disentangling the factors necessary for language recovery, such as the topological integrity of residual white matter networks, and the factors leading to deteriorating language abilities, such as age and the absence of an intervention (e.g., tDCS).
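A shallow classifier with the topology reported in E70 (a single hidden layer of 2 units, 70:15:15 split) can be set up as follows. The features and labels below are random placeholders, so the printed accuracies will hover around chance rather than reproduce the reported 70%; only the layer size and split follow the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Hypothetical features per participant: network modularity measures,
# lesion load, fiber counts, age, time post-stroke, education, tDCS arm.
X = rng.normal(size=(71, 9))
y = (rng.random(71) > 0.5).astype(int)   # improved >= 8 points vs. not

# 70:15:15 split: train / validation / held-out test.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

# A "shallow" net: one hidden layer with 2 units, as in the abstract's
# grid-search result (all other settings here are illustrative defaults).
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(2,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("validation acc:", clf.score(X_val, y_val),
      "test acc:", clf.score(X_te, y_te))
```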
Speech Perception E71 Individual differences in bottom-up and top-down processing in speech perception as reflected by beta and gamma oscillations Jinghua Ou1, Sam-Po Law2; 1University of Chicago, 2University of Hong Kong Introduction Most, if not all, cognitive processes involve interactions between bottom-up and top-down processing. Neurophysiologically, ascending and descending information is proposed to be conveyed via distinct frequency bands (Arnal & Giraud, 2012; Fontolan, Morillon, Liegeois-Chauvel, & Giraud, 2014; Wang, 2010). Specifically, intracortical recordings from human auditory cortex have directly linked beta (𝛽) to information transfer in the top-down direction, and gamma (𝛾) to transfer in the bottom-up direction. The present study seeks to further elucidate the functional specificity of these neural oscillations in the domain of speech perception. To this end, beta and gamma activities were examined among individuals exhibiting different patterns of speech perception, and further correlated with behavioural measures reflecting top-down and bottom-up mechanisms in speech processing. Methods Data were based on two groups of native Cantonese speakers participating in a passive oddball paradigm using EEG (Ou & Law, 2017). They differed in discrimination sensitivity to two rising tones (d': t(36) = 15.2, p < .0001), but both had non-distinctive production ([–Pro+Per] and [–Pro–Per]). Event-related spectral perturbation (ERSP) difference spectrograms (deviant minus standard) in the beta (12-30 Hz) and gamma (30-50 Hz) bands were subjected to nonparametric bootstrap resampling to detect significant group differences. Two behavioural measures of speech perception, discrimination sensitivity d' (taken to reflect the quality of internal representations, i.e. top-down knowledge) and RT (taken to reflect both bottom-up and top-down processes), were then correlated with the neural oscillatory activities in order to reveal the functional relevance of the different frequency bands. Given that beta is suggested to subserve top-down processes and gamma bottom-up processes, we expected to observe significant group differences in beta, which might further be related to differences in d'. Additionally, beta and gamma may contribute differentially to individual differences in RT between the two groups. Results and Discussion Significant group differences were observed in beta from 170 to 280 ms (p < .05, N = 1000 permutations), with reduced beta power observed in [–Pro–Per]. No group differences in the gamma waveform reached significance (all p > .05). We found a significant correlation between beta and d' (r = .705, p = .001) for [–Pro–Per], but not for [–Pro+Per] (r < -.069, p > .05). RTs in the two groups showed significant correlations with different frequency bands, with a significant RT-beta correlation observed in [–Pro–Per] (r = -.651, p = .003) but an RT-gamma correlation in [–Pro+Per] (r = .580, p = .009). Based on the current findings, we propose that beta, implicated in top-down processing, reflects individual differences in the quality of phonological representations among the [–Pro–Per] participants, further leading to variations in d' and RT; on the other hand, gamma, involved in bottom-up processing, reflects individual differences in acoustic encoding among the [–Pro+Per] participants, resulting in differences in RT.
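To make the E71 analysis concrete, the following sketch computes a deviant-minus-standard band-power difference and bootstraps a confidence interval for it. It is schematic only: the abstract's ERSP difference spectrograms are full time-frequency decompositions and its resampling tests a between-group difference, whereas this toy single-channel version uses band-pass filtering plus the Hilbert envelope on random data.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(5)
fs = 500  # Hz, hypothetical sampling rate

def band_power(epochs, lo, hi):
    """Per-epoch power envelope in a frequency band (trials x samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, epochs, axis=-1), axis=-1)) ** 2

# Hypothetical single-channel epochs: trials x samples, per condition.
std, dev = rng.normal(size=(200, fs)), rng.normal(size=(200, fs))

# Deviant-minus-standard difference in the beta band (12-30 Hz),
# averaged over trials, as a stand-in for the ERSP difference.
diff = band_power(dev, 12, 30).mean(0) - band_power(std, 12, 30).mean(0)

# Nonparametric bootstrap: resample trials with replacement to get a
# distribution of the difference at each time point.
boot = np.empty((1000, fs))
for i in range(1000):
    d = dev[rng.integers(0, 200, 200)]
    s = std[rng.integers(0, 200, 200)]
    boot[i] = band_power(d, 12, 30).mean(0) - band_power(s, 12, 30).mean(0)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("time points with CI excluding 0:", int(((ci_lo > 0) | (ci_hi < 0)).sum()))
```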
E73 Tracking the implicit phonological learning of speech using EEG Manli Zhang1, Lars Riecke1, Milene Bonte1; 1Maastricht University Statistical learning, or the ability to extract statistical regularities from the sensory environment, plays a critical role in language acquisition and reading development. Recent studies have shown that the acquisition of novel word structures can be tracked over time via EEG. It is currently unclear (1) how different phonological units, such as syllables and words, are encoded throughout the learning process; (2) whether the outcome of implicit phonological learning resembles the neural representation of pre-existing word structures; and (3) how this tracking and learning of speech units relates to individual differences in reading and phonological abilities. To address these questions, the current study measured EEG while participants listened to (a) a structured stream with repetitions of tri-syllabic nonwords, (b) a random stream of syllables, and (c) a series of tri-syllabic real Dutch words. We analyzed inter-trial coherence (ITC) at the frequencies of the repeating (non)words and of the individual syllables, as well as the N400 component time-locked to (non)word/triplet onsets. Behavioral measures of structured nonword recognition, as well as reading and phonological skills, were assessed after training. We found that syllable tracking was present and stayed stable across blocks in both the random and structured conditions. In contrast, nonword tracking was observed only in the structured condition, where it emerged and came to approximate the neural encoding of real words as exposure accumulated. (Non)word onsets in the structured and real-word conditions elicited larger N400 amplitudes than the triplet onsets in the random condition, indicating successful cortical segmentation of the continuous speech stream. Furthermore, nonword-rate ITC in the first two blocks of the structured condition correlated with individuals' phonological awareness and naming speed, which may imply that participants who are more skilled in phonological processing and visual-verbal conversion are more sensitive to statistical regularities at an early stage of learning. Our results provide two neural indicators, word-rate ITC and the N400, for tracking the progression of implicit phonological learning. Finally, reading and phonological abilities appear to be closely related to the quick detection of statistical linguistic patterns. Currently acquired data from dyslexic participants are expected to further reveal such an association. E74 Gaming in children's foreign-language learning Sari Ylinen1, Katja Junttila1, Anna-Riikka Smolander1, Maria Uther2, Reima Karhila3, Seppo Enarvi3, Mikko Kurimo3, Risto Näätänen1; 1University of Helsinki, 2University of Wolverhampton, 3Aalto University Language and communication skills are becoming increasingly important. Future learning will likely be assisted by technology, and initiatives to improve foreign-language skills and to stimulate learning and teaching through ICT and digital content are needed. Digital learning environments using a gaming approach have great potential, especially in children's foreign-language learning, which requires extensive training. We have designed a digital game, Say it again, kid!, that aims to support children's learning of spoken foreign language. Children are stimulated to speak English, and advanced speech technology is used to assess their utterances.
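Returning briefly to E73, the inter-trial coherence (ITC) measure used there has a compact definition: the length of the mean unit phase vector across trials at the frequency of interest. The sketch below computes it on synthetic "EEG" with an embedded syllable-rate response and a weaker word-rate response; all rates, durations, and amplitudes are hypothetical, chosen only so the frequency bins fall exactly on FFT bins.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, dur, n_trials = 250, 7.2, 60          # hypothetical: 7.2 s trials at 250 Hz
t = np.arange(int(fs * dur)) / fs
f_syll, f_word = 2.5, 2.5 / 3             # e.g. 2.5 Hz syllables, tri-syllabic words

# Hypothetical EEG: phase-locked response at the syllable rate plus noise,
# with a small word-rate component to mimic learned nonword tracking.
trials = (np.sin(2 * np.pi * f_syll * t)
          + 0.3 * np.sin(2 * np.pi * f_word * t)
          + rng.normal(0, 2, size=(n_trials, t.size)))

def itc(trials, freq):
    """Inter-trial coherence: length of the mean unit phase vector at freq."""
    spec = np.fft.rfft(trials, axis=-1)
    k = int(round(freq * dur))             # FFT bin for the target frequency
    phases = spec[:, k] / np.abs(spec[:, k])
    return np.abs(phases.mean())

print("syllable-rate ITC:", round(itc(trials, f_syll), 2),
      "word-rate ITC:", round(itc(trials, f_word), 2))
```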
This automatized assessment enables the game to reinforce learning by providing feedback. We sought to investigate the effect of the gaming approach on children's foreign-language learning. The learning effects of the game were evaluated by measuring the mismatch negativity (MMN) component of the event-related potential (ERP). 37 typical readers and 24 children with dyslexia (7-11 years old) trained with the game for about 5 weeks (4-5 days per week, 15-20 minutes per day) and participated in EEG measurements before (pre-test) and after (post-test) the gaming period. To reveal the effect of gaming, learning was compared in two conditions: 1) a game condition with game boards that the children could freely explore, and 2) a non-game condition with a white screen and forced presentation of English words. In the game condition, feedback (1-5 stars) was used as a game element (the stars enabled the player to proceed on the game board), whereas the non-game condition provided no feedback. To control for the effect of exposure on learning, the number of English words presented in each condition was the same. In typical readers, the MMN responses were significantly larger in the post-test than in the pre-test in the game condition, but not in the non-game condition, suggesting that gaming induced more robust plastic changes in the brain. However, children with dyslexia did not show an increased MMN after gaming. Thus, they do not seem to benefit from gaming in the same way as typical readers. Since the striatum, as part of the reward system of the brain, has been suggested to show structural and functional abnormalities in children with language disorders, a possible account for seeing gaming effects in typical readers but not in children with dyslexia is differences in the activation of the striatum. The findings are applicable to language teaching and the development of language-learning applications for children. E75 Newborns can encode the speech envelope in familiar and unfamiliar languages Maria Clemencia Ortiz Barajas1,2, Ramón Guevara Erra1,2, Judit Gervain1,2; 1Integrative Neuroscience and Cognition Center, Université Paris Descartes, Sorbonne Paris Cité, 2Integrative Neuroscience and Cognition Center, CNRS When humans listen to speech, their neural activity tracks the slow modulations of speech over time, known as the speech envelope. Studies have shown that a speech stream must contain a well-preserved envelope for humans to understand it, and that the quality with which neural activity tracks this envelope is related to the quality of speech comprehension. However, it has recently been called into question whether this neural mechanism is sufficient for comprehension to take place, as some studies have found envelope-tracking to be present even when speech is unintelligible. To shed new light on this debate, we assessed how newborns, who do not yet comprehend language, decode speech. We tested 47 newborns born to French monolingual mothers, within their first 5 days of life. Their experience with speech was, therefore, mostly prenatal. We presented them with naturally spoken sentences in French, Spanish and English, while simultaneously recording their brain activity using electroencephalography (EEG). The use of these 3 languages allowed us to investigate how the newborn brain responds when presented with familiar and unfamiliar languages. To assess speech envelope-tracking we analysed how the neural activity encodes the amplitude and the phase of the speech envelope.
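The envelope measure analysed in E75 is straightforward to compute. The sketch below extracts a speech envelope as the low-pass-filtered magnitude of the analytic signal and then splits it into the amplitude and phase that the abstract examines; the cutoff, the sampling rates, and the white-noise "speech" are placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample

fs = 16000                                  # hypothetical audio sampling rate
rng = np.random.default_rng(7)
speech = rng.normal(size=fs * 5)            # stand-in for a 5 s sentence

# Speech envelope: magnitude of the analytic signal, low-pass filtered to
# keep the slow (< 10 Hz) modulations that neural activity tracks.
env = np.abs(hilbert(speech))
b, a = butter(3, 10 / (fs / 2), btype="low")
env = filtfilt(b, a, env)

# Downsample to the EEG rate, then split into the two quantities the
# abstract analyses: envelope amplitude and envelope phase.
fs_eeg = 250
env_ds = resample(env, int(len(env) * fs_eeg / fs))
analytic = hilbert(env_ds - env_ds.mean())
amplitude, phase = np.abs(analytic), np.angle(analytic)
print(amplitude.shape, phase.shape)
```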
Our results show that prenatally French-exposed newborns are able to encode both the amplitude and the phase of the speech envelope in the three languages equally well. Therefore, we have found that the human brain is capable of tracking the speech envelope in familiar and unfamiliar languages already at birth. Our findings reveal that humans' ability to track the speech envelope does not require experience with language and is, therefore, present in the absence of comprehension. Speech envelope-tracking is thus a basic auditory ability, necessary but not sufficient for language processing. E76 Semantic predictability modulates cortical sensitivity to phonetic competition Hannah Mechtenberg1, Xin Xie2, Emily Myers1; 1University of Connecticut, 2University of Rochester INTRODUCTION. A critical step in speech perception is the resolution of phonetic ambiguity arising from overlapping speech categories (Myers, 2007). Multiple categories are activated in tandem when a speech sound falls in the acoustic space between two or more categories, resulting in competition for recognition. Phonetic competition occurs in naturally produced speech (Bradlow, Torreta, & Pisoni, 1996) and can be quantified by calculating the distance in acoustic space between a given token and tokens in other phonetic categories. For instance, the vowel /i/ (as in "beet") is rarely confused with other vowels and has low values of phonetic competition, while the vowel /ɛ/ (as in "bet") falls in a densely populated acoustic space occupied by many other vowel categories, and as such has higher values of phonetic competition. Contemporary neurobiological models debate the role of frontal regions in speech perception and whether these regions are necessary in naturalistic listening conditions (Hickok, 2012). Previous work by Xie and Myers (2018) found sensitivity in the left inferior frontal gyrus (LIFG) to natural variation in phonetic competition within semantically anomalous sentences. An open question is whether semantic context alters reliance on frontal regions for processing phonetic category ambiguity by providing predictive cues to lexical identity (and therefore decreasing demands on phonetic processing). The current experiment examined phonetic competition within sentences that varied in their semantic predictability (Kalikow et al., 1977; Bradlow & Alexander, 2007), as sentences with high semantic predictability are more likely to be heard in naturalistic settings than sentences with low semantic predictability. If phonetic competition effects disappear once listeners can rely on semantic context to anticipate upcoming words in the sentence, this suggests that frontal recruitment to resolve phonetic ambiguity is situation-dependent. METHODS. Participants passively listened to both high- and low-predictability sentences (normed for predictability using the Cloze procedure) during fMRI. Rare probe trials (not further analyzed) kept participants' attention focused on the content of each sentence. The high- and low-predictability sets of sentences were highly intelligible and equated on pitch, duration, lexical frequency, and sentence-wide phonetic competition. RESULTS. Preliminary results indicate that phonetic competition modulates activity in the LIFG regardless of sentence predictability, partially replicating Xie and Myers (2018).
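One simple way to operationalize the phonetic competition metric described in E76 (not necessarily the authors' exact formula) is the inverse of a token's mean distance to the other categories in formant space. With illustrative, textbook-style F1/F2 values, /ɛ/ indeed scores higher than /i/:

```python
import numpy as np

# Illustrative mean formant values (F1, F2 in Hz) for a few vowel categories.
vowels = {"i": (270, 2290), "ɛ": (530, 1840), "æ": (660, 1720),
          "u": (300, 870), "ɔ": (570, 840)}

def competition(token_f1f2, own_category):
    """Phonetic competition of a token: inverse of its mean distance to the
    other categories in acoustic space (closer neighbors -> more competition)."""
    others = [np.hypot(token_f1f2[0] - f1, token_f1f2[1] - f2)
              for v, (f1, f2) in vowels.items() if v != own_category]
    return 1.0 / np.mean(others)

# /i/ sits far from other vowels; /ɛ/ sits in a crowded region.
print("competition /i/:", round(competition(vowels["i"], "i"), 5))
print("competition /ɛ/:", round(competition(vowels["ɛ"], "ɛ"), 5))
```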
Effects of phonetic competition differed by sentence predictability in the left posterior temporal/inferior parietal junction, with a greater effect of phonetic competition in high-predictability sentences compared to low-predictability sentences. CONCLUSION. These data replicate findings by Xie and Myers (2018), as similar neural regions show sensitivity to phonetic competition across investigations. Activity in the left posterior temporal/inferior parietal junction implies that the relatively "easy" perception of anticipated words within highly predictable sentences may boost the phonetic competition processing signal compared to the higher semantic ambiguity of low-predictability sentences. Taken together, these results suggest a neural system that flexibly recruits phonetic processing resources according to the predictability of the sentence context. E77 The effect of seeing written word-form on spoken foreign-language learning in children Katja Junttila1, Anna-Riikka Smolander1, Reima Karhila2, Seppo Enarvi3, Mikko Kurimo2, Risto Näätänen1,4, Sari Ylinen1; 1University of Helsinki, 2Aalto University, 3Nuance Communications, 4University of Tartu The pronunciation of written English words is not always straightforward for Finnish speakers without knowledge of the English language. Some written forms are misleading, like the word 'jet', which might be pronounced by a naïve Finnish speaker as /jet/ and confused with the word 'yet'. Our study investigated how seeing a potentially misleading written word form affects spoken foreign-word learning in children. Additionally, we investigated how the effects differ in children with dyslexia compared to typically reading children. We hypothesised that seeing the misleading written form of a word might hinder the learning of the spoken word. Furthermore, we hypothesised that children with dyslexia could be more affected by these misleading words due to their reading impairments. The participants were 7-11-year-old monolingual Finnish-speaking typically reading children (n=34) and children with dyslexia (n=24). They rehearsed spoken English words with the "Say it again, kid!" (SIAK) computer game, which is played by imitating the spoken English words heard in the game. The automatic speech recognition of the game gives the player feedback on the accuracy of their pronunciation. In the game, half of the participants only heard the spoken words, and the other half both heard the spoken words and saw their written forms. Of the several words trained with the game, 'jet' and 'yet' were used as stimuli during electroencephalography (EEG) measurement. Before and after playing the game, we recorded event-related potentials (ERPs) while presenting the auditory standard word 'jet' and deviant word 'yet' in an oddball paradigm. We compared the ERP responses of the children who saw the written forms during the game to those of the children who only heard the spoken words. An increase in mismatch negativity (MMN) responses, indicating a better ability to distinguish the stimulus words, was observed in typically reading children who saw the written forms while rehearsing, and in children with dyslexia who did not see the written forms. This suggests that typically reading children benefited from seeing the written form, contrary to children with dyslexia, whose learning was impeded by seeing it. These results indicate that spoken foreign-language training free from written word-forms can aid children with dyslexia in learning foreign languages.
However, the same kind of training might not be best for typically reading children, who could benefit more from also seeing the word-form to which the spoken word is connected. E78 Behavioral and Neural Correlates of Phonetic Plasticity Christopher Heffner1, Charles Davis1, Emily Myers1; 1University of Connecticut Introduction. Speech understanding requires constant adjustment. This can be seen most radically when encountering new speech sound tokens in a second language, where categorizing non-native sounds requires learning. Yet adjustment to phonetic categories can be observed even within a native language, where tolerating variation between talkers in speech rate or in accent requires adaptation. Though non-native and native speech perception share similarities, it is unclear whether they rely on a common cognitive and neural substrate. In the present study, we explore individual differences in each process in behavior and brain structure. If learning and adaptation share common mechanisms, good non-native phonetic learners should also be good phonetic adapters to native-language speech, with a shared reliance on similar neural architecture. This should include auditory regions, frontal (Myers, 2014) language areas, and the basal ganglia (Lim et al., 2014), as well as connections among these three regions. Methods. We recruited 80 healthy English native-speaking young adults for a battery of tests of phonetic plasticity as well as general cognitive function. Learning Tasks. Participants learned to categorize new phonetic categories either incidentally (Gabay et al., 2015) or explicitly (Heffner et al., 2019). In incidental learning, participants were told to attend to a primary task of hunting zombies, which, unbeknownst to the learner, showed up in locations that could be predicted by sound categories; in explicit learning, they were given explicit instruction and feedback on the categories they were learning. Adaptation Tasks. Participants heard English sentences that were either rate-compressed (rate adaptation) or spoken by a native speaker of Italian with noise in the background (accent adaptation). After the sentence finished, they had to choose between three pairs of keywords that they heard within the sentence. The amount of rate compression or background noise the participants were able to tolerate was used as an index of adaptation. Other Cognitive Measures. Participants also completed a variety of tasks that were not speech-specific, including tests of executive function, inhibitory control, working memory, attention, episodic memory, vocabulary, and speech perception in noise. MRI Measures. All participants underwent MRI scanning, giving us structural, resting-state, and diffusion-weighted scans for each participant. Results. There were interrelationships in performance on three of our measures of phonetic plasticity: explicit learning, accent adaptation, and rate adaptation. Better learners of non-native tokens in explicit tasks were also better adapters. The fourth task, incidental learning, did not correlate as cleanly with the other three tasks. Correlations with the other cognitive measures were generally weak, apart from positive relationships with vocabulary size and speech-in-noise ability. ROI-based analysis of brain structure showed relationships between the speech learning and adaptation measures and the properties of regions including inferior frontal cortex, Heschl's gyrus, and insula.
For example, participants showing better learning and adaptation had more gyrification in bilateral Heschl's gyrus. Planned analyses include a whole-brain structural analysis as well as connectivity analyses of both resting-state functional and diffusion-weighted measures, focusing on connections between frontal and temporal language areas and the basal ganglia. Prosody E79 Neural characteristics of acoustic prosody during continuous real-life speech Satu Saalasti1,2, Enrico Glerean2, Antti Suni1, Jussi Alho2, Juraj Šimko1, Iiro P. Jääskeläinen2, Martti Vainio1, Mikko Sams2; 1University of Helsinki, 2Aalto University School of Science When we exclaim "I told you so!" we mark the word "told" acoustically; this word is longer, pronounced with more effort, and produced with higher pitch and different voice quality compared to the rest of the utterance. The acoustic parameters of pitch, length, and loudness support linguistic distinctions during continuous speech and help the listener interpret the linguistic meaning of the utterance, as prosody serves several linguistic processes, e.g. changes related to word stress, phrase boundaries and sentence type. Studying the neural processing of prosody during continuous real-life speech has been hindered by the lack of efficient analysis methods. The current study aims to quantify the prosodic characteristics of speech with a method based on the continuous wavelet transform (CWT) and to explore the correlation between the prosodic signals and brain activity. We estimated prosodic events in continuous real-life speech with a recently developed unsupervised unified account based on CWT scale-space analysis. 3T functional magnetic resonance imaging (fMRI) was used to record the brain activity of 29 female participants (age 19-49) while they listened to an 8-minute narrative. The CWT-based scale-space analysis was used to extract the prosodic characteristics of the narrative, and the obtained wavelet timeseries were used as regressors for the fMRI data to reveal how they map onto the brain recordings. More specifically, we used the CWT (Morlet mother wavelet, parameter 5) to identify the frequency bands containing most of the energy of the magnitude timeseries. We used ridge regression to compute the similarity between the obtained magnitude timeseries and individual brain timeseries. T-values were computed for the regression scores across subjects with 5000 permutations (FSL randomise, TFCE at p=0.05). We found that the acoustic-prosodic properties of speech aligned in a hierarchical fashion encompassing syllable and short-phrase density in the narrated speech, and that they predicted distinct brain activity. Syllable density predicted brain activity in medial temporal as well as superior fronto-lateral areas, suggesting involvement of the speech motor areas. Phrase density predicted brain activity in the medial temporal area. Our findings are in line with what is known about brain activity related to speech and language processing. In conclusion, CWT-based scale-space analysis enabled automatic quantification of acoustic prosody during continuous real-life speech. Importantly, the automatically created model identified different levels of the linguistic hierarchy, and different wavelet scales of the model elicited different brain activity.
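A minimal version of the E79 pipeline, hand-rolling the Morlet CWT so that no specialised wavelet library is required, might look like the sketch below. The envelope is white noise, and the scales, TR, and voxel weights are hypothetical; the real study selects frequency bands from the data rather than fixing them in advance.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
fs, dur = 100, 480                       # hypothetical: 8-min narrative at 100 Hz
env = rng.normal(size=fs * dur)          # stand-in for the speech envelope

def morlet_magnitude(x, freq, fs, n_cycles=5.0):
    """Magnitude timeseries of x at one CWT scale (complex Morlet wavelet)."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    return np.abs(np.convolve(x, wavelet, mode="same"))

# Scale-space analysis: magnitude at phrase-like to syllable-like rates.
freqs = [0.5, 1.0, 2.0, 4.0]             # Hz
mag = np.stack([morlet_magnitude(env, f, fs) for f in freqs])

# Downsample to the fMRI TR and regress a voxel timeseries on the
# wavelet magnitudes (ridge regression, as in the abstract).
TR = 2.0
X = mag[:, :: int(fs * TR)].T            # volumes x scales
bold = X @ np.array([0.5, 0.2, 0.0, 0.8]) + rng.normal(size=X.shape[0])
print("R^2 =", round(Ridge(alpha=1.0).fit(X, bold).score(X, bold), 2))
```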
E80 Emotion processing in speech prosody and written emotion words in Cantonese-speaking congenital amusics Yi Lam Cheung1, Yubin Zhang1,2, Caicai Zhang1,2; 1Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, 2Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University Congenital amusia is a lifelong impairment in musical ability. Individuals with amusia have a limited ability to perceive and produce pitch. Previous studies have found that amusics show reduced sensitivity in emotion recognition from emotional prosody and emotional faces, suggesting that there might be a domain-general emotional processing deficit in amusia that extends to emotion recognition in the visual domain. However, the mechanism underlying this emotion deficit remains unclear. On the one hand, it is possible that the deficit primarily applies to the socio-emotional domain. On the other hand, it is possible that the deficit is due to impaired domain-general representations of emotion categories in the amusic brain. To tease apart these two hypotheses, it is important to examine emotion processing beyond the auditory modality and outside the socio-emotional domain. To this end, we examined whether amusics' emotion processing deficit extends to linguistic emotion processing in written words. In the current study, we recruited 19 Cantonese-speaking amusics and 17 matched controls. We addressed the above question by comparing amusics' and controls' performance on three tasks: (1) an emotion prosody rating task, which required participants to indicate how much each spoken sentence expressed each of four emotions (Happiness, Anger, Sadness, and Fear) on 7-point rating scales (1 as lowest and 7 as highest); (2) a written word emotion recognition task, where we asked both groups to recognize the emotion of Chinese written emotion words as one of the four categories (Happiness, Anger, Sadness and Fear); and (3) a written word emotion valence judgment task, where the two groups judged the valence (positive, negative or neutral) of the Chinese written words. Accuracy results (rating the intended emotion alone with the highest score) in the emotion prosody rating task showed that amusics performed significantly worse than controls in emotion prosody processing, while the two groups showed similar ambivalence rates (rating more than one emotion with the highest score), indicating that amusics were able to identify one single category as the most salient one. In both the written word emotion recognition and valence judgment tasks, the two groups showed no significant difference in accuracy, indicating that amusics performed similarly to controls. These results support the view that tonal-language speakers with amusia show a deficit in emotional speech prosody processing, even though tonal-language speakers have constant exposure to small changes in pitch in daily communication. More importantly, this impairment appears not to extend to linguistic emotion processing in written words, implying that the emotion deficit is likely restricted to the socio-emotional domain in amusics. It is possible that amusics rely on other cues, such as semantic cues, for linguistic emotion processing in written words.
Reading E81 Mapping visual symbols onto spoken language along the ventral visual stream J S H Taylor1,2, Matthew H Davis3, Kathleen Rastle4; 1Aston University, 2University College London, 3MRC Cognition and Brain Sciences Unit, 4Royal Holloway University of London Introduction: Reading acquisition requires the brain to map arbitrary visual symbols onto their sounds and meanings. This requires a degree of invariance to information such as case, font, size, and position (e.g., the b in Cab is the same as the B in Bad). Dehaene and colleagues (2005; 2011) proposed that such invariance is achieved by left ventral occipitotemporal cortex (vOT), with neural representations becoming increasingly tolerant to location-shifts and encoding increasingly complex orthographic information along a posterior-to-anterior hierarchy. However, others argue that the primary characteristic of this posterior-to-anterior vOT hierarchy is that representations become increasingly sensitive to higher-level information about word sounds and meanings (Price & Devlin, 2011). The current study used Representational Similarity Analysis (RSA) of brain responses measured with fMRI to reveal how representations along the vOT hierarchy transform visual information in the service of reading. In particular, we sought to uncover how vOT represents letter identity and position, and the extent to which representations in this region come to capture word sounds and meanings. Method: 24 adults learned to read two sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. This allowed us to independently manipulate the similarity between words with respect to both their letters (within and across positions) and their linguistic associations. Following two weeks of training, participants recalled the meanings of the newly learned written words whilst neural activity was measured with fMRI. Using RSA we examined the extent to which the multivoxel patterns of fMRI responses along the vOT hierarchy encoded 1) basic visual form (simple cell representations from the HMAX model, Serre et al., 2007), 2) letter identity and position (position-specific and spatial coding models, Davis, 2010), 3) phonemes, and 4) semantic category. Results: RSA on item-pairs from the same orthography revealed that right and more posterior vOT regions were sensitive to basic visual similarity, whereas mid-anterior left vOT was sensitive to letter identity. These representations of letter identity became progressively more position-invariant from posterior to anterior vOT. In more anterior regions of the left vOT, item-pairs that shared sounds or meanings, but were represented by different orthographies with no letters in common, had similar neural patterns. Conclusions: These results reveal a hierarchical, posterior-to-anterior gradient in vOT regions, in which representations of letters become increasingly invariant to position and are transformed to convey their phonological and semantic attributes. This demonstrates the critical role of the vOT processing stream in transforming written words into their spoken forms and meanings.
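For readers new to Representational Similarity Analysis as used in E81, the core computation is short: build a neural representational dissimilarity matrix (RDM) from pairwise pattern distances, then rank-correlate it with candidate model RDMs. The sketch below uses random patterns and random model RDMs purely as placeholders; the study's actual models (HMAX, letter-coding, phoneme, semantic) would supply the model RDMs.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(9)
n_items, n_voxels = 48, 200

# Hypothetical multivoxel patterns for 48 items in one ROI, plus two model
# predictions of pairwise dissimilarity (e.g., letter overlap vs. meaning).
patterns = rng.normal(size=(n_items, n_voxels))
letter_model_rdm = rng.random(n_items * (n_items - 1) // 2)
semantic_model_rdm = rng.random(n_items * (n_items - 1) // 2)

# Neural RDM: correlation distance between the activity patterns of
# every pair of items (vectorized upper triangle).
neural_rdm = pdist(patterns, metric="correlation")

# RSA: rank-correlate the neural RDM with each model RDM; the better-fitting
# model indicates what this region's representation encodes.
for name, model in [("letters", letter_model_rdm), ("semantics", semantic_model_rdm)]:
    rho, p = spearmanr(neural_rdm, model)
    print(f"{name}: rho={rho:.2f}, p={p:.2f}")
```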
Many studies including behavioral and ERPs research showed that the radical spatial position affects the processing of Chinese characters, but it is still unclear how this is done. In this study, we intend to employ advanced high-resolution functional magnetic resonance imaging (fMRI) techniques to investigate where is the radical position information processed in the human brain. Characters with different radical combinability (the number of characters sharing the same radical) are selected as stimuli and participants are required to decide whether the presented Chinese character refers to an animal or not. When considering position-specific radical combinability (SRC), by contrasting high SRC against low SRC, there was significantly greater activation in the left mid-fusiform region (VWFA, BA19), the bilateral middle frontal gyrus (BA9/46), the bilateral inferior frontal gyrus (BA9/45), the bilateral middle temporal gyrus (BA19/21), the bilateral superior temporal gyrus (BA22/41), the left cerebellum posterior lobe and the right fusiform gyrus; while considering position- general radical combinability (GRC), by contrasting high GRC against low GRC, there was significantly greater activation in the bilateral middle frontal gyrus (BA8/9), the left inferior frontal gyrus (BA45), the left medial frontal gyrus (BA6), the right middle occipital gyrus (BA19) and the bilateral parahippocampal gyrus (BA30). These results suggest that radical position plays an important role in Chinese character reading, and the VWFA is sensitive to the processing of radical position information. E83 Visual simulations in the two cerebral hemispheres during L1 and L2 sentence reading Tal Norman1, Orna Peleg1; 1The Program of Cognitive Studies of Language Use, Tel Aviv University Embodied theories of language processing hold that sentences are understood by mentally simulating the events described by the sentence. In line with this proposal, several studies have shown that first language (L1) comprehenders automatically activate perceptual (visual) information of verbally described objects, even when this information is not explicitly stated, but merely implied by the described situation (e.g., Zwaan & Yaxley, 2002). The present study had two aims: The first aim was to investigate whether second language (L2) processing involves similar visual simulations. Specifically, we examined whether readers activate visual information regarding objects’ shape during sentence comprehension in L1 and L2. The second aim was to investigate the separate and combined abilities of the two cerebral hemispheres to activate perceptual-visual information The Society for the Neurobiology of Language SNL 2019 Program  during L1 and L2 sentence reading. To accomplish these aims, two experiments were conducted. In both experiments, late proficient Hebrew-English bilinguals performed a sentence-picture verification task in L1Hebrew and L2-English. In the task, they verified whether a pictured object (e.g., a balloon) was mentioned in the preceding sentence (e.g., The boy saw the balloon in the air). In all critical trials the pictured object was mentioned in the sentence, however, the object’s shape either matched (e.g., an inflated balloon) or did not match (e.g., a deflated balloon) the one implied by the sentence. In the first experiment, pictures were presented centrally to both hemispheres. 
In the second experiment, pictures were presented either in the right visual field (RVF) to the left hemisphere (LH), or in the left visual field (LVF) to the right hemisphere (RH). In the first experiment (central presentation), responses were significantly faster in the match than in the mismatch condition, but only in L1-Hebrew, not in L2-English. This indicates that visual simulations during sentence processing are more likely to occur in L1 than in L2. In the second experiment (lateral presentation), irrespective of the experimental target language (L1-Hebrew or L2-English), the shape effect was stronger in the LVF/RH than in the RVF/LH. This indicates that the RH may be more crucial than the LH for the activation of implied visual information. Taken together, these findings suggest that under normal reading conditions (central presentation), the RH shows greater involvement in L1 than in L2 sentence comprehension. As a result, embodiment effects may be reduced in L2 in comparison to L1. E84 Evidence for a critical role of the left inferior parietal lobule and superior longitudinal fasciculus in proficient text reading Sylvie Moritz-Gasser1,2,3, Sébastien Boissonneau4, Guillaume Herbet1,2,3, Anne-Laure Lemaître1,5, Hugues Duffau1,3; 1CHU Montpellier, Department of Neurosurgery, 2Montpellier University, Department of Speech-Language Therapy, 3Institute of Neuroscience of Montpellier, 4APHM, CHU Timone, Department of Neurosurgery, Marseille, 5PSITEC, University of Lille Reading proficiency is an important skill for personal and socio-professional daily life. Current neurocognitive models posit a dual-route organization of word reading, in which information is processed by both a dorsal phonological "assembled phonology" route and a ventral lexical-semantic "addressed phonology" route. Because proficient reading cannot be reduced to the ability to read words one after another, the current study was designed to shed light on the neural bases underpinning text reading specifically, and on the relative contribution of each route to this skill. Twenty-two patients harboring a left diffuse low-grade glioma, operated on in awake conditions, were included in the study. They were divided into three groups according to tumor location: Inferior Parietal Lobule (IPL Group, n=6), Inferior Temporal Gyrus (Tinf Group, n=6), and Fronto-Insular (CTRL Group, n=10). Spoken language and reading were tested in all patients the day before surgery, during surgery, and 3 months after, and cognitive functioning was assessed before and 3 months after. Text reading scores obtained before and three months after surgery were compared within and between groups, and correlations between reading scores and both spoken language and neuropsychological scores were calculated. Results indicated that only the IPL Group showed a significant decrease in text reading scores between the two time points, and this decrease was not associated with lower scores in either naming or verbal fluency; the Tinf Group showed a slight decrease in text reading between the two time points, associated with a clear decrease in naming and semantic verbal fluency; and the CTRL Group showed no differences between preoperative and postoperative reading and spoken language scores. The analysis of these behavioral results and anatomical data (i.e.
resection cavities and white matter damage) suggests a critical role for the left inferior parietal lobule and superior longitudinal fasciculus in proficient text reading. This ability might then depend not only on the integrity of both processing routes, but also on their capacity for interaction. These findings may thus have fundamental as well as clinical implications. E85 Localization of gap-filler integration and garden-path effect of Chinese relative clause Yanyu Xiong1, Aina Puce1, Sharlene Newman1; 1Indiana University Bloomington The grammatical features of Chinese relative clauses offer the possibility of investigating not only gap-filler dependency by itself, but also how hierarchical syntactic integration interacts with an ambiguous sentence context. Our previous ERP study found that at the relative marker de, an early left anterior negativity for the gap-filler integration effect (110-220ms) was followed by a left-lateralized negativity for the context effect (411-441ms) and a late centro-posterior positivity for the garden-path effect (540-620ms) (Xiong, Dekydtspotter & Newman, 2019). Although this temporal information reveals that gap-filler integration is modulated by sentence context in a highly dynamic manner, the neural sources of the modulation have not been identified. The present study used the boundary element method to localize the neural generators of these processes. Distributed dipole source localization was performed on the functional EEG data, with the volume conduction model and source model constructed from the subjects' anatomical MRI images. The participants were presented with subject-gap and object-gap relative clauses modifying either the subject or the object of the matrix sentences in the Rapid Serial Visual Presentation (RSVP) paradigm. The results show that between 100-200ms after the word onset of the relative marker de, object-gap relative clauses evoke greater activation in the left mid-lateral prefrontal cortex, with the maximum dipole strength at the inferior frontal junction. The left-lateralized negativity of the context effect was localized to the left anterior temporal lobe between 250-350ms, which is earlier than our previous finding. In the time window between 500-600ms, the garden-path effect of object-gap object-modifying relative clauses elicited more activation in the left inferior parietal cortex (including the supramarginal gyrus and angular gyrus) and the fusiform gyrus. The results suggest that the left mid-lateral prefrontal cortex, as part of a working-memory network (Nee et al., 2012), supports gap-filler integration in terms of maintaining and retrieving unresolved constituents. Differing from a previous report of an N400 effect localized to the left superior temporal lobe (Service et al., 2007), our finding reveals that the processing of sentence context is supported by the anterior temporal lobe in an earlier time window, indicating that the computation of phrase structure beyond the clausal level also relies on information transferred from the anterior temporal cortex to the frontal cortex (Grodzinsky & Friederici, 2006). In addition, the garden-path effect was found to be supported by the left inferior parietal cortex and the fusiform gyrus in the late time window of de, suggesting that syntactic-semantic integration may be implemented to resolve the ambiguity.
E86 Differences between deaf and hearing readers of Chinese are limited to the right superior temporal cortex Junfei Liu1,2,3,4, Tae Twomey1,2, Yiming Yang3,4, Mairéad MacSweeney1,2; 1Institute of Cognitive Neuroscience, University College London, 2Deafness, Cognition and Language Research Centre, University College London, 3Jiangsu Key Laboratory of Language and Cognitive Neuroscience, 4School of Linguistic Sciences and Arts, Jiangsu Normal University Reading is a challenging task for deaf readers of alphabetic orthographies such as English (Cawthon, 2004), and also of Chinese (He, 2005). Previous studies have reported both differences and similarities in neural responses between deaf and hearing readers of Chinese during explicit semantic and phonological tasks on written words. However, it is still unclear whether deaf and hearing readers of Chinese recruit the same network during reading. For example, deaf readers may rely on semantics more than phonology, due to reduced auditory experience. However, phonological knowledge is less important than semantic knowledge in reading Chinese, since the orthography generally maps more closely to semantics than to phonology. Behaviourally, a semantic bias has been observed in hearing readers of Chinese (Williams & Bever, 2010). This suggests that the neural network recruited by deaf and hearing people may be similar when they read Chinese. Using functional magnetic resonance imaging, we contrasted the neural responses of Chinese deaf and hearing adults. We used a lexical decision task, which does not emphasise the use of semantic or phonological knowledge. We predicted that the reading network recruited by deaf and hearing readers of Chinese would be similar, given that the orthography is semantically biased. Twenty-four hearing and 24 deaf adults viewed a sequence of real single-character Chinese words (n=120) and pseudo-characters (n=120), presented for 0.5 s each (ISI = 1-4 s; mean = 2.5 s). Participants made a button press to indicate whether each item was a real character or not. They were also tested on Chinese reading, with a test developed on the basis of the Vernon-Warden Reading Comprehension Test of English (Vernon-Warden, 1996). The reading scores were included as a covariate in the imaging analyses following a significant group difference: deaf participants (M=63.1%; SD=11.3%) were poorer readers than hearing participants (M=82.9%; SD=11.3%), t(46)=6.08, p<0.001, d=1.75. There were no significant differences between groups in reaction times or accuracy. The conjunction analysis of the deaf and hearing groups identified bilateral ventral occipito-temporal cortices, left precentral gyrus, left central operculum and left supplementary motor area. The only group difference was found in the right superior temporal cortex (STC), in the anterior (x=54, y=8, z=-7; Z=5.38, p=0.001 FWE, k=12) and middle (x=63, y=-19, z=2; Z=4.97, p=0.010 FWE, k=3) portions. At these peaks, activation in the deaf group was significant, while the responses in the hearing group did not differ from baseline. The current study shows that when deaf and hearing readers of Chinese read, statistically reliable differences between the groups are limited to the right STC. This effect is likely to be associated with deafness, since task performance did not differ between the groups and reading level was included as a covariate.
Consistent with previous studies (e.g., Finney, Fine & Dobkins, 2001; Twomey et al., 2017), this effect may reflect low-level visual processing in the deaf group. Our results also suggest that a semantically biased orthography might minimise hearing-status effects during reading. Future studies are needed to directly contrast different orthographies and investigate the influence of orthography on reading in deaf people. E87 Distinct neural substrates of subcomponents of reading: Neuroimaging investigation of the Simple View of Reading framework Ola Ozernov-Palchik1, Tracy M. Centanni2, Sara D. Beach1,3, Sidney May1, John Gabrieli1; 1Massachusetts Institute of Technology, 2Texas Christian University, 3Harvard University Reading comprehension is the overall goal of reading. The Simple View of Reading posits that reading comprehension consists of two components: decoding and language comprehension. Ample behavioral evidence supports the view that decoding and comprehension are correlated but separable skills, together accounting for a large amount of the variance in reading comprehension performance across development. Only a few studies to date have directly compared the neural underpinnings of differences in reading comprehension and single-word decoding skills during naturalistic reading in the same individuals. Adult participants (mean age: 26 years; 22 female, 21 male) were administered a comprehensive battery of 18 standardized behavioral reading, language, and cognitive measures. An exploratory factor analysis with the promax rotation technique was conducted on the measures, and four factor scores were extracted. Based on the patterns of loadings, the factors were interpreted as follows: 1) reading accuracy, 2) general cognitive, 3) decoding fluency, and 4) language comprehension. Participants then read seven paragraphs out loud in their normal reading voice and at their normal rate in a 3T MRI scanner. For the control block, participants verbally indicated whether arrows on the screen were pointing up or down (e.g., by saying "up" or "down"). Participants' speech was recorded with an MRI-compatible microphone. fMRI analyses were conducted using FEAT in FSL, and significance was determined by Z > 2.3 and a corrected cluster significance threshold of p = 0.05. Overall, reading passages aloud activated the canonical occipitotemporal, temporoparietal, and inferior frontal reading regions, as well as superior and medial frontal regions, more strongly than did naming arrows. A series of whole-brain regressions was implemented to examine the relation between the four out-of-scanner behavioral factor scores and cortical activation for the passage > arrows contrast. (1) There were no significant clusters demonstrating a correlation with the reading accuracy factor. (2) Lower scores on the general cognitive factor were associated with greater activation in the bilateral thalamus, cerebellar lobules IV and V, parahippocampal and lateral occipital regions, as well as in a left temporoparietal cluster. (3) Higher scores on the decoding fluency factor were associated with greater activation in the lateral occipital gyrus, multiple left-hemispheric temporal regions, and the left middle/inferior frontal gyrus. (4) Finally, lower scores on the comprehension factor were associated with activity in the bilateral cingulate and several temporoparietal regions (including the middle temporal, supramarginal, and angular gyri). No other correlations were significant.
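The factor-analysis step in E87 can be approximated as follows. Note one substitution: the abstract used a promax (oblique) rotation, which scikit-learn does not provide, so the orthogonal varimax rotation stands in here, and the data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)

# Hypothetical scores: 43 participants x 18 standardized measures.
scores = rng.normal(size=(43, 18))

# Four-factor solution; varimax approximates the abstract's promax rotation.
fa = FactorAnalysis(n_components=4, rotation="varimax")
factor_scores = fa.fit_transform(StandardScaler().fit_transform(scores))

# Loadings (measures x factors) are inspected to name the factors, e.g.
# "reading accuracy" or "decoding fluency"; the factor scores then enter
# the whole-brain regressions as per-participant covariates.
print(fa.components_.T.shape, factor_scores.shape)   # (18, 4), (43, 4)
```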
E88 Integrating word meaning with text meaning: ERP evidence from Chinese shows differences and similarities to English
Lin Chen1, Charles Perfetti1, Xiaoping Fang1; 1Learning Research & Development Center, University of Pittsburgh
The influence of writing systems and orthographies during reading is largely localized at the word level, where the written form maps onto phonology and semantics, as demonstrated in comparisons of Chinese and English reading (Perfetti et al., 2005). But what about text processing? One expects that, once words are identified, the downstream processes of Chinese and English should be largely the same. We tested this assumption in studies of on-line comprehension processes using ERPs. In two studies, native Chinese speakers read short texts for comprehension in the across-sentence word-to-text integration paradigm (Yang et al., 2007), allowing comparison with studies of English that have used this paradigm. We found evidence for local binding of the currently read word with an antecedent (a "paraphrase" word) in the previous sentence, as indexed by a reduction of the N400. This result is consistent with findings in English. However, we also observed a difference from English: an orthographic facilitation effect based strictly on orthographic form, independent of its meaning. This effect emerged across sentence boundaries, at remote binding sites as well as at local binding sites near the word being read, at an early stage (around 150 ms) of word reading. The results indicate that although meaning-based word-to-text integration is general across writing systems at the meaning level, Chinese reading shows an additional strong influence of orthographic form, as a specific consequence of the structure of the Chinese writing system.

Author Index
Authors are indexed by abstract number, not page number. Italic indicates first author.
A Abel, A - C38, D10 Abrams, E - E62 Adamson, H - C48 Adank, P - D60 Adeen, F - D64 Adrián-Ventura, J - E49 Agmon, G - Poster Slam C, C51 Ahissar, M - E7 Ahlfors, S - A80 Ahlfors, SP - D15 Ala-Salomäki, H - A52 Alario, F-X - C57 Alday, P - E61 Alday, PM - Poster Slam B, B68 Alemán Bañón, J - D51, E39 Alexandra, B - A1 Alho, J - C79, E79 Alho, K - C77 Aliko, S - A68 Allen, L - A36, E51 Allopenna, P - B70 Alrazi, T - D17 Altarelli, I - C82 Altmann, G - C24, D36 Alves, M - E20 Amoroso, L - A33 Amoruso, L - D5 Amunts, K - B27, C12 Anbari, Z - C80 Andersen, CM - B56 Anderson, CT - A36, E51 Andersson, A - B40, E23 Andin, J - A48, B1 Andriola, D - Poster Slam D, D70 Angelopoulou, G - A79, B11, C20, D61 Ansaldo, AI - D20 Anton, J-L - C57 Anurova, I - D81 Anwander, A - C48 Araneda Hinrichs, N - C72 Archibald, LMD - D71 Arrington, CN - A83 Artemova, A - C13 Arutiunian, V - A5 Asano, R - C72 Asaridou, S - Poster Slam C, C3 Assaneo, MF - C61, C65 Atanasova, T - A54, D59 Aubanel, V - A64 Aubert, M - E69 Auclair-Ouellet, N - D17 Auer, T - B82 Austin, T - A16 Avecilla-Ramirez, G - A29 Ávila, C - C15, E49 Azaiez Zammit Chatti, N - C73 Becker, R - B78 Bedny, M - C6 Beese, C - E3 Behne, DM - C76 Bekolay, T - D4 Belin, P - C57 Belley, É - D84 Belteki, G - A16 Ben-Shachar, M - B44, C51 Ben-Zion, D - B50 Benasich, A - B75 Benito-Aragón, C - A12 Benjamin, C - B4 Berent, I - D8 Bergerbest, D - C85 Bermudez-Diaz, I - A17 Biau, E - B61 Billiet, T - E12 Binder, J - A35, C30 Binder, JR - A36, E51 Binney, RJ - B31 Biondo, N - B49 Bird, L - D16 Bishop, D - A51, A9, D14 Bitan, T - B50, C5 Black, SE - E58 Blackburn, A - A46 Blagovechtchenski, E - A31 Blanco-Elorrieta, E - A20 Blank, I - A14 Blockmans, L - Poster Slam E, E9 Blohm, S - B71 Blomberg, F - E23 Bludau, S - C12 Bobbit, S - B77 Bocquelet, F - B66, E69 Boggio, PS - A39 Boissonneau, S - E84 Bolger, D - B58, C49 Bondarev, D - A88 Bonilha, L - A1, A36, A4, A8, B54, B9, E15, E19, E51, E70 Bonnard, M - C57 Bonte, M - B62, E59, E73 Bookheimer, S - B4 Borghesani, V - C34, C58 Bornkessel-Schlesewsky, I A65, C1 Borodkin, K - C46 Bortfeld, H - E8 Boucoiran, I - B14 Bradshaw, A - A51, A9 Brambati, SM - C16 Brand, J - D2 Branzi, FM - A24 Brennan, J - A19, A74 Brennan, JR - B41 Brilmayer, I - B68 Brisebois, A - C16 Brisson, V - D84 Brisson-McKenna, M - D24 Brodtmann, A - D16 Bronov, O - C13 Brovelli, A - A56 Brown, K - B70 Brozdowski, C - C39 Bruffaerts, R - C18, C31 Brunellière, A - B30 Bruno, N - C55 Brysbaert, M - D2 Buchwald, A - A60 Bunn, G - D54 Burks, A - B80 Busby, N - B5 Busch, RM - A36, E51 Butorina, A - A72, A88, C74 Buxbaum, LJ - B65 Carlson, C - A36, E51 Carney, J - D2 Caron-Desrochers, L - Poster Slam B, B14 Carreiras, M - A33, D5 Carter, Y - E67 Caspers, S - B27 Castelluccio, B - B21 Centanni, T - C7, D87 Centanni, TM - E87 Chabardès, S - B66, E69 Chang, E - A69 Chang, F - E68 Chang, S-E - A11, A12 Chang, W - B37 Chang, Y-N - A53 Chanoine, V - A56, C57 Charidimou, A - B8 Cheetham, J - D17 Chehrazad, S - A34 Chen, H - B26 Chen, J - A30, C6 Chen, J-K - A44 Chen, L - D45, E88 Chen, P-H - A21 Chen, SA - B18 Chen, T - D26 Chen, Y-h - E28 Chen, YL - E41 Cheng, L - A45 Chernoff, B - Poster Slam C, C50 Chernyshev, B - A72, A88, C74 Cheung, YL - E80 Chiba, K - C83 Chien, P-J - B64 B Baart, M - D30 Baciu, M - A50 Badier, J-M - A56 Baggio, G - A32, C25 Bai, J - A85, D74 Bai, X - A30 Baker, M - D67 Bakermans-Kranenburg, M B57 Balasuramanian, V - A3 Baldo, J - D22 Baldo, JV - D34 Balla, VR - D44 Banjac, S - A50 Banks, B - B63 Barbieri, E - Poster Slam E, E13 Baron, A - C45 
Barouch, B - C5 Basilakos, A - A8, B54, B9, D18, E15, E19, E34 Bauch, A - D65 Baum, S - A22, A44, E44 Baum, SR - C47 Baus, C - E55 Beach, S - C7, D87 Beach, SD - E87 Beck, L - B4 C Caballero, J - E45 Caballero-Gaudes, C - D30 Cabrera, L - D82 Cai, C - A69 Cai, Q - B71, B84, B85, E60 Calabria, M - E40 Călinescu, L - A32 Callaghan, E - B3, C22 Camerino, I - B6 Canfield, A - B21 Cao, F - A86 Caplan, D - B8, E13, E16 262 The Society for the Neurobiology of Language SNL 2019 Program  Choi, HS - Poster Slam C, C62 Choi, JY - A76, E67 Chopde, AP - C81 Chouinard, B - A20 Chow, HM - A11, A12 Chow, W-Y - D37 Chung, KKH - C9 Cichon, S - C12 Cler, G - D13, D14 Cloutman, L - D23 Cocquyt, E-M - C21 Coderre, E - A23 Author Index Coffey-Corina, S - A77 Cohn, N - A23, A39, C28, D48 Conant, L - A35, C30 Conant, LL - A36, E51 Cong, F - B26 Connell, L - D2 Constantin, IM - B14 Coombes, JS - A15 Coopmans, C - B38 Copland, DA - A15 Corina, D - A77, D50 Corona Hernández, H - A29 Correia, JM - B60 Cosgrove, AL - C53 Cosper, SH - E21 Cost, A - E55 Costumero, V - C15 Cotter, B - E43 Coulson, S - D62 Coulter, K - E44 Cournane, A - E32 Courteau, LM - A40 Cousin, E - A50 Coventry, K - A37 Cox, P - E66 Crepaldi, D - D43 Cross, A - D71 Crosson, B - A83 Cruz Heredia, AAL - C42 Cui, H - C29, D68 Curran, B - D22, D34 Curtis, A - B86 Cusack, R - C6 Dekhtyar, M - A18 de la Cruz-Pavía, I - D8 de Leeuw, F-E - B6 Deleon, J - C34 De Letter, M - A34, C21 Delrue, L - B30 Demir-Lira, ÖE - C3 den Ouden, D - E34 De Rosa, M - D43 Desautels, A - C16 Désilets-Barnabé, M - C16 De Smedt, B - B10 Devereux, BJ - C62 Devinsky, O - B67 Devlin, JT - D60 De Weer, A-S - C18 Dewenter, A - A47 Diaz, MT - C53 Díaz Calzada, L - A29 Dickens, JV - C88 Dickerson, B - C42 Dickerson, N - E16 Diedrichsen, J - E47 Dietrich, S - E30 Di Liberto, G - A63 Ding, G - C82 Ding, J - D27, E31 Dirani, J - D39, D41, E52 Dmitrieva, X - E3 Dobrego, A - D81 Donos, C - B86 Doppelbauer, L - D25 Dorjsuren, N - E2 Doyle, A - C32 Doyle, W - B67 Doyle, WK - D64 Dragoy, O - A5, C13, E1 Drane, DL - A36, E51 Dravida, S - D12 Dreyer, FR - D25 Dries, E - C18 Dronkers, N - D22 Dronkers, NF - D34 Drossinos Sancho, N - D23 Drury, J - A84, C44 Du, Y - C66 Dubarry, A-S - B58 Duffau, H - E84 Dufour, S - D58 Dugan, P - D64 Dumassais, S - B46 Dunagan, D - A19 Duncan, ES - A60 Dupont, P - C31 Duyck, W - C21 Elm, J - E15 Emmendorfer, A - E59 Emmorey, K - C39 Enarvi, S - E74, E77 Enrico, G - C79 Erickson, KI - A15 Eriksson, D - D18 Erkkinen, M - C10 Escabi, M - B70 Eulitz, C - C41 Evans, S - E47 Fernandez, CB - C53 Fernandez, E - A39 Fernandino, L - Poster Slam C, A35, A36, C30, E51 Ferreira, J - E50 Figueiredo, P - E20 Fimm, B - B27 Findlay, A - A69 Fischer-Baum, S - B82, C75 Fitz, H - B69, E68 Flecken, M - A28, A38 Fleur, D - A28 Flinker, A - B67 Floegel, M - A55 Flurie, M - C17 Flynn, M - B51 Flögel, M - E57 Foldi, NS - A2 Forseth, KJ - A58 Forster, M-T - B7 Foucart, A - E55 Franco, AR - C19 Francois-Nienaber, A - A7 Frank, M - B74 Frenck-Mestre, C - A43, C49 Fridriksson, J - A1, A4, A8, B54, B9, D18, E15, E19, E34, E70 Friederici, AD - B64, C4, C48, E3, E6 Friedman, D - D64 Friedman, RB - C88 Friedrich, C - D65 Frijters, JC - D71 Fritz, I - C25 Fromont, LA - B42 Frost, SJ - D72 Fuchs, S - E57 Fuhrmann, S - B7 Fuhrmeister, P - C68 Fyshe, A - A20 Gazola, AA - C19 Geng, S - A33 Georgieva, S - Poster Slam A, A16 Gertel, VH - C53 Gervain, J - D8, E4, E75 Ghaleh, M - C80 Gheorghiu, F - A68 Ghesquière, P - E12, E9 Ghosh, S - C10 Giglio, L - C33 Gilbert, A - E44 Gilbert, AC - C47 Gilbert, R - 
B72 Giordano, M - C26 Giraud, A-L - A64 Giroud, N - C70, E44 Giskes, A - A32 Gispert-Sanchez, S - A55 Glerean, E - E79 Glezer, L - E66 Gnedykh, D - A31 Goddard, K - D56 Goh, JOS - A21 Goldin-Meadow, S - C3 Golovanova, I - D9 Golovteev, A - A5 D Daffner, K - E17 Dai, B - C22 Dale, C - C58 Daliri, A - C60 Damera, S - E66 Damian, M - D55 Das, S - B80 Davis, C - E78 Davis, K - B80 Davis, MH - A61, A66, B72, E81 Davydova, A - D9 De, A - B26 De Aguiar, I - C81 de Almeida, RG - B46, D24 de Bruin, A - D30 De Deyne, S - C31 Dehaene, S - C82 Dehaene-Lambertz, G - C82 de Jesus, DB - D42 E Economou, M - D76, E12 Egorova, N - Poster Slam D, D16 Eigsti, I-M - B21, C11, E25 F Fahimi Hnazaee, M - A34 Fairs, A - D58 Fama, M - C80 Fanda, L - D64 Fang, X - E88 Fargier, R - A54, D59 Farshchi, S - B40 Federmeier, K - B35, C23 Federmeier, KD - C33, C89 Fedorenko, E - A14 Feng, C - D55 Feng, J - B84 Feng, S - B16 Feng, X - Poster Slam C, C82 G Gabouer, A - E8 Gabrieli, J - C7, D87, E87 Gallagher, A - B14 Gao, Z - A41 García Sánchez, C - E40 Garg, A - Poster Slam B, B53 Garnett, E - A11 Gassner, T - C46 Gawel, O - E61 The Society for the Neurobiology of Language 263 Author Index Gomez, P - C87 Gonzalez-Gomez, N - A17 Gonçalves, A - C19 Gordeyeva, E - C13 Gordon, J - D62 Gore, K - B13 Gorniak, R - B80 Gorno-Tempini, ML - C34, C58 Gosselke Berthelsen, S - B47 Gotts, SJ - D28 SNL 2019 Program Goucha, T - C48 Gough, P - D13 Goutsos, D - B11, D61 Gow, D - A80 Gracco, V - A44, D12 Grainger, J - A82 Grant, A - E44 Grasemann, U - A18 Grigorenko, E - D9 Gross, J - C78 Gross, R - B80 Gross, WL - A36, E51 Grunden, N - E40 Gryllia, S - A45 Gryllou, K - B11 Grünert, M - D31 Gu, J - D69 Guediche, S - D30 Guenther, FH - C60 Guerrero, B - A46 Guevara Erra, R - E75 Gullberg, M - E23 Gullifer, J - E44 Guo, R - B16 Gutierrez-Sigut, E - A57, B83, E47 Gwilliams, L - E62 Hartzell, J - C56 Haug Olstad, AM - C25 Hauptman, A - E17 Hawelka, S - B81 He, L - B39 Healy, M - B31 Hedge, M - D82 Heffernan, J - A35 Heffner, C - E78 Heidlmayr, K - C22, C27 Heim, S - B27, C12, D31, D4 Henson, RN - B72 Herbay, A - B42, E22 Herbet, G - E84 Hernández, M - E49 Herron, T - D22, D34 Herron Lee, J - D50 Hertich, I - E30 Hervais-Adelman, A - A38, B78 Hess, K - A29 Hickok, G - A49, B59, B9, C63, D88, E34 Higgins, J - E13, E16 Hilger, D - C12 Hillis, AE - B9 Hilton, C - C72 Hinkley, L - C58 Hirsch, J - D12 Hoeft, F - D76, E45, E9 Højlund, A - D21, D79, E24 Hok, P - B7 Holmer, E - A48, B1 Honari-Jahromi, M - A20 Hong, T - D72 Honma, S - C58 Horne, M - B47 Houde, J - C58 Howell, P - D13 Hsu, C-T - C37 Huang, Z - B67 Hubbard, R - B35, C23 Hubner, LC - C19 Hueber, T - E69 Huettig, F - E61 Hultén, A - A81, B29, B73, D73 Humphreys, G - B28 Humphries, C - A35, C30 Humphries, CJ - A36, E51 Hutzler, F - B81 Hyder, R - Poster Slam D, C40, D79 Ikäheimonen, A - B73 Ille, S - D3 Ilmoniemi, R - B12 Ince, RAA - C78 Ingram, R - C14 Isaacs, ML - A15 Ishinabe, H - D68 Itzhak, I - C47 Ivanova, M - D22 Ivanova, MV - Poster Slam D, D34 Jansma, B - E59 Janssen, N - D57, E53 Janssen, R - B62 Jantzen, K - A73 Jantzen, M - A73 Japardi, K - B4 Jasińska, KK - D72 Jean Luc, A - C55 Jean Luc, V - C55 Jefferies, B - C35 Jensen, M - C40, D79 Jensen, O - B61 Jenson, D - A62 Jeong, H - C29, D68 Jerônimo, GM - C19 Jesse, A - A75 Ji, L - D47 Jiang, X - E66 Jin, Z - E48 Joanisse, MF - D71 Jobst, B - B80 Jockwitz, C - B27 Johnson, L - B9, D18 Johnson, LP - E19 Jonen, M - D31 Jouen, A-L - A59, E54 Juha, L - C79 Julien, S - C55 Junttila, K - E74, E77 Kawashima, R - C29, D68 Ke, A - B41 Kean, H - 
A14 Kearney, E - C60 Keator, L - A4, E15 Keith, J - E58 Kelekis, D - D61 Kelekis, N - A79 Kell, C - E57 Kell, CA - A55, B7 Kepinska, O - E45 Kere, J - B24 Kessels, RPC - A47, B6, D57, E53 Kessler, A - A77 Khachatryan, E - A34 Khalighinejad, B - A63 Khlif, M - D16 Khosa, N - A60 Khwaileh, T - A84, C44, E35 Kibreab, M - D17 Kielar, A - A7 Kikuchi, T - E29 Kiran, S - A18, B8, E13, E16 Kircher, T - B2 Kivisaari, S - A2, B29 Klaus, J - C54 Klein, D - A44, E44 Kleinberger, R - C10 Kleinschmidt, DF - B60 Kliesch, M - A42 Knockaert, N - C21 Knoop, CA - D85 Knudsen, B - E61 Koch, C - C11 Koch, X - D46 Kochari, A - A26 Koeda, T - C86 Koenraads, S - A11 Kojima, K - Poster Slam A, A69 Kolozsvari, O - A87 Konina, A - D81 Konja, C - C38 Kopachev, D - C13 Kornilov, S - D9 Korompoki, E - C20 Kösem, A - C22 Koskinen, M - B73 Kostromina, S - A31 Kothare, H - Poster Slam C, C58 H Hagoort, P - A38, A81, B3, C22, C27, C33, C89, D38, E26 Hakonen, M - B73 Halai, A - B13, B5, C14, D19 Halai, AD - C2, D23 Hale, J - A19, B34 Hämäläinen, J - A87, C73, D7 Hammer, T - D17 Han, Z - B16, E27 Hanganu, A - D17 Hansen, P - B22 Hanslmayr, S - B61 Hansmeyer, L - B7 Harkrider, A - A62 Harrington, R - A83 Hartmann, C - D62 Hartwigsen, G - B55, B64, C54 I Iaia, F - E40 Idrissi, A - A84, C44, E35 Iiro, J - C79 Ikeda, A - E29 J Jääskeläinen, IP - B73 , E79 Jackson, ES - Poster Slam D, D12 Jackson, PL - D84 Jackson, R - D29 Jacobs, J - B80 Jacobs, M - A36, E51 Jager, L - E63 Jagoda, L - C70 Jahkola, L - D83 K Kaan, E - C81 Kahana, M - B80 Kahane, P - A50, E69 Kandylaki, KD - A70 Kaplan, E - A75 Karavasilis, E - A79, D61 Karhila, R - E74, E77 Karimi, H - C53 Kartushina, N - D56 Kasselimis, D - Poster Slam A, A79, B11, C20, D61 Kathol, I - D17 Katzir, T - C5 Kaufeld, G - C72 Kautto, A - E11 Kauttonen, J - B73 Kavroulakis, E - C20 264 The Society for the Neurobiology of Language SNL 2019 Program  Kotz, S - B63, E59 Kotz, SA - A56, A70 Kourtzi, Z - A16 Kousaie, S - A44, E44 Koutsogiannaki, M - D56 Koyanagi, K - D68 Kozloff, V - D66 Kraimeche, S - B14 Krass, K - C24 Author Index Kremneva, E - C13 Krethlow, G - D59 Krieg, S - D3 Krishnamurthy, LC - A83 Krishnamurthy, V - A83 Krishnan, S - Poster Slam D, D11, D14 Kristinsson, S - Poster Slam A, A1 Kronbichler, M - B81 Kropff, I - B7 Krotenkova, M - C13 Kröger, BJ - D4 Ktori, M - D43 Kuhn, T - B4 Kujala, J - A52, C84, D73 Kujala, T - B17, B20, B23, C8 Kunieda, T - E29 Kuo, C-H - D53 Kurimo, M - E74, E77 Kurmakaeva, D - A31 Kurthen, I - A65, C70 Kuuluvainen, S - B23 Kwok, E - A10 Lee, C-L - A21 Lee, HK - E42 Lee, J - C47 Lee, S-H - D26 Lega, B - B80 Le Godais, G - E69 Lehtihalmes, M - A6 Lemaître, A-L - E84 Leminen, A - C77, D44, E24 Leminen, M - C77, D44 Lemmens, R - B10 Leonard, C - D20 Leong, V - A16 Leppänen, P - B26, C73, D7 Lewis, A - A26, E26 Li, B - D15 Li, J - B34, D39 Li, K - A30 Li, L - C82, E48 Li, M - B16, B70 Li, PL - C37 Li, X - C66, E27, E31 Liao, C-C - D26 Liao, C-H - Poster Slam E, D37, E38 Liberov, C - D67 Liljander, S - B76 Liljeström, M - A52, D73 Lin, F-H - B73 Lin, N - B33 Lin, W-T - A21 Lioumis, P - B12 Litcofsky, K - E13 Litovsky, C - Poster Slam E, E2 Liu, H - B18 Liu, J - E86 Liu, M - B36 Liu, X - E82 Liu, Y - C19 Lo, C-W - B41 Loberg, O - C73, D7 Logvinenko, T - D9 Lohvansuu, K - D7 Loiotile, R - C6 Long, L - B80 Long, Y - A30 Longcamp, M - C55 Loring, DW - A36, E51 Loureiro, F - C19 Lovett, MW - D71 Lowe, A - B73 Lowe, M - A36, E51 Lu, B - B31 Lu, C - A30 Lucchese, G - D25 Luh, W-M - B34 Lukic, S - C34 Lund, T - A37 Lund, TE - B56 Luo, C - E48 Luthra, S - B60 
Lwi, S - D22 Ly, T - B4 Lykartsis, A - A70 Lynott, D - D2 Lyytinen, H - B24 Mareike, B-T - C79 Margulies, D - C35 Marin, L - C15 Markiewicz, R - A27 Marslen-Wilson, W - C62 Martignetti, L - A40 Martin, A - A19, D28 Martin, C - D56, E39 Martin, S - B55 Martinez-Alvarez, A - E4 Martino, D - D17 Masson-Trottier, M - D20 Matar, S - D41 Matchin, W - E34 Matsuhashi, M - E29 Matsumoto, R - E29 Mattei, J - E66 Mauranen, A - D81 Maurer, U - C9 May, S - C7, D87, E87 Mazaheri, A - A27, C43, D42 Mazerolle, E - D17 McBride, C - C9 McCullough, S - C39 McDaniel, A - E56 McDougle, CJ - D15 McKitrick, M - E38 McMahon, KL - A15 McQueen, J - E63 McQueen, JM - B53 McSween, M-P - A15 Mechtenberg, H - E76 Mednick, J - B79 Medvedev, AV - C67 Meersmans, K - C31 Meijs, EL - D38 Melamed, T - E5 Meltzer, J - A7 Meng, Q - D69 Meng, X - C82 Menninghaus, W - D85 Mesgarani, N - A63, B80 Mesite, L - B60 Mestre, D - A43 Meyer, B - D3 Meyer, L - A19, B19, E3 Meyer, M - A42, A65, C70 Meyer, NH - B6 Michaelis, K - C67 Michel, H - C55 Michelas, A - D58 Midgaard, K - E65 Miikkulainen, R - A18 Mikko, S - C79 Mikusova, N - D81 Miller, B - C34 Miller, D - D51 Miller, L - A77 Miller, Z - C34, C58 Mills, DL - D54 Mine, F - D68 Mineroff, Z - A14 Minotti, L - A50 Mirault, J - A82 Mirman, D - C17 Misirliyan, C - B42 Miyakoshi, M - B18, C67 Miyazaki, A - C83 Mizuiri, D - C58 Mkrtychian, N - A31 Mo, J - C9 Mody, M - D15 Mohr, B - D25 Molinaro, N - A33 Møller, MLH - D21 Momsen, J - D62 Monchi, O - D17 Mongelli, V - D38 Monsch, AU - A2 Monto, N - B70 Monzalvo, K - C82 Moran-Mizrahi, M - C85 Morgan, V - A36, E51 Moritz-Gasser, S - B25, E84 Morris, R - A83 Mostofsky, S - C11 Mota, MB - D42 Mottonen, R - A13 Mountford, H - A17 Mueller, JL - E21 L Laasonen, M - C8 Lacey, E - C80 Laganaro, M - A54, A59, D59, E54 Lally, C - B82 Lam, N - A81 Lamalle, L - A50 Lamb, DG - C81 Lambon Ralph, MA - A24, A53, B5, B13, B28, C2, C14, D19 D29, E29 Lamekina, I - B76 Lametti, DR - B77 Lancheros, M - A59, E54 Landi, N - B79 Lang, S - D17 Langdon, C - C71, D70 Langfitt, JT - A36, E51 Lau, E - C42, D37 Lauricella, M - C58 Law, R - B43, E32 Law, S-P - E71 Lawyer, L - A77 M Ma, M - B71 Machover, T - C10 MacIntyre, A - C59 Mack, J - D35 MacSweeney, M - A57, E47, E86 Maegherman, G - D60 Magnuson, J - B70 Maguire, K - A15 Maguire, M - E5 Mahmood, B - B67, D64 Mahon, B - C50 Mailend, M-L - B65 Mainela-Arnold, E - E11 Mak, H - B84 Mäkelä, J - C8 Mäkelä, S - C84 Maksumov, D - B67 Malcorra, BLC - C19 Malinen, E - A6 Malyutina, S - E1 Mancini, S - B49 Manfredi, M - A39 Mangnus, M - E53 Männel, C - E21, E3, E6, E7 Manouilidou, C - B45 Mantegna, F - E61 Marantz, A - D41, E36, E37, E62 Marcotte, K - C16, D20 Marebwa, B - A8, B54, B9 Marebwa, BK - E70 The Society for the Neurobiology of Language 265 Author Index Mueller, WM - A36, E51 Muhlack, B - B74 Mühleisen, TW - C12 SNL 2019 Program Mukoyama, Y - D68 Mullins, P - B52 Muraki, E - C32 Muralikrishnan, R - A84, E35 Mustafawi, E - A84, C44, E35 Musz, E - C6 Myachykov, A - B76, D52 Myers, E - C68, E76, E78 Myers, EB - B60 Negwer, C - D3 Nelissen, N - C18 Neuhaus, J - C34 Neuloh, G - D31 Neuschwander, P - C70 Nevat, M - B50 Newbury, D - A17 Newman, A - A67 Newman, AJ - D86 Newman, S - B51, E85 Newman-Norlund, R - D18 Nguyen, A - D66 Nie, J - A63 Nieto-Castañón, A - C60 Nieuwland, M - B38 Nieuwland, MS - A28 Nikolaev, A - C19 Nikolaeva, A - A72, A88 Nikulin, V - E3 Nishimoto, S - C52 Noah, A - D12 Noe, C - C75 Nora, A - B24 Norman, T - E83 Norrman, G - B15 Novitskiy, N - D52 Nuttall, HE - D60 Okamoto, K - C29, D68 
Olcina-Sempere, G - E49 Oostenveld, R - A81 Oram Cardy, J - A10 Orpella, J - C61 Orrin, D - D64 Ortiz-Mantilla, S - B75 Ortiz Barajas, MC - E75 Osa Garcia, A - C16 Oseki, Y - E37 Ostarek, M - E61 Østergaard, K - D21, D79 Osterhout, L - D53 Ou, J - E71 Ouellette, H - D86 Ovchinnikova, I - D9 Ozernov-Palchik, O - C7, D87, E87 Ozker Sertel, M - B67 Payne, H - A57 Paz-Alonso, P - C56 Paz-Alonso, PM - B36 Pedyash, N - C13 Peeters, D - B3 Peleg, O - C85, E83 Penaloza, C - Poster Slam A, A18, B8 Perdue, M - Poster Slam B, B79 Perea, M - B83, B87, C87 Perfetti, C - D77, E88 Pergandi, JM - A43 Perrachione, T - A76, E67 Perron, M - D84 Perrone-Bertolotti, M - A50 Persichetti, AS - D28 Petrakou, C - D61 Petrides, M - A79, D61 Petrova, A - D52 Petrozzino, G - D78 Pexman, PM - C32 Phan, TV - E12 Philip, L - B54 Phillips, N - A44, E44 Phillips, SF - B48 Piai, V - A47, B53, B6, D57, E50, E53 Pichat, C - A50 Pieperhoff, P - C12 Pike, B - D17 Pilcher, W - C50 Pillay, SB - A36, E51 Pinango, M - E28 Pinheiro, A - C69 Pitkow, X - A58 Pobric, G - C14 Poeppel, D - C61, C65 Polczynska, M - Poster Slam B, B4 Polinsky, M - E38 Porges, EC - C81 Potagas, C - A79, B11, C20, D61 Poudel, S - E5 Poulisse, C - C43 Prekovic, S - A17 Price, CJ - E47 Prior, A - B50 Prokofyev, A - A72, A88 Protzner, AB - C32 Provost, S - B14 Prystauka, Y - D36 Psaroba, A-M - D61 Puce, A - E85 Pugh, K - B79 Pugh, KR - D72 Pulvermüller, F - D25 Purcell, JJ - D78 Pylkkänen, L - A20, A25, B43, B48, D39, D41, E32, E52 Qian, P - D33 Qing, CC - C59 Qiu, X - D33 Qu, Q - Poster Slam D, D55 Qualter, K - E2 Quandt, L - E46 Reynolds, G - E17 Richlan, F - B81 Riecke, L - E73 Riesenhuber, M - E66 Rijntjes, M - A79, D61 Rimikis, S - A60 Rimmele, JM - C65 Ripolles, P - C61 Rivera-Figueroa, K - E25 Rocabado, F - B87 Rocca, R - A37, B56 Rochon, E - C16, D20 Rødland, E - E65 Rodriguez, AD - A15 Roehm, D - C1 Roelofs, A - B53, D57, E50, E53 Roger, E - A50 Roger, K - B14 Rogers, TT - D29 Rolke, B - E30 Roll, M - B47 Rollo, P - B59, B86 Romanovska, L - B62 Romero, J - A29 Rommers, J - A28, C33, C89 Ronimus, M - B24 Rorden, C - A1, A4, A8, B54, B9, D18, E15, E19, E70 Rosa, E - B87 Rosenkranz, A - B2 Rossi, E - E43 Rothman, J - D51 Roussel, P - B66, E69 Rousselet, G - D32 Roy, J-P - D84 Royle, P - A40, B42, B45, E22 Rudner, M - A48, B1 Rueckl, J - B70 Ruiz Tovar, S - A29 Runnqvist, E - A56, C57 N Näätänen, R - E74, E77 Nagarajan, S - A69, C58 Nagels, A - B2 Naigles, LR - E10 Nakai, T - B18, C52 Nakamura, MS - C81 Nam, H - B70 Nathaniel, U - C5 Nazarian, B - C57 O O’Donnell, E - A23 O’Neil, K - A67 O’Riordan, CE - D54 O’Rourke, E - A23 Obrig, H - E6, E7 Oganian, Y - A69, D46 Ohta, S - E37 P Pablos, L - A45 Palomar-García, MÁ - E49 Palva, S - D81 Pantazis, D - D87 Papageorgiou, G - A79, B11, C20, D61 Paradis, C - B40 Parcet, MA - C15, E49 Park, H - B61, C78 Parker, G - B5 Parker, K - D22 Parkkonen, L - C8 Parrish, A - A25 Parrish, T - E13, E16 Partanen, E - B20, E24 Parviainen, T - B26 Pattamadilok, C - Poster Slam B, B58, C57 Patterson, K - C14 Pavlova, A - A88 Q Qi, T - C4 Qi, Z - D66 R Raghavan, M - A36, E51 Ralph, Y - E5 Ramezani, M - D17 Ramus, F - C82 Ranasinghe, K - C58 Randall, B - C62 Raposo, A - E20 Rapp, B - D78, E13, E16, E18, E2 Rastle, K - B82, E81 Rauschecker, J - E66 Rautu, IS - D3 Razorenova, A - A72, C74 Reilly, J - C17 Reisert, M - A79, D61 Renvall, H - B24, D83 266 The Society for the Neurobiology of Language SNL 2019 Program  Author Index S Saalasti, S - Poster Slam E, C79, E79 Sabu, S - A3 Sajjadi, S - C14 Sakreida, K - D31 Salmelin, R - A52, 
B24, B29, C84, D1, D73, D83 Salo, K - B12 Saltuklaroglu, T - A62 Sammler, D - B64 Sams, M - B73, E79 Samuel, AG - D30 Sanchez Pinho, P - A39 Santens, P - C21 Santhana Gopalan, PR - D7 Sarah, P - C55 Sarna, J - D17 Sarzedas, J - C69 Saur, D - B55 Schaadt, G - B19, C4, E3, E6, E7 Schaeverbeke, J - C18 Scharinger, M - B74, D85 Schelinski, S - C64 Scheurich, R - A73 Schevenels, K - B10, D76 Schild, U - D65 Schiller, N - A45, B57, E63 Schilling, LP - C19 Schlesewsky, M - A65, C1 Schloss, B - C37 Schneider, J - D66 Schneider, JM - E5 Schoenhaut, A - A80 Schoffelen, J-M - C22 Schoffelen, JM - Poster Slam A, A81 Schoknecht, P - C1 Schriefers, H - A26 Schumacher, R - C2 Schuster, S - B81 Schwartz, J-L - A64 Schwartz, L - A36, E51 Schwartz, M - B65 Schwendemann, M - C48 Schönström, K - A48 Scott, S - C59 Scott, T - E67 Segaert, K - A27, C43, D42 Seibold, V - E30 Sein, J - C57 Seki, A - C86 Sepulcre, J - A12 Sereno, SC - D32 Seyfried, F - C37 Shafer, V - B75, D67 Shah-Basak, P - A7 Shamma, S - A63 Shao, J - A71 Shao, X - E27 Sharan, A - B80 Sharer, K - D40 Sharp, B - C38 Shea, J - D78 Shellikeri, S - E58 Sheth, S - B80 Shetreet, E - E33 Shi, J - A85 Shimotake, A - E29 Shtyrov, Y - A31, B32, B47, B76, C40, D44, D52, D79, E24 Shu, H - C82, D72 Shuai, L - D72 Shum, J - D64 Shwe, W - C34, C58 Sierpowska, J - A47, B6, D57, E53 Silfverberg, S - D83 Šimko, J - E79 Simonsen, HG - B22 Simos, P - A79 Siqueira, ECG - C19 Siu, CTS - C9 Sivaratnam, G - A7 Skipper, JI - A68, B77 Skoe, E - E10 Slivac, K - A38 Small, SL - C3 Smalle, E - Poster Slam A, A13 Smallwood, J - C35 Smidarle, A - C19 Smith, H - D14 Smith, HJ - D11 Smolander, A-R - E74, E77 Snell, J - A82 Snider, SF - C88 Soder, RB - C19 Sohoglu, E - A61, A66, B72 Sollberger, M - A2 Sollmann, N - D3 Sorati, M - C76 Sorvisto, P - B52 Spalek, K - D46 Spanoudis, G - C36 Specht, K - E65 Sperling, M - B80 Spray, G - A11 Stager, CL - D71 Stahl, B - D25 Staib, M - A37 Stark, B - B54, D18, E34 Stark, BC - A1, B9 Steddy, S - B45 Stefanakis, G - C10 Stein, J - B80 Steinbach, KA - D71 Steinhauer, C - B45 Steinhauer, K - A22, A40, B42, E22 Stevens, MC - E25 Stille, CM - D4 Stockall, L - B45, E36 Stokes, R - B59 Storms, G - C31 Stratford, S - D35 Strauß, A - A64 Strijkers, K - C57, D58 Stroganova, T - A72, A88, C74 Stupina, E - Poster Slam C, C13 Su, I-F - E42 Sugiura, M - C29, D68 Suni, A - E79 Suppanen, E - B17 Swanson, SJ - A36, E51 Szaflarski, JP - A36, E51 Szmalec, A - C21 Tecoulesco, L - E10 Teghipco, A - D88 Tehan, T - E33 Teng, X - Poster Slam B, B71 Terporten, R - C22 Terrezza, J - A3 Teti, S - A7 Theodore, R - B70 Thiede, A - C8 Thijssen, S - B57 Thompson, C - E13, E16 Thompson, P - A51, A9 Thomsen, SG - D21 Thornton, D - A62 Thors, H - B54 Thothathiri, M - D40 Tian, L - B26 Tian, X - B39, B71, B85, D33 Timofeeva, P - D5 Titone, D - A44, E44 Titova, O - D9 Tivarus, M - A36 Todorović, S - A56 Tong, J - A35 Tong, J-Q - A36, C30, E51 Topalidou, M - C63 Tountopoulou, A - C20 Tranchina, S - C80 Tremblay, J - B14 Tremblay, P - D84 Trivarus, M - E51 Troutman, SBW - C53 Tseng, W-YI - A21 Tsolakopoulos, D - A79, B11, C20, D61 Tuckute, G - A14 Tuladhar, A - B6 Tulling, M - E32 Tunçgenç, B - C11 Tung, T-Y - B41 Turkeltaub, P - C80 Turkeltaub, PE - C67, C88 Turunen, P - B23 Twomey, T - E86 Tyler, LK - C62 Tylén, K - A37 Tyulenev, N - A72 Tzeng, A - E41 Unger, N - C12 Ungrady, M - C17 Uri, H - C79 Usai, F - D86 Usami, K - E29 Uther, M - E74 Vanassing, P - B14 Van Bouwel, K - C18 van Bree, S - A61 Vandenberghe, R - C18, C31 van den Broek, D - B69 Vandenbulcke, M - C18 
Vanderauwera, J - E12 Vandermosten, M - B10, D76, E12, E9 van der Stelt, C - C80 T Taillefer, C - B14 Tainturier, M-J - B52 Takahashi, D - C29 Takahashi, R - E29 Takashima, A - B53, C27 Talola, S - B20 Tan, Y - E26 Tandon, N - A58, B59, B86 Tang, D-L - Poster Slam E, E56 Tao, L - E60 Tao, Y - E18 Tapia, JL - B87 Taylor, J - B63 Taylor, JE - D32 Taylor, JSH - E81 Taylor, KI - A2 U Uchiyama, H - C86 Uddén, J - A81 Ulanov, M - A88 V Vaalto, S - B12 Vaillancourt, J - D84 Vainio, M - E79 Valles Capetillo, DE - C26 van ‘t Veer, A - B57 The Society for the Neurobiology of Language van de Weijer, J - B40 van Gaal, S - D38 van Hinsberg, D - D85 Van Hulle, M - A34 Van IJzendoorn, M - B57 267 Author Index van Mierlo, P - C21 van Vliet, M - D1 Varjola, J - B23 Varkanitsa, M - B8, D61 Vassilopoulou, S - B11, C20 SNL 2019 Program Vatche, B - A49 Velonakis, G - A79, D61 Vergara-Martinez, M - C87 Vergara-Martínez, M - B83, B87 Vidal, Y - D43 Vigliocco, G - B31, B65 Villar-Rodríguez, E - E49 Villringer, A - E7 Virtala, P - B20, C8 Vogt, C - A55 von Kriegstein, K - C64 Vukovic, N - E45 Vulchanova, M - A32 Wang, Y - D69 Wang, Y( - B72 Ward, C - D35 Warner, G - A8 Watkins, K - D13, D14 Watkins, KE - D11, D63, E56 Weber, K - C27 Weiller, C - A79, D61 Weis, E - C34 Weiss, Y - C5 Weissler, RE - A74 Welch, A - C58 Wellner, B - B27 Wen, Y - A82 Weng, Y-L - D66 Werden, E - D16 West, E - D63 Wheeldon, L - C43 White, B - C71 Wikman, P - C77 Wiley, R - D78, E16 Williams, N - D81 Williamson, JB - C81 Willis, A - E46 Willis, H - D11, D14 Willment, K - E17 Wilmskoetter, J - A8, B9 Wilson, A - A51 Wilson, S - D18 Wiltshire, C - Poster Slam D, D63 Winkler, I - B17 Witteman, J - B57, E63 Wolfgruber, J - E61 Wolpert, M - A22, C47 Wong, J-S - A21 Wong, PCM - C9 Woodhead, Z - A51, A9 Woodruff, M - D54 Woollams, A - B13, D19 Woollams, AM - D23 Woolnough, O - B86 Worrell, G - B80 Wouters, J - E12, E9 Wray, S - Poster Slam E, E36 Wreh, C - D15 Wu, J - B26 Wu, KC - C9 Wu, Y - E82 Xie, X - E76 Xiong, Y - B51, E85 Xu, W - A87 Xu, X - D45 Xu, Y - B33 Yang, J - B36, B71, B85, D33 Yang, M - B80 Yang, X - B33, E31 Yang, Y - A45, A85, D27, D69, D74, E86 Yao, B - Poster Slam B, B63 Yeaton, J - A63 Ylinen, A - C77 Ylinen, S - B17, C73, E74, E77 Yokoyama, S - C83 Yoshida, K - E29 You, H - B70 Yourganov, G - A1, A4, B54, D18 Yu, C-L - D26 Yu, Y - D67 Yunusova, Y - E58 Yurchenko, A - A5 Yvert, B - B66, E69 Zhang, J - E48 Zhang, L - B16, B39, D33 Zhang, M - B33, E73 Zhang, Q - A86, B26, E31, E48 Zhang, R - B36 Zhang, S - B34 Zhang, X - B36, D12 Zhang, Y - A71, D47, E31, E80 Zhang, Z - D33 Zhao, C - C81 Zhao, H - A30 Zhao, W - B26 Zhao, Y - B5 Zheng, XC - C9 Zhou, F - A30 Zhou, L - D77 Zhou, S - A30 Zhou, X - B37, E60 Zhu, M - E60 Zhu, Q - B67 Zhukova, M - D9 Zink, I - B10 Zinman, L - E58 Zoefel, B - Poster Slam A, A61 Zöllner, JP - B7 Zuanazzi, A - C61 Zuckerman, B - C17 Zuev, A - C13 Zyryanov, A - Poster Slam E, C13, E1 W Wagner, M - B75 Wagner, V - D85 Waibel, A-M - C41 Walker, G - B59 Wallentin, M - A37, B56, D21, E24 Wanda, P - B80 Wang, C - D76 Wang, D - B61 Wang, F - C9 Wang, G - E60 Wang, H - E48 Wang, J - C9, D74 Wang, K - B16, D67 Wang, L - B37, D33 Wang, S - E82 Wang, X - B36, C35, D33 X Xiao, F - A1 Xie, K - E13 Y Yablonski, M - B44, C51 Yamaguchi, H - C52 Yan, X - A86 Yang, CL - C9 Yang, FP - E28 Yang, H - B33 Z Zaghloul, KA - B80 Zalonis, I - C20 Zappa, A - A43, C49 Zebe, F - B74 Zesiger, P - A54 Zhai, Y - A30 Zhang, A - E28 Zhang, C - A71, E80 Zhang, G - B33 Zhang, H - A22, C53 268 The Society for the Neurobiology of Language 
SAVE THE DATE: The 2021 Meeting of the Society for the Neurobiology of Language will be held in Brisbane, Australia, October 7-9, 2021.
Join Us in Philadelphia: SNL 2020, October 21-23.