26 May 2023
John Starr presented his work studying the processing trade-offs between syntax and phonology at the Manchester Phonology Meeting (MFM).
23 May 2023
Fangcong Yin was honored as a Merrill Presidential Scholar, one of the highest undergraduate honors at Cornell! Well done, Fangcong!
3 Apr 2023
Marten gave an invited talk at the University of Florida on the linking hypotheses between surprisal and human language processing.
31 Mar 2023
C.Psyd co-organized the second Workshop on Processing and Evaluating Event Representations (PEER) with FACTS.lab.
9 Mar 2023
C.Psyders rocked HSP 2023 with 6 poster presentations.
20 Jan 2023
Marten gave an invited talk at Düsseldorf University exploring the linking hypotheses between surprisal and human language processing.
7 Dec 2022
Sidharth Ranjan presented his work on discourse influences on Hindi word order at EMNLP.
20 Nov 2022
Sidharth Ranjan presented his work on dual mechanism priming at AACL.
2 Jul 2022
Congratulations to Dr. Forrest Davis for his successful PhD defense! He’s off to amaze the world in his new position at MIT Linguistics.
15 Apr 2022
Congratulations to Dr. Katie Blake for her successful PhD defense! She’s going to do great in her new position at Amazon.
1 Apr 2022
C.Psyd hosted the first annual CNY workshop on computational cognitive models of meaning.
25 Mar 2022
John Starr presented his mind rhyme work at HSP 2022.
22 Feb 2022
John Starr gave an invited talk at UC Berkeley’s Phorum about his initial findings on Mind Rhymes, a phenomenon where a rhyme expectation is violated to humorous effect.
1 Feb 2022
Marten gave an invited talk at UC Irvine on methods for applying psycholinguistic priming to neural networks.
25 Jan 2022
Marten gave an invited talk at Dongguk University on methods for applying psycholinguistic priming to neural networks.
3 Dec 2021
Marten gave an invited talk at University of Chicago on the weaknesses of statistical learning methods for inferring linguistic representations and processing.
15 Oct 2021
Marten gave an invited talk at Georgia Tech on the weaknesses of statistical learning methods for inferring linguistic representations and processing.
26 Aug 2021
Timkey and van Schijndel (2021; EMNLP) show that a few rogue dimensions dominate cosine similarity in Transformer language models, obscuring representational quality. We introduce a simple post-processing correction that requires no retraining.
25 Jun 2021
Paper published in Cognitive Science!
Surprisal can explain the existence of garden path effects in reading times, but not the magnitude of those effects.
10 May 2021
2 papers accepted at ACL and Findings of ACL:
1) Davis and van Schijndel (2021) show that linguistic knowledge in language models can be modeled as constraints.
2) Wilber et al. (2021) show that current abstractive summarization is extremely shallow, often simply emulating extractive summarization.