LSTM toolkit that can estimate incremental processing difficulty and adapt to new contexts
van Schijndel and Linzen (2018) “A Neural Model of Adaptation in Reading”
Left-corner parsing toolkit that can estimate incremental processing difficulty and be regressed against human behavioral data
van Schijndel, Exley, and Schuler (2013) “A Model of Language Processing as Hierarchic Sequential Prediction”
125 pre-trained English LSTMs (trained on 2m, 10m, 20m, 40m, or 80m tokens; 25 models per training size)
van Schijndel, Mueller, and Linzen (2019) “Quantity doesn’t buy quality syntax with neural language models”
5 LSTMs trained on English Wikipedia and 5 LSTMs trained on Spanish Wikipedia (80m tokens each)
Davis and van Schijndel (2020) “RNN LMs Always Learn English-Like RC Attachment”
5 discourse-intact English Wikipedia LSTMs (80m tokens)
Davis and van Schijndel (2020) “Interaction with Context During RNN Sentence Processing”
25 discourse-intact English Wikipedia LSTMs and 25 discourse-intact English OpenWebText LSTMs
Davis and Altmann (2021) “Finding event structure in time: What recurrent neural networks can tell us about event structure in mind”