What’s Going On in Neural Constituency Parsers? An Analysis, Gaddy et al., 2018 [Paper] [Notes] #nlp
Two Methods for Domain Adaptation of Bilingual Tasks: Delightfully Simple and Broadly Applicable, Hangya et al., 2018 [Paper] [Notes] #nlp
What do you learn from context? Probing for sentence structure in contextualized word representations, Tenney et al., 2019 [Paper] [Notes] #nlp
BPE-Dropout: simple and effective subword regularization, Provilkov et al., 2019 [Paper] [Notes] #nlp
From English To Foreign Languages: Transferring Pre-trained Language Models, Tran, 2020 [Paper] [Notes] #nlp
Evaluating NLP models via contrast sets, Gardner et al., 2020 [Paper] [Notes] #nlp
Byte Pair Encoding is Suboptimal for Language Model Pretraining, Bostrom et al., 2020 [Paper] [Notes] #nlp
Translation artifacts in cross-lingual transfer learning, Artetxe et al., 2020 [Paper] [Notes] #nlp
Weight poisoning attacks on pre-trained models, Kurita et al., 2020 [Paper] [Notes] #nlp
SimAlign: High Quality Word Alignments without Parallel Training Data using Static and Contextualized Embeddings, Sabet et al., 2020 [Paper] [Notes] #nlp
Dissecting contextual word embeddings: architecture and representation, Peters et al., 2018 [Paper] [Notes] #nlp#embeddings
BERT: Pre-training of deep bidirectional transformers for language understanding, Devlin et al., 2018 [Paper] [Notes] #nlp#embeddings
Learning Semantic Representations for Novel Words: Leveraging Both Form and Context, Schick et al., 2018 [Paper] [Notes] #nlp#embeddings
Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia, Yamada et al., 2018 [Paper] [Notes] #nlp#embeddings
Rare Words: A Major Problem for Contextualized Embeddings and How to Fix it by Attentive Mimicking, Schick et al., 2019 [Paper] [Notes] #nlp#embeddings
Attentive Mimicking: Better Word Embeddings by Attending to Informative Contexts, Schick et al., 2019 [Paper] [Notes] #nlp#embeddings
BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance, Schick et al., 2019 [Paper] [Notes] #nlp#embeddings
BERT is Not a Knowledge Base (Yet): Factual Knowledge vs. Name-Based Reasoning in Unsupervised QA, Poerner et al., 2019 [Paper] [Notes] #nlp#embeddings
Architectures
Conditional Random Fields: probabilistic models for segmenting and labeling sequence data, Lafferty et al., 2001 [Paper] [Notes] #nlp#architectures
Probing Neural Network Comprehension of Natural Language Arguments, Niven and Kao, 2019 [Paper] [Notes] #nlp#datasets
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, McCoy et al., 2019 [Paper] [Notes] #nlp#linguistics#datasets
Reusing a Pretrained Language Model on Languages with Limited Corpora for Unsupervised NMT, Chronopoulou et al., 2020 [Paper] [Notes] #nlp#machine-translation
A Trainable Spaced Repetition Model for Language Learning, Settles and Meeder, 2016 [Paper] [Notes] #linguistics
Targeted syntactic evaluation of language models, Marvin and Linzen, 2018 [Paper] [Notes] #nlp#linguistics
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, McCoy et al., 2019 [Paper] [Notes] #nlp#linguistics#datasets
My English sounds better than yours: Second language learners perceive their own accent as better than that of their peers, Mitterer et al., 2020 [Paper] [Notes] #linguistics
Fake news game confers psychological resistance against online misinformation, Roozenbeek and van der Linden, 2019 [Paper] [Notes] #social-sciences#humanities
Kids these days: Why the youth of today seem lacking, Protzko and Schooler, 2019 [Paper] [Notes] #social-sciences
Humanities
Fake news game confers psychological resistance against online misinformation, Roozenbeek and van der Linden, 2019 [Paper] [Notes] #social-sciences#humanities