Hi, welcome to my website! My name is Jasmijn (she/her), and I’m an AI researcher at Google. Currently I’m interested in the following topics, especially in the context of Natural Language Processing:
- Explainable AI
- Trustworthy ML
- Machine Learning
I received my PhD from the ILLC, University of Amsterdam, where I was advised by Wilker Aziz, Ivan Titov and Khalil Sima’an.
You can find my publications on Google Scholar.
Talks
- BlackboxNLP 2020 (EMNLP workshop). The Elephant in the Interpretability Room. (PDF)
- ACL 2019. Interpretable Neural Predictions with Differentiable Binary Variables (Google Slides)
- EMNLP 2017. Graph Convolutional Encoders for Syntax-Aware Neural Machine Translation (Google Slides)
- The Annotated Encoder-Decoder. A tutorial that explains how to implement RNN-based NMT models in PyTorch.
Education & Positions
- 2006-2009, BSc in AI, Utrecht University.
- 2009-2012, MSc in AI, University of Amsterdam.
- 2015-2020, PhD in AI, ILLC, University of Amsterdam. Defended 8 October 2020.
- 2019-present, Research Engineer, Google.
Reviewing / Area Chair / Committees
I am a co-organizer of:
- BlackboxNLP 2022 (co-located with EMNLP 2022)
- BlackboxNLP 2021 (co-located with EMNLP 2021)
I was area chair (AC) for the following conferences:
- EACL 2021 (Machine Learning for NLP)
- NAACL 2021 (Interpretability and Analysis of Models for NLP)
- ACL 2021 (Interpretability and Analysis of Models for NLP)
- EMNLP 2021 (Interpretability and Analysis of Models for NLP)
I reviewed for the following conferences and workshops:
- ACL (2019, 2020)
- EMNLP (2018, 2019, 2020)
- CoNLL (2018, 2019)
- ICLR (2020)
- MT Summit (2019)
- WMT (2018, 2019)
- Analyzing and interpreting neural networks for NLP (BlackboxNLP, 2019, 2020)
- Debugging Machine Learning Models (Debug ML, ICLR Workshop, 2019)
- Workshop on Neural Generation and Translation (WNGT, 2018, 2019, 2020)
- Workshop on Representation Learning for NLP (RepL4NLP, 2020)
- Workshop on Structured Prediction for NLP (SPNLP, 2019)
During my PhD I supervised the following students:
- Joost Baptist (MSc Thesis)
- Laura Ruis (Honours Project)
Software & Research Highlights
- Interpretable Neural Predictions with Differentiable Binary Variables introduces the HardKuma distribution, which yields (hybrid) binary samples with true zeros and ones while still allowing gradients to pass through.
- Joey NMT is an easy-to-use, educational, and benchmarked NMT toolkit for novices that I developed with Julia Kreutzer.
- FREVAL is an all-fragments parser evaluation metric that I developed with Khalil Sima’an.
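The stretch-and-rectify construction behind HardKuma can be sketched in a few lines. This is an illustrative NumPy version under assumed parameter names (`a`, `b` for the Kumaraswamy shape, a stretch interval of (-0.1, 1.1)); the paper uses reparameterized samples so that gradients flow through the shape parameters, which plain NumPy does not show:

```python
import numpy as np

def hardkuma_sample(a, b, size, l=-0.1, r=1.1, rng=None):
    """Sample from a stretched-and-rectified Kumaraswamy ("HardKuma"):
    values lie in [0, 1] with point masses at exactly 0 and 1.
    Parameter names and the stretch interval are illustrative."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(size=size)
    # Inverse CDF of Kumaraswamy(a, b): a sample k in (0, 1).
    k = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)
    # Stretch to the support (l, r) with l < 0 < 1 < r ...
    t = l + (r - l) * k
    # ... then rectify (hard-clip), creating true zeros and ones.
    return np.clip(t, 0.0, 1.0)

samples = hardkuma_sample(a=0.5, b=0.5, size=10_000)
# A nonzero fraction of samples is exactly 0, and another exactly 1.
print((samples == 0.0).mean(), (samples == 1.0).mean())
```

Because the clipping is the only non-smooth step, the continuous part of each sample keeps a gradient path back to `a` and `b` in an autodiff framework, which is what makes binary-looking decisions trainable end to end.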