Jasmijn Bastings
Hello
Hi, welcome to my website! My name is Jasmijn (she/they), and I’m a Senior Research Scientist at Google DeepMind. I’m interested in the following topics, and perhaps especially in their intersections:
- Fairness and bias, especially gender bias, gender-fair language, and gender-fair language technology
- Natural Language Processing (NLP)
- Machine Translation (MT), Automatic Translation
- Interpretability, explainability, explainable AI (XAI)
I received my PhD from ILLC, University of Amsterdam, where I was advised by Wilker Aziz, Ivan Titov and Khalil Sima’an.
News
- I’ll give a talk at MilaNLP in July.
- I joined the Gender-Inclusive Translation Technologies Workshop (GITT) 2024 organisation committee.
- I’ll be at EMNLP 2023 to present Dissecting Recall of Factual Associations in Auto-Regressive Language Models.
- I won an outstanding Area Chair award at ACL 2023!
- June 2023: I will be part of the project “An Interdisciplinary Analysis of gender-based discrimination in Translation Technology”, funded by the Digital Sciences for Society program of Tilburg University, a collaboration with Eva Vanmassenhove (NLP/translation), Hanna Lukkari (Law), and Seunghyun Song (Philosophy).
- I started a YouTube channel for BlackboxNLP. Check it out and subscribe here: youtube.com/@blackboxnlp.
- You can now find me on Mastodon: @jasmijn@sigmoid.social (https://sigmoid.social/@jasmijn).
Publications
Recent pre-prints
- MiTTenS: A Dataset for Evaluating Misgendering in Translation. Kevin Robinson, Sneha Kudugunta, Romina Stella, Sunipa Dev, Jasmijn Bastings.
Recent publications
- Diagnosing AI explanation methods with folk concepts of behavior. Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova. JAIR 2023.
- Dissecting Recall of Factual Associations in Auto-Regressive Language Models. Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson. EMNLP 2023.
- “Will You Find These Shortcuts?” A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification. Jasmijn Bastings, Sebastian Ebert, Polina Zablotskaia, Anders Sandholm, Katja Filippova. EMNLP 2022. [blog]
See my full publication list on my Google Scholar profile.
Highlighted publications
- The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? Jasmijn Bastings, Katja Filippova. BlackboxNLP 2020.
- Interpretable neural predictions with differentiable binary variables. Jasmijn Bastings, Wilker Aziz, Ivan Titov. ACL 2019.
- Joey NMT: A Minimalist NMT Toolkit for Novices. Julia Kreutzer, Jasmijn Bastings, Stefan Riezler. EMNLP 2019. [code]
- Graph convolutional encoders for syntax-aware neural machine translation. Jasmijn Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, Khalil Sima’an. EMNLP 2017.
Blog posts
- The Annotated Encoder-Decoder. Explains how to implement RNN-based NMT models in PyTorch.
Code
- Interpretable Neural Predictions with Differentiable Binary Variables contains the HardKuma distribution, which produces hybrid discrete/continuous samples (with true zeros and ones) while still letting gradients pass through.
- Joey NMT is an easy-to-use, educational, and benchmarked NMT toolkit for novices that I developed with Julia Kreutzer; it is currently maintained by Mayumi Ohta.
- FREVAL is an all-fragments parser evaluation metric that I developed with Khalil Sima’an.
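To illustrate the HardKuma idea, here is a minimal Python sketch of the general stretch-and-rectify recipe (sample a Kumaraswamy variable, stretch its support past 0 and 1, then clamp). It is an illustrative approximation, not the repository's implementation; the function name and default stretch bounds are my own assumptions.

```python
import random

def sample_hardkuma(a, b, l=-0.1, r=1.1, rng=random):
    """One sample from a stretched-and-rectified Kumaraswamy ("HardKuma")
    distribution, sketched from the general stretch-and-rectify recipe
    (names and default bounds are assumptions, not the official code):

      1. Draw u ~ Uniform(0, 1) and invert the Kumaraswamy(a, b) CDF.
      2. Stretch the sample from (0, 1) to (l, r), with l < 0 < 1 < r.
      3. Rectify (clamp) to [0, 1]: mass below 0 collapses to an exact 0,
         mass above 1 to an exact 1, while interior samples stay continuous
         (and differentiable w.r.t. a and b in an autograd framework,
         since every step is a deterministic transform of u).
    """
    u = rng.random()
    k = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)  # Kumaraswamy inverse CDF
    s = l + k * (r - l)                              # stretch to (l, r)
    return min(1.0, max(0.0, s))                     # rectify to [0, 1]

# With a = b = 0.5 the distribution is U-shaped, so a large batch of
# samples contains many exact 0.0s and exact 1.0s alongside interior values.
samples = [sample_hardkuma(0.5, 0.5) for _ in range(10_000)]
print(sum(z == 0.0 for z in samples), sum(z == 1.0 for z in samples))
```

The exact zeros and ones are what make the samples usable as binary gates (e.g. selecting rationale tokens) while the interior samples keep the model end-to-end trainable.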
Talks
- BlackboxNLP 2020 (co-located with EMNLP 2020). The Elephant in the Interpretability Room. (PDF)
- ACL 2019. Interpretable Neural Predictions with Differentiable Binary Variables (Google Slides)
- EMNLP 2017. Graph Convolutional Encoders for Syntax-Aware Neural Machine Translation (Google Slides)
CV
- 2019-Now, Research Scientist, Google. Berlin & Amsterdam.
- 2015-2020, PhD in AI, ILLC, University of Amsterdam. Defended 8 October 2020.
- 2009-2012, MSc in AI, University of Amsterdam.
- 2006-2009, BSc in AI, Utrecht University.
Reviewing / Area Chair / Committees
I was a co-organizer of:
- BlackboxNLP 2022 (co-located with EMNLP 2022)
- BlackboxNLP 2021 (co-located with EMNLP 2021)
I was area chair (AC) / action editor (AE) for the following conferences:
- ACL (2021, 2022, 2023) (Interpretability and Analysis of Models for NLP)
- EMNLP (2021, 2022) (Interpretability and Analysis of Models for NLP)
- ACL rolling review (2021-2022)
- EACL (2021) (Machine Learning for NLP)
- NAACL (2021) (Interpretability and Analysis of Models for NLP)
I reviewed for the following conferences and workshops:
- ACL (2019, 2020)
- EMNLP (2018, 2019, 2020)
- CoNLL (2018, 2019)
- ICLR (2020)
- MT Summit (2019)
- WMT (2018, 2019)
- Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP, 2019, 2020)
- Debugging Machine Learning Models (Debug ML, ICLR Workshop, 2019)
- Workshop on Neural Generation and Translation (WNGT, 2018, 2019, 2020)
- Workshop on Representation Learning for NLP (RepL4NLP, 2020)
- Workshop on Structured Prediction for NLP (SPNLP, 2019)
Contact
- Mastodon: @jasmijn@sigmoid.social
- Twitter: @jasmijnbastings
- GitHub: github.com/bastings