Before working at the University of Washington, Bender held positions at Stanford University and UC Berkeley and worked in industry at YY Technologies.[9] She has been on the faculty of the University of Washington since 2003, where she currently holds several positions, including professor in the Department of Linguistics, adjunct professor in the Department of Computer Science and Engineering, faculty director of the Master of Science in Computational Linguistics,[10] and director of the Computational Linguistics Laboratory.[11] Bender holds the Howard and Frances Nostrand Endowed Professorship.[12][13]
Bender was elected vice-president-elect of the Association for Computational Linguistics in 2021.[14] She served as vice-president-elect in 2022 and as vice-president in 2023; she is serving as president through 2024,[15][16] and will serve as past president in 2025. Bender was elected a Fellow of the American Association for the Advancement of Science in 2022.[17]
Bender developed the LinGO Grammar Matrix, an open-source starter kit for the development of broad-coverage precision HPSG grammars.[19][20] In 2013, she published Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax, and in 2019, she and Alex Lascarides published Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics; both books explain basic linguistic principles in a way that makes them accessible to NLP practitioners.[citation needed]
The Bender Rule, which originated from a question Bender repeatedly asked at research talks, advises computational researchers to "always name the language you're working with".[4]
She draws a distinction between linguistic form and linguistic meaning.[4] Form refers to the structure of language (e.g., syntax), whereas meaning refers to the ideas that language represents. In a 2020 paper, she argued that machine learning models for natural language processing that are trained only on form, without connection to meaning, cannot meaningfully understand language.[26] On this basis, she has argued that tools like ChatGPT have no way to meaningfully understand the text they process or the text they generate.[citation needed]
Selected publications
Books
Bender, Emily M. (2000). Syntactic Variation and Linguistic Competence: The Case of AAVE Copula Absence. Stanford University. ISBN 978-0493085425.
Sag, Ivan; Wasow, Thomas; Bender, Emily M. (2003). Syntactic Theory: A Formal Introduction. Center for the Study of Language and Information. ISBN 978-1575864006.
Bender, Emily M. (2013). Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax. Synthesis Lectures on Human Language Technologies. Springer. ISBN 978-3031010224.
Bender, Emily M.; Lascarides, Alex (2019). Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics. Synthesis Lectures on Human Language Technologies. Springer. ISBN 978-3031010446.
Papers
Bender, Emily M.; Flickinger, Dan; Oepen, Stephan (2002). "The Grammar Matrix: An open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars". Proceedings of the 2002 Workshop on Grammar Engineering and Evaluation. Vol. 15.
Siegel, Melanie; Bender, Emily M. (2002). "Efficient deep processing of Japanese". Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization. Vol. 12.
Goodman, Michael Wayne; Crowgey, Joshua; Xia, Fei; Bender, Emily M. (2015). "Xigt: Extensible interlinear glossed text for natural language processing". Language Resources and Evaluation. 49 (2): 455–485. doi:10.1007/s10579-014-9276-1. S2CID 254372685.
Xia, Fei; Lewis, William D.; Goodman, Michael Wayne; Slayden, Glenn; Georgi, Ryan; Crowgey, Joshua; Bender, Emily M. (2016). "Enriching a massively multilingual database of interlinear glossed text". Language Resources and Evaluation. 50 (2): 321–349. doi:10.1007/s10579-015-9325-4. S2CID 254379828.
Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.