Artificial Intelligence and Language Understanding
Artificial Intelligence has made impressive progress in recent years. Nowhere has this progress been more striking than in Natural Language Processing (NLP), the field concerned with building algorithms that can parse and generate natural language text. A new family of NLP algorithms, called Large Language Models (LLMs), exhibits a remarkable ability to generate fluent text. Paragraphs generated by LLMs are grammatically well-formed, topically relevant, and stylistically coherent, so much so that they can often fool human readers. This technological development raises fascinating questions at the crossroads of philosophy, computer science, and linguistics. Can we say that LLMs really understand language? Do they have any degree of semantic competence? Or are they simply manipulating text strings without encoding their meanings? What does it even mean to understand language in the first place? These are some of the questions we will discuss in this seminar, reading philosophical work alongside cutting-edge research from computer science and computational linguistics.