Raphaël Millière

When Machines Speak: Language Processing in Computers and Humans

Summary

Building algorithms capable of generating and parsing sentences in human languages has been a long-standing goal of computer science. The recent success of artificial neural networks over traditional symbolic algorithms has led to many breakthroughs. GPT-3, a neural language model released by OpenAI in 2020, offers a striking example of this progress. Pre-trained on a vast corpus of books and web pages, the model can write essays, summarize text, translate between languages, and answer questions. GPT-3’s outputs are grammatically correct, stylistically coherent, and topically relevant. In many cases, the text it produces is convincing enough that readers presume a human wrote it.

Unlike humans, neural networks do not acquire language skills by interacting with the world. Instead, they identify statistical patterns in vast amounts of text and use them to reproduce the complex hierarchical relationships between words observed in human languages. While neural networks like GPT-3 can generate text that resembles human writing and thinking, they are trained for a single objective: given a sequence of words, predict which word should come next. This raises interesting questions at the intersection of computer science, psychology, neurolinguistics, and philosophy: Do neural networks optimized for text generation represent linguistic features in ways that are functionally similar to the human brain’s representations? In what sense can we say that the outputs of these networks manifest an understanding of language? And what do our answers imply for the many current and potential uses of these networks at scale? This panel will explore these issues from different perspectives.
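To make the next-word prediction objective concrete, here is a minimal, purely illustrative sketch (not part of the panel materials, and not how GPT-3 is actually implemented): a toy bigram model that counts which word follows which in a tiny corpus and predicts the most frequent continuation. GPT-3 instead learns billions of parameters over subword tokens, but the prediction task it is trained on has the same shape.

```python
# Hypothetical toy sketch of next-word prediction: a bigram model that
# counts word-to-word transitions in a tiny corpus. A model like GPT-3
# learns far richer statistics, but is trained on the same kind of task.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each word, count how often each other word immediately follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation of `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else "<unk>"

print(predict_next("sat"))  # "on" -- the only word ever seen after "sat"
print(predict_next("the"))  # e.g. "cat" (ties broken by insertion order)
```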

Speakers