Benoît Crabbé

April 5, 2023, at 4 PM

Online (Zoom)

Abstract


The field of computational linguistics is currently going through a paradigm shift. Large foundational language models are now ubiquitous, with ChatGPT creating the latest buzz.

If you ask ChatGPT about its promises for the future of the language sciences, you get a somewhat confident reply: “Large language models like myself hold great promise for the field of linguistics. They offer improved language understanding, access to vast amounts of data, automatic language analysis, and the ability to test linguistic theories. These tools can help linguists to gain new insights into how language works, identify patterns in language usage, and refine their linguistic theories.”

In this talk I will put these claims in perspective against some key modeling directions in computational linguistics: modeling language structure, and modeling language in relation to world knowledge. I will explain how these directions eventually led to current language models. I will then show that, given what they are, current language models achieve sometimes surprising results with respect to the modeling of language structure, and I will highlight both potential research perspectives in the language sciences and some of the models' current limitations.

Prof. Benoît Crabbé
(Université Paris Cité)


Benoît Crabbé is professor of computational linguistics at the Université Paris Cité. He is head of the UFR of Linguistics and, for research, is affiliated with the LLF lab (CNRS and Université Paris Cité).
His research interests are in computational linguistics and more specifically in natural language understanding, natural language parsing and deep learning.
He is also involved in empirical and experimental work in linguistics and cognitive science related to modelling the structure of natural languages.
