Natural Language Processing (NLP) has proven itself to be an essential component of intelligent systems design over the past couple of decades. Whether it's autofill in your favorite word processor or search engine, language translation with Google Translate or DeepL, or virtual assistants such as Siri, Alexa, and Google Home, NLP is nearly everywhere, and is allowing us to get computers to speak our language instead of the other way around.
The majority of NLP tasks, especially simple ones, can be accomplished using statistical methods and traditional machine learning techniques, without the need for any deep linguistic analysis. However, as tasks become increasingly complex, e.g., chatbots and conversational AI, we may no longer have the luxury of ignoring more thorough considerations of how language actually works.
In this session, I will discuss some common problems we face when working in NLP, and then consider how formal semantics, as studied by linguists, can be applied to developing more capable NLP applications.
Ryan is a UMN senior studying computer science and linguistics. He is currently working on a thesis that applies what linguists have learned about mereology and noun classification to building more robust NLP applications.