What's wrong with language models? Fundamental interactions and neural networks.

by Sergey Tolkachev | at Minnebar 18 | 2:50 – 3:35 in Tackle

This talk traces a path from fundamental interactions in physics to biological neural networks, and shows how these models can help create more efficient conversational systems. By abandoning the traditional definition of "language," we can build a constructive and pragmatic model of information interaction. Fundamental interactions from physics can be carried over into computer science, yielding an effective model in which traditional engineering concepts such as "tools," "work," "productivity," and "power" acquire specific, measurable meanings and help solve a number of problems in artificial intelligence.


Sergey Tolkachev

I am a software developer and founder of the startup software company 256gl. Our goal is to deliver AI applications that have the ability to "understand." Before starting my own company, I was CTO at Outsell, where from 1999 to 2001 we developed one of the first commercial chatbots for car dealerships. Before that, I was Director of Academic Computing and an associate professor in the Department of Applied Mathematics. My current research focuses on neural networks, contextual and conversational search, and building tools for personal assistants in healthcare, smart homes, and retail businesses. My credo is to merge science and engineering in harmony.
