Natural Language Processing (NLP) is probably the fastest-growing and most impactful subfield of machine learning. New state-of-the-art language models have been leapfrogging one another for several years now, and it's hard to keep up. Here we'll investigate interpretable ways to generate text and see how training set and model size affect the output. We'll start with simple statistical models and work toward neural networks that use word embeddings, looking at the quirks along the way!
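To give a flavor of the "simple statistical models" end of that spectrum, here's a minimal sketch of a bigram (Markov chain) text generator: it counts which word follows which in the training text, then walks those counts at random. This is just an illustrative toy, not the session's actual demo code, and the function names are my own.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """For each word, record every word observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, seed, length=10, rng=None):
    """Walk the bigram table, picking a random successor at each step."""
    rng = rng or random.Random(0)  # seeded for reproducible output
    out = [seed]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:  # dead end: this word never had a successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Trained on a larger corpus, the same walk produces locally plausible but globally meandering text, which is exactly the kind of quirk we'll poke at before moving on to embedding-based models.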
This is meant to be a fun demo of basic approaches, not techniques used in production. If you've ever been curious about how algorithms might generate text, this should be a good intro. If you're looking for ways to improve your models' F1 scores, you probably won't find that here, but I'd still love to have you join to help answer questions and enrich the discussion. If you'd like to see any of the demos or ask questions ahead of time, feel free to email me (email@example.com)!
Hello and welcome! I'm Sam. I usually work on cybersecurity and data projects, and I also moonlight as an entrepreneur trying to build a research platform for retail investors. I love climbing and snowboarding, and my projects are usually built in Python on AWS. Please reach out if you 1) are interested in helping retail investors increase their alpha, 2) have any questions you think I may be able to answer, 3) are a freelance designer, or 4) teach the tenor saxophone (I'm a good student, I swear). You can reach me at firstname.lastname@example.org