Three Musicians, One Engineer: Building AI Systems for Musicians in Minneapolis
by Vaish Sagar | at Minnebar20
As engineers, we have a gift: we can build systems for which we are not the intended users.
Minneapolis has a rich musical culture. There are incredible jazz musicians and venues in this city, and there's a running joke about not being able to throw a rock without hitting an indie recording studio. Japan revolutionized the world of music with smartphones. How can we do the same with AI in Minneapolis? It's time we built locally: tools to discover local music, software written for our artists, systems that drive real growth for the people making music here.
The Format
I'm inviting three musicians working in different areas to sit down and talk about the problems they face in their daily lives, and how we can make their lives easier so they have more time to make music:
- An educator
- A live performer
- A producer
DAW Accessibility. You can tell that designers weren't deeply involved in building tools like Logic Pro, Ableton Live, and Reaper. How do we make these systems easier to use?
Generative AI & Ethics. Generative AI in music raises real questions. We'll talk about existing software, what musicians like and don't like about it, and where the ethical lines are. I explored some of this in my article, "Much Ado About AI: An Artist, a Writer, a Musician, and an Engineer Walk Into a Debate."
AI in Education. How can we use AI meaningfully in teaching music?
Local Music & Trends. What kind of music are Minneapolis musicians making and interested in making? Who are the stakeholders: who's making music, who's listening, who's going to live shows?
Vaish Sagar
I'm an AI engineer, full-stack developer, and two-time founder with a Master's in Computer Science specializing in NLP from Arizona State University. Recognized by the U.S. government with an O-1A for extraordinary abilities in AI, I have five years of experience spanning enterprise data engineering at Oracle, consumer product development, and applied AI research. I've built and shipped two startup platforms from scratch, was a Sequoia Capital Arc Accelerator finalist, and was featured on Fox9 News for consumer tech innovation. What sets my work apart is where it lives: at the intersection of AI and music. I've built AI tools inside DAW environments, applied music theory principles to reduce artifacts in AI-generated audio, and trained a GPT-style transformer from scratch to study musical structures in song lyrics. As a guitarist and vocalist, I bring a musician's ear to my technical work. My current project, "What Does B-Major Sound Like in a Parallel Universe?", is a reflection of that curiosity, of exploring what music could be.