Secure AI Agents from Development to Runtime

by Kishor Patil | at Minnebar20

The Distance Between "Hello World" and "Secure-at-Scale"

A developer spins up an AI agent, gives it access to a database, and suddenly magic happens. It’s writing emails, taking actions, and automating the enterprise. It looks flawless. It feels like the future.

I’m here to talk about why that magic often breaks the moment it hits a production firewall.

As a practitioner leading GCP architecture, I spend a lot of time thinking about the distance between a "smart" agent and a "secure" one. The gap isn't usually the LLM's intelligence; it’s the Decision Architecture we build around it.

In this session, we’re moving past the AI hype to look at the actual scars of building secure agentic workflows. We will move through the lifecycle of an agent, from the initial development sandbox to a governed, enterprise runtime.

What we will explore together:

- The Development Gap: Why "Prompt Engineering" isn't a security strategy, and how we build supply chain trust for models.
- The Runtime Reality: Encoding enterprise "values" into technical guardrails using GCP’s security suite.
