A dev’s guide to AI agents
Five things to know for our agentic future
What’s inside:
Five things devs need to know for our agentic future
Resources: Step-up authentication tutorial, “The AI agent access problem” with Chris Hughes, author of Securing AI Agents, and OWASP’s GenAI Security Project
Five things to know for our agentic future
“Have your agent call my agent” isn’t just a Hollywood thing anymore. AI agents are increasingly “speaking” on behalf of the end user, but without the guardrails and contracts that typically bind the human variety.
AI agents and agentic browsers ingest massive amounts of data and are often given unrestricted access.
The challenge is to maintain enterprise-grade security without breaking the user experience.
Your users don’t know what a “token” is. They don’t care about “private keys.” And frankly, they shouldn’t have to.
Their only goal is to get value out of your application. That means they aren’t thinking about the security implications of the AI agent running in the background.
But as developers, we have to.
Here are a few tips for ensuring the software you build is both secure and usable for our agentic future:
Deliver value and security. Don’t make users think about technical concepts like tokens or public and private keys. Your job is to protect the user and their data while letting them focus entirely on the value your product provides. But recognize there are moments where intentional friction is needed. For example, AI agents should be required to get explicit permission before critical, high-risk actions. That way, the user always stays in control.
Bonus: If you’re curious about how to ensure users can approve high-risk actions, check out my tutorial: Five steps to secure your app against rogue AI agents: How to implement step-up authentication using VIA’s Zero Trust Fabric
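The intentional-friction idea can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical set of action names, a hypothetical risk tier, and a made-up re-authentication freshness window; it is not VIA’s actual API.

```python
import time

# Hypothetical risk tier -- these action names are illustrative only.
HIGH_RISK_ACTIONS = {"transfer_funds", "delete_account", "export_pii"}

# How recently the user must have authenticated before a high-risk
# action is allowed (an assumed policy value).
MAX_AUTH_AGE_SECONDS = 5 * 60


class StepUpRequired(Exception):
    """Raised when the agent must pause and ask the user to re-authenticate."""


def authorize(action: str, last_user_auth: float, now=None) -> bool:
    """Allow low-risk actions freely; demand fresh user approval for high-risk ones."""
    now = time.time() if now is None else now
    if action not in HIGH_RISK_ACTIONS:
        return True  # routine action: no extra friction
    if now - last_user_auth <= MAX_AUTH_AGE_SECONDS:
        return True  # the user approved recently enough
    # Intentional friction: the agent cannot proceed on its own.
    raise StepUpRequired(f"user must re-authenticate to allow {action!r}")
```

The point of the sketch is the shape of the control flow: low-risk work stays frictionless, while a high-risk action with a stale approval raises instead of silently proceeding, so the calling code has no path around the user.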
Abstract the technical complexity. Internally, we constantly ask, “Why does the user need to do this?”, “Is this step really necessary?”, and “Will the user understand this?” to help us strip away the noise and ensure a simple, straightforward user experience.
Get comfortable talking about risk. Not all risk is created equal, and you’ll just be spinning your wheels if you try to protect everything against everything. Developers must understand the highest risks for their systems. Is it exposing personally identifiable information (PII)? Compromising financial transactions? When creating an application, understand which risks are most likely to occur, and focus your mitigation efforts there.
Authenticate like it’s 1999...or 2026. Traditional access tokens are now being delegated to AI agents, which can operate unseen and quietly take over accounts. Once an agentic AI holds a user’s session token, it effectively is the user: a powerful, fast, and potentially reckless user who can also be compromised by threat actors.
Stop designing for a single AI agent. Design for a fleet. We are heading toward a world where your users (and you, as developers!) will interact with dozens of specialized agents daily. The old model of centralized access controls won’t scale for that.
How I’ve put these tips into practice
Here’s how we’ve put these tips into practice at VIA. Right now, if a user employs an AI agent, that agent inherits all of the user’s permissions, effectively cloning the user’s session. That is a massive security gap. By making the user akin to a “local identity provider,” we allow them to issue restricted, task-specific credentials to their agents. This fundamental concept informed how we designed VIA’s Zero Trust Fabric (ZTF). It ensures that if an agent is compromised, it only has access to that one specific task, not the user’s entire account.
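To make the “local identity provider” idea concrete, here is a minimal sketch of a task-scoped, short-lived credential, assuming a hypothetical HMAC-signed token format with a single task claim and an expiry. VIA’s ZTF token format and APIs will differ; this only illustrates the principle.

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative sketch only -- not VIA's ZTF wire format. The user's
# device holds `user_secret` and mints a narrow token per agent task.


def mint_task_token(user_secret: bytes, task: str, ttl_seconds: int = 300) -> str:
    """Issue a credential that grants exactly one task for a short window."""
    claims = {"task": task, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(user_secret, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def check_task_token(user_secret: bytes, token: str, requested_task: str) -> bool:
    """Accept the token only for the task it was minted for, before expiry."""
    try:
        body, sig = token.split(".")
    except ValueError:
        return False  # malformed token
    expected = hmac.new(user_secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        return False  # expired: a stolen token goes stale quickly
    return claims["task"] == requested_task  # no scope creep
```

The payoff is containment: a compromised agent holding a token minted for, say, a hypothetical "summarize_inbox" task cannot present it for "transfer_funds", and even a stolen token expires within minutes instead of living as long as the user’s session.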
Interested in testing out how to secure your app (and protect your users) from AI agents? Check out my free tutorial on GitHub.
Developers play a critical role in ensuring AI agents operate as designed: working to their strengths while leaving the consequential decisions to end users.
About Jesus Cardenes
Jesus Cardenes, VIA’s Senior Vice President, Product Architecture, is responsible for the technical roadmap and architectural design of all VIA products and its Web3 platform. He is known for his expertise in connecting technologies and platforms to create seamless user experiences. An interesting fact about Jesus is that he loves to cycle during the weekends with his kids!
Resources
You have pressing questions…we have answers, so you can build faster and get back to shipping. Check out the resources below.
Step-up authentication tutorial
Learn how to secure your app from your users’ rogue AI agents using VIA’s Zero Trust Fabric (ZTF) and step-up authentication. Here, step-up authentication means the user must re-authenticate to authorize high-risk actions.
“The AI agent access problem” with Chris Hughes.
Chris Hughes, CEO of Aquia, Resilient Cyber podcast host, author of Securing AI Agents, and United States Air Force veteran, dives into why identity and access are brutally hard in an agentic AI world. He also explains how incentives, compliance, and culture shape what actually gets secured.
OWASP’s GenAI Security Project
Trying to wrap your head around the security risks for LLMs, generative AI, AI agents, and MCP servers? OWASP’s GenAI Security project is one of our go-to resources.
Forget the theoretical stuff. This lists the most dangerous risks, from prompt injection to the many flavors of unbounded consumption, and actually tells you what to do about them.