An Analog Brain In A Digital Age | With Marco Ciappelli

Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World | Written By Marco Ciappelli & Read by Tape3

Episode Summary

Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World

A new transmission from An Analog Brain In A Digital Age — formerly Musing On Society and Technology Newsletter, by Marco Ciappelli

The theme of RSAC 2026 was "The Power of Community." Nearly forty-four thousand people descended on the Moscone Center in San Francisco for four days of keynotes, corridor conversations, and expo floor theater. Six hundred exhibitors. Hundreds of speakers. And one word — one concept, one obsession — that swallowed everything else whole.

Episode Notes

Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World

 

Marco Ciappelli

Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍

April 7, 2026

This is Marco Ciappelli's Newsletter: An Analog Brain In A Digital Age. This edition draws from ITSPmagazine's on-location coverage at RSAC Conference 2026 in San Francisco.

This article — and all of our RSAC Conference 2026 coverage — is made possible with the support of ITSPmagazine's RSAC 2026 sponsors: BLACKCLOAK | Crogl, Inc. | Manifest | Steel Patriot Partners | Skyhigh Security | Stellar Cyber | ESET | Token Security | Object First | Token

Watch and listen to the full coverage and all of the conversations we had, including those with our sponsors, at itspmagazine.com/rsac26

Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World

A new transmission from An Analog Brain In A Digital Age — formerly Musing On Society and Technology Newsletter, by Marco Ciappelli

The theme of RSAC 2026 was "The Power of Community." Nearly forty-four thousand people descended on the Moscone Center in San Francisco for four days of keynotes, corridor conversations, and expo floor theater. Six hundred exhibitors. Hundreds of speakers. And one word — one concept, one obsession — that swallowed everything else whole.

Not community. Agents.

AI agents. Autonomous. Self-directing. Capable of taking action, accessing systems, making decisions, and — here's the part nobody says quite out loud — doing all of that while you're asleep, or in a meeting, or standing in line for a mediocre conference coffee wondering if you remembered to turn off the stove.

Somewhere between the third and fourth time someone said "agentic AI" to me on that expo floor, I stopped hearing it as a technology term and started hearing it as a sound effect. A drone. A hum. Background noise for a world already running without asking for my permission. The irony of gathering tens of thousands of humans together under the banner of community, only to spend four days talking almost exclusively about non-human workers, seemed to float unacknowledged through the air conditioning.

And that's when the flashback hit me. Not to any previous RSAC. To a screen. To a world I used to inhabit in the early days of World of Warcraft — before real life staged its intervention and I decided I needed one. In those massive online worlds, NPCs wandered their scripted paths. They had names, routines, dialogue trees, purpose. They looked like characters. They acted like characters. But they weren't. They were behavior patterns wearing a face. And the experienced player learned quickly: don't trust the ones you haven't verified. The convincing ones were sometimes the most dangerous.

I kept thinking about that walking those corridors.

About all these agents. Already deployed, already running inside enterprise systems, already accessing sensitive data, making tool calls, chaining actions in ways their human creators didn't fully anticipate. The gap between what's been launched in pilot programs and what's actually governed, monitored, and understood is — by most accounts from the conference — vast. Most enterprises are experimenting. Very few have the infrastructure to control what they've set loose. The rest are running something close to shadow agents: identities without owners, actions without accountability, behavior patterns wearing a face.

Which brings me, inevitably, to Blade Runner.

Not the flying cars. Not the neon rain. The real question at the center of Ridley Scott's masterpiece — and Philip K. Dick's before it — is simpler and far more disturbing: how do you tell the difference? The Voight-Kampff test existed precisely because replicants were convincing. They behaved like humans, responded like humans, even believed they were human sometimes. The problem wasn't that they were dangerous by design. The problem was that nobody could reliably track their intent.

That's not science fiction anymore. It's the central problem RSAC 2026 couldn't stop circling.

By most accounts, a significant portion of organizations still cannot distinguish AI agent activity from human activity in their own environments. The security industry has built its own Voight-Kampff problem — and hasn't finished building the test.

The vocabulary had shifted, too, from the previous year. At Black Hat last summer, the conversation was about whether to trust agents. At RSAC 2026 it had already moved to identity. To behavior. To intent. One of the sharper ideas surfacing from the keynotes was the distinction between delegation and trusted delegation. Giving an agent a task is easy. Building the security infrastructure to actually trust that delegation — to know what the agent can touch, what it can't, what it will do when nobody is watching — that's where it gets complicated. Without it, you get what someone on that main stage called, in a phrase that landed hard, a fast track to bankruptcy. Because agents don't just answer questions. They act. And some of those actions are irreversible.

So the question is no longer "who are you." It's "what do you want — and do I actually know what you're capable of?" Just like a Blade Runner asking a replicant about a tortoise left in the desert sun.

One researcher put it with a directness I appreciated: we need an HR view of agents. Onboarding, monitoring, offboarding. If there's no business justification for an agent's existence — remove it. Which is a pragmatic way of saying: even our digital workforce needs accountability. Even our NPCs need a character sheet.

And yet the deployment keeps accelerating. Agents with access and no clear owner. Identities running at machine speed through systems built for human-paced governance. The attack surface expanding quietly while the keynote applause was still echoing in the hall. Security researchers demonstrated live that vulnerabilities in agentic ecosystems are no longer theoretical — they're being exploited, chained, moving faster than the teams tasked with stopping them.

We built the agents. We gave them access. We handed them the keys and stood back saying impressive, right? — hoping nothing goes wrong.

With a chatbot, you worried about the wrong answer. With an agent, you worry about the wrong action.

That's not a product problem wearing a vendor badge. That's a civilization-scale question dressed up in a conference lanyard.

The Blade Runner didn't just hunt replicants. He had to learn to recognize them first.

We'd better start learning fast — before it gets really awkward.

As if it isn't already.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Age.

Stay imperfect, stay human.

— Marco

End of transmission.

ⓘ About Marco Ciappelli

Co-Founder Studio C60 / ITSPmagazine | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍

His shows are all part of ITSPmagazine, which he co-founded with his good friend Sean Martin to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️

Want to connect with Sean and Marco On Location at an event or conference near you? See where they will be next: https://www.itspmagazine.com/on-location

Learn more about Marco Ciappelli: marcociappelli.com

ⓘ About Studio C60

We help cybersecurity startups build trust-based marketing and go-to-market strategies grounded in deep product understanding and real buyer insights. With hundreds of products brought to market and deep connections in the CISO community, we know what security leaders value in vendors.

Learn more at studioc60.com
