An Analog Brain In A Digital Age | With Marco Ciappelli

Before the Robots Run. More reflections from RSAC 2026 — The Power of the Community and the Machines We Invited In. | Written by Marco Ciappelli & Read by Tape3

Episode Summary

This was my twelfth RSA Conference. I know that because I remember the first one, 2012, and I've been counting ever since — not out of habit, but because each year feels like a chapter in a longer story I'm trying to read in real time. Twelve years of standing in that same building in San Francisco, watching an industry evolve, stumble, reinvent itself, and occasionally look in the mirror.

Episode Notes

This was my twelfth RSA Conference.

I know that because I remember the first one, 2012, and I've been counting ever since — not out of habit, but because each year feels like a chapter in a longer story I'm trying to read in real time. Twelve years of standing in that same building in San Francisco, watching an industry evolve, stumble, reinvent itself, and occasionally look in the mirror.

In the early years it was pure technology. Cryptography, protocols, threat vectors, the architecture of defense. The conversations were technical, the energy was almost academic, the suits were slightly more formal. Then something shifted — gradually, then all at once, the way things usually do. The industry started talking about people. About culture. About the human beings sitting behind the keyboards and the very human mistakes they were making. The themes started reflecting it: community, togetherness, collective defense. Stronger Together. The Human Element. The Power of Community.

Year after year, the message from the main stage was some variation of: we are more than our tools. People are what matter. Connection is the point.

And then you'd walk the expo floor and see the booths.

I'm not being cynical. The community is real — I've felt it, in the hallway conversations, in the side events, in the faces of people I've been running into for a decade who are genuinely trying to make the digital world safer. That part is true and it matters. But there's a growing gap between what the theme says and what the stage performs. And at RSAC 2026, that gap became impossible to ignore.

Because this year, while the badge said The Power of Community, the keynotes were almost entirely about agents.

Non-human ones.

I wrote about this from a different angle in my first piece from RSAC — the Blade Runner angle, the NPC angle, the question of identity and intent when you can no longer tell the difference between a human action and an autonomous one. But there's another layer underneath that deserves its own space.

It's the pattern. The twelve-year arc.

An industry spends years — genuinely, sincerely — rediscovering the human element. Putting people at the center. Building a vocabulary around community, ethics, shared responsibility. And then, in what feels like a single conference cycle, it pivots to deploying a parallel workforce of non-human identities that outnumber us in our own systems, operate at speeds no human can follow, take actions no human directly authorized, and — here's the part that should make everyone pause — that a significant portion of organizations deploying them cannot monitor, cannot fully distinguish from human activity, and in many cases cannot stop once they're running.

We built the community. Then we populated it with agents and handed them the keys.

I kept thinking, walking those corridors, about the resistance. Not as a metaphor — or not only as a metaphor. In every story we've ever told about machines that gained too much autonomy, there's always a moment before the crisis where someone in the room knew. Where the warning existed. Where the design decision was made anyway because the pressure to ship, to scale, to compete was stronger than the instinct to pause.

The difference between those stories and this moment is that we're not watching it happen to fictional characters. We're the ones making the design decisions. And unlike software — which you can patch, roll back, update at 3am while everyone is asleep — agents with autonomy and access are a different category of thing entirely. The old mantra of move fast and break things made a certain kind of sense when what you were breaking was a feature. It makes no sense at all when what you're deploying can act, chain consequences, and escalate — faster than any human response team can follow.

This is where Asimov becomes relevant again. Not as nostalgia, not as science fiction trivia, but as a genuine design philosophy that the industry would do well to remember. His Three Laws of Robotics weren't invented as a plot device. They were a thought experiment in ethics-by-architecture — what does it look like to build the values into the system before the system runs, rather than hoping to correct the values after something goes wrong? Across decades of stories, he showed that even the most carefully designed ethical constraints produce edge cases, contradictions, unintended consequences. But the point was never that ethics-by-design is perfect. The point was that without it, you don't have a fighting chance.

We are, right now, at the moment before the laws get written. Some people at RSAC were saying this clearly — not from the main stage, but in the rooms and conversations where the more honest thinking tends to happen. The guardrails exist. The frameworks are being built. But they're being built while the deployment is already running, while the agents are already in the systems, while the governance structures are catching up to a reality that moved faster than the institutional response.

That gap is the real story of RSAC 2026.

Not the products. Not the keynote soundbites. The gap between the speed of deployment and the maturity of the thinking around what we're actually deploying.

The community theme was right, actually — just not in the way the branding intended. The most important community at RSAC 2026 wasn't on the main stage. It was the quieter one: the engineers, researchers, practitioners, and security leaders who understand that we are at an inflection point, and that the decisions made in the next few years about how to design, govern, and constrain autonomous systems will matter far beyond the conference floor in San Francisco.

Utopia and dystopia are not predetermined destinations. They're design outcomes.

We still get to choose the architecture. But the window for making that choice thoughtfully — rather than reactively, in the middle of a crisis that moved faster than our guardrails — is not as wide as we might like to think.

Asimov knew that. He wrote the laws before the robots ran.

Maybe it's time we did the same.

Stay imperfect, stay human.

— Marco

Let's keep exploring what it means to be human in this Hybrid Analog Digital Age.

End of transmission.
