Redefining Society and Technology Podcast

How AI is Revolutionizing — and complicating — Cybersecurity. Geeking Out and Musing On the Future of Infosecurity and AI | A Conversation with Matthew Rosenquist and Sean Martin | Redefining Society with Marco Ciappelli

Episode Summary

Join us as we look into Matthew's crystal ball and predict the future of AI, technology, and cybersecurity, exploring their collective impact on our society.

Episode Notes

Guests: 

Matthew Rosenquist, CISO at Eclipz.io

On LinkedIn | https://www.linkedin.com/in/matthewrosenquist/

On Twitter | https://twitter.com/Matt_Rosenquist

On Medium | https://matthew-rosenquist.medium.com/

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

Bugcrowd 👉 https://itspm.ag/itspbgcweb

Devo 👉 https://itspm.ag/itspdvweb

_____________________________

Episode Introduction

Hello, dear listeners! It's Marco Ciappelli of the Redefining Society Podcast, and today we have an episode that promises a whole lot of geeking out, pondering, and some classic banter. The intersection of technology, cybersecurity, and society has never seemed so alive, and I'm excited to navigate it with you.

Now, if you've been here before, you'll remember the brilliant Matthew Rosenquist. Well, he's back, always giving me a hard time, but hey, it’s all in good spirit! We've exchanged a few interesting jabs on LinkedIn recently, and that’s kind of what led us here. It's amazing how online conversations can blossom into profound discussions, isn’t it?

And speaking of which, Sean Martin decided to crash our little party today. Though he claims he’s here just to join Matthew in poking fun at me, I know he's got a treasure trove of insights from his three decades in tech and cybersecurity. So, welcome, Sean! Hope you’re ready to redefine some societal norms with us.

Alright, so what's today's big question? Well, remember how ITSPmagazine started at that unique crossroads of cybersecurity and society? We've expanded since then, encompassing the larger realm of technology. The connections are becoming denser, and the implications? Oh boy, they're growing by the day. And then there’s AI. We can't really sidestep it anymore, can we? It's here, evolving, and redefining our societal landscape.

Matthew had a speaking event recently about cybersecurity and AI. I thought, why not pull him and Sean into a room and unravel this puzzle? Between Matthew's forward-thinking perspectives and Sean's vast experience, we're bound to touch upon some hard-hitting truths.

So today, we're connecting the dots, discussing AI's role in the future of cybersecurity, and diving into its implications for our society. We’ll discuss the tech side, of course, but I’m especially intrigued by its potential for social engineering. I mean, is AI's most significant threat the way it can manipulate us humans because of our inherent gullibility?

There's so much ground to cover. I can’t promise we’ll have all the answers, but hey, it's the journey and discussion that counts. So, buckle up! It's time to redefine, muse, and maybe even challenge some of our preconceived notions. Let's get this conversation started.

Listen, enjoy, think, share, and subscribe to my podcast!

_____________________________

Resources

 

____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Marco Ciappelli: Well, hello everybody. This is Marco Ciappelli on the Redefining Society podcast. And, uh, for those that are watching, you'll see that I have a co-host or a guest. I think it's a guest. I'll go guest today. Yeah, today, guest. It's easier. A lot less pressure, right? I'll take, I'll take the burden here. You'll take it on your shoulders today.
 

And, and the other one is my good friend, Matthew Rosenquist. And he, he's... always welcome on the show, and he gives me a hard time because he loves to do that. So for you listening, here it is. Matthew, how are you doing?

[00:00:37] Matthew Rosenquist: Great. And somebody has to do it, right? Especially with all these mental meanderings, uh, this journey that we always take.
 

[00:00:48] Marco Ciappelli: Yeah. You know, I, I love how we give each other a little bit of a hard time, but we're always there to collaborate. And this actually came up from a post I did on LinkedIn, as often happens. And that's where conversation starts. I think, Sean, you, you probably found so many guests just by jumping on a conversation on LinkedIn.
 

[00:01:07] Sean Martin: I invited myself to this one, not because of the topic. It was to give you a hard time alongside Matthew.  
 

[00:01:13] Marco Ciappelli: All right. Do we have it? Oh, yeah, yeah. I know the topic. So the topic is... As you know, people, uh, listeners, audience, uh, ITSPmagazine was actually born as a, uh, a magazine at the intersection of cybersecurity and society.
 

Then we figured, hey, what the hell? Marco doesn't know much about cybersecurity, but he knows a little bit about technology and society. So we opened up to technology, and then we split it and all of that, but they're all connected, and there is no doubt about it. And I have a feeling that they're even more connected than they used to be.
 

You know, Sean and Matthew may, may have an opinion on it. Um, and AI. Oh God, we can't have a podcast without talking about AI. So today, I reconnected on Redefining Society. I want to reconnect it with cybersecurity. I know that Matthew had a speaking event not too long ago about cybersecurity and AI.
 

So that's what we're going to talk about, but most of all having a good time. So let's start with you, Matthew. Uh, who are you? And why, why are you here? Who invited you? Literally,  
 

[00:02:20] Matthew Rosenquist: Why, why are you... that's a good question. Why am I here, other than to keep you two in check? Uh, I'm a CISO, I'm a cybersecurity strategist, and I have a passion for cybersecurity and technology.
 

And, you know, I love looking at what, what has happened in, in the past where we are right now, right inside the jar looking out, but even more so what's ahead of us. What wonderful things are we going to experience and what horrifying crises are going to come about. That's what really interests me. And that's why I like coming on the show and talking with you gentlemen. 
 

[00:02:59] Marco Ciappelli: I love it. I love it. And we love to talk to you and, uh, and I love to talk to Sean too. So who is Sean?  
 

[00:03:06] Sean Martin: I have to say that your passion is infectious, Matthew, and hopefully some of it rubs off on us. Uh, who, who am I? Uh, I don't know. Doing tech and cyber for 30 years, uh, kind of runs through my blood at this point.
 

And, uh, I'm the host of the Redefining CyberSecurity podcast, where I look at all, all things operationalizing cybersecurity. And to, to Matthew's point, I, I guess, oftentimes on, on my show, I'll end up at a point where I'm asking, can we not just do something differently and get a different outcome than the way we've always been doing it? Which kind of leads to what the future holds as well.
 

So I just launched a new newsletter, uh, The Future of Cybersecurity, where I'm looking at, uh, different scenarios and stories where, perhaps, it'll get people to think about what's possible, not just how do we get out of the hole that we've dug ourselves into. So that's me.
 

[00:04:07] Marco Ciappelli: I love it. I love it. We always connect with stories. 
 

It's always been in our DNA here at ITSPmagazine. And again, story is everything. It's a podcast. It's a book. It's a movie. It's a, it's a presentation. So I'm curious: what is the latest story, not fictional, although, I don't know, maybe you go there too, who knows, uh, that you were told about AI and, and cybersecurity?
 

[00:04:35] Matthew Rosenquist: Well, you said real, and I was thinking, okay, well, what's real, right? If I create something synthetic, is it real? Because I can see it and hear it and, you know, watch it in a movie, uh, you know, a movie may be fictitious, but the movie itself is real. And I think that's where we're kind of at with AI, right? 
 

Generative AI, which is the hotbed right now. It's the hotbed in social media. It's the hotbed whenever people are now talking, you know, in the news. Uh, we've seen upticks, and people joined, uh, you know, ChatGPT faster than they have any other social media in history. So it's reaching down to every person.
 

And it's about creating something synthetic, partially synthetic, you know, in what we can see and hear and maybe believe.
 

[00:05:32] Sean Martin: It's interesting because I, I caught a snippet of, uh, no, don't judge me, um, Fool Us, the Penn and Teller magician show. And, uh, one of the magicians is a ventriloquist, and the whole, the whole point of his act was that he had real props and not-real props.
 

But his point was, they're real in your mind, because he treated these non-real, invisible props just like the real ones that you could actually see and hear and all the other things. And I, I feel that AI is a bit like that. I feel that bad actors, and the threats that they, uh, that they spew on society, are like that: you can't really see them, but you know they're there, even if it's just in your mind, until they become real, right?
 

And, uh, it's bridging that gap and understanding the impact. And then, obviously, what the, uh, what the options are to change the outcome in case it's one you don't want.
 

[00:06:33] Matthew Rosenquist: Yeah, generative AI, it is a powerful tool, and it can add to, change, or warp reality. All of our realities. Such a powerful tool. You know, as I've always said, uh, with the outstanding and miraculous benefits that such powerful tools bring into our lives and enrich our lives comes an equitable amount of risk as well, because those tools can be misused or used specifically against us. And the more we embrace them, the more we trust them, the more believable they are, the more the bad guys really kind of like them. So we have to take the good with the bad. None of us want to get rid of AI, hopefully.
 

Um, we, we all want the benefits. Are we thinking enough about those accompanying risks and mitigating them, so it's not a severe detriment?
 

[00:07:35] Marco Ciappelli: So here's something I'm thinking, and it's been kind of a progression. And of course, we're not on Redefining CyberSecurity, we're on Redefining Society. So I like where you're going with this, because when people think about AI, in general, not general AI, in general.
 

Be careful! Be careful!  
 

[00:07:55] Matthew Rosenquist: I know, I know. Words matter when you're talking about that. What is he talking about now? Terminator versus cartoon flower. Okay.
 

[00:08:05] Marco Ciappelli: So, you think about... technology. You think about computers, software, code, and all of that. But more and more, we talk about ethics, we talk about copyrights, we talk about creativity, we talk about taking jobs away, maybe job transformation.
 

And now you're just saying that all you said, for me, was social engineering, right? So, so is the real risk more the social engineering that AI can do to us as humans, because we're gullible, or is there also the technical element, the pure technical cyber risk?
 

[00:08:48] Matthew Rosenquist: Yes and yes, but all the risks don't come at us at the same time. 
 

Right? They tend to roll forward based on a whole bunch of other underlying gears in the engine of the world, right? It has to do with innovation and adoption and the easiest types of attacks or manipulation, all those kinds of things. So, short term, right now, we're seeing the bad guys use generative AI, um, to enhance phishing attacks in a couple of different ways.
 

Right. You've seen the phishing attacks where, you know, the, the grammar is wrong, or they're trying to use some example that just seems funky, and it's really easy to detect. So from a quality perspective, that's the first thing: you can use Gen AI to sound very intelligent, very businesslike, very professional, and you can use Gen AI to emphasize urgency and, you know, events and all these things.
 

So the quality of phishing goes up. That's number one. Number two is the scalability because Gen AI, you know, you're not paying somebody in a sweatshop to write these right and try and customize them and everything else. You can have Gen AI kick this stuff out. And especially when you start down the road, not right now, but down the road, when you start integrating other databases of, uh, stolen data and profiles, you can customize them because you know exactly what they're talking about for the last two years on their Facebook or LinkedIn or Twitter or whatever, right? 
 

And a tool can automate that. AI can automate and integrate all those aspects automatically, with good grammar and emphasis. Right. The third area, uh, which I found interesting, is around language translation, because the U.S. is a huge target, right? So you need to speak English. What if you want to translate that into 46 different languages and attack targets in 46 different regional areas around the globe?
 

Oh yeah. Hey, I can do that for you. Right. Just like that. So it now expands the total available market for any professional or semi professional phishing organization, right? They sound more professional, they can hit a greater overall TAM and they can, you know, from a scalability, they can do it better and faster. 
 

And that's just what we're seeing now. You can turn the dial forward. And it gets even worse.  
 

[00:11:19] Sean Martin: So I'm, uh, I'm actually putting the final touches on, I don't know when I'll let it fly, but, uh, I have an article in the works where I look at, uh, AI in its current state and the role of a security program within a business. 
 

And then, uh, longer term, 10 years out. And in this story that, uh, that I'm building, AI is the savior, because it has...
 

[00:11:46] Matthew Rosenquist: It absolutely can be.

[00:11:48] Sean Martin: Because, because think about what, what makes AI powerful. It's all the data it has access to, and that's where the human fails in many cases. Um, and in this story, I'm not going to give it all away, but in the story, a, uh, security leader has to connect with peers, and connect with ISACs and ISAOs, with information sharing organizations, and with threat intel feeds, and with their own information. They have to pull all this together. They might use some technology to present a dashboard that gives them a red, green, yellow, or some kind of rating of risk.
 

But it's very hard to capture all that in the heat of the moment when there's an unpatchable zero day attack taking place that's widespread. And understanding the risk and communicating it in a way that can be understood by yourself, by your, your team, by your peers, by the executive leadership team, by society, right? 
 

Shareholders. One, understanding it; two, communicating it; and then three, back to your third point, perhaps in different languages.
 

[00:13:04] Marco Ciappelli: But also you mentioned translating the story according to the way that can be more effective. I love that part a lot.  
 

[00:13:14] Matthew Rosenquist: So here's a perfect example of that, right? We can do this even right now with generative AI, because again, phishing is one thing.
 

There's a whole bunch of other different types of attacks. Right now, we've shown that you can use generative AI to basically scan for vulnerabilities, right? You can look for vulnerabilities in code, you can point it at a web page or a server farm or whatever and have it do the analysis, identify things. It's taking known vulnerabilities and mapping them to the environment, and it can even run tests and then come back.
 

And you can say something like, hey, give me the top 10 vulnerabilities on my server farm. And it comes back. Well, okay, so bad guys can use it to say, hey, what are the top 10 vulnerabilities in Sean's server farm? Well, that's, that's not good. Because again, it's doing it very, very fast. And AI is designed to give you the highest confidence results.
 

It's really an optimization schema when you're talking about machine learning, deep learning, right? So it's finding best path optimization, uh, curves and, and all sorts of things. So great, it's gonna find the absolute worst vulnerabilities for Sean. I'm the attacker. I now have what I need, but it doesn't end there because you can also use generative AI and related tools to then conduct the attacks. 
 

Automatically. And when Sean blocks one, oh, let me go to the next one. And now I'm going to use this vulnerability, right? So it creates automation. But on the other hand, as Sean was saying, AI potentially could be our savior. In fact, it will have to be added to the mix. Because Sean, on the other hand, is also using generative AI, maybe even the same code base, and saying, hey, what are my vulnerabilities? 
 

Okay, now I need to go close those before Matthew comes in and attacks. Oh no, Matthew attacked. Hey AI, start an investigation, do the forensics, produce me reports and metrics so I know exactly what's going on and what I should be doing to counter his attack. So now it's... Sean's AI playing active defense, and Matthew's generative AI playing active offense. 
 

Doing it at speed and scale that a regular security operations center wouldn't be able to handle.  
 

[00:15:44] Marco Ciappelli: Wow. When is this movie coming out?  
 

[00:15:49] Sean Martin: Where this connects to society, I mean, maybe some of your listeners, Marco... yeah, they may not completely understand what phishing is. They probably don't understand...

[00:15:59] Marco Ciappelli: They know what phishing is. They do.
 

[00:16:01] Sean Martin: Yeah. If they listen to your show, certainly. Uh, they may not know what a security program is within an organization, but where, where they will recognize this can impact them is in a healthcare setting, or in a banking environment, or, uh, the industrial control systems that control the grid and the flow of, uh, electricity, and the distribution for, uh, for transportation and things like that.
 

Even the transportation networks, all those things hit us directly in society. So if you, if you can picture your favorite environment, whether it's pulling money out of the bank or, uh, or taking a flight somewhere, if you picture two AI systems battling it out, um, and, and hopefully one side saving it from the other side destroying it, uh, that, that's kind of what the societal impact of this is.
 

[00:16:56] Marco Ciappelli: And maybe, maybe the visual is two huge Transformer robots fighting one against the other. Rock'em sock'em AI. Which isn't, you know... it's a classic example of, kind of, you know, adversarial AI, except you can use it to be the critic and the painter, painting some artificial digital art.
 

In this case, it's not for leisure. It's not a nice Van Gogh flower coming out. So, yeah, it can affect society. How deep is AI in our society? Does using AI bring more vulnerability? I mean, does it open more doors, more access?
 

[00:17:41] Matthew Rosenquist: I think it does, right? I think it is the stellar winning tool for disinformation and misinformation campaigns. 
 

When you can use a deepfake and make the current president say something not so nice, or your political favorite, you know, favorite, do something, um, or a company you don't like, or a boss you don't like, either in video, audio, email, right, or all of the above. You know, you don't like your current boss because he's a tyrant.
 

All right, let me create a virtual version of him with Gen AI. Have him join, right, the board meeting or the department meeting and cuss somebody out while being belligerent, right. Or speak badly about the founders or something like that. Ha ha ha. Right. So it isn't just, okay, random spam anymore. Uh, and I'm going to spoof Sean's name.
 

I'm gonna spoof Sean's name, but I'm also going to write that email the way Sean writes the emails. Right? The way he addresses people. His tone and character within that email. And people will recognize that, right? We have certain tells when we talk. We have the same thing the way we look and the same thing when we're, um, you know, in written conversation. 
 

And all that can be duplicated. You can go to an AI, a Gen AI system, and say, write me three chapters in the style of Sean Martin about cybersecurity. And you'll get crap. It's a lot of X's and O's. A lot of red lines. Very Sean Martin-ish. So, you know, it isn't just phishing emails, it's impersonation, right?
 

And it can go even beyond that. Um, you know, I was watching some, some interesting people on YouTube and they were actually showing how to create a synthetic identity to be an influencer on Instagram and things like that, right? Follow these 10 steps, you create your own influencer and it is somebody that does not exist in real life. 
 

But you can see them in their picture. You can see them in videos doing things around the house or out in the park. You can hear them. Right? A synthetic voice, not someone's voice, it is completely generated and new, but a beautiful sounding voice and they're talking and the lips are matching what they're saying, very believable. 
 

You can put them in awkward situations, or provocative, tantalizing, you know, uh, kind of situations, to be able to gain more followers. What harm could be done with that, right? Mmm. Or maybe you just want to copy, and you make a synthetic whomever your favorite social personality is, and again, put them in those inappropriate, awkward, you know, funny situations.
 

Are you going to get more attention? Can you broadcast and reinforce messages? Maybe. Probably. Can you, can you sell to people better? Oh, yes. Right now we've got armies of marketing people trying to get wording right and, and campaigns perfect, and it can take weeks to months. Some of these cool tools can do it in a matter of seconds.
 

Which kind of gets back to what you were saying. There might be some job loss, but that's a different topic. Cause I think there's actually going to be job net gain. But anyway.  
 

[00:21:22] Marco Ciappelli: I just recorded and published, not too long ago, an episode about this app. I talked to the CEO. It just creates a fake background for you.
 

It's a photo app, but it's not just photos. I mean, it's like, here is me in front of the Tour Eiffel, when you've never been to Paris. So the whole concept was also how it can be used by modeling agencies, advertising agencies. It costs nothing once you have created the engine, and there is no cost of flying somebody on location.
 

There is no studio, there is nothing. And it's like, you know... so here's probably another industry that is quite worried about it. And another one that is like, oh, cool, more money.
 

[00:22:09] Sean Martin: Yep. And I see not just one industry, industries. Oh yeah. I mean, the model can be digital, the background can be digital.
 

[00:22:19] Matthew Rosenquist: Your favorite actor can remain the same age for the 20 movies they're going to do for you. 
 

In fact, they could be deceased and you could still be doing movies with them because you own the intellectual property rights of their image and voice and so forth. Mission Impossible 97! Still doing his own stunts. Yeah, still doing his own stunts!  
 

[00:22:46] Sean Martin: Well, this, this, Marco, this, this brings up kind of, where it gets much more exciting to me than, than a security program. 
 

Granted, I like technology and I like project management and... so that world is, is a geeky world for me, and I love it. But mind-blowing is, is this world that we're talking about, where there are so many moving parts, so many people involved, people trying to get money, people getting screwed out of money. And in there is a line of ethics, and societal rules, and maybe not as many laws as there should be. And, I don't know, I just, I think, we talk about it often on the, on the magazine, that, uh, technology kind of outpaces regulation, for sure. And I think technology now is outpacing people's view of, of what's right and wrong. Um, it may not seem wrong, the little thing they're doing, and certainly there are the bad actors, but a lot of people might experiment with AI in whatever form, because they can, and not realize that it might be doing harm.
 

[00:23:55] Matthew Rosenquist: It's free, cheap, and easy.  
 

[00:23:58] Marco Ciappelli: It lowers the entry level for even someone who wants to be a criminal. I mean, I mean, we were having this conversation in cybersecurity years ago you can rent a farm for a dose. Attack or you can rent whatever robots and boats and so the key now it comes when we talk about this kind of changes is that especially the new generation, uh, they don't care anymore about that experience. 
 

And I'm getting somewhere with security here. So the, the line between what is real and not real, or real but in a different dimension, it's, you know, it's, it's all there. And I feel like the opportunity to steal money in the metaverse, or to attack someone, uh, yeah, I mean, you talk about impersonation.
 

What if you can get into impersonating the, the, the doctor at the pharmacy, or the prescription, and it's a targeted, real attack. And we didn't even mention war, real war games, you know. So I don't know if you want to go there. Do you want to go, like, dark and doom and gloom?
 

[00:25:11] Matthew Rosenquist: Oh, we can go dark. That's my happy place. 
 

Right? Oh, definitely. You know, even right now we're seeing, you know, these gen AI, generative AI systems create code. Think about that. And that's one of the worries: oh, well, there won't be as many software coders, and jobs will go down. That's not going to happen. But just right now, without the addition of Gen AI, by 2025 it's estimated that there will be over 300 billion lines of code that we have to secure. Yeah, that's impossible. It's impossible even at the much smaller number we have now, and now you're going to have Gen AI create even more code, right? Think about it. Right now you have to go to, um, some vendor and say, hey, I want a word processor.
 

So you have to buy their code. Maybe in the future, you just go to your computer and say, create a word processor for me. I got to write an email or create a word processor for this, or just create the email and it writes the code for whatever you need, does the job and deletes it. Right. And it's just using best practices and grabbing other code snippets. 
 

And when you ask it later on in the day, it'll create another piece of software, but with even better information, better code, because it's pulling it together later in the day, and all these other AIs are finding best practices and sharing them. It's, it's on demand. If you wanted to get a picture of Marco, right, in front of the Statue of Liberty, you would have to coordinate things, right?
 

You would have to get him off the no-fly list. You would have to, you know, get an airline ticket, and, you know, uh, an airline willing to fly him over there, and, and it would take a while to get that one shot. Or you can go to ChatGPT or any of the others, you know, Leonardo AI or whatever, and say, generate a photographic 8K image of Marco in front of the Eiffel Tower, right, and you'll get it.
 

And they go, no, no, no, no, no, at night. No, with less clouds. Yeah, more crowd. More everyone staring and pointing at him. Okay, great, now I can do things that I can't even do in real life. Have a hundred thousand people pointing water balloons at him while standing upside down on the top of, you know, the Eiffel Tower. 
 

Okay. It can do it. And it can do it like that, you know, put them in a dress. Guinness Book of World Records. Yeah. So, you know, it's instantaneous. And where does that put us, right, from trust in digital services and products and our life out there?  
 

[00:28:00] Marco Ciappelli: God, do you remember when we used to say, photo or it didn't happen?

That doesn't hold anymore. It's a photo, and maybe it didn't happen.
 

[00:28:12] Matthew Rosenquist: Yeah, you know, so, you know, we're, we're even, I, I'm worried about it because you talked about, hey, you can use AI to generate your, you know, a fake background of who you are. What about creating an entirely synthetic identity? 
 

Entirely synthetic: the name, the look, the voice, the education history, the background, the accent, and then also generate a bio, a synthetic iris scan, fingerprints, things of that sort. Okay. Let's see. I could use that potentially to apply for jobs. I could harvest that with, with, um, you know, dark web information about people, and file for credit cards and loans and everything else.
 

I could even be an online personality, right? Completely synthetic.  
 

[00:29:04] Sean Martin: Let me, let me ask you this. Do you feel... because we had some conversations around encryption and quantum. Quantum-ready, quantum-safe encryption, quantum-resistant. There we go. Uh, and during the conversation, the point was made that it can't be broken today, but at some point it will be. Yes. And the point was made that there are bad actors collecting all the information now. They're collecting the keys that they can get a hold of now, and sure, the data might be five years old at the time quantum breaks the key, or makes the key unusable, invalid, whatever, and now they have access to all this information nobody thought was an issue, because it was protected at the time.
 

So with, with that in mind, because I believe that's a case that's real, um, with that in mind, do you feel, because you mentioned iris scans and fingerprints, do you feel that there's a, I don't want to call it a market, but a movement perhaps, where this type of information is also being captured in a way that can then be...
 

[00:30:28] Matthew Rosenquist: Not, not for synthetic yet. So this is one of those attacks...

[00:30:32] Sean Martin: Down the road. Printing a new thumb of mine.

[00:30:36] Matthew Rosenquist: Well, you know. As trust degrades more and more, right? We're seeing social media have all sorts of bots, fake accounts, things of that sort. We hear about companies, um, that are, you know, going through and hiring temps or people remotely.
 

And the person that interviewed is not the person they actually hired, right? So there's all sorts of misdirection and fraud and things of that sort. Let's take, for example, dating sites, your dating app. What do you think Gen AI can do? Highly automated, creating 10,000 new identities a minute and flooding dating sites.
 

That's kind of a problem. And as we get further down the road, we want more confidence: are we talking to a real person, right? Biometrics was one of those things. It used to be passwords, but passwords are weak, and unfortunately they can get hacked. So then people wanted to move to biometrics.
 

Well, okay. Sometimes they're tracking where your mouse is for a CAPTCHA, right? The new versions of CAPTCHA. It's not whether you can actually tell which is the fire hydrant. They're actually watching your mouse movements, and they're looking at your previous browser history, right? Things of that sort.
 

They don't care if you can pick out a fire hydrant. They really don't. You can pick the wrong stuff. But eventually we will want to get more tangible and more real, and this type of technology is already a couple of steps ahead of that. And it's just about trust.
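A toy version of the behavioral signal Matthew mentions: this heuristic is invented purely for illustration and is not how reCAPTCHA or any real vendor scores users, but it shows why trajectory shape can separate scripts from people. A script that drags the cursor in a perfectly straight line has a straightness ratio of 1.0; a wobbly human path scores lower.

```python
import math

def straightness(points):
    """Net displacement divided by total path length for a cursor trace.

    A perfectly straight, robotic move scores 1.0; jittery human-like
    paths score lower. An invented toy signal, not a vendor's algorithm.
    """
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    if path == 0:
        return 1.0
    return math.dist(points[0], points[-1]) / path

# A script gliding straight to the checkbox vs. a meandering human.
robot = [(0, 0), (25, 25), (50, 50), (75, 75), (100, 100)]
human = [(0, 0), (30, 12), (45, 60), (70, 55), (100, 100)]

assert straightness(robot) > 0.999
assert straightness(human) < 0.95
```

Real systems blend many such signals (timing, history, device traits), which is exactly why the fire-hydrant answer itself barely matters.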
 

[00:32:22] Sean Martin: So people sitting at a computer may not realize it. 
 

[00:32:28] Matthew Rosenquist: I mean, that the person they're flirting with online is one of 100,000 that were created yesterday. And...
 

[00:32:36] Sean Martin: Yeah, you might see stories in the news. You might see examples on TikTok or Instagram or something. But what I want to connect to is, for example, an electric vehicle that has a lot of advanced technologies in it. It's watching everything around it.
 

It's paying attention to itself, monitoring the person driving, right? So it's looking at eyes and movement in relation to its surroundings, and all that, to make decisions. It's using data, it's using AI, I presume, under the hood, to make decisions: to protect the driver if they actually have their hands on the wheel, protect them if they don't, protect them if it's in autonomous mode. And it's showing the outside world to the person sitting on the inside, so you can see the trucks and the cars and the cones and the road lines, and it knows when the lights turn green versus red, and all that stuff.
 

It's all powerful stuff. Now add to that the ability to create fake versions of that. It's collecting all that data, so it could easily do it, right? To me, that's a real-life scenario where AI is heavily involved in what we're doing, and it's not fun and games, I'm-going-to-create-a-movie of somebody else. This is a real-life movie: driving down the road or the highway, looking at everything around me, and looking at me.
 

[00:34:19] Matthew Rosenquist: Yeah. You know, when we get into those kinds of deep learning environments as well, it's like matter and antimatter, right?
 

You've got adversarial learning that can unwind some of these things. And the classic example: you have your car, right? It's driving down the road and it's scanning the environment, and one of the things it scans is a speed limit sign. And that sign says 25 miles an hour. Well, it's got to recognize that it's a speed sign and that it's legitimate, and then it has to read that speed sign.
 

And it isn't simply matching it, right? It isn't exactly optical character recognition, because it could be foggy, somebody may have put graffiti on it, somebody took a shotgun and blew part of it away, whatever. And so it's using AI, which is great, because in poor, suboptimal conditions it'll still work.
 

But you can use adversarial AI to modify that. In fact, some colleagues of mine back at Intel Labs were able to create a little sticker. It wasn't very big. Slap it on that sign, and instead of the car recognizing it as a speed limit sign that says 25, it read it as a speed limit sign that says 125 miles an hour, right?
 

And it was just a small little sticker. It wasn't somebody painting a 1 in front or anything like that. It was a small little sticker.
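For readers who want the underlying mechanics, here is a deliberately tiny sketch of how such adversarial perturbations are computed. It uses a toy linear classifier and an FGSM-style gradient step; it is not the Intel Labs patch attack or a real traffic-sign network, just the same principle in miniature: a small, structured nudge in the gradient direction flips the label.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)        # weights of a toy linear "sign reader"
x = -0.05 * np.sign(w)         # an input the model labels "25 mph"

def predict(v):
    # Positive score reads "125 mph", otherwise "25 mph".
    return "125" if w @ v > 0 else "25"

# FGSM-style step: move each input component by epsilon in the direction
# that increases the score (for a linear model the gradient is just w).
epsilon = 0.2
x_adv = x + epsilon * np.sign(w)

print(predict(x))      # "25"
print(predict(x_adv))  # "125", despite a per-component change of only 0.2
```

In the physical sticker and t-shirt attacks, the same kind of optimization is constrained to a printable patch, which is what lets it fool cameras out in the real world.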
 

[00:35:54] Marco Ciappelli: And that was a physical... Yes. ...thing. Like, now you could probably do that by, you know, making your AI hallucinate. Well, yes.
 

[00:36:06] Matthew Rosenquist: They did that at a live conference. There was a camera set up, and the camera was watching people walk by, and it would categorize them: male or female, this or that, right?
 

A cart would get pushed by and it would identify the person in the... Some colleagues of mine made up some shirts with, again, this random pattern on them. It looked like some art vomit, right? And they walk past the camera, and it identifies them as an elephant. Right? And it's just a t-shirt.
 

They're not wearing a costume or anything, but the t-shirt was able to mess with the algorithms, because it's adversarial AI: it's looking at the AI, reverse engineering it, and then mucking with it, right? And just this one 12-by-12 little pattern was enough to undermine this highly intelligent, very well trained artificial intelligence system.
 

It could be used anywhere.
 

[00:37:18] Sean Martin: That world needed an elephant, clearly.  
 

[00:37:20] Matthew Rosenquist: Yes! But you could make it think anything, right? Anything within certain parameters of what it scans for and other things.  
 

[00:37:29] Marco Ciappelli: You know, it'd be funny if we did one on the dating side: if you make an artificial intelligence be the one that dates another artificial intelligence, and see what the hell happens there.
 

That would be fun. Pretty crazy. I want to finish with a question that I want you both to answer, which is: are we ready for this? Business-wise, Sean, I know you talk to a lot of cybersecurity folks on the business side, where they fork over the money, give the budget. I mean, if you could tell them the story that you just told now, and Matthew, this is for you too.
 

They're going to be like, "Yeah, I'll let my company in 20 years worry about that." Is it still perceived as not real? What's your take, Sean?
 

[00:38:23] Sean Martin: I'm optimistic that we end up in a place where most of us survive. I think I can say that generally for both society and business. Leaning more into the business end of things.
 

I think it's going to get worse before it gets better. And it's not going to be the biggest breach ever that triggers a change. I don't know what the change is; I'll have to think about it now that you ask the question. But there will be something that triggers a reevaluation of the ways in which we
 

manage risk and our cyber controls, and maybe even those two words change completely. I'm just throwing it out there; that may be the problem, I don't know. But the point is: I think it gets worse, there's going to be a trigger, we reevaluate and reset. And with that, we end up in a place where we can hold our own against the crap that we created that's fighting us.
 

[00:39:29] Matthew Rosenquist: So I agree with everything Sean said. I'll add a different perspective to it. It is normal in our industry that the attacker uses innovation first. They use it for bad things in new, innovative, unknown, unforeseen ways. They get to go for a little while, but as soon as we feel pain, we respond in security.
 

That means we give it the money and the attention: hey, go fix this, right? I feel pain. It goes back to the first axiom of cybersecurity: cybersecurity is not relevant until it fails. The moment innovation is used to make it fail, okay, it's now relevant. We're now going to try and fix that. And that's where the arms race comes in.
 

To answer your question more directly: are we ready for it? I would say yes, tentatively, knowing there are going to be incidents, knowing we're going to fail and have to respond and counter. But the one thing that I'm talking about with companies, with CEOs and boards, specific to Gen AI, is that most of the cyber attacks we're going to see are, for the most part, just a new twist or improvement or optimization on what we already kind of know.
 

We already suffer from phishing, right? In email and text, vishing, and all these other things. And even once in a while we get somebody impersonating somebody on a video call, right? We're going to see more of that. It's going to be more efficient, but it's not too, too surprising, right?
 

There may be some things down the road, but the bigger issue right now for these companies, the one they're not looking at because all that is sexy (let's talk about deepfakes and impersonating the CEO and business email compromise, yeah, that's sexy), is the non-sexy side of this. And it's really about those companies that are embracing and rushing as fast as they can to get their version of generative AI
 

out into their products and their services, to have their customers exposed to it, because it's cost effective, it's new, it creates buzz, it'll generate revenue, things of that sort. All great reasons. But anytime you push forth untested, untried, not fully baked technology, connect it back to your very sensitive data stores and systems and everything else, and try to push it out there as fast as possible, you're going to create vulnerabilities.
 

And those are the vulnerabilities that, when exploited, are going to go into your backend systems, harvest your data, undermine your availability, insert malware, run digital extortion on you. Why? Because you were too focused on "we have to get this out there."
 

"And I'm more concerned about the accuracy of my AI system than... what's that word? Oh yeah, the security of it. No, no, no. I want uptime and I want accuracy. This other thing you're talking about, security, we'll fix it later." Right? And again, that's a story that has repeated over the last 30 years.
 

That, I think, is the bigger risk than these social media attacks and all the sexy stuff we want to talk about.  
 

[00:43:05] Sean Martin: And, oh, by the way, one bad cyber event and... your quick-to-market, uptime, and accuracy could all go out the window.
 

[00:43:14] Matthew Rosenquist: Yeah, yeah.  
 

[00:43:16] Sean Martin: Each one of those three can be compromised.  
 

[00:43:19] Marco Ciappelli: And I think this is a great end for a show that is about redefining society. I mean, we need to relearn how to be a society, now that we don't even know what is real or unreal, and whether it really matters in the end. We don't know. But rushing toward things, and I get to use my favorite phrase here, blinking lights and funny noises, just because they're cool may not be the good way to do it.
 

So with this... Can I make one more comment before you close, Marco? Yeah, of course. One more thing.  
 

[00:43:55] Sean Martin: One more thing. I know it's not my show, but I'm doing it. The magic show that I saw has a message in it, which I think is important. The show had three coins that were real and three coins that were not.
 

And the whole trick was about multiplying the number of coins one was able to create. So the message was: with greed, in this particular trick, he lost all the coins and ended up with none. So the point I want to make is: don't be greedy. Don't be greedy. Think about what you're doing. It goes back to the ethics.
 

Anyway, sorry, Marco.
 

[00:44:39] Marco Ciappelli: Oh, no, that's another thought we can put in people's heads, so I'm all about that. We never give you an answer; we just give you more questions to think about. And I think that makes a successful episode. So I want to thank fake Sean and fake Matthew. Their AI was here today.
 

This was not them at all. And honestly, I hope you got something good out of this. Yeah, it was a little dark, but I think we're positive people in the end. I agree. Cool. Stay tuned, everybody. Subscribe. And I know that Matthew will come back, because we're near the end of the year and we love his predictions on the future.
 

Not that this wasn't already a lot of it, but, you know, maybe for next year instead of...  
 

[00:45:27] Sean Martin: Maybe he's holding back. He has a good...
 


 

[00:45:29] Marco Ciappelli: I know. I think he knows things that we don't know. And...
 

[00:45:32] Matthew Rosenquist: I've got a crystal ball. You can't see it, but other than the human sacrifice that's necessary to get it to work, it's great.
 

[00:45:43] Marco Ciappelli: Oh man, time to turn off the crystal ball, hit the off button. Thank you so much, and I'll catch you later, everybody. Take care. Bye bye.