Redefining Society and Technology Podcast

Keeping Up With Technology and Societal Impacts of Generative AI | A Conversation with Justin "Hutch" Hutchens | Redefining Society with Marco Ciappelli

Episode Summary

In a candid conversation with Justin "Hutch" Hutchens, Marco Ciappelli delves into the rapid advancements of Generative AI, its societal repercussions, and the challenges of keeping pace with its evolution.

Episode Notes

Guest: Justin "Hutch" Hutchens, Host of Cyber Cognition Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/hutch

____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

Bugcrowd 👉 https://itspm.ag/itspbgcweb

Devo 👉 https://itspm.ag/itspdvweb

_____________________________

Episode Introduction

Greetings to my ever-curious audience. It's Marco Ciappelli from the Redefining Society Podcast. You know, there's this buzzing feeling I get every time we're on the verge of another technological evolution; it's a mix of thrill and trepidation. And right now, it’s all about Generative AI.

In today's world, keeping up with the pace of societal change is like trying to paint a masterpiece on a canvas that's constantly shifting shapes. Just when I feel I've grasped the essence of a topic, it evolves, leaving me to start all over. It's as if our societal canvas has turned into this dynamic, perpetually morphing entity. But isn't that what makes our era so exciting?

In my recent podcast episode, I had the pleasure of connecting with Justin "Hutch" Hutchens, a prominent figure who's been deep in the trenches of AI, especially at its intersection with risk and cybersecurity. Hutch, as he's fondly known, has an intriguing perspective on where AI is heading and the potential societal implications, particularly the rapid advancements in Generative AI.

Our conversation touched on everything from the pace of AI innovations to the potential societal impacts and the looming regulatory challenges. One of the most striking revelations was the sheer speed at which AI is advancing. If you think we've seen rapid evolution so far, brace yourself. The horizon promises even faster, more profound shifts. With companies competing to outdo each other in AI capabilities, where does that leave us, the people and societies that will live with the outcomes?

The challenge, as we discerned, is not just about the technology but also about the ethical, philosophical, and societal considerations. While we can't predict the future with certainty, discussions like these are crucial. They allow us to ponder, prepare, and perhaps steer the direction in which we're headed.

Join me in this riveting exploration as we dive deep into the societal impacts of Generative AI.

_____________________________


To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Marco Ciappelli: Hello, this is Marco Ciappelli on the Redefining Society podcast, and I gotta say, I'm struggling with redefining society right now. It's changing too fast. Every time I start redefining, I feel like, okay, I'm getting somewhere, and then all of a sudden it's like, whoops, let's get the eraser and make a change on that line.
 

So, luckily, I get to talk on Redefining Society with a lot of people who inspire me. A lot of people who, in this case, also have a podcast on ITSPmagazine: Cyber Cognition. And that's what they think about. They think about what's going on in our society, and in particular, yeah, generative AI, which really seems to be changing a lot of fields.

Some people are scared of it, some people are excited about it. Maybe the truth is in the middle; I don't know, I usually go there. But I wanted to catch up with Justin Hutchens, who is here on the show. If you're watching the video, he's right there. If you're listening, I'm not lying.
 

He's right here. Justin, welcome to the show. Thanks, Marco. I appreciate you having me on. Yeah, I'm glad that you jumped on as soon as I asked you, because we don't really need to prepare for this; there is no Q&A. Well, there's never a Q&A on my show anyway. But in this case, I just wanna pick your brain and see what you think. We made a joke before we started recording that if we were having this conversation two months ago, even two weeks ago, it would've been completely different.
 

The river is moving fast.  
 

[00:01:39] Justin "Hutch" Hutchens: Yeah, it's crazy how fast things are moving in AI.
 

[00:01:43] Marco Ciappelli: Okay, so let's start with a little bit about yourself, just a quick bio in case people haven't heard your show: who you are, and why I'm actually excited to talk to you about AI.
 

[00:01:55] Justin "Hutch" Hutchens: Yeah, awesome. My name's Justin Hutchens; I usually go by Hutch. I have had a long-time interest in artificial intelligence, and specifically the intersection between artificial intelligence and risk, including cybersecurity. I am the author of the soon-to-be-released book The Language of Deception: Weaponizing Next Generation AI, which looks at multiple facets of the ways that adversarial threat actors could misuse emerging artificial intelligence.

I'm also the host of the ITSPmagazine podcast Cyber Cognition, which looks at artificial intelligence mostly from a risk perspective, but in general at the ways AI is increasingly becoming entangled with ideas of philosophy and the ways we understand culture and the world around us.
 

[00:02:52] Marco Ciappelli: And everything has come together. I think we talked about this already when we were announcing your show and we had a conversation. We met because you were giving a talk at one of the big conferences about this. And you know, we talk about cybersecurity; you talk about philosophy, ethics, things happening in the world of education, the world of jobs, anything.

I mean, healthcare. And there are people who are against it and don't want it, and people who go blindly into it. So it touches on, I don't want to say everything, but a good portion of our life. So how do you track all of that?
 

[00:03:35] Justin "Hutch" Hutchens: So, it is tough. Even working in the field... I work for a company called Trace3, and I lead research and development.
 

So of course, like any R&D professional right now, a big part of my focus is AI, and even with that being my nine-to-five job, it's tough keeping up with everything that's going on. So I can't imagine, for somebody who has a casual interest, trying to keep up with just how many things are changing in this industry in a given week.
 

Personally, I am consistently looking at new white papers. There is another podcast called Last Week in AI; actually, my last guest on Cyber Cognition, Jeremie Harris, is one of the co-hosts of that podcast. And they do really well to hit just a little bit of depth, but a ton of breadth, as far as everything that happens week to week.
 

And he was telling me on the podcast how they spend over five hours for every single episode just preparing, because of how much is coming in. So it's a challenge. You really have to make a deliberate effort to even try to keep up with everything going on.
 

[00:04:48] Marco Ciappelli: Oh my god, just the name: Last Week in AI. Years ago, we would have said last month, and maybe even that was already a stretch, right?
 

[00:04:59] Justin "Hutch" Hutchens: What's funny is that this is a long-running podcast that far precedes ChatGPT, and he was telling me how it's transformed from a small amount of preparation time to, just in recent months, an insane amount of time they have to put into it.
 

So yeah, it's definitely apparent how fast the acceleration of change is picking up.
 

[00:05:22] Marco Ciappelli: So a lot of people think that too fast is the problem. I'm probably one of them, if I put on my sociology hat; maybe not from a philosophical perspective, but from a societal perspective, yes.

I mean, regulation: we know it's always behind technology. So I have a feeling that's the real hurdle right there. We can't regulate. It's too fast.
 

[00:05:53] Justin "Hutch" Hutchens: Yeah, I think that is one of the significant challenges: regulation is always going to lag behind innovation. I think for any industry, that is true to some degree.
 

I think probably more so in artificial intelligence. But for a lot of industries where that is the case, instead of trying to regulate the specific details, you generally see regulation set up some kind of agency that is put in charge of managing oversight of a particular area. And hopefully we do get to a point where there is some kind of regulation around this, but if we get to a point where something is successful, I think it's going to have to be some kind of model like that.
 

There's no way that they can legislate down to the specific details, because by the time anything gets passed, all of the technical details are going to be irrelevant, because of how fast this is moving. But I do think the topic of regulation is an increasingly important one, because, to your point, we're almost moving too fast. Yes, we're enabling business in ways that we've never seen before, but I think there's really risk in two different areas.
 

One is the implementation risk: if you're an organization and you're just trying to keep up with everybody else that's mainlining these new technologies into their operational workflows, you're likely to do the same without the appropriate safeguards, and that could introduce tremendous risk to your organization. But there's also the adversarial risk, which is the topic of the book that I'm going to be putting out in the near future: the fact that as we introduce these new capabilities, threat actors are as fast, if not faster, picking up these same capabilities and using them for nefarious purposes.

And what's crazy is we're already at a point where we look at something like GPT-4, which is already so extremely powerful. There have even been white papers that have adapted some of the leading IQ tests to evaluate its levels of logical reasoning, and we're seeing it already perform in the top 99th percentile of humans. And now Google is already talking about their Gemini model, which is likely going to come out at some point this year, and which is supposedly five times the computational power of what we saw with GPT-4. And we're talking about, in...
 

[00:08:20] Marco Ciappelli: In a matter of... sorry, did I get that right? Five times?
 

[00:08:23] Justin "Hutch" Hutchens: Five times the computational power of GPT-4, yes.
 

Yeah, and it's hard to even fathom at that point, because with these large language models we have this idea of emergent properties: as they continue to scale, as we continue to make them larger and larger, with bigger neural networks supporting them, more layers, and ultimately more parameters, which is roughly how we measure the computational power of LLMs these days, they start becoming able to do things that they previously weren't able to. With some of the small, initial large language models, we saw just a basic ability to maybe answer questions and do some autocomplete. But as we've continued to scale these up, we've seen them naturally develop new capabilities, from logical reasoning, to code completion, to translating from one language to another: things they were never deliberately trained for.
 

And so there's this immediate question: we already have these profound capabilities with something like GPT-4. We do that 5X, and what new capabilities are going to be unlocked that we can't even foresee, that are suddenly going to be available to everyone in the general public? So I think there is a tremendous amount of risk that we need to start considering as, essentially, these tech firms compete against each other to make bigger and bigger models.
 

[00:09:48] Marco Ciappelli: Yeah, so I think I have two points. I'm going to start with the reason why we do it like this. Regulation is not going to catch up anytime soon. We could self-regulate? Not going to happen. And why it's not going to happen is because, from a business perspective, when you have a market... and I'm going to give you an example of something I experienced myself. You may want to hold off, from an ethics perspective: test it, be sure, definitely cybersecurity, but even ethically speaking. But then a first player comes in.

Take Descript, for example, which lets you edit a podcast as language instead of on a timeline: instead of looking at the sound wave, you actually remove a word and it's gone; you don't need to cut it. And all of a sudden you're like, wow, this is amazing. And then within a month, Adobe has that neural engine in Photoshop and Premiere and all the tools that you use. And then there's Canva, where you can do the same thing. Midjourney, DALL·E 3. And then all of a sudden it's like, yeah, just put your prompt in Photoshop and it's going to generate whatever you want. So the point is, if you don't jump on the train, the train is gone, and somebody...
 

[00:11:18] Justin "Hutch" Hutchens: And I think you're absolutely right. It's almost disadvantageous for American businesses for us to implement regulation. And then I think it also begs the question of whether it would even be effective, because with multinational companies you always have the issue of what is commonly referred to as regulatory arbitrage.

Basically, if there are very stiff regulations in the country where I'm wanting to do business, but I want to continue to do business, I just move my operations elsewhere, where I can get away with it. We see that commonly in finance, where people move to the Caymans in order to do their shady financial business. And essentially we could expect to see the same thing with artificial intelligence: if we implement strict regulations, people will move to where they can continue to do operations.
 

Yeah.  
 

[00:12:09] Marco Ciappelli: So do you think there is... because that's a big question for me. And I agree with you; in any industry it's like, okay, if I can't do it here, I'm going to do it somewhere else. But then the government is going to say, well, no, you can stay here, because we want your money; we want you to pay taxes and hire people.

So it becomes an economic decision. But as humans, can we think about it a little bit, or are we just doomed to follow this?
 

[00:12:46] Justin "Hutch" Hutchens: It's a tough question. Obviously, there are significant challenges with effectively regulating, and I think those have to be taken into consideration with any potential regulation that we try to put forward.
 

But I think what it really gets to is the importance of global partnerships, and even that has significant challenges related to it: getting everybody on the same page, especially when you have generally adversarial views of one another as pertains to other things. So it's a really challenging problem to solve.
 

And unfortunately, I think there's no shortage of problems and potential risk here, but the solutions, while there are some out there, tend to lean towards the idealistic, and in some ways are almost unrealistic, because of the challenges you have to overcome in order to even implement them.
 

Now, I'm by no means suggesting that we shouldn't try, but I do think there are significant challenges ahead in regard to putting the guardrails on this. And what I talk to people about frequently is that I find it fascinating: if you go back 10, 20 years and look at the science fiction, or at the writing from any of the main futurists, there was always this perspective that advanced artificial intelligence was going to break out of its guardrails, break out of its sandbox, and take over the world.
 

And what's fascinating is that, in truth, there never were any guardrails. There never was a sandbox. The moment we had anything that was even remotely comparable to human intelligence, we immediately put it out on the internet, started connecting it with everything, and gave it agency and autonomy to actually take actions without human intervention.
 

So it's fascinating to see that this idea that it would overtake those controls never really was even an obstacle for AI in the first place.
 

[00:14:55] Marco Ciappelli: Well, once you connect it... I remember reading, a couple of years ago, three years ago, Max Tegmark's book, Life 3.0. And there are all these scenarios, you know, like Nick Bostrom's.

And the funny thing is that the problem always happened when you take the hardware and connect it to the internet. In some scenarios it's like, okay, let's put it between four walls, there's no cable, one thing and another; but then in some scenarios it manipulates people into helping it escape.

And you're like, well, we're not that stupid. Well, the truth is, we didn't need any of that; we already put it on the internet, so I don't even need to worry about that. So my other point that I wanted to ask you about: five times the power, we don't know exactly what to expect, and it's going to learn things on its own.

So is this the real, concrete step toward artificial general intelligence?
 

[00:16:04] Justin "Hutch" Hutchens: So, it's an excellent question. I think it really depends on how you define artificial general intelligence. If you define artificial general intelligence as something that is able to generalize to such an extent that it can use the information available to do new and unique things, to tackle those zero-shot problems, then in a lot of ways we're already seeing at least the early signs of general intelligence with GPT-4.

I mean, you can take something like that, give it the specifications for an API that was not in its training data, something that's been published since it was even trained, tell it what you want it to accomplish with that API, and then basically just create a very simple relay or interface to take its commands, execute them within the API, and then return the output.
 

And we're already seeing systems that are capable of taking action in that general form, completely outside of their original training set. But of course, most of those actions right now are still very much in the digital world. That is to say, they can interact with APIs, they can interact with systems, but they can't necessarily interact with the physical world.
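An editor's aside: the "relay or interface" pattern Hutch describes can be sketched in a few lines of Python. This is only an illustration of the idea: the model call is stubbed out with a fixed response, and the function names and the tiny "API" dispatch table are invented for the example.

```python
import json

def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call: in practice this would send the prompt,
    # plus the API's specification, to a model and return its proposed command.
    return json.dumps({"action": "get_status", "args": {"service": "billing"}})

# A toy "API" the model was never trained on: here just a dispatch table.
API = {"get_status": lambda service: f"{service}: OK"}

def relay(task: str) -> str:
    proposal = json.loads(ask_model(task))   # the model proposes a command
    handler = API[proposal["action"]]        # the relay maps it to a real call
    return handler(**proposal["args"])       # execute it and return the output

print(relay("Check the billing service"))    # prints: billing: OK
```

In a real system the returned output would be appended back into the model's context, so it can decide its next step; that loop is what gives the model a form of agency over the API.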
 

And I think a lot of people, when they think of AGI, think of what we saw in the old science fiction movies, with robots roaming around that, without any training, can figure out simple things like: go make me some coffee, or go sweep up the stuff on the floor.
 

Obviously, we already have robots that can do that, but in most cases they're deliberately trained for those particular tasks. They're not generalizing; they can't adapt to the world as needed. What's interesting is that the transformer architecture, which is the basic architecture used for most of the generative AI systems we're seeing these days, everything from DALL·E to ChatGPT to some of the different audio models, applies here too. There's actually a white paper that was recently done by Google. And I say recently; at this point it's not that recent, because it was, I think, over a year ago. But it looked at how the transformer architecture could actually be used in the same way.
 

So with each of the different media we've used transformers for, you basically take the input data and tokenize it into small pieces. For language models, you break the language down into individual words. Each of those words has some kind of numerical token, and those tokens are what the system is actually computing on.
 

Same thing with image generation: you break a particular image down into squares of pixels. In most cases you've got something like 16-by-16 patches of pixels, and each of those is an individual token that is handled. And what this paper did was look at how we could tokenize kinetic actions for robotics in the same way, using that transformer architecture.
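An editor's aside: the tokenization idea Hutch is describing, reducing every modality to a sequence of tokens, can be shown with a toy Python example. The vocabulary and the tiny 4x4 "image" are invented; real systems use subword pieces rather than whole words, and 16x16 pixel patches rather than the 2x2 blocks used here.

```python
def tokenize_text(text, vocab):
    # Words -> integer IDs; the model computes on the IDs, not the words.
    return [vocab[w] for w in text.lower().split()]

def tokenize_image(pixels, patch=2):
    # Cut a grid of pixel values into patch x patch blocks; each flattened
    # block plays the role of one token, just as a word ID does for text.
    tokens = []
    for r in range(0, len(pixels), patch):
        for c in range(0, len(pixels[0]), patch):
            tokens.append(tuple(pixels[r + dr][c + dc]
                                for dr in range(patch)
                                for dc in range(patch)))
    return tokens

vocab = {"robots": 0, "write": 1, "poetry": 2}
print(tokenize_text("Robots write poetry", vocab))  # prints: [0, 1, 2]

image = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
print(len(tokenize_image(image)))  # prints: 4 (four 2x2 patch tokens)
```

The point of the paper Hutch cites is that robot actions can be tokenized the same way: once everything is a token sequence, the same transformer machinery applies.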
 

And the way they did that was they basically had a bunch of employees remotely controlling different robotic systems, with a video camera looking at those interactions. They would append to those video interactions the language text describing the actions, and would also tokenize the specific changes to the rotation and adjustment of the different mechanical components of the robot.
 

And what they found was that, by doing this, the systems were able to generalize in a way very similar to what we see with language or with images, to where robotic systems could be put in completely different contexts.
 

So, what's interesting is that while these initial innovations existed in language models, it seems the exact same technology is likely going to be the foundation for physical robotics in the near future, and for that next step of general intelligence beyond just digital tool usage: potentially having robots in the physical world, moving around us, capable of interacting with the world.
 

[00:20:39] Marco Ciappelli: Well, I'm thinking about this and, okay, I hope we're not going too crazy for people who are not experts in this, but I think you explained it in a way that makes a lot of sense: if you've got the data, which in human terms I can translate as knowledge, then you can use that knowledge, or experience, to adapt, and that's what we do as humans, right?

So the learning thing is like: hmm, have I been in this situation before? Not quite exactly the same, but it seems similar to me. This doesn't look like that wall and that window, but it seems to be a different kind of wall and window; maybe I can go through it.

So if we assume that we're really getting to a human way of thinking here... nobody's born with knowledge, right? I mean, again, AI is more human than we think.
 

[00:21:43] Justin "Hutch" Hutchens: Yep. And what's fascinating is that, while it does seem that way from the outside, and all external indicators would suggest as much, if you look at the way these systems really work, the language models really are autocomplete engines.
 

The image generation is really just autocomplete based on the context it has available, and the robotics stuff is essentially exactly the same. So it's fascinating to see that computational actions based on probability become, from the outside looking in, all of the things you would expect from an intelligent entity.
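An editor's aside: the "autocomplete engine" point can be made concrete with a toy bigram model in Python, which picks the most probable next word from counts over a tiny invented corpus. Real language models do the same thing in spirit, just over subword tokens, with a neural network estimating the probabilities instead of raw counts.

```python
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word".split()

# Count bigram transitions: word -> Counter of the words that follow it.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def autocomplete(word, steps=3):
    out = [word]
    for _ in range(steps):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # greedy: most probable next
    return " ".join(out)

print(autocomplete("the"))  # prints: the next word and
```

Nothing in this loop "understands" anything; it only follows probabilities. Scaled up enormously, that same mechanism is what produces the intelligent-looking behavior being discussed.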
 

And so it really does raise some questions: what is consciousness in the first place? What is free will? And I think that's where we're starting to see, again, that entanglement of emerging technology with the questions of physics and philosophy, those underlying fundamental sciences that in the past were two very separate things.
 

[00:22:54] Marco Ciappelli: That's where we get very philosophical, because it's about what makes us human. And that's a big question. So before we end, I want to touch, with a quick reflection, on what is actually happening in our society with this. We've seen writers go on strike.

We've seen actors go on strike. We see a lot of fear in a lot of other jobs. I was listening to an economist from Harvard who was saying, you know, we've gone through this before. We've gone through this with the Industrial Revolution. We've gone through this with the computer, the digital revolution.
 

Sometimes it's going to level in the middle, sometimes it's going to level towards the top, but as humans we will be able to find new jobs, as we've always done. But in the meantime, where I want to go with you is this: in the short term, I feel like there is already more benefit than negative consequence. Finding cures, medicine, pharmaceuticals, scanning; I'm naming healthcare, but I could go to a lot of different places.
 

I mean, there are applications for tackling climate change, which we're running out of time on, and maybe we just bought ourselves some time. So, your view on that.
 

[00:24:28] Justin "Hutch" Hutchens: I think you're absolutely right. By its very nature, this is a disruptive technology, so it's not inherently bad and it's not inherently good, and I think both of those things come together.
 

It's disruptive in the way it is able to power new capabilities, to introduce new ways of thinking and new ways of approaching problems, unlike anything we've seen before, and we're going to see tremendous new developments come out of this. And I think you're right: a lot of that is going to be a net positive for humanity and society as a whole.
 

But of course, with any disruptive technology, you've also got that double-edged sword: with that innovation also come new risks, and at the rate we're moving, those risks are hard to tackle while we're reaping the benefits. But I do think the disruption-of-jobs topic is interesting, because I recently saw a Twitter post that stuck with me, which was: humans doing hard jobs on minimum wage while the robots write poetry and paint is not the future that I wanted.
 

And I think it does highlight a very interesting turn. You compared this to the Industrial Revolution, and in a lot of ways this is very similar to the Industrial Revolution. But what is different is that we're used to automation and technology displacing unskilled labor, your blue-collar factory work; we've seen that for decades.
 

What's uniquely new about this technology is that it stands to potentially disrupt the areas that were previously untouchable by technology: the creative areas, or highly skilled areas like coding, medical diagnostics, or review of legal precedent. So to your point, I think we're in a situation where people who decide to make the effort to learn these technologies are going to benefit tremendously, but people who don't are going to get left behind.
 

And I think the biggest change we're seeing is that we're increasingly moving away from a culture where education is foundational, where you get your education early in life, right after high school, and then you're set for the rest of your life. We're moving towards a world where, in order to keep up, you are going to have to engage in lifelong learning, continuing to improve and adapt your skills.
 

Otherwise, you are going to get left behind. So I think for people who have that motivation, that drive, and who enjoy continuing to learn, this is a net positive. I do think there are others who are less interested in that, and more interested in just enjoying life, sitting back and relaxing, for whom this is potentially going to be problematic.
 

[00:27:26] Marco Ciappelli: I agree with you 100 percent, because of the way I see it. You said it's similar to the Industrial Revolution, but I see it as even more similar to the digital revolution, the computer revolution. There have been people... and that's why there was a generational gap at a certain point, maybe in the seventies and eighties, when people didn't jump on that train, saying, yeah, I don't need a computer.
 

I'm going to run my business like I've always done. They were just not excited, unlike me or you or others who say, hey, it's new technology, I'm going to try it; at least then I know, right? And then, slowly, at that time, if you didn't use the computer in your business, even the daily business, the store, the shop, whatever, you paid the consequences. But it took time; now it's going to take no time.

We'll go back to what's new this week in AI, and if you haven't at least given it a try, don't say, I'm never going to touch ChatGPT, because, like you said, it may be your... it may be your problem. So, yeah. Last question, if you want it: where's the plug? Can we unplug it if it goes really weird?
 

[00:28:51] Justin "Hutch" Hutchens: So what's interesting is that, shortly after the release of ChatGPT, there was a joke job post circulating around that was basically "kill switch engineer," and it was over the top: it is your job to stop the robot apocalypse and create the solution to call it quits. But I think there actually is a valid discussion to be had there: anytime we're implementing these systems, what is the back-out plan if things do go terribly wrong?
 

So I don't want people to disregard the potential for AI altogether out of fear that this is going to be the end of the world. But at the same time, I think there does need to be some kind of consideration around it, whether it's an actual physical kill switch, pulling the plug from the wall, all the way up to just understanding the best ways to...
 

Because unfortunately, the truth is, and anybody who works in cybersecurity knows this, once you talk about digital capabilities, there is already stuff like wormable malware that moves from one system to another. Pulling the plug where it originally started is not necessarily going to disrupt the capability in this increasingly interconnected world.
 

So yeah, it's a fascinating discussion. As much of a joke as that post was, I think there's some validity to it. Interestingly enough, OpenAI has actually since introduced some new efforts around superalignment, talking about how they're going to align superintelligence as it becomes more capable than any living humans.
 

[00:30:44] Marco Ciappelli: So maybe that is the one thing where we all, hopefully, as humanity, are going to agree on something: maybe not about the way we use it, but about what happens if it goes crazy on us, and how we are all going to stop it. And, you know, there's always a Dr. Evil, but I think the majority of us kind of want to have a plan.
 

I think it's the same thing as, you know, the asteroid is coming: there are no countries anymore; we're going to have to come together and have that plan. I don't know which one will come first, but I'm more afraid of the asteroid than of the AI right now.
 

[00:31:32] Justin "Hutch" Hutchens: Yeah, I mean, maybe this is the thing that pulls us together and creates global unity around solving a problem.

Unfortunately, it didn't work with climate change, though admittedly there have been some global efforts there. But I think there's also enough politicization around that. Yeah.
 

[00:31:55] Marco Ciappelli: Well, Hutch, this was... I don't know how people are feeling about it; I have fun talking about these things. Oh yeah, it's fast.
 

I hope people are not getting too scared. I mean, of course we go to the extreme scenarios, and maybe we come to something that everybody can understand. But if you guys enjoyed this conversation, you're definitely going to enjoy subscribing to my channel, and definitely to Justin's channel as well, because he has some very quick stories; well, not the last one, because you had a guest, so that went on much longer. But when you do your own one-voice podcast, you really make people think.
 

So thank you for taking the time, and come back; maybe not every week, because it's too much to follow, but, you know, once a month, any time you want to share something with me, I'd love to have this conversation.
 

[00:32:53] Justin "Hutch" Hutchens: Awesome. Thanks, Marco. I really appreciate you having me on.  
 

[00:32:56] Marco Ciappelli: All right, everybody, take care, stay tuned for the next episode, subscribe, and we'll see you soon.