Redefining Society and Technology Podcast

Introducing 'Cyber Cognition Podcast' | A Conversation with Podcast Host Hutch | ITSPmagazine Podcast Network with Sean Martin and Marco Ciappelli

Episode Summary

On this "Cyber Cognition" podcast introduction episode, Sean Martin and Marco Ciappelli are joined by Hutch to discuss the plans for the show, their views on the current and future states of artificial intelligence, and what listeners can expect from Hutch as he uncovers the countless ethical considerations surrounding the use of AI.

Episode Notes

Guest: Hutch

On ITSPmagazine  👉 https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/hutch

Hosts:

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

______________________

Episode Sponsors

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

______________________

Episode Introduction

The Cyber Cognition podcast, hosted by Hutch, explores the impact of artificial intelligence on society, with a focus on the ethical considerations surrounding its use. The show examines the cultural and social effects of AI on our lives, our mental health, and our interactions with others, and aims to keep listeners aware of ongoing AI trends and their impacts.

Marco and Sean express their cautious optimism towards AI, acknowledging its potential risks while also highlighting its benefits and potential for democratization. They look forward to the many topics that Hutch will explore on this new show.

Overall, the ITSPmagazine Podcast Network offers a diverse range of podcast series that explore various aspects of technology and its impact on society. Many hosts, including Hutch, take a thoughtful and philosophical approach to the subject matter, encouraging listeners to consider the implications of technology on our lives and the world around us.

We hope you enjoy this conversation with Hutch and his new show. Be sure to share, subscribe, and leave a review if you like what you hear!

______________________

Resources

______________________

For more podcast stories from Cyber Cognition Podcast with Hutch, visit: https://www.itspmagazine.com/cyber-cognition-podcast

Watch the video podcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllS12r9wDntQNB-ykHQ1UC9U

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it "as is" and we hope it can be useful for our audience.

_________________________________________

voiceover 00:15

Welcome to the ITSPmagazine Podcast Network. You are about to listen to the Cyber Cognition podcast, a show about artificial intelligence and how it is transforming the world around us, with your biological, sentient, and mostly rational human host, Hutch. Knowledge is power, now more than ever.

 

Sean Martin  00:43

Marco, Sean — cognition... are the cogs rolling, or are they a—

 

Marco Ciappelli 00:49

—little rusty. The coffee is helping the cogs that are a little rusty. I just had some coffee to see if I could — I don't know if coffee is a solvent for rust, to loosen it? I never tried that. It has certainly loosened my brain quite a bit, so if I'm a little hyperactive right now, that's why. But the good news is that I don't need to be hyperactive, I don't need to entertain, because Sean, we have the host of one of our shows — someone we had the pleasure to talk with before RSA Conference not too long ago — and all of a sudden, boom, we have a podcast—

 

Sean Martin  01:36

—guest host. And on a topic that's in the cogs everywhere. It's the thing that — I don't know if it's keeping the cogs moving or slowing them down. Well, I guess time will tell. But Hutch is with us. Hutch, thanks for being on.

 

Hutch  01:58

Hey, thanks for having me. Excited to be here. Yeah,

 

Sean Martin  02:00

This is super cool, and we're thrilled to have your show, Cyber Cognition, on as part of the Podcast Network. And yeah, I think the stars aligned — and you come from the world of space as well, so the stars aligned, shall we say, for you to talk to an audience about AI and all that comes with it. Before we get into it, though, we're going to talk about your show, of course — that's kind of the whole purpose of this, to give folks a taste for what's coming. But I think it's important for them to know who Hutch is. So maybe start back with some of the early days, maybe even before some of the space stuff you did. How did you get into technology, and what brought you to this point where AI is such an important topic for you?

 

Hutch  03:00

So when I got out of high school, I originally went to school at the University of Houston in Texas and was majoring in philosophy. One of my favorite classes while I was there was symbolic logic. At the time, I was putting myself through school working as an overnight customer service manager at Walmart, and there was a guy who frequently came in as a customer overnight. He was one of those people who ultimately ends up having a significant impact on your total life trajectory, but you don't realize it at the time. He came in one day and saw the symbolic logic book I was working through, and he had a technology background himself, so he asked me to explain some of the stuff I was doing. A lot of it was the same things — if-then statements, disjunctions, conjunctions, stuff like that. Honestly, I don't even remember what language he was working in, but he took the exact problem I was working with and said, you're essentially doing coding. And on paper he wrote out — I think it was JavaScript — more or less the coding equivalent of what I was doing. I had always had an interest in technology, but that was really what made me take the deep dive and start playing around with it a lot more. I ultimately ended up moving to a helpdesk job at a triple-play company that did television, internet, and home phone service over fiber optic to the home — one of the first ones. Very early on, before its time, we had all kinds of problems. The technology wasn't ready, nobody knew how to handle fiber — we had people who closed the cabinets on the fiber and broke all the connections. So the company ended up going under, and I was without a job, along with close to 100 people who worked at the company with me, all looking for more or less the same jobs in the area. Without any kind of academic credentials, I scrambled to find anything and ultimately ended up joining the Air Force. That turned out to be a fantastic opportunity for me — I got access to technology that I otherwise would never have been able to play around with, because unlike many of the other military branches, in the Air Force you actually get good technology and not hand-me-downs. Not a knock on anybody else, just the horror stories I've heard from some of my friends in the other branches. The rest is history: I did a lot of work in the cybersecurity field, and I also have a lot of personal interest in social psychology and philosophy, obviously, given my background and what I was originally studying — and also artificial intelligence. On the artificial intelligence side, I started playing around with it probably about a decade ago, trying to master the markets and make myself rich through day trading, so I built all kinds of different models with scikit-learn in Python. From there I also started playing around with language models, testing different security threats and risks around those, and I had been doing that for a long time — I was actually working with some of the early GPT models before ChatGPT came out. So I feel like in that regard, I got a good head start on what everybody is now paying attention to.
And now that it is once again becoming such a hot topic, I thought it would be a great opportunity to take what is otherwise a passion and interest of mine, build out this podcast, and really start addressing what are, for me, some of the major issues related to society and the social and cultural impacts involved with the integration of artificial intelligence into our daily lives.
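
For listeners curious what that symbolic-logic-to-code translation looks like in practice, here is a minimal sketch in Python — an illustration only, not the JavaScript the customer wrote out — showing how conjunction, disjunction, and if-then map directly onto ordinary boolean code.

# Illustrative only: the propositional formula (P AND NOT Q) -> R written as code.
def implies(antecedent: bool, consequent: bool) -> bool:
    # The if-then (material conditional) is false only when the antecedent
    # holds and the consequent does not.
    return (not antecedent) or consequent

P, Q, R = True, False, True        # one truth assignment
conjunction = P and not Q          # (P AND NOT Q)
disjunction = P or Q               # (P OR Q), shown for completeness
print(implies(conjunction, R))     # True for this assignment
print(disjunction)                 # True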

 

Sean Martin  07:10

No, no lack of conversations to have there for sure.

 

Marco Ciappelli 07:15

No, that's — for me, it's just like kicking the door open to start talking for hours about this. But that's why I'm gonna let you go first, Sean.

 

Sean Martin  07:26

Well, I almost wanted to ask the question, given your background. I mean, when we talk about technology, certainly in the cybersecurity world, we talk about weaknesses and vulnerabilities and the opportunity to exploit those for gain in a number of different ways — a lot of which are technology-oriented, but some do cross over into the physical world and greater societal impact. So it's easy, coming from this field, for me to take that view of vulnerability and exploit. Do humans have the same weaknesses in their intelligence? And yet we don't talk about them that way. So I'm kind of wondering — for people who can see my hands on the screen — the weakness in intelligence, human versus machine, or artificial, and maybe even the ability to exploit it: are they on the same level, or different?

 

Hutch  08:37

I think there are a lot of parallels. There are also a lot of differences, too. But some of the parallels that definitely stand out for me: with artificial intelligence and machine learning, no matter how you try to tailor the data to avoid it, there are inevitably going to be biases associated with it. And I think that is absolutely true of human cognition as well. We are naturally inclined to make judgments based on a finite number of experiences that we've had in our lives, and those experiences inevitably lead to biased decision making. So in that regard, we have a vulnerability that's comparable to what you see in artificial intelligence. For some of the earlier artificial intelligence models, something like vanishing gradients could also probably be compared, in some regards, to forgetfulness — something that was once relevant and present in your memory fading away. Gradients themselves are one of the big factors involved in weighting the different connections within neural network systems, and one of the problems you have, especially when you're going through a recurrent analysis across multiple layers, is that those gradients can become exceptionally large, in the form of exploding gradients, or exceedingly small, in the form of vanishing gradients. Certain factors that previously would have been, and potentially should be, considered in a conclusion become almost trivial or completely irrelevant, because those gradient values build on top of each other through a multiplication process. So in that way you've also got something comparable to a loss of memory over time — or, in the same regard, if you look at exploding gradients, a kind of hyper-fixation on certain things, or even something analogous to human paranoia, I guess. So yeah, I think there are definitely a lot of parallels. I think there are other ways that we see significant differences and deviations between human cognition and machines. Obviously, one of them is that machines are extremely analytical, and while I think that is true of humans as well, I don't think that we are, or ever will be, capable of doing it on the same scale that these systems are. Whereas humans have other vulnerabilities that are more related to our emotional temperaments and, I guess, for lack of a better word, our sentience — which currently does not exist in artificial intelligence, and which I don't expect, based on any of the current models that we're building, will exist anytime in the near future.
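
To make the vanishing- and exploding-gradient point concrete, here is a toy numerical sketch in Python — purely illustrative and not tied to any particular model — showing how a per-layer gradient factor, multiplied across many recurrent steps, shrinks toward zero when it is below 1 and blows up when it is above 1.

import numpy as np

def chained_gradient_scale(per_layer_factor: float, depth: int) -> float:
    # Backpropagating through `depth` layers multiplies the per-layer
    # factors together, so the overall scale is factor ** depth.
    return float(np.prod(np.full(depth, per_layer_factor)))

for factor in (0.5, 1.5):              # < 1 vanishes, > 1 explodes
    for depth in (5, 20, 50):
        scale = chained_gradient_scale(factor, depth)
        print(f"factor={factor}, depth={depth}: gradient scale ~ {scale:.3e}")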

 

Marco Ciappelli 11:39

All right, my clock — my brain — is less rusty now, and it's going a million miles an hour, because this is the stuff I think about all day long, and Sean knows it: sociology, philosophy, and how they apply to all of this. And we just started using an artificial intelligence on ITSPmagazine — it's a soft launch, it's based on ChatGPT-4, and it's trained on all the content that we have, so it can suggest whether we've talked about something, and with which guests and hosts. When I was writing it, we created this character, and I wanted to explain that artificial intelligence really is what it knows and how it learns something — I don't know if that makes sense to you. My point is, it is only that because we fed it that kind of information; we put some kind of emotion in there, but it's not human emotion. As you were talking through all of this, a book came to my mind: Blink — I don't know if you've ever read Blink — a famous sociologist wrote it, Mal... I don't know, I'll get to that. But it's about when you have the perception of something and then you apply logic. Often your first perception, that blink of an eye — maybe an antiques expert knows whether something is a fake or a true piece of art before they actually start applying all the logic to check, based on their unique experience. It's something you just can't explain. And so—

 

Sean Martin  13:43

Let's see — it's the trained gut. Intuition. Right, right.

 

Marco Ciappelli 13:47

Well, we call it intuition, but it's also our own experience, something that kicks in — and when you apply that to a machine, it's a lot to ask. Like you said, I don't think we can ever replicate this intuition in a machine. So it's up to us to create the machine the way we want it to be, and decide what we want it to have.

 

Hutch  14:10

Absolutely. Yeah — and don't get me wrong, the latest generation of artificial intelligence capabilities with transformer architecture are incredible pieces of technology. But they really are just statistical engines that are based on, like you said, probability — and nothing more than that. And that's actually what my first episode is really about. Is people—

 

Marco Ciappelli 14:38

—going there or not? I'll let you tell me about that. No — tell me about it, your first episode.

 

Hutch  14:45

So the first episode is on the ELIZA effect, and it is focused on just that. In the 1960s, a researcher named Weizenbaum created what is the first documented chatbot, called ELIZA. And even then — even though the system was extremely simplistic compared to what our technology is capable of today — people who interacted with it would have a tendency to feel like they were interacting with something that was more than just a mechanical, rule-based system. So what the episode really looks at is the history behind the ELIZA effect, what it is, but also what it means for us in the modern world. Because while it was a factor back in the 60s, you then introduce the technology that we have today — and really, even our modern systems are still rule-based systems. We don't write the rules anymore; we write the systems that themselves write the rules based on the training data that we provide. But there is still ultimately a mathematical necessity in the way the output results from the input that is provided and the configuration of the system itself. Unfortunately, because of the complexity of these systems and how impressive they are at mimicking and replicating that sense of intelligence, and even a sense of emotion — the factors that we think of as being uniquely human — our tendency to fall for the ELIZA effect is dramatically greater than it was then. You're seeing even extremely intelligent, well-educated people starting to speak about these systems as if they are conscious. And another thing we're seeing is a problem with our day-to-day rhetoric, where we use loaded terms like "knows" or "thinks" when referring to these machines, when they don't know anything and they don't think anything — they are programmatically generating output based on the input that's provided. That tendency to speak about these language systems in that way further exacerbates the problem we have of wanting to anthropomorphize these systems, of wanting to assume that there's more under the hood than there actually is.
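
For a sense of how little machinery sat behind the original ELIZA effect, here is a deliberately tiny rule-based responder in Python — an editor's sketch of the same idea, not Weizenbaum's actual script. A handful of reflection patterns, no understanding at all, yet the mirrored phrasing can read as attentive.

import re

# A few hypothetical pattern -> response-template rules.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(text: str) -> str:
    # Return the template for the first matching rule; otherwise a stock prompt.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

print(respond("I feel uneasy about these systems."))
# -> Why do you feel uneasy about these systems?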

 

Marco Ciappelli 17:17

We want that. That's what's really driving this. I mean, I was reading an article about some researcher at Microsoft saying, well, you know, I think it's really becoming sentient. And some other people were like, well, that's what you want it to be. It's like when we see faces everywhere, right? You want it there. Personally, I think it's human nature to have this creator approach to things — to replicate who we are in how other objects think, and so forth. Anyway, a quick note before I pass it to Sean: the book I was talking about was Blink: The Power of Thinking Without Thinking, by Malcolm Gladwell. That's the book. A very famous author — I just blanked on that. I blinked and blanked on it. That's how it goes. Over to you, Sean.

 

Sean Martin  18:11

we weren't trained to remember that piece of information.

 

Marco Ciappelli 18:13

That's what makes me human. I could remember the concept, but I couldn't remember the factual data about it.

 

Sean Martin  18:26

Well, because this is such a broad topic — I mean, I started with my background and the cyber realm of things, and Marco went into how human-like it might be, could be, will be, won't be, shouldn't be. And those are just two shards on a gigantic iceberg; those are probably the above-the-water shards that we can see and think about, and there's probably a huge amount below that. So talk to me about the scope of your show. If you can contain it, give people an idea of where you'll go, how you might weave through some things, what you hope people will get from it by listening, and the guests for some episodes, if you choose to have them on.

 

Hutch  19:19

So the intent of the show is partly to keep people aware of the ongoing trends associated with machine learning and artificial intelligence and how we are seeing those integrated into new capabilities — and that in itself is something that is moving tremendously fast, unlike anything we've seen before. So there's that component of it. But really, the part that I'm most passionate about, and the part that I think is going to show through in many of the episodes that I have — and even in some of the guests that I have in mind for the show in the future — is speaking to the cultural and social impacts related to artificial intelligence, because I think those are already significant. Look at even the older forms of artificial intelligence, where we have integrated machine learning capabilities into the way we do targeted advertising, and the way we have created systems that are constantly giving you those instant-gratification dopamine hits as you scroll through and are delivered content that's uniquely tailored to what appeals to you. So I think we've already seen significant social and cultural impacts related to artificial intelligence, and with this next evolution of emerging artificial intelligence and generative AI, we're really just at the next stage of that. We're seeing an even further push for instant gratification — for having knowledge directly at our fingertips at the very moment we want it, for being able to create things in our minds just with our words and have them instantly manifest in front of us. So in that regard, and also in the psychological impact of having so many additional capabilities that, absent that technology, we would not have, I think we're going to continue to see significant impacts, some good, some bad, on people's lives, mental health, and the way they interact with others within society.
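
As a rough sketch of the "uniquely tailored content" loop described here — purely illustrative; the profiles, items, and scoring are made up, not any platform's actual ranking system — a recommender can be as simple as scoring each item against a user's engagement history and always serving the closest match.

import numpy as np

# Hypothetical interest vectors: each dimension is some inferred topic affinity.
user_history = np.array([0.9, 0.1, 0.7])
catalog = {
    "post_a": np.array([0.8, 0.2, 0.6]),
    "post_b": np.array([0.1, 0.9, 0.2]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: how closely an item's profile points the same way
    # as what the user has already engaged with.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(catalog, key=lambda item: cosine(user_history, catalog[item]))
print(best)  # post_a: the item most like what already held the user's attention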

 

Sean Martin  21:33

I want to get your thoughts on this — and thanks for that, by the way. Because as you're describing what you're going to talk about and how you're going to talk about it, I'm thinking that we've probably already been living with a lot of AI and didn't know it. For example, I'm sure I chat with a big distributor who sells stuff through Prime, right, and I'm certain that half the time, if not all the time, I'm not actually chatting with a person when I'm on there.

 

Hutch  22:09

Yeah, Sean, you're absolutely right. Artificial intelligence is already everywhere.

 

Sean Martin  22:13

So it's been normalized, and it's probably only because I'm in this space that I realize that's happening, right? Now, I guess, have we switched over — with GPT-4, an interface that everybody can now access, and APIs that drive a bunch of new systems — to where awareness is raised again, so it's surfaced that this is being used? And are we going to reach another point where it's normalized and it kind of goes away?

 

Hutch  22:45

I think there are a couple of things that make this generation of artificial intelligence uniquely different. One is that its approachability has democratized artificial intelligence. What I mean by that is that artificial intelligence has been accessible for decades now — there have been open-source libraries, there have been capabilities where anybody could play with it — but the bar was much higher, because it required a certain amount of technical skill to use that artificial intelligence effectively. With the latest generation of machine learning, specifically using transformer architecture to build models where the interface is normal human language, systems have become usable without any kind of technical background or expertise. If you can speak a human language — and not even just English, because we're starting to see more and more of these models in languages all around the world — then you can interface with these systems and use them very effectively. So I think that's one reason why this is significantly different from the previous AI that, like you said, is everywhere but is under the hood, operating behind the scenes based on the choices of a handful of technology companies. The other reason I think this is uniquely different is that we are starting to see a consolidation of artificial intelligence capabilities. What I mean by that is, in the past we used different types of AI for different capabilities: convolutional neural networks were used for computer vision, recurrent neural networks and long short-term memory architectures were used for language models, and different classification and regression algorithms were used for mathematical analysis. With transformers — the type of architecture that ChatGPT and GPT-4 are based on — it is a multimodal type of artificial intelligence. Basically anything that you can encode into data — not just text, not just words, but audio, or pieces of a picture in order to create visual capabilities — we can now handle with the same type of artificial intelligence. So rather than having to focus on acceleration and advances in a bunch of different disciplines of artificial intelligence, any new groundbreaking capability or advance in transformer architecture, machine learning, and artificial intelligence is going to contribute to AI in every single one of these disciplines. So you also have this accelerator, where it's not just a variety of different areas that we have to increase our capabilities in — we've got the traditional exponential growth of technology on top of the fact that everything is now consolidating, and advances in one area really mean advances in all areas.
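
To illustrate the lowered barrier described here — plain language in, plain language out — here is a minimal sketch using the open-source Hugging Face transformers library. It assumes, for illustration, that `transformers` and `torch` are installed, and the small public "gpt2" model simply stands in for any larger one.

from transformers import pipeline

# Load a small, publicly available language model behind a one-line interface.
generator = pipeline("text-generation", model="gpt2")

# The "interface" is just ordinary human language.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])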

 

Marco Ciappelli 26:06

Wow. So you know what I was thinking? Remember The Fifth Element, the Luc Besson movie, where the Fifth Element digests an entire culture and learns the language in a few minutes watching TV? That makes me think of how, before, we used to refer to the Library of Alexandria, where all the knowledge was, and then TV and radio, and now there is the internet — you just plug into the internet and everything comes together. I think it's hard for people not to see this as creating something so powerful, because it has all the knowledge of the world. But then, can this something-so-powerful act on it? Is that a Jarvis-goes-crazy, one of those sci-fi scenarios? So I think it's natural for humans looking at this to go, holy shit — scary, right? So what is your feeling about this, with your background in philosophy and your approach to this podcast? I mean, where are you trying to go with this? Are you trying to say, yeah, we need to push pause, or are you going to be more of a "let's welcome it"? I don't know.

 

Hutch  27:26

I would say that's a tough question, because realistically, I would like to say that if it were feasible to push pause, I would be all for it. The problem is, I don't think that's a realistic solution, because if we push pause, that doesn't mean everybody is pushing pause — it's nearly impossible to get everybody on the same page as far as that is concerned. Unfortunately, the cat is out of the bag; Pandora's box is open, so to speak. There's no going back. So for me, a couple of things come out of that. One, I would like to see more openness in the industry, because we're well past a situation where nobody is going to have access to this technology — somebody is going to have access to this technology. Then it becomes a question of whether it remains in the hands of a few extremely powerful tech companies or governments, or whether it is more democratized. And I think the lesser of two evils — both are concerning and have risks, but for me the lesser of two evils — is for it to be in the hands of the masses, so that there's high visibility on it, and so that, in that regard, more attention will be given to making sure actions are taken to mitigate those risks overall. I think "optimistically cautious" is the best way to very briefly describe my opinion on artificial intelligence. I think there's a tremendous amount of risk, and that is something I intend to highlight in a lot of the episodes we have — some of my concerns for the immediate present. And then a lot of the episodes are also going to take that futurist perspective of speculating, based on current trends, about what lies in the future for artificial intelligence and machine learning. So in that regard it will likely appeal to people who like science fiction and like the idea of imagining what the future may hold, because I think there's no question it is going to be drastically different from what it is today. We just have to take our understanding of what's happening now and the way things are moving — and I think it's not only entertaining to speculate about what may be the case in the future, but also prudent and well advised to consider those risks and start having dialogues about them in advance.

 

Sean Martin  29:58

And I — having been playing with it for a while, I'm excited for what's possible, but also very afraid of the areas that you said, where one might have a little less impact than another. I'm afraid of it in the hands of the general public, I'm afraid of it in the hands of big corporations, and afraid of it in the hands of governments, for sure, for different reasons. Because the common thread through all three of those is people, right? And not everybody has the same intentions.

 

Hutch  30:34

Well, I think, if nothing else — there was a project recently called ChaosGPT, where somebody took basically a service-connected version of ChatGPT that is able to take actions autonomously, and they basically gave it the directive to destroy the world. Right now there aren't enough service connections and plugins for ChatGPT to successfully accomplish that objective, but I think it tells you something about human nature that somebody out there was just like, let's try it and see what happens. So you're right — people are the big concern in every single one of the possible options. Yeah.

 

Sean Martin  31:10

And it could just be as simple as some kid — how many kids, can you believe, with just one?

 

Marco Ciappelli 31:21

"Do you want to play a game of thermonuclear war?" Like, I mean, somebody's gonna do it. And I want to wrap this conversation, because I think your connection between philosophy, AI, and technology — like some of our other new shows on the program — I am loving having this conversation myself. And I look at the fact that years ago, if you were studying philosophy, you were as far as possible, on a line from A to Z, from technology — it was either philosophy, or math and technology. And now the extremes are coming around in a circle and touching. And I feel like we talk so much about ethics and philosophy, and in a way it's a way to know ourselves. As we try to understand artificial intelligence, the truth is we are really trying to understand how we are going to use these tools. What knowledge are we going to put in the machine? What capabilities are we going to give it? Which ultimately comes down to: are we going to play thermonuclear war, or are we going to play "let's fix the environment and all the diseases," and so forth? To get that from somebody like you, who is going to touch on all of this with a philosophical approach — honestly, I cannot wait to listen to all the conversations that you're going to bring to the table.

 

Sean Martin  33:12

Conversations I'd love to have, but I'm not able to. So I'm glad you—

 

Marco Ciappelli 33:17

—will just listen to us, yes. Can you outline maybe some of the topics that you're thinking of bringing up in the next episodes?

 

Hutch  33:28

So the immediate next episode that I'm looking at is going to talk about our increasing reliance on technology and artificial intelligence and how we're seeing that progress as new technology is rolled out. It's actually going to draw a parallel between artificial intelligence and the Borg, the collective from Star Trek — the whole idea of "resistance is futile, you have to assimilate." I think we are seeing that in both our personal and professional lives: if we don't adapt, if we don't integrate these capabilities and skill sets into our own lives, then we fall behind, we get left behind, we're not able to compete — businesses are not able to compete, and personally we're not able to compete in the job market. And when you follow that trend to its logical conclusion of where it ends up, I think it is setting somewhat of a dangerous precedent: that as technology increases, we're going to see a situation where we're increasingly losing something of our own humanity, something of our own level of cognition, through that increased reliance on technology. So I'm really excited about pulling that episode together. And I've got a whole list of other ideas that are not top of mind right now, but there's definitely more to come, and I'm excited.

 

Marco Ciappelli 34:46

Just ask ChatGPT — there you go.

 

Hutch  34:50

Absolutely. If nothing else, it's a great idea generator.

 

Marco Ciappelli 34:55

Exactly. Well, very exciting. I'm happy we took the time to have this conversation — the first one that we had was fascinating, and I'm so glad. I really love when a guest turns into a host, because it means that it clicked, it means that something was already in the air waiting to happen. And this is a topic that, again, like Sean said, we know quite a bit about, but it sounds to me that you definitely know way, way more than us. So I am looking forward to listening to all your episodes. And once in a while, I think I'm just going to knock on your door and say, hey, why don't you come on my show and talk about it?

 

Sean Martin  35:36

Absolutely. I have questions.

 

Marco Ciappelli 35:40

I have questions too — help me answer those. But in the meantime, I want, of course, to invite everybody to listen to the first episode, which is fascinating already — a strong connection between psychology and technology. And it's out there, your own show, available on Spotify, Apple Podcasts, Google Podcasts, and everywhere you listen to your shows. So, everybody, subscribe. There will be links, of course, here, and it's easy to find on ITSPmagazine. So Sean, I leave you the honor to wrap this up and hit the end-recording button when you're ready.

 

Sean Martin  36:22

Hutch, I want to thank you so much for joining us, bringing this conversation, and taking the time today to give us some insight into where you're planning to take things, so the audience can learn more and then follow you — the Cyber Cognition podcast and your profile there. I'm sure folks will want to talk. If anybody's like Marco and me, they're gonna have questions too, so I presume your show will be one that sparks a lot of engagement — "why did you think that?", "what about this?" — and if they're not getting the answers they want from ChatGPT, send them all to Hutch, and you can answer. Always excited to have a conversation. I love it. All right. Well, thanks, everybody, for listening and watching this episode, and stay tuned for much more from us and the rest of the family here at the ITSPmagazine Podcast Network.

 

voiceover 37:30

We hope you enjoyed this episode of the Cyber Cognition podcast with Hutch, part of the ITSPmagazine Podcast Network. If you learned something new and this conversation made you think, then add this show to your favorite podcast player. Subscribe to the ITSPmagazine YouTube channel and share the ITSPmagazine Podcast Network with your friends, family, and colleagues. If you represent a company and wish to connect your brand to our conversations and our audience, visit itspmagazine.com to learn how to sponsor one or more of our podcast channels. We hope you will come back for more stories and follow us on our journey.