Redefining Society and Technology Podcast

Voice of the Future: Exploring AI, Speech Tech, Ethics, and Regulations | A conversation with Nigel Cannings | Redefining Society with Marco Ciappelli

Episode Summary

Nigel Cannings, a lawyer and AI and Speech Technology expert, unravels the complexities of artificial intelligence, providing insights into ethics, regulation, and the human connection to technology in this captivating episode.

Episode Notes

Guest: Nigel Cannings, CTO at Intelligent Voice [@intelligentvox]

Bio ✨Nigel Cannings is the CTO at Intelligent Voice. He has over 25 years' experience in both Law and Technology, is the founder of Intelligent Voice Ltd and a pioneer in all things voice. Nigel is also a regular speaker at industry events such as NVIDIA GTC and holds multiple patents in Speech, NLP and Confidential Computing technologies.  He is an Industrial Fellow at the University of East London.

On Linkedin | https://www.linkedin.com/in/nigelcannings/?originalSubdomain=uk

Google Scholar | https://scholar.google.co.uk/citations?user=zHL1sngAAAAJ&hl=en

____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

Bugcrowd 👉 https://itspm.ag/itspbgcweb

Devo 👉 https://itspm.ag/itspdvweb

_____________________________

Episode Introduction

In a world bursting at the seams with technology and information, the clash between the marvel of innovation and the ethical boundaries of creativity is becoming an increasingly prominent discussion. Welcome to the Redefining Society Podcast, hosted by Marco Ciappelli, where today's conversation delves into the complexity of artificial intelligence, speech technology, and the moral dilemmas that come with our pursuit of the unknown.

As a co-founder of ITSPmagazine Podcast Network and an observer of technology's impact on society, I, Marco Ciappelli, find myself at the epicenter of this intriguing debate, steering conversations that push the boundaries and challenge the status quo. With this particular episode, we embark on a journey guided by the wisdom of Nigel Cannings, an AI expert and speech tech specialist, whose story interweaves the excitement of technological breakthroughs with the ethical questions that surround them.

Nigel's extensive knowledge of AI, particularly speech technology, sheds light on both the potential and the pitfalls of this exciting field. His hands-on experience, ranging from detecting vulnerable customers and spotting fraud to grappling with biases in AI, paints a comprehensive picture that doesn't shy away from the darker corners of technology. His background in law and his transition into technology add layers to a dialogue that is as fascinating as it is pertinent.

In our conversation, Nigel and I explore the unseen worlds of voice scams, deep fakes, and the revolutionary growth of AI models. We discuss the ethics of AI, touching upon copyright issues, technology advancement, and the human element in the loop of data collection. We probe the question of fraud, the need for regulation, and the very essence of what it means to create something that appears intelligent.

As I converse with Nigel, we drift between utopian visions and dystopian scenarios, yet the focus remains on the here and now. Where have we come from, and where are we going? How did we reach this point of sudden explosion in AI, and what does it truly mean for humanity? Is this a paradigm shift or merely a fleeting innovation?

Our exchange exposes uncomfortable truths, such as the exploitation of workers in AI development and the blurred lines of legal boundaries. Yet, through the dialogue, we find shared dreams, a thirst for understanding, and a passion for unraveling the entangled web of ethics, technology, and human experience.

So join us, dear listeners, as we delve into an episode that promises to be both enlightening and provocative. If you've ever pondered the impact of AI on your life or the moral complexities that come with innovation, this episode is for you. Let us redefine society together, exploring the intersection of technology, cybersecurity, and humanity.

Your curiosity is the key, and the conversation has just begun. Subscribe, share, and stay tuned for a dialogue that promises to provoke, challenge, and inspire. Welcome to the Redefining Society Podcast.

_____________________________

Resources

____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it "as is," and we hope it can be helpful for our audience.

_________________________________________
 

[00:00:00] Marco Ciappelli: Hello, this is Marco Ciappelli with the Redefining Society Podcast on ITSPmagazine. You know, we keep redefining society, and the more we talk lately, the more I feel like we're mostly redefining artificial intelligence and ChatGPT and everything it's doing. Everybody's talking about it.
 

We're talking about how to use it, how not to use it. We're making big comparisons with the creation of the atomic bomb and how big a threat it is for our society. But you know me, I don't like to be on the dystopian side, although I do have fun with dystopian views and scenarios here and there. I try to be objective, and I think one of the good things to do for objectivity is to look at where we come from, to look at the past. We were joking here with Nigel, my guest today, whom I'm going to let introduce himself in a few seconds, that it seems like, all of a sudden, like every other technology,
 

it honestly just comes out of nowhere. And all of a sudden we have large language models, we have AI, we have airplanes and everything, but people have been working on it for a long time. It's not an overnight success; it took a while to get here. So this is what we're going to talk about today with Nigel, and also about where we're standing, the future, and the ethics of it.
 

So enough of my introduction. I want to hear the introduction from Nigel. Who are you? What are you up to? And why do you like to talk about ethics and AI? How about that?
 

[00:01:50] Nigel Cannings: Sounds good, Marco, great to be on the podcast. So yeah, I'm Nigel Cannings, the CTO and co-founder of Intelligent Voice.
 

We're a software company based here in London. I've been working in and around natural language processing, language models and that sort of stuff for a very long time, actually, about 16 or 17 years now since I started in the NLP game. I trained as a lawyer, so I've been around language a very long time, but I became a technologist when I realized that technologists were more fun than lawyers, which is not saying much, but you know, they are, very slightly.
 

And we're talking a lot at the moment about AI and the use of GPUs, graphics cards, and so on to do all this incredible processing. But I was using these things ten-plus years ago to try and turn speech into text, and I sort of accidentally invented what they now call inferencing for speech on GPU back in 2013.
 

So yeah, I've been around this technology a very, very long time now. And these language models we talk about, people have been using them in speech for 30 years. As you say, it looks like it's just kind of exploded. So I'm really interested to look at where these things have come from, why they've suddenly taken over the world now, and also, you know, what it really means for humanity.
 

You know, is it an internet-level event, something so profound that it's a paradigm shift? Or is it something that's going to end up being a little bit more incremental, perhaps not as scary as everyone thinks?
 

[00:03:38] Marco Ciappelli: Yeah, and I feel like that's something that, I mean, we hear both sides, right? 
 

I mean, you can open the internet today and look at different magazines and papers, academic research, letters that everybody signs saying we need to pause. But in the meantime, the same people are going on and pushing forward with this while telling us we need to pause, so that's another conversation. But there is the utopian vision and the dystopian vision, and for me it's always nice to look into those two extremes; the reality of things, for me, is where we are now. And because I have you, and you've been on this for a while: what was the eureka moment that made this big difference between what you were working on, recognizing language and translating it into text, and actually creating something that is simply putting words one after another, but that seems like it's intelligent?
 

Let's put it that way. Yeah.  
 

[00:04:45] Nigel Cannings: Yeah, it does. It does appear to be intelligent. And this idea of a language model, as I said, has been around for a long, long time. I've been working in speech recognition for 13 years now, and we used to use language models to help effectively determine what the next word was.
 

So I always use the example: if someone says, "the cat sat on the...", you'd expect the next word to be "mat". And so we used to use statistical models to say, well, if you've got the choice between the word "rat" and the word "mat", you're going to pick "mat", because that's the most likely one. Those statistical models have been used in speech recognition for a really long time, and we actually call them language models.
 

So you can begin to understand where the concept of a large language model came from. It's something that was really little and now they're really big; we were trying to predict one or two words. We used bigrams and trigrams, two-, three- and four-word sequences, to work this out, and now we're getting into context windows of a hundred thousand for things like Claude.
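
To make that concrete, here is a minimal sketch of the kind of statistical next-word prediction Nigel describes, using a made-up toy corpus; in a real speech recognizer the counts would come from far larger text collections.

```python
from collections import Counter, defaultdict

# Toy corpus, purely for illustration.
corpus = ("the cat sat on the mat . the rat sat on the floor . "
          "the cat saw the rat").split()

# Count bigrams: how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(prev_word):
    """Return the most likely next word given the previous word."""
    counts = bigram_counts.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))   # -> "on"
print(predict_next("the"))   # the most frequent word seen after "the"
```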
 

So things have changed, but bear in mind that even a couple of years ago the concept of a large language model was still something that was really only known to data scientists. In 2016, people were beginning to look at transformer models and that sort of stuff.
 

2018 was really the birth, the genesis, of this new technology. Google had a model called BERT, and BERT was the first really good open-source language model. But you could only predict about five or six words, ten at the outside if you were having a really good day. And that was really the foundation of this work.
 

But again, data scientists knew it. My team used it for a load of stuff; we were doing emotion recognition, sentiment recognition, all of this type of stuff with it. Where the change came, and what really changed it, was something that OpenAI did that was brilliant, really: they went from these things being something that could just predict the next sequence of words from a previous sequence of words to something that actually understood human instruction, and understood human instruction in a way that humans wanted to interact with.
 

And it's this idea of RLHF, Reinforcement Learning from Human Feedback, something they invented, which was really sticking a human in the loop of training these models. Now, it was brilliant, but also, going back to the ethics question, there were some massive problems in terms of how they assessed the data that came out of it.
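
As a loose sketch of the "human in the loop" step Nigel describes, this is roughly what collecting human preference data can look like; the function names are illustrative assumptions, and the real RLHF pipeline goes on to train a reward model on these pairs and then fine-tune the language model against it.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str      # the completion the human rater preferred
    rejected: str    # the completion the human rater ranked lower

def collect_preferences(prompts, generate, ask_human):
    """Gather human rankings over pairs of model completions.

    `generate(prompt)` is assumed to return one candidate completion and
    `ask_human(prompt, a, b)` to return whichever completion the rater
    prefers. The resulting pairs are what a reward model is trained on.
    """
    dataset = []
    for prompt in prompts:
        a, b = generate(prompt), generate(prompt)
        preferred = ask_human(prompt, a, b)
        chosen, rejected = (a, b) if preferred == a else (b, a)
        dataset.append(PreferencePair(prompt, chosen, rejected))
    return dataset
```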
 

You've probably seen the same articles I have, that a lot of the data being processed and cleansed for OpenAI was handled by workers in Kenya who were being paid next to nothing and were effectively forced to read the most graphic pornography, the most awful descriptions of bestiality and pedophilia, all of these things, and they were effectively slave labor.
 

They had no choice in this. And so whilst OpenAI did an incredible thing in terms of moving the needle on where these models could go, there was a great human cost along the way to getting us past that eureka moment and to where we are now.
 

[00:08:40] Marco Ciappelli: So, apart from that, and of course it's a big issue, there are many other big issues. You force people to do this work, and actually I'm learning this now, to be honest, I hadn't heard of it, but you also harvest tons and tons of content that you don't necessarily, apparently, have the right to. And that's where the line gets a little blurry, because I talk a lot with artists and people in the video game industry and people writing books, and it's like, humans by nature, we do look at things. We don't just...
 

We're not born with an idea, right? We educate ourselves, and we are in a way the consequence of our experiences. So when we touch on that side of things, what is your perspective? I mean, in a way it's like, how do you educate without actually getting things from others, and where, for the lawyer in you, does it become a legal issue of copyright?
 

[00:09:51] Nigel Cannings: It's a really interesting question, this one, because I ponder it quite a lot. I'm actually on the slightly more libertarian side of this one,
 

I have to say, because I've got a real issue with how we regulate these things, but I think that's a separate question from how you build them. Regulation of their use, regulation of the output, thinking about what they do, that's one thing. But where you actually get the content from? If you think of AI as like a really big human brain, effectively, of course it's got to end up
 

reading the internet, right? It's the only way it's going to do it. And you know, if you and I had the capability and the time to read and absorb the internet, we would, because...
 

[00:10:47] Marco Ciappelli: It's my dream.
 

[00:10:48] Nigel Cannings: Exactly, in a heartbeat, you would do it. And it's only really the fact that you and I are not going to live to be a hundred, or a million,
 

Marco, right? It's really sad, but we're not. But if we were, we would go out and we'd absorb this knowledge. So I actually think, in the same way, you can't complain about scraping the internet for artificial intelligence on the one side, and then do a Google search on the other, and not
 

see the irony of that. I mean, the fact is, the only reason the modern internet exists in the way it does, really, is because Google and other search engines went out and took every single piece of content they could and turned it into a knowledge base. We have access to knowledge in a way that... you know, when I was at school, I remember if I wanted to find something out, I used to have to get on my bicycle and cycle to the local library, hoping it was open, in the rain quite often, and open an encyclopedia.
 

And, you know, 99.9% of all the essays I ever wrote were effectively just cribbed from the Encyclopaedia Britannica. So it's this massive explosion of knowledge. So no, I think that, frankly, if it's out there on the internet, if the content is there, then effectively it's pretty fair game. And I think shutting the stable door at this point, people like Reddit and Twitter, now known as X, saying that you can't have it, I think it's just churlish.
 

I think we do need to allow that to happen, I really do. So no, I'm very much on that side. Where it gets interesting, though, is in that question, and you touched upon it, for things like musicians. I suppose the question becomes: where does influence stop, and where does copyright infringement start?
 

Because how many artists, if you look at the biography of artist X, it doesn't matter who it is, will say, well, I was deeply inspired by Bob Dylan, or whoever it might be, or the Beatles, or... I sound like an old white man here, don't I? But whoever it might be.
 

[00:13:14] Marco Ciappelli: It sounds like you are about my age.  
 

[00:13:16] Nigel Cannings: And so that influence is everywhere, and it has been throughout the history of these things. If you copy something, if you've literally gone and copied it, then absolutely, that's infringement. If you spew out the same words from an Elvis Costello song, then yes, you shouldn't be able to do that.
 

But each one of these artists has been influenced by the past. And our job as humans, I think, is to demonstrate to the machines that they're great at inspiring creativity, but they're not a substitute for it. An AI is never going to experience an emotional breakup. It's not going to break up with its partner.
 

It's not going to experience death. And we take so much inspiration anyway from the natural world, from things around us; we're very good as humans at taking inspiration from things. You'd never suggest that because you heard birds tweet in a particular way and it inspired a song, that somehow
 

that wasn't creative. The birds themselves have no intelligence. So I think of the birds in the trees as being the AI; they're kind of providing a level of inspiration, or the way stars are arranged in the sky. Those are naturally occurring phenomena. Our job as humans is to say to the AI,
 

Yeah, nice, but we can do better.  
 

[00:14:44] Marco Ciappelli: Yeah, no, I love this perspective because it's honest, and I have to say, overall I'm in agreement with you. And I love how you drew that difference, the line between influence and infringement. That's really the case for me.
 

So from a technological perspective, for people again looking for that eureka moment, you said the invention of the approach was different, but was it also possible because we have much more powerful computers? We can run incredible harvesting of the internet.
 

We can run algorithms that we couldn't run with the computing power we had a few years ago. Is that also a convergence of technology and ideas that allows us to take the next step?
 

I think the irony is going to be that we're going to discover in a few years' time that we didn't need all the processing power.
 

Okay. Right. I mean, that's the stupid thing about it. So we're...
 

explaining that.  
 

[00:15:56] Nigel Cannings: Okay. And actually, think about anything, right? I remember the very first mobile phones. I can remember walking around the City of London and seeing people carrying these massive bricks of battery, right?
 

You remember those things, huge things, the Motorola phones with the thing beside them. And over the course of 20 or 30 years, we've gone from that to something that can do 100,000 times more, that will sit in your pocket and give you a talk time of two days. So with all technology, be it software or hardware, someone always finds a way
 

of making it significantly more efficient. So at the moment, we are spending millions of dollars to train these foundational models. And that is a real problem, because you need to have a thousand or two thousand GPUs, and I've spent a lot of my life around GPUs and GPU cards.
 

When you look at that, that's a lot of racks of servers, that's a lot of building space, that's a massive amount of electricity. I can't afford that. You need to be a Meta or a Google or an IBM or someone like that to be able to afford it. So that's a problem for us, the fact that people need to do it.
 

And again, the same with the data, actually. The data that you need is not internet-sized, which is good. People are building these fantastic models with terabytes of data rather than petabytes of data. So actually, the amount of data required to train a model is within human grasp, but the amount of processing power isn't.
 

So yeah, what's happened is we've got to the point where there has been a kind of convergence with the availability of processing power. The algorithms themselves are actually very simple, but they have big problems within them. One of the reasons it takes so much power is because, effectively,
 

in these models, if you have a sequence of words, you're trying to understand how each word relates to every other word and how important that is. So every time you add more words to it, the processing time goes up quadratically. Normally we expect a workload to go up in a linear fashion: I start one process and it takes X amount of time.
 

If I add another process, it's 2X, because it's just twice as much work. Whereas with these large language models, it kind of goes through the roof. So we're at a point at the moment where we've managed to do this, and it's required massive resources to do it. You're absolutely right.
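
A quick back-of-the-envelope illustration of that quadratic growth: if every token has to be related to every other token, doubling the length of the sequence roughly quadruples the work.

```python
def attention_pairs(num_tokens: int) -> int:
    # In self-attention every token is compared against every token,
    # so the work grows with the square of the sequence length.
    return num_tokens * num_tokens

for n in (1_000, 2_000, 4_000, 100_000):
    print(f"{n:>7} tokens -> {attention_pairs(n):>14,} pairwise comparisons")

# 2,000 tokens is twice the words of 1,000 but four times the comparisons;
# a 100,000-token context (Claude-sized) is ten billion comparisons.
```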
 

Huge resources. But I think that in five or ten years' time we will look back on this and laugh and say, God, how stupid those people were back in the early 2020s to have used a thousand or five thousand GPUs to train this thing, because we now know you can do it on a laptop.
 

And of course, what that will mean is that the next big rise, whatever that might be, and I don't think it's AGI, not yet, but there will be other forms of intelligence that come along out of this, will then take the next thousand GPUs to build. It's always the way with technology.
 

You go over a hump, and then you kind of...
 

[00:19:39] Marco Ciappelli: I love this, and it made me think about a parallel that I want to go into, because one of the talking points I have in the notes with you is: is regulation of AI worth it? I'm not asking you the question yet; I want to make a comment first. When you said you can run this model, you can build your own API on your own laptop and create your own little artificial intelligence...
 

I mean, for fun, at ITSPmagazine we created our own, Tape 3 we call it, and it's our assistant and guide. You put it on the website, you create a knowledge base, and it answers questions about what we do. Sometimes it's great; sometimes it's like, where did you find this out, you know, the classic little hallucination. And I think it's great, but I'm running kind of a parallel here with democratizing this. When the internet became accessible to everyone,
 

Mozilla, Netscape, I was excited. I was in college at the time, and I was like, this is the best thing for democracy, for culture. I want to go to the Louvre tonight. And it wasn't, of course, 3D or anything like that. And then social media came. Everybody's a journalist; everybody puts crap out there on the internet.
 

So on one side, the access is great. On the other side, maybe too much access is not. So do you foresee an issue with access to AI for everyone becoming, again, this kind of, well, it's great, but it's unregulated? Overall, what's your position on the regulation side?
 

[00:21:33] Nigel Cannings: So, yeah, this goes back to me with my lawyer hat on again.
 

One of my real concerns about trying to regulate AI is that regulations on this sort of thing are always years behind where the technology already is. We're already thinking about the new, new thing; we're well ahead in terms of thinking about what's coming next. What regulation should do, to me, is spark a conversation.
 

The idea should be to make people step back and think about what it is they want out of the way they use data. That's really what this comes down to: how we use people's data. And there's still a big conversation we have to have in society before we can even have the AI regulation one.
 

And that is: are we willing to give our data away for free? In effect, do we accept the fact that every single scrap of data that goes into social media, into our email, into anywhere else is effectively accessible to big business to sell more stuff to us? Because to me, the danger of giving an ordinary person access to AI is relatively low.
 

What I'd like to think, and where I'd like to take it, is actually to say: how do we help people better protect their own data to start with? Because if I could take my data and put my arms around it and say, that's Nigel's data, and I then had access to AI to query and manipulate that data, that would be amazing. And I was looking at Tape 3 before the podcast; it enables you to look at the content that you guys have created and ask questions about it.
 

That is an amazing thing to be able to do. So I would like to be in a position where I, as Nigel, can take all of Nigel's emails and all of Nigel's social media posts, all of this stuff, and actually manipulate it in some way: inspire me for my next email, I want to write an essay on something. But I can't do that, because at the moment all of my email data is controlled by Google, by Outlook, and so on. This is the thing. So I don't actually think that AI itself is going to be as dangerous as we think it is.
 

I don't think the regulators really understand what they're regulating. I think they would be much better off, as I said, looking at the underlying issues of data in our society and the security of that data. I would pay, Marco, I would pay to have the ability to take all of my data and put it somewhere securely.
 

A lot of the work that we've been doing over the last few years... one of the things that I'm really interested in is confidential processing. My team and I have been working on confidential inferencing: the idea of being able to take encrypted data and process it
 

in the encrypted domain, and return an encrypted result. To be able to say: here's my email, I encrypt it, I give it to my language model, I say answer me some questions on this, or write me an email based on this, and send me the answer back, without a cloud provider ever being able to see it.
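
As a very rough sketch of the shape of that flow, and not Intelligent Voice's actual implementation: real confidential inferencing relies on confidential-computing hardware and remote attestation, and the shared Fernet key below is only a stand-in for illustration. The point is that the cloud host only ever handles ciphertext, and plaintext exists solely inside the trusted boundary.

```python
from cryptography.fernet import Fernet

enclave_key = Fernet.generate_key()   # imagine this provisioned into an attested enclave
client = Fernet(enclave_key)          # the client holds the matching key

def run_inside_enclave(blob: bytes) -> bytes:
    enclave = Fernet(enclave_key)
    prompt = enclave.decrypt(blob).decode()        # plaintext exists only in here
    answer = f"[model answer to: {prompt!r}]"      # stand-in for the real LLM call
    return enclave.encrypt(answer.encode())        # the result leaves encrypted

def untrusted_cloud_transport(blob: bytes) -> bytes:
    # The cloud provider just moves opaque bytes around.
    return run_inside_enclave(blob)

request = client.encrypt(b"Summarise this email thread for me")
response = untrusted_cloud_transport(request)
print(client.decrypt(response).decode())           # only the client can read the answer
```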
 

Now, that for me is the direction we should be heading in. Where my legal background comes into this is that I'm paranoid about people's data, I'm paranoid about people's security. So I'd like to see a push towards effectively deregulating AI where it benefits people individually, and regulating the hell out of AI where it's actually affecting people collectively. So I'm a bit wishy-washy. But a really interesting thing has been going around LinkedIn today about the fact that Zoom have changed their terms and conditions; what people are saying is that Zoom have changed their terms to enable them to use your data to train their AI.
 

And if you read into it, that's maybe 80% true, but there is a lot of it in there. What's interesting is, whilst this news story is coming out today, I went on the Wayback Machine and saw that even though people are looking at these terms and conditions dated the 31st of July, the change actually took place in March.
 

So what's happened is a big company has gone out and completely changed the way in which they interact with you and your data. That means that any transcript of a call I've had in Zoom, they're allowed to use and do what they want with. I've given them a license to do that, and I gave it to them six months ago.
 

So that's the sort of thing that I would like to see regulated. Because where the hell else am I going to go? I like Zoom. What am I going to do?  
 

[00:27:17] Marco Ciappelli: Well, plus if you don't use Zoom, you're going to use Teams from Microsoft. You're going to use Google. I mean, you still end up not owning pretty much anything. 
 

[00:27:28] Nigel Cannings: Exactly. And that, I think, is where the regulation is needed: transparency in data, security in data. Because where's the alternative at the moment? There might be some small Norwegian provider somewhere, and I believe there is a small Norwegian provider somewhere who will actually do it.
 

But, you know, just using this example, I can't send someone a meeting link for some obscure web conferencing platform in the far reaches of the internet. It's got to be a mainstream provider, but none of the mainstream providers give me the security I need.
 

And it's the same with mail and all of these things.  
 

[00:28:15] Marco Ciappelli: And you just went into that famous convenience versus security trade-off, which is exactly the sword that hangs over everybody's head here. Well, since we've gone into cybersecurity, if you don't mind, maybe we stretch it five minutes more. So you're not too concerned about general AI, and it's not here.
 

You know, this could be an exercise to think about, but definitely, I like to say that intelligence may not be the right word to use with AI. Maybe we need to call it, I don't know, influence or something, something that would start with an I, but it's not intelligence. So right now, and probably by the time I publish this it will be over by a few days,
 

there is Black Hat, for people that know it, that's Hacker Summer Camp. We cover it as ITSPmagazine, as we were in London a month ago, as we were at RSA Conference in San Francisco. And AI is the new buzzword; AI is everywhere. Security, AI-driven attacks, AI-driven social engineering, because you can customize phishing emails in a way that a human can't, because ChatGPT can probably do that.
 

You can scrape the internet. So there's a lot of talk around the use of AI and large language models to deliver cybersecurity attacks. What's your take on the security threats that are maybe most relevant for you nowadays, coming from language?
 

[00:30:06] Nigel Cannings: So I think, of course, the ability to produce code quickly from prompts and that type of stuff is inevitably going to spark an increase in attacks, of course it is, because even the script kiddies now have access to some really powerful stuff.
 

But that means it's incumbent upon us elsewhere in the industry to use the same stuff to try and plug the holes. The fact is it's always been that way: someone out there is trying to attack you, you're trying to defend against it, and they will always use the most up-to-date weapon.
 

And that's been the same since people were throwing spears at each other. Someone throws a spear, okay, so we'll design a shield. Great, you've designed a shield, so now I've got a bigger bow and arrow, and the thing escalates like that.
 

Right, so that's the way of these things. So in cybersecurity terms, I think we just have to continue to use the same tools and understand the tools. What worries me, though, and you touched upon it there, is the social attacks, the human-level attacks, because that's a bit more difficult.
 

And again, phishing emails: I think the phishing email thing has been a little bit overplayed, because it only really works if you understand the person you're sending it to. If it's a really well-crafted social engineering attack, then I might be able to craft something based on public things I find out about someone, maybe.
 

Maybe, maybe not. But actually the ones which are more difficult on the human level are things like the deepfakes. I work in and around audio all the time, and deepfake audio is actually one of the things that really keeps me up at night, because it's one where it's relatively straightforward to engineer an attack. We hear people talk about it; some of it's apocryphal, some of it isn't.
 

Some it's apocryphal, some it isn't. But, you know, you get, you get a call from your boss, you know, you're the accountant and you say, um, I need you to pay so and so, you know, uh, the invoice is coming in, I need you to pay them. So it's a two pronged attack, you've got a legitimate looking invoice, plus your boss saying we need to pay it urgently. 
 

[00:32:37] Marco Ciappelli: Plus they spoofed the phone number, so it looks like... there's that too.
 

[00:32:43] Nigel Cannings: Yeah, and we hear things about the ransom demands. I mean, there will be some of those, but again... it's when you get into it, because humans are not that good at detecting artificial intelligence.
 

I mean, AI can't either. OpenAI had to withdraw their text checker because they discovered it couldn't tell whether an LLM had written something or not. So even OpenAI can't tell. But that's the written things. That's why I said I think social engineering, particularly deepfake audio, is going to start to be a real problem for people.
 

So we have to find ways of ensuring that you've got a form of two-factor authentication in there, even just a password. If I phone you up and ask you to do something unusual, you need to make sure that I include the word "carrot" in there somewhere, because if I've included the word "carrot", it's real, because the deepfake audio won't know that. So we're going to have to start thinking a little bit more laterally about human-to-human interactions.
 

[00:33:51] Marco Ciappelli: Yep, or just hang up and call that person back, and know that the bank is not going to call you. And now we're going to be educating users who, to be honest, don't want to deal with this kind of thing. So it's a problem. I don't know, I could go in a lot of places, but we are at 35 minutes. I have to say I really enjoyed this conversation, because I think you brought some angles that are, you know, honest.
 

I think it's because you wear that double hat of technologist and lawyer, so I appreciate that. I would like you to come back again; I'm planning to have some more panels to talk about this, so if you had a good time, I would appreciate it if you came back. And one last question now before I close: undetectable voice spoofing. I mean, you're working on it, right?
 

[00:35:04] Nigel Cannings: Oh, it's coming. I mean, I mean seriously it's coming.  
 

[00:35:06] Marco Ciappelli: So is it undetectable? Like, I've played around with AI and my voice, and it's scary, scary good. Except, I make a joke that my voice, when it's spoofed, doesn't have the Italian screw-ups of English words that I have, so you could detect me.
 

But from an intonation standpoint and all that kind of stuff, it's scary. So how do you think an AI could detect the AI in that? Is there like an ID in the voice? Is it something that we just can't spoof?
 

[00:35:44] Nigel Cannings: So at the moment, and I say at the moment, I can tell you which engine generated your fake voice.
 

There's a blog post on our website about this, where we were showing how you can detect it. Someone was spoofing Barack Obama, right? And at the moment you can tell the engine, because the signature, the kind of x-vector signature, points back to an original.
 

But already we're seeing that, because the way this stuff is being generated is changing, that signature is breaking down. And it won't be long before you don't need much data about someone. You're absolutely right, there are certain things: you've got to get them in different
 

scenarios, how they laugh, how they cry. But for kind of day-to-day stuff, it's going to be really difficult to tell. And actually, you're probably only going to be able to do it if you've got some really good samples of what someone normally does.
 

[00:36:57] Marco Ciappelli: And you can run it like a comparison.  
 

[00:37:00] Nigel Cannings: You're going to have to do a comparison and look for those subtleties, because there's something, as I'm sure you're aware, in avatar terms, this concept of the uncanny valley: we're fine with avatars of cats, but what we really hate is an avatar which
 

almost looks like someone, but isn't quite them. It's a really weird thing. So humans are actually quite good at detecting really subtle differences between a fake and the real thing, and it's about how we capture that type of thing. It will normally be some sort of nuance of speech, but you'll have to be familiar.
 

So if we're not familiar with the person then detecting that tell is going to be really, really hard.  
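
For readers curious what "running a comparison" can look like, here is a minimal sketch assuming you already have a function that turns audio into fixed-length speaker embeddings (x-vectors or similar); the helper names and the threshold are illustrative, not Intelligent Voice's pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Higher means the two voices are closer together in embedding space.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_known_speaker(candidate_clip, enrolled_clips, embed, threshold=0.75):
    """Compare a suspect clip against known-genuine samples of the speaker.

    `embed` is assumed to map an audio clip to a speaker embedding
    (e.g. an x-vector); the 0.75 threshold is a placeholder that would
    need to be calibrated on real data.
    """
    candidate = embed(candidate_clip)
    scores = [cosine_similarity(candidate, embed(clip)) for clip in enrolled_clips]
    return float(np.mean(scores)) >= threshold
```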
 

[00:37:53] Marco Ciappelli: Wow. All right. And with that, I'm just going to disclose that I wasn't myself, and Nigel wasn't either; this was those evil twins having a conversation here. No, no, this was real, as real as it gets with the Redefining Society Podcast.
 

Stay tuned. Any way to connect with Nigel, you'll find it in the notes. There'll be a write-up about our conversation as well. Please share it, subscribe, get in touch. And yeah, Nigel, thank you so much. I'm really looking forward to having you back on the show.
 

[00:38:29] Nigel Cannings: Thanks. Thanks for having me on Marco. 
 

Thanks a lot.  
 

[00:38:32] Marco Ciappelli: Bye, everybody. Stay tuned for some more of these kind of crazy conversations. Bye.