Technology often bears the brunt of blame in news narratives, obscuring the underlying human and systemic responsibilities.
Guest: Daniel Castro, Director at ITIF's Center for Data Innovation [@DataInnovation]
On LinkedIn | https://www.linkedin.com/in/danieldcastro/
On Twitter | https://twitter.com/castrotech
On Facebook | https://www.facebook.com/CenterForDataInnovation/
On TikTok | https://www.tiktok.com/@datainnovation
On Instagram | https://www.instagram.com/centerfordatainnovation/
On YouTube | https://www.youtube.com/datainnovation
____________________________
Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________
This Episode’s Sponsors
BlackCloak 👉 https://itspm.ag/itspbcweb
Bugcrowd 👉 https://itspm.ag/itspbgcweb
Devo 👉 https://itspm.ag/itspdvweb
Episode Introduction
Hey listeners, welcome to a brand new episode of Redefining Society, where we dive deep into the intersection of technology, society, and humanity. I'm your host, Marco Ciappelli, and today, we're not just unpacking a contentious debate; we're setting the stage for a crucial conversation about the often misguided blame game played at the expense of technology.
Joining us is none other than Daniel Castro, a name that resonates with authority in the realm of information technology and internet policy. As the vice president of the Information Technology and Innovation Foundation and the director of ITIF's Center for Data Innovation, Daniel's insights are the ones that fuel informed debates and policy-making. His acumen for navigating complex issues like privacy, security, and the digital economy has made him a go-to source for leading media outlets and a pivotal figure in shaping internet governance.
On our show, we're known for cutting through the noise and getting to the heart of how technology intertwines with our social fabric. In a world quick to cast aspersions on the digital landscape, Daniel's perspectives are more important than ever. His piece, "Maybe Everything Isn’t Tech’s Fault," serves as a launchpad for today's discussion, urging us to question the ease with which we attribute the pitfalls of human judgment and systemic flaws to that convenient culprit, technology.
Together, we'll explore the real stories behind the sensational headlines and unpack the layers of accountability that often go unnoticed. Are we unfairly demonizing the tools and innovations that have become integral to our modern existence? Let's find out.
Get ready for a conversation that promises to be as enlightening as it is provocative. This is Redefining Society, and it's time to challenge what you think you know about technology's place in our lives. Stay tuned.
_____________________________
Resources
Maybe Everything Isn’t Tech’s Fault (Article): https://itif.org/publications/2023/09/28/maybe-everything-isnt-techs-fault/
____________________________
To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast
Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9
Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.
_________________________________________
[00:00:00] Marco Ciappelli: All right, everybody. Welcome to another episode of Redefining Society on ITSPmagazine. My name is Marco Ciappelli. You should know that by now. On Redefining Society, we talk about technology and society, society and technology, and how one affects the other, whether we do it intentionally or not. And I'm quite excited about the conversation we're about to have today with Daniel Castro, to whom I'll give the word in a few seconds, because I read an article that he wrote. I'll read you the title: "Maybe Everything Isn't Tech's Fault." And I fully agree with that. So I invited him to come on the show to have an open conversation about this, to see where he comes from with this opinion and why he decided to write an article for ITIF, the Information Technology and Innovation Foundation, where Daniel is Vice President and Director of the Center for Data Innovation.
So, enough about me. If you're watching the video, you can see Daniel is here; and for you listening, the audio, I guarantee, is here too. This is his voice. Welcome, Daniel.
[00:01:26] Daniel Castro: Thanks, Marco. It's good to be here.
[00:01:28] Marco Ciappelli: Yeah, I'm excited about this. I don't know, I just get excited about weird things. But that's why I talk about technology and society: because it's fascinating, and sometimes it makes you think.
And I think this is exactly one of those moments where you have to think, and maybe look back and ask: is this perception about technology the right one? Is there a right one, anyway? So, a little bit about yourself, and then let's dive into this article: why you wrote it, and what really inspired you to do that.
[00:02:05] Daniel Castro: Sure. So I'm Daniel Castro. As you said, I am the vice president of a think tank called the Information Technology and Innovation Foundation. I've been working on tech policy for the last 15 years or so, and it's been fascinating to see these debates play out, both in Washington and, more generally, the kinds of debates we have at our dinner tables: how technology is changing, how it's changing society, what it means for our children, our work lives, our everything. And one thing that I've noticed is that the tone of the conversation around tech has become increasingly negative. It used to be that, sure, not everyone loved tech.
I'm definitely a tech enthusiast. I'm a nerd. I love to geek out on computers and gadgets and all of that, and I always have. But not everyone liked it, and that was fine. There wasn't necessarily a hostility toward technology, which I think is now an undercurrent of so many of the conversations around tech.
This recent article, I was spurred to write it after I saw two headlines that were dominating the news during a particular week. One was about a family that was suing Google after a man drowned: he drove off a bridge that had collapsed while he was following Google Maps. So the argument was that this was Google's fault, and there were a lot of headlines around that. Now, first of all, clearly it's a tragedy. Someone died; he was part of a family. That's a tragedy. But on the question of who's at fault, all these headlines were focused on Google Maps: shouldn't Google have to pay for this?
And why didn't they fix this? And I kept thinking: okay, maybe. But as the driver, you are responsible for your driving. If you can't see where you're going, you shouldn't be driving. What if there had been a child in front of the vehicle? What if there had been a pet? If a person can't see, if they don't know where they're going, that's their responsibility.
When I took my driver's exam, they didn't say you need to be qualified unless you're following someone else's directions. You need to be qualified yourself. You need to be aware of your surroundings. And that's the kind of thing where I think we've lost some common sense about who's responsible in these different situations.
There's a tendency to just blame tech to begin with, and I think that's maybe not so productive.
[00:04:53] Marco Ciappelli: And in a way, the game is just at the beginning, because if we have issues with that kind of technology... I mean, the map has been around for a very long time, and it's an immense help. As you say, it's a tragedy, but if I read the article correctly, it also happened on a private road, which makes it even a little less the responsibility of whoever is in charge of fixing that bridge.
But I'm thinking: all right, now we're going into self-driving cars. We're already having this ethical conversation about who is responsible for the decisions the algorithm makes: save this pedestrian, save that pedestrian, save the driver. And who is responsible for that, the car, the driver, or whoever it is?
But this has happened already, way before this. So with AI and autonomous vehicles, it's going to get even worse. Now, you wrote an article a little further back, in 2017, I believe, where you were noticing this shift in the perception, well, not perception, actually in the way magazines and social media portray technology. It was already changing from the '80s. I mean, I'm a teenager of the '80s, so I remember when TV was the devil: you're watching too much TV. Then there were video games. I don't even want to go back to the '60s, when rock and roll was the devil, or the radio.
So there's always been some kind of this, but it's just getting exponentially, I don't know, worse, this relationship that we have with technology. We want to blame it for everything, maybe, I don't know.
[00:06:40] Daniel Castro: And you could even go back further than that. People were upset when the printing press was created, and not only the printing press: when pulp novels started appearing, people were complaining, oh, anyone can be an author now. Well, not anyone, but more people could be authors, instead of just the elite of society. That has always been a concern: people thought it was immoral, and they worried about who had access to spread ideas and knowledge. We've certainly seen that debate play out with the internet.
And you're right, in the study we did back then, I guess it's been five or six years ago now, we looked at the tone of coverage around technology over a number of decades in the popular press: the New York Times, the Wall Street Journal, the Washington Post, those types of publications. And we did see a remarkable shift over time. It used to be maybe more neutral, covering advancements, space exploration, whatever was going on. There were always critical voices, but it was much more balanced. Now there's a real shift, clearly documented in the sentiment analysis we did of these stories: the coverage is significantly more negative.
And one thing that's noticeable to me is that we now have a lot of new media, online-only publications, some focused exclusively on technology and innovation. If you look at their mission statements, their mission isn't to cover what's happening in the world of technology, or even to cover it critically. They're specifically focused on the power dynamics they see, whether or not those exist, on the exploitation of technology, the manipulation of consumers. They're coming at it from a preconceived view of what is going on.
Think about the traditional neutrality of academic research, or just a more neutral point of view. Of course, no news is completely neutral, but there was at least an attempt to be somewhat unbiased, to give fair coverage. I think we've left that behind. It's no longer reporting on what's happening and the different sides of it. It's really about how technology is causing various harms, because those are the headlines that get clicks, and those are the headlines that sell more subscriptions. So it's a bit of an unvirtuous cycle we're trapped in: bad news sells, and so that's the news they're going to push.
[00:09:26] Marco Ciappelli: Yeah, and that's exactly what I was thinking. You can apply this to just about everything, right? Bad news sells more than good news. When you do something bad, everybody points it out; when you do something good, people just go, yeah, whatever, I don't need to leave a comment for that, that's what I expect you to do. So in a way the entire news cycle has drifted toward not being objective anymore, if it ever was, but definitely not this bad. But let's focus on the technology. With AI, it's the same thing right now.
I mean, take generative AI: there are strikes, with legitimate reasons, and I'm not against that. But the perception of all of it is always about how it's going to take jobs, how it's going to make changes in society that are all negative. Yet AI is making our lives a lot better already.
So why are we not talking about that? Is it only an economic reason, or is it fear of the unknown?
[00:10:40] Daniel Castro: Yeah, I think that's one of the most interesting parts of the conversation around AI right now, and it's something I've of course been tracking very closely. Take the headlines about a company using AI to make hiring decisions, like Amazon, which had piloted an AI for hiring a number of years ago.
They didn't even actually roll it out in practice; they had done a pilot around using AI for hiring. They found that it was biased against women, and so they stopped the project. That's actually exactly what you would want to see: a company experimenting with the technology, maintaining oversight to see if it was causing any problems before it went into production, and, if it does, either fixing those problems or stopping the project. That's exactly what they did. But instead of headlines saying that an American company is correctly providing oversight of AI, the only headlines you'll ever see about that incident are along the lines of: Amazon employed sexist hiring algorithms.
There's just this disconnect between the reality of what happened and the headlines. And there's also a disconnect with, as you said, the many benefits coming from the technology right now. There was a great study I saw recently where scientists were using AI, generative AI specifically, to explore some medical research and create new compounds. Part of what they were doing in this specific research was trying to see if the rate of discovery could be increased. And they were able to show that the work they did with this generative AI model was equivalent to something like 80 to 100 scientist-years of research: that's how long it would otherwise have taken humans to do the same work. That's an incredible opportunity, right? We've seen generative AI used to create a new compound that can overcome antibacterial resistance.
These are things that will have significant impacts on people's lives. But those are not the headlines, right? The headline is: Tesla crashes...
[00:12:59] Marco Ciappelli: ...because somebody decided to sleep on the highway. So, yeah. But see, again, I go back to the point that it's often not really a technology problem; it's a human problem. For example, when you look at bias: sure, there are biases, because they train these algorithms on actual facts and thoughts and ways of living and beliefs that humans have. So I like to say that AI is human after all, because it's just scaling the things that we do in society.
But all of a sudden, we just expect so much from technology, and I think that's the problem. Even calling it "intelligence" is a little too much, in my opinion, to start with. We're setting the bar so high, and the bar keeps getting set higher and higher; with all the failures they tell us technology is having, you just make people live in fear of it and not really appreciate it.
So how do we change this vicious cycle?
[00:14:17] Daniel Castro: Yeah. You know, there's always this tension between the optimism and pessimism people have about the future, and then whether or not people are nostalgic about the past: whether they think it used to be better and we're somehow getting worse. I think that shapes a lot of this. And there are so many issues here. One of the other incidents I wrote about in this recent article was about someone who was wrongly arrested after the police used facial recognition.
There have been a handful of these cases over the last few years, and all of them, of course, are again tragedies. Someone is being arrested, being detained, their life is being impacted. These aren't good things. But again, there's this kind of myopic focus on the technology side of it.
And that's where I think it's misleading and harmful, because in each of these cases the issue was fundamentally about poor police work. First of all, the police weren't following procedures, because under no conditions should anyone ever be arrested simply because there's some kind of match in a facial recognition system. That's made clear by every vendor that sells this to police, and by every standard that's ever been put out within police departments. That just should never be happening. But second, and this is the bigger point: there are lots of people wrongfully arrested in the United States every year, not because of facial recognition but because of other poor police work. In so many cases this myopic focus on just the technology portion of a problem distorts and distracts from the bigger issues. We could get facial recognition technology working perfectly, and if that's all we focus on, we're still missing all the other people who are wrongfully arrested in the United States. If we focus on addressing the potentially harmful impacts of social media on teenage mental health, we're still going to have tons of teenage mental health problems in the world, and in our country.
So it's not that these issues aren't also important, but they're prioritized at a level that doesn't reflect reality, that doesn't reflect the size of the actual problems. And I think that's where we just need a better perception in our communities about where the problems really are, because if we don't have that, we're going to push for the wrong solutions. And that's the only way we can solve some of these issues. If we can reduce wrongful arrests across the country, that will also take care of the facial recognition problem. If we can address mental health in this country, that will address the social media problem.
In so many of these cases, I think we're going after some of the wrong problems, or focusing too much attention on the wrong parts of the problem.
[00:17:12] Marco Ciappelli: And you mention in the article, too, that a lot of these problems, some of those you just mentioned, or obesity, or the addiction to certain kinds of technology that may lead to these problems, existed before as well. They were not born with the technology we point the finger at. And I go back to: maybe, again, we expect too much from technology because we're lazy about doing other things. Instead of paying attention, we'd just prefer to have that easy button that's going to do everything. The Jetsons are here; I don't need to do anything anymore. But we need to stay in control; we need to keep thinking of it as a tool. And the big fear, and I definitely want your opinion on this, is: if we don't understand technology, how are we going to regulate it?
I mean, we've seen cases of people in Congress who honestly didn't really know much going into a conversation with Meta or with Google. If we don't cross that line, I think the problem is going to get worse and worse.
[00:18:31] Daniel Castro: That's right. And the good thing with some of this technology is that more and more people are using it; it's, in some ways, not necessarily inescapable, but integrated into many people's lives. So they see it, and we no longer have members of Congress who fundamentally don't understand what the internet is. You can remember Senator Ted Stevens saying it's "a series of tubes": that was this famous disconnect, a much older senator not really understanding technology.
Increasingly, I hear that not only congressional staff but members of Congress themselves are actually sitting down and using some of this technology. And I think that dispels some of the fears around it. It's fear of the unknown: when you don't understand the benefits, when you don't understand the ways you might use it or not use it, it's easier to imagine what might go wrong than what might go right. That's when it's easy to get caught up in the fear that we need government to step in and protect us from this uncertain future. Because I would argue that in many of these cases, the biggest risk is a lack of AI adoption.
The biggest risk for most people is not that you're going to be in an unsafe vehicle. We have safety regulators for vehicles, and they're paying a lot of attention to these issues, and I think they will continue to. The bigger risk is that we're not going to have AI deployed in radiology as quickly as we might otherwise, and that is going to mean tumors go undetected, and that will have life-and-death consequences. But those harms are harder to measure, right? Harder to see. Yet they're significant. Or think about education in our country: so many students are falling behind in learning, especially after COVID, and there's a lack of funding for teachers.
Well, look at what Khan Academy is doing: you can use generative AI to provide personal tutoring, personal interactions with students, answers to homework questions for someone who doesn't have a parent or a tutor they can go to. These are things that could significantly impact the quality of life of millions of students in our country.
But do we have any kind of national strategy from the Department of Education about how to use AI? No, we don't. They're only thinking about the risks to student privacy and some of these other things. And that's where, again, we have to get them, one, to understand the benefits, but two, to actually have some enthusiasm for it, right?
I mean, some of this has to be driven by interest. Think about the space race back in the '60s: people were excited about it. Some people were scared, but a lot of people were very excited. And it was that enthusiasm and that vision that led to a national space program that got us to the moon and gave us so many technologies we now take for granted, like GPS, which apparently is now causing us to crash and die. I mean, that's where it's kind of all come full circle.
[00:21:45] Marco Ciappelli: Yeah, I think the example of the space race is excellent.
Actually, I wrote a newsletter not too long ago about this. I was lucky enough to have conversations with astronauts, and you know the overview effect: when you go into space and look back, you understand that the world, the planet, is just one, that everything is in synergy. So in a way, going to space is a way to look back and understand the planet. And I made a similar observation about AI: we have never talked about ethics so much in the public conversation as we are doing now with AI. Now I go back and wish I had studied philosophy; back when I was studying, it was, what are you going to be, a teacher, a philosopher? Now you're actually in the conversation all the time, because we are kind of looking back at ourselves. And it's a good moment to do that. Following your article, the only way to do it is to really understand. And the problem is that these articles raising the alarm about the coming of a new atomic bomb, AI is going to destroy us, first of all, they don't even explain whether it's generative AI versus general AI versus narrow AI, which is very good at what it does in healthcare, looking at scans and seeing things that a doctor, a human, can't possibly see, because it has learned those patterns. But again, I think the change needs to come with education.
I mean, if we just shut the door, like you said, we don't understand it, so let's not use it... that train has left the station. That option doesn't exist, and it would be a bad one anyway. So how do we educate people? How do we change? I'm not asking you for the answer; I mean, if you have it, bring it.
But how do we break this circle where people are so, I don't want to say gullible, but they don't want to do their research? They don't want to read three articles on the same topic and then make up their mind. We didn't train people, in the educational system, to do that.
It's more like: this is the truth, it's on the internet, it must be true, right?
[00:24:16] Daniel Castro: And I think your point about ethics is a great one, because so many people are talking about ethics now. What's taking a lot of people, especially policymakers, a long time to catch up on is that with ethics, there's not one answer out there.
The solution that's often put forth is: okay, we just need rules to make sure you're building ethical AI, and then they think they've solved the problem. Of course, if you go and ask the philosophers, they'll say, well, wait a second, there's no one answer to what is ethical. There are so many different theories about what to do. People can't even settle on basic questions. Should you lie? Well, it depends. The famous philosophical questions, the trolley problem, right? All these things come up again and again. And yet they're saying, okay, we want you to code hard-and-fast rules into these machines that embed ethical principles we can't even agree upon. That's where there also needs to be more recognition of the fact that we live in a messy, complicated world, and many times there are no simple answers.
So the starting point is not that there's one right way to do things, and it's just a matter of getting everyone to march down that path. That's not, I don't think, the world we want to live in. That's a very dark world, actually; that's more of a dystopia, where we say there's only one right way to live.
So I think there has to be more recognition, too, of the fact that there are lots of different perspectives, and we should encourage that diversity of thought and views in this space. Of course, when you do that, you get everything, and some of those thoughts and ideas maybe aren't the best.
[00:26:11] Marco Ciappelli: Yeah, I mean, culture and ethics is definitely the big issue, but people love to hear that. Most people do want to live in a world that is black or white, good or bad, evil or holy. I don't want to go into religion here, but politics as well.
I mean, sometimes people want to be told. They choose the source, and then whatever the source tells them, whether it's a newspaper or a political party or a religion or whatever, that's what they take. Again, it's the easy button. We just need to learn to make up our own minds on things a little bit more, and not be afraid of it.
So, well, I hope that conversations like this, and articles like the one you wrote, will make people think, at least. Don't judge before you try something. I know plenty of people who say, I would never use AI. And I say, well, have you tried ChatGPT? Do you even know what you're talking about? No, I'm not going to mess with that. Then I'm not going to take your opinion on it, that's for sure. Well, any tips or advice you want to give to people who maybe start out thinking about technology with a lot of fear, and who might want to see if they can change their mind?
[00:27:44] Daniel Castro: I guess the last point I'd make is that often the technology sector, tech companies, that whole Silicon Valley world, is thought of as not your neighbors, not your friends, not your family who work there, even when it is. It's treated as this remote other force, and a lot of people have tried to characterize it as something harmful and negative, but more than that, as something outside of the rest of our community. And I think that's really unhelpful. We should remember that these are the same people who work at any other business; they work at the grocery store, the post office, whatever. They're part of our community.
It's not an us versus them. I've never really had a conversation with a technologist, whether at a company, in open source, or in government, who isn't trying to listen to their users and make things better for them. So I think the point is: instead of looking at it as an us-versus-them problem, it's really a community problem.
How do we solve these problems together? If we take that perspective, it's less about cracking down on the big bad wolf out there, and more about working hand in hand with our community to build a better future, which is what technology and progress should really be all about.
[00:29:08] Marco Ciappelli: Right, and I totally agree with that. And to be part of it, right? If you want to criticize it, criticize it with the intention of helping everybody else make it better. I love that perspective. Well, Daniel, thank you so much for your time and for writing the article that inspired this conversation.
I hope people will read it. There will be a link to the article and to your social media, so they can get in touch with you if they want, and to the Information Technology and Innovation Foundation, ITIF. And stay tuned. If you have comments or questions for us, put them on social media.
Don't be afraid, they're not going to bite. It's okay, don't be afraid. Okay, thank you everybody. Stay tuned for the next episode. Thanks again, Daniel. Thanks, Marco.