Redefining Society and Technology Podcast

The Philosophy of Technology | A Conversation with Daniel Sanderson | Redefining Society with Marco Ciappelli

Episode Summary

Explore the intersection of technology, ethics, and society in the latest episode of the Redefining Society Podcast with philosopher Daniel Sanderson.

Episode Notes

Guest: Daniel Sanderson, Owner at Planksip [@planksip]

On Linkedin | https://www.linkedin.com/in/danielplanksip/?originalSubdomain=ca

On Twitter | https://twitter.com/planksip

On YouTube | https://www.youtube.com/@planksip

Website | https://www.planksip.org/

____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

Bugcrowd 👉 https://itspm.ag/itspbgcweb

Devo 👉 https://itspm.ag/itspdvweb

_____________________________

Episode Introduction

Hey there, folks! This is Marco Ciappelli, and do we have an intriguing episode for you on Redefining Society Podcast. Have you ever stopped to think about the role technology plays in society or pondered the ethical dilemmas that come with AI? Well, if you're interested in discussions that live at the intersection of technology, ethics, and society, you'll want to tune into our latest episode titled "The Philosophy of Technology with Daniel Sanderson."

As many of you already know, our society is knee-deep in technology and AI discussions, which often leads us back to philosophy, ethics, and sociology. These topics have never been more relevant in today's public conversation. That's why we're diving into a thought-provoking talk with Daniel Sanderson, a philosopher hailing from Canada, who also happens to be the founder of the media outlet Planksip.

Daniel has a lot to say about how technology impacts society, our fears, and the role philosophers play in redefining society for the better. He discusses the current climate and how the psychology of society could shift if we understood that we are at a critical point in history. The episode also explores what it would mean for AI to embody humanity, a question that challenges our core beliefs and dissolves our fears.

You won't want to miss Daniel's unique perspective, especially his ideas around a thought experiment that pivots the entire conversation about the potential of AI and technology. So come join us in redefining society and consider some provocative perspectives that are likely to shape your own understanding of where humanity is headed.

Make sure to hit that subscribe button so you never miss out on any of our future episodes. And, of course, if you're inspired by what you hear, feel free to share it with friends, family, and anyone else who's up for a hearty intellectual challenge.

Now, push play and give it a listen. Trust me, you're in for a treat. 

Cheers!

_____________________________

Resources

____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Marco Ciappelli: And here we go. Hello, everybody. This is Marco Ciappelli. Welcome to another episode of Redefining Society podcast, where, as you know, we talk a lot about AI, about technology, about climate change, about everything, really, that affects us. And as a society, you can interpret that as everything we do, all the ways we interact with each other. 
 

And most of the time we don't think about it. You know, we just go along. We, we get our culture, our communities, the way we think, our country. But luckily there are people that stop and think about what are we actually doing? Why are we doing certain things? Uh, I don't know what is the meaning of life or something like that. 
 

And usually those are philosophers. And as I joke often lately, philosophy, ethics, and the way that we live in our society, sociology, have never been as relevant as today in the mass media, in the, in the public conversation, because of AI, because of technology, because of privacy. So we always end up there, but from different angles and with different guests today. 
 

For the people watching the video, they already see, uh, Daniel Sanderson is here, all the way from Canada, and for the people listening, yeah, it's true, he's here. So, uh, Daniel, um, let's hear your voice and, uh, introduce yourself. Who is Daniel?  
 

[00:01:30] Daniel Sanderson: Marco. Thank you for having me. Um, it's my pleasure to be here. 
 

I, um, I'm a philosopher, really, at the end of the day, which, uh, you know, may start to, uh, repeat itself for people that really understand philosophy, that you love wisdom, and, uh, well, I love wisdom, and, you know, where do we draw that from? I think it's, um, from thousands of years of written text, uh, from people who have thought a whole lot more than I have or you have, right? 
 

There's this, this tradition of lineage that passes down from person to person, and I'm... I'm someone that just participates in it and loves that participatory, um, experience, I guess, right? That's, that's the simple thing. So as the philosopher, I lead with that. Uh, I also am the, the founder and the owner of a media outlet called Planksip, uh, which, um, Planksip stands for plank, like a piece of wood, an organic platform, and sip like a hot cup of coffee. 
 

Or a tea. That's supposed to, uh, get people to think about consumption. And I, I'd like to change what consumption means. So if, if there are any creators out there that, that, um, try and create podcasts, right? Like we're doing now. Uh, write articles. Uh, create content. This, this creative process has the potential, um, and a shaping function. 
 

Now I'm leaning into a little bit of a, of a Platonic sort of thing, right? Shaping, forming, forms, this kind of thing. It, um, really is an ideal process and part of our psychology. So I like to tap into that and, and emphasize the discursive nature of what we can do and what we can discover together. 
 

And that's exactly what we're doing here today. And what I do with the people that are involved in my network, my friends, my family, and, and you as well, Marco. 
 

[00:03:41] Marco Ciappelli: Well, I love it. I feel like I've I've seen a little bit of myself in what you said, and we were talking a little bit before starting, you know, turning on the on air or recording sign, and yeah, I mean, I think we have a common passion and uh, I love what you, what you do. 
 

I want to learn definitely more about, uh, the publication that you have. And I suggested maybe we will do that on my other podcast, where we can really dig into that and, you know, content creators, which is a very popular, uh, terminology used everywhere now, from TikTok on. You can argue there's different kinds of content, for sure, but today on this show, I want to dig more into your, your vision as a, as a philosopher and someone interested in the future of humanity, which is, 
 

you know, questionable, in a way, where we find ourselves today when we look back at history. We're now dealing with things that probably a hundred years ago we didn't even think about, like AI. Yeah, everybody's talking about that. So, you know, we need to drink, I guess it's a drinking game: every time we say AI, we, we have a sip, which connects with what your publication is. 
 

So it's a complicated conversation. And like I said, it's back in the public conversation, really a mirror, I think, looking at the way that, you know, generative AI, for example, collects all this information, um, studies, in a way, who we are. I'm almost thinking about The Fifth Element, when, you know, the Fifth Element incorporates all the knowledge of humanity in a few, in a few minutes. 
 

And, and then who you are, it's just, you're, you're just somebody that is human. You're not somebody from another world. You just incorporated all that knowledge. And I think you get the bad and you get the good. So as a philosopher, this technology, um, the bad and the good of it. I mean, are you more a pessimist, a dystopian, or a utopian when you look at this? 
 

[00:05:53] Daniel Sanderson: That's an excellent question. And I think I, I exist, my opinions and what I'm actually going to explain exist, in relation to our society. Uh, ancient Greek society had a little bit of a different culture. The culture was orientated towards perfection. So, you could say, if, if they had the technology at their fingertips. 
 

Okay, the Promethean, which represents technology in and of itself, right? Promethean technology. Um, I guess I should explain: Prometheus was a, um, a god that was punished by Zeus for giving humanity technology. Giving humanity the ability to, um, I guess transcend out of the ordinary, right? Now, if the Greeks had a technology like... 
 

we're experiencing, an AI technology, and some of those fears, I think they'd be more optimistic, to use your framing. Okay. The reason why I think they'd be more optimistic is that on the forefront, the psychology that I'm witnessing is very fearful. So in response to this fear, I wrote a book. A lot of people write books. 
 

I think, um, I'm going to evoke, uh, Darwin for a minute. So Darwin, when he, um, when he wrote his bombshell of a book, he was, he was immersed in the culture of, uh, religion at the time. His wife was, was horrified to know the, the paradigm that he was going to unleash on the world; he, he'd prefer just to have left it. 
 

And until after he had passed away, Wallace, who discovered, um, discovered, you know, air quotes, uh, evolution at the same time, kind of was the catalyst to kind of move that forward. Right. Okay. So, um, and I want people to pause for a minute, because, is this guy actually comparing himself to Darwin? Right. 
 

And I say, only on one simple point, and that, that point is, is that I've written over 33 books. I've not published a single one. And you're gonna think, well, why am I waiting to publish these? I'm probably going to publish a few. One of the ones that I'm most proud of is a book called Will Freeman. And it's exactly that. It's a, it's a, um... 
 

It's about an android. It's about artificial intelligence. Now, I wanna, I wanna frame this for you, okay? Cause the entire book is framed around the fact that humanity's existence is going to end. Just point blank. That's what you have to accept when you start reading the book. Humanity's existence is going to end. 
 

Okay, now outside of the book, that could happen at any, you know, for any reason; it could be an asteroid, it could be a virus, it could be, you know, whatever. Okay, but there's a universal acceptance that the, the, um, extinction of the human species is inevitable within 30 years. Okay, now, if we kind of parallel that to the conversations that are happening right now, um, there are a few fears that might make that somewhat relevant. 
 

Climate change, nuclear fears, uh, war, resource scarcity, this kind of thing. But, the idea is that it's like, the jury is still out. If I make those claims, somebody else is going to say, no, no, that's not true, you're over exaggerating. Right? So, the fundamental starting point of this particular book is an acceptance that that's happening. 
 

You say, well, why would we do that? I say, well, it's a fiction. We're basically saying that in this world, it's accepted that we're, we're done, okay? Now, assuming that we're done, what does that mean for artificial intelligence now? Notice how the psychology instantly switches, and that's what I wanted to leverage. 
 

When the psychology switches and says, uh oh, we're done, what is the potential of artificial intelligence? You know, and I'm talking general artificial intelligence, an artificial intelligence that would, um, uh, capture the essence of a human. So a sentient one. Absolutely. Absolutely. So then the, then the conversation goes to say, well, if we wanted to embody all of humanity into a sentient being, how do we do that? 
 

What would we put into this? What, what would be included and what would be excluded, right? So that's the subtle play. And now you have a project, a humanitarian project where the drive, the single force drive is to actually impregnate that technology with everything that it means to be human. Okay? 
 

Whether or not it's possible, this is the point. It's like a thought experiment. The interesting psychological thing that happens when you do that is that something reveals itself. Our fears come to the forefront, and they dissolve away, because the answer is we no longer have those fears. We're literally going to die. 
 

There's no more humanity. So our entire hope for the continuation of the human species is this transcendent, sort of like next-generation, uh, AI. It's biological, right? It's, it's not transistors and, uh, on-off switches or, or binary switches; it's, it's actually a biological entity, right? Okay. So that was the premise of it. 
 

And it was an enjoyable ride to be able to write something like that, because it really illuminated for me how much fear is in and around the conversations of artificial intelligence. 
 

Wow.  
 

[00:12:13] Marco Ciappelli: So. Okay. Uh, there is some thinking going on there, for sure. I, I already kind of saw a portion of the movie in my head as you were telling this, this, you know. And, and of course one of my first, uh, thoughts is, many times I say, well, okay, what kind of end of humanity? I mean, is it the end of humanity 
 

like a virus probably would, uh, or a meteor that destroys the AI too, destroys the planet? Or is it just the end of humanity, so either form of, I'm doing air quotes, form of life, even if it's artificial, still exists? So I'm assuming that's, that's the scenario you're talking about, and we are excluding the possibility to go out of our planet and save the species. I guess that's not in the book. 
 

[00:13:06] Daniel Sanderson: Yeah. Well, that's the interesting thing, isn't it? So psychologically, let's, let's talk about what's happening. 
 

[00:13:11] Marco Ciappelli: There is no option. That's what you're saying. You're like, you're facing the end.  
 

[00:13:16] Daniel Sanderson: Exactly. And, and I found that when I was writing this, I mean, I was very explicit in, in the beginning of the book to set the stage, so to speak. 
 

And I fought the urge to try and figure out what exactly those details were. Right? And, and I, I settled, over a process of a year and a half, on trying to leave it, um, as vague as possible. Because the tendency, exactly what you're doing, is, but wait a minute, is it an asteroid? Is it this? Is it that? And the shift focuses, or the, the focus shifts, from the plausibility, the realistic, um, like, like how accurate is that potential from actually happening? 
 

And it ruins the book. It ruins the idea of, let's just assume that there is an end to this. And this, this is kind of like, um, Platonic training 101. Right. What is the ideal for a, um, for an artificial intelligence that is supposed to represent humanity? What is that? 
 

If you take humanity, what is it? What is the single, and it could be multiple things, in fact, you know, they really are, but what is the essence of what it means to be human, right? And so that's what the meditation is, not so much on the plausibility of the, you know, of the argument, right? There's, there's plenty of doomsday scenarios, right? 
 

So it's like, okay, doomsday is happening, let's not talk about how it's going to happen, right? Because, I mean, think about this one movie, uh, Don't Look Up, okay? I mean, you don't understand, like, the, you know, the, the narrative at play, it's instantly divisive, right, because the, the climate people on one side are saying, yeah, this is a metaphor for this. 
 

And then the, the, uh, I don't know, the more conservative libertarians are like, this is ridiculous. We're just going back to work and business as usual. And we understand what, you know, you guys are doing, and it's overblown and it's ridiculous. Right. And I thought, I can't go down that road. I can't. It's not... I want to know, if we had the power, uh, technology-wise, to put the essence of humanity into a robot, to be, you know, to be 
 

[00:15:42] Marco Ciappelli: a human form or what would, what  
 

[00:15:45] Daniel Sanderson: would we put into it? 
 

How would we capture that experience of what it means to be human? So we start to have other conversations and think about, well, there's a lineage to it. There's knowledge, there's experience, there's love. There's so much that captures what it means to be human that now, all of a sudden, the plausibility of how the world collapses, okay, is irrelevant to this fiction. 
 

It just doesn't matter. It does not matter, okay? And I would say that, considering, um, you know, going back to Darwin, um, there's no guarantee that any one species on this planet is destined to live forever. Okay. Right. There's, there's no guarantee on it. And you almost think about one of my favorite academics, which would be E. O. Wilson. He, um, he's known as the ant guy. He is a Harvard ant specialist. And, um, basically his, um, his approach is one of consilience. So he looks at the way the world is and thinks that we, uh, you know, we need to dramatically come together to be able to fix the planet from a biodiversity standpoint. We need to be more, um, Half-Earth orientated and, and work more in lockstep. 
 

I guess just work to, to promote the biodiversity of the planet. One of the things that, that E. O. Wilson was, um, I guess, known for, a little tidbit fact that I didn't know, is that the biomass of humanity, or humans on the planet, equals that of ants. And if you think about it for a minute, you're like, wow, that's interesting. 
 

There's like, I don't know how many ant species there are, but there's hundreds of thousands of ant species. You know, I think, I think that's what it is. There's, there's a lot more than one. We're one. We talk about survival of species. Well, there's so many species on the planet, in terms of ants, and they have the same biomass that we do. 
 

Take all of the mass of ants, and you look at all the mass of humans. We're right on par, right? So, you know, and I think if there, if there are events that really challenge our survivability, we talk about, um, uh, the production of food, for example, we can look at mass famines and maybe something triggering, uh, I don't want to get into a doomsday conversation, but this is, you know, we could go through a period of that. 
 

We could go through a very, um, challenging time where the, the decadence and the prosperity that we, especially in the, in the northern hemisphere and in the, you know, the Western world, we're experiencing, um, unprecedented times of, of wealth and prosperity. And there's nothing guaranteeing that that will continue. 
 

Nothing saying that it can't actually get better, right? This is kind of how, you know, you have to approach it. But when you look at it and you think, we're one species, and in terms of evolution and adaptability, you know, we've got one chance. We have one chance with one species. Ants, for example, have, like, many more possibilities to be able to adapt to their surroundings. 
 

There's more species just simply.  
 

[00:19:05] Marco Ciappelli: Yeah. Yep. Yeah. I mean, it's hard not to go into a doomsday scenario when we, when we talk about this. Unless, unless we, we take this exercise and this scenario and we say, well, okay, we, we just saw the end and the future, or what it could be. And what do we do with that now? 
 

Right? So what, what do we learn? Like, I, I have conversations with people that run complex risk assessments for the planet: where do we focus the most if the, you know, the world is gonna end? Um, and it's not a, a fantasy thing. Although that's really the thing between fantasy and reality: we don't know what it is anymore. 
 

But with that idea, what do we learn? Because the thing they often say is, you know, I'm not afraid of technology, I'm afraid of how humans are going to use technology. I feel like technology is an extension of who we are. We created it, and, what I'm pointing to, I mean, it's almost like part of being human. 
 

It's an extension, in my opinion, of being human. So it would actually make sense that ultimately we hand our humanity to a piece of technology that we have created, and you kind of keep living on into another form. I mean, I don't know if that's where you're going, but that's what I'm seeing. And so I want to bring it back to, to the reality of the contemporary world, where, again, can we do this analysis and imaginary scenarios, that are very plausible anyway, and say, hey, can we change something today? 
 

Are we doing the right thing? Are we building technology that serves us? Or are we building technology that just makes us money in the short run, and there is not really... I mean, for me, now, that's the big question. Like, instead of waiting for this end of the world, where you created the vision for it, what do we learn from that? 
 

[00:21:17] Daniel Sanderson: Yeah, yeah, I mean, that's... I have a lot of sympathies for what you're explaining, actually. I really do, because, um, we've all seen, you know, the hockey stick graphs and the explosion of technology. Um, like I mentioned, we live in an unprecedented time of prosperity for a lot of, um, I guess, the Western world and the northern, um, continents, I guess, right? 
 

And when you, when you look at it, I think, I think you're going down the right path to say, um, can we slow it down, right? I mean, we're going so fast, right? And that, that's one thing: when you, when you have a car that's going really fast and you want to, like, kind of limit it, you put a governor on it. 
 

And so governance is a very, very interesting conversation to have. How do we, how do we govern, how do we govern and slow down, um, the, the rapid explosion of technology? Um, and 
 

I think the key here is a little introspection, because what you're doing by slowing down technology, okay, and the adoption of technology, is you run the risk of stifling innovation. So there's a couple of things that are kind of working against each other. You know, we're faced with challenges. And I think maybe you could just evoke entropy at this, at this standpoint: things will get more complex over time, and things are getting more challenging. 
 

So if, if we, if we enter into a, um, a state of agreed governance that restricts and slows down our rapid, um, adoption of technology, does this in itself threaten the existence of the human species? Like, we could have kept going with the pedal to the floor, so to speak, right? Okay? I mean, I don't know why I'm thinking about Elon Musk, but he's another one who, who, um, has got his foot on the gas pedal. 
 

He's a Platonic thinker, uh, as well, and, you know, he's, he's going full force, right? You know, are we going to get into a position where we say, hey, stop doing that, Elon? That's a really interesting thing, right? Like, where does that power come from? And I want you to think about the internet, for example. 
 

Um, and the nature of what it means to be human. Elie Wiesel was a, um, uh, a Nazi concentration camp survivor. And he's really famous for pointing out that within the heart of every human individual lies darkness as well as light, okay? 
 

So, the reason why I bring that up in relation to the internet is that, prior to the internet... by the way, if you look at the Google Doodle, the Google Doodle, it's got a 25 in there, and I'm guessing, right, that it's the birthday. And that's really interesting, because I remember a world prior to Google, 
 

[00:24:58] Marco Ciappelli: right? 
 

And you're right. It was 25, a few days ago.  
 

[00:25:03] Daniel Sanderson: Yeah, exactly. So, the, you know, the reason why I, I bring that up is that prior to the internet changing our lives, prior to that, we had no idea. We were filled with ideologies and hopes and aspirations of what the internet could be. I mean, if you went in a time machine and went back and had conversations with people about the possibilities of the internet, this was supposed to be like a panacea, uh, a pill that we could take, and it would, it would be, um, equal information sharing for all of society. 
 

And it would radically transform society. Right? That was kind of the idea, right? And um, could you have harnessed that in a way that made it ideal for everybody? Well, ultimately, if you look at the way we are as part evil, part good, more good than evil, on the aggregate, you're going to say, hmm. You've got to take the good with the bad. 
 

You've got to face the bad with courage. And so I want to bring it all back to, to four virtues. And this is the introspective approach: rather than trying to stay in front of the freight train and push it and move it, you think to yourself, can I practice, uh, the cardinal virtues, for example? Can I individually and introspectively look at things like prudence, justice, fortitude, and temperance? 
 

Can I do that? Can I become a noble man, or woman, right? Or whatever. So anyways, you, a person, okay, you can all practice this as rational beings. We can practice this introspective, self-improving type of thing. 
 

We can also hold our leaders accountable with those same types of virtues. So, when you can make a difference, when you can change, that's where judgment emerges from, okay? Um, and I, and I think that's, I think that's the key. Um, I think when we think too big, when we try and, like I said, get in front of the freight train and move it with our hands, it's coming at us. 
 

We're like, hey, let's have a conversation, maybe we could just shift it over this way. No, you're going to get trampled. You're absolutely going to get trampled. So, um, we're just here for a blip. Hey, to give you one other kind of idea about, uh, the finite nature of human existence, at least on lifetimes, um, and bringing it back to evolution: 
 

Imagine what the human species is going to be like in 5,000 years. I mean, even assuming that we can kind of weather that storm, right? 5,000 years. There's not going to be a lot of evolutionary change, I wouldn't imagine, right? I mean, humans 30,000 years ago in the caves of Lascaux still have this human character to them, right? 
 

At least when you strip the technology down and you've got the handprints in the caves of Lascaux, and it's like, they're, they're human. They're, you know, they're cave dwellers, hunter-gatherers, they're communities; they love their children and they, they support their tribes and their families, right? Okay, so you say, well, 5,000 is not going to mean very much. 
 

30,000 is not going to mean very much. Maybe even you could say a quarter million years, and humans are really not going to change that much, evolutionary-wise. Okay. But if you say 500,000, 4 million years, it's not like, um, transhumanism is some sort of a choice. 
 

We have to adapt, right? And if that, if that goes down that sort of way, we're surviving. That's what we're trying to do. Not like, well, hey, we could go A, B, C, or D. What does it mean to be surviving? That's a different question. And we have a problem thinking in those timescales. We almost apply the bias of one lifetime to say, Oh, in 6 million years, I don't want to be living like this. 
 

I don't want to be, you know, you know, living, uh, you know, virtually, in virtual worlds. Well, actually, your biology will be different in that time period. You have no idea. My grandparents had no idea how the world was going to change and shape over one lifetime. What makes us so preoccupied with the future, to have the assumption to think that we could actually predict what's even going to happen in two and three lifetimes? 
 

To say, let's just push this freight train over that way. It'd be like, What? 
 

[00:30:09] Marco Ciappelli: Wow. So, two points. One, it made me think, this, uh, this creation, this, uh, this creature, I'm thinking Frankenstein here, that, that we created. It kind of reminded me of the, the golden record on the Voyager that we put there, with, you know, Carl Sagan working on it: a piece of poetry, a piece of music, a piece of history of who we are, hoping that it will reach somebody that actually does care. 
 

And, you know, it's the essence of being human, in the early 70s, at that time when we did that. So, so this creature that we, we create... I started having a feeling, talking about transhumanism, that we're going to be more and more integrated with, with technology in our body. And that could be for a healthcare reason, for wearables, or who knows for what, maybe for flying. 
 

And so I'm thinking that maybe that AI, the general AI that you imagined to survive all of us and represent us, maybe will even have some genetic human part in it. So it's not just our consciousness, but also some of our biological life. So I'm just going along with your, with your vision here. Love your, your perspective on that: thinking in the long run, maybe not 30 years, but 500. 
 

[00:31:47] Daniel Sanderson: Well, the thing, the thing is, Marco, that it's not, um, there's nothing prophetic about it, which means I don't think that's the way history is going to unfold, really. It was one perspective to show and illustrate our bias, right, where we're so impregnated by fear. There's so much fear of the unknown that it, it greatly influences our decisions and our creativity, right? 
 

And I think that is, that's just the point. That's it. I don't want to have somebody say, oh, Daniel's advocating for transhumanism, or he thinks that this is the way it's going to be. No, no, no, no. I am not being predictive in any shape or fashion. I'm only trying to offer, uh, I don't know, a motive. 
 

Okay. Like  
 

[00:32:43] Marco Ciappelli: a pause.  
 

[00:32:44] Daniel Sanderson: An alternative, to say: temper that fear. Temper it. 
 

No, no. The problem is that people get so wrapped up in, I think this is the way it's going to be, and then that's your position, and then you defend that position. And I'm like, Marco, well, I have nothing to defend, other than the fact, other than the fact, like, this is just, it's very light. Yeah, 
 

[00:33:09] Marco Ciappelli: you're pausing and it's a different angle to look to look at things. 
 

[00:33:15] Daniel Sanderson: Yeah. Yeah. It's just, maybe I shouldn't be, like, maybe I shouldn't be so fearful of the potential. And what happens if I shift the mindset and I think, oh, what happens if I'm positively looking at that, right? Your show is really interesting, on redefining society. And in the early days of Planksip, in 2016, my, uh, my Twitter profile, it's like, thought stories and something or other related to big data. 
 

Now it's become a little bit of a pejorative, where big data is like, what is big data? And, you know, the arts and certain divisions of the university will actually look at things like big data and think that they're coming up with some sort of proof, when they're not. It's just a large number set.
 

What does the number set tell us? And there again, there's a fear factor around large data sets, that something we're immersed in is going to be able to control us. Right, there's that aspect. But what is the liberating aspect of large data sets? Hey, it's just information. It's just knowledge. So when I say liberate, this is the thing:
 

I would like people who think about big data to think about ways that it could free people, enhance people's lives, okay? I remember there are some analytics about who you should marry. Now, they're not telling you who you ought to marry. They're saying that, on average, you marry the third person that you have a relationship with.
 

Now, why that's important is that you don't have to head into the first and second relationships as somebody who's oblivious to that information. You have that knowledge. What does that mean? There's a shaping function there. It doesn't mean that you have to abandon your virtues, but what it's saying is that we all approach relationships in a particular way, and the reality is that there's kind of a sweet spot for finding your life partner, right?
 

As revealed through analytics. Now, I'd like to know about that. I think that's valuable information. It doesn't mean that I need to... say, for example, I was a leader of a church with very specific dogma about, um, premarital sex, okay, for example, and finding the person you're going to be with, and all that sort of thing.
 

There's nothing saying that you can't articulate that, that you can't embody those values and manifest them in your relationships and your faith. But that's information that's really valuable. And big data is just information; it's what we do with it.
 

I'd love to see more of a liberating aspect to it. Take, for example, Google's earlier mantra. Which, by the way, with PlankSip was really interesting: I had to get special permission from Google to use the Google colors on the PlankSip logo. There's a P, and it's got the four colors of Google.
 

I had to get special permission, and they gave it to me; I just can't outwardly say that they are endorsing anything that I do. But basically, I'm very much for an information-rich society, very much so. The key is, for thinkers like you and me, and as many other people as want to participate: how do you make it a...
 

vehicle for freedom? How do you impregnate the wisdom into big data and its collection and activity? These are the conversations that I want to have. Fear... I don't really know how to react to that, other than to run away. Right.
 

To me, if something is fearful, that's one of my first litmus tests. I go, well, if I'm scared, I evoke the god of the saber-toothed tiger, and I'm going to run the other way. But there's nothing threatening about having this conversation.
 

[00:38:20] Marco Ciappelli: Well, I think we're looping back to where we started.
 

And because we're now running short on time — although I officially ask you to come back again so we can keep this conversation going — the fact is that you started by defining yourself as a philosopher. Philosophy, which means love of wisdom, right? You know, knowledge.
 

And now we're in a loop where I think everything we've created in technology is part of who we are. I'm a big fan of that. I'm not afraid of AI; again, it's a way to look in a mirror at ourselves. And now you're bringing in big data, which is, as well, yeah, knowledge. It's information that we can have way faster,
 

in a way more articulated and organized through AI, for example, that could allow us, if we use it wisely, to make the right decisions. But also, just as it can amplify the good, it can amplify the bad as well, which is again yin and yang, good and bad, and all of those fun conversations. So again, I'm not afraid of technology.
 

I'm afraid of what we can do with it. And I think it's just part of who we are. If we see all the things that we see now in, you know, ChatGPT coming out with biases, well, that's because we taught it that. That's...
 

[00:39:59] Daniel Sanderson: a reflection through  
 

[00:40:00] Marco Ciappelli: all the harvesting. A reflection of who we are, where all the harvesting of information is done.
 

It's not like it's creating its own opinion; it's just a learned reflection. Yeah, and it's not really intelligent either. It's just putting together what it learned from years and years of our knowledge that we put on paper or in binary form. So, well, look, this is a lot to think about.
 

My brain is making a lot of connections, and I hope the audience is also thinking quite a bit, because that's my mantra: if we finish a podcast and people are thinking, and have fewer answers and more questions than when they started, I think we're doing a pretty good job. And you've been amazing in really bringing it to the next level. I guess, you know, being a philosopher, that's what you do.
 

And I would suggest you publish that book, man. Seriously, I want to see it. I want to
 

[00:41:00] Daniel Sanderson: read it. Yeah. It's, uh, yeah, that's  
 

[00:41:04] Marco Ciappelli: relevant now very much.  
 

[00:41:07] Daniel Sanderson: Yeah. Most of what I've written is nonfiction, and the fiction was very challenging to do. But there have been a lot of philosophers who write fiction.
 

I just happen to be one of them, to go along with some of the nonfiction stuff that I've got too. So yeah, I've got to get them out. All right, cool. I just seem to be so busy helping other people write their books and get their stuff out
 

[00:41:33] Marco Ciappelli: and put them first. But that's the story of my life. You know, believe it or not — and I think people know it by now — I've been thinking about writing books since I was born, pretty much.
 

And I always write for other things. I do write: I give people advice, I help people tell their stories, and I just need to find the moment when I actually put out my own. Which, in a way, I do through this podcast; you know, people follow, they see that angle of what I try to say. But I love a good fiction if it makes you think.
 

[00:42:09] Daniel Sanderson: Yeah. Well, you know, back in the day, in Plato's day, we used to write things on tablets or papyri, this type of thing. Now content creators have the ability to make this their art in conjunction with, you know, the written element of a book, right? So I offer that to you, if you'd like some help writing your book, or to any of the people who are listening.
 

You know, check out PlankSip, because we offer free publishing, right? It's what we do, and everything that swells around that publishing is your voice, is your contribution. Yeah, that's my mission right
 

[00:42:54] Marco Ciappelli: now, to help. Love it, I love it. All right, we'll definitely share that. For everybody listening, if you want to get in touch with Daniel, we'll have notes on the podcast and in the YouTube description with ways to connect with Daniel and with what he does, and anything you want to share with the audience following this conversation. That would be great, and I hope you'll come back.
 

I want to keep this conversation going. I had a really good time. Thanks, Marco. Cool. Thank you, everybody. Stay tuned for the next one. Take care.