The Human Pulse Podcast - Ep. #27
Back to list of episodes
LINKS AND SHOW NOTES:
Living Well with Technology. In Episode 27 of the Human Pulse Podcast, hosts Anne Trager and Fabrice Neuman shift from practical tech advice to a philosophical discussion on the evolving relationship between humans and machines. Inspired by the work of philosopher Jean-Michel Besnier, they explore whether our reliance on intelligent technology enriches our lives or causes us to surrender our humanity. The hosts debate the nuances of AI as a tool versus a crutch, citing examples of generative AI errors in journalism. They also break down psychiatrist Serge Tisseron’s four stages of our relationship with objects—viewing them as servants, witnesses, accomplices, and emotional partners. The conversation concludes with a look at the dangers of anthropomorphizing AI and the transhumanist drive to become "repairable objects" in the quest for immortality.
Recording Date: Nov. 16th, 2025
Hosts: Anne Trager – Human Potential & Executive Performance Coach & Fabrice Neuman – Tech Consultant for Small Businesses
Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn
We'd also appreciate a 5-star rating and review on Apple Podcasts and Spotify.
Chapters
(00:00) Intro
(00:29) The Philosophical Question: Are We Becoming Objects?
(01:35) The Shovel Analogy and the Pakistani AI Newspaper Error
(04:17) Stage 1: Technology as a Servant
(05:52) Stage 2: Technology as a Witness
(10:33) Stages 3 & 4: Technology as Accomplice and Emotional Partner
(12:43) The Anthropomorphism Trap
(16:22) From Adapting to Machines to Machines Adapting to Us
(22:15) Transhumanism and the Quest to Be Repairable
(24:10) Conclusion
See transcription below
Resources and Links:
Newspaper Issues Apology As Readers Can’t Believe What Made It Into Print (Newsweek)
https://www.newsweek.com/newspaper-issues-apology-readers-cant-believe-print-11047759
What happens when we marry brains to machines? (David Eagleman)
https://eagleman.com/podcast/what-happens-when-we-marry-brains-to-machines-with-sergey-stavisky/
How hackers turned Claude Code into a semi-autonomous cyber-weapon (Ben Dickson’s Substack)
https://bdtechtalks.substack.com/p/how-hackers-turned-claude-code-into
And also:
Anne’s Free Sleep Guide: Potentialize.me/sleep
Anne's website
https://potentializer-academy.com
Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr
(Be aware this transcription was done by AI and might contain some mistakes)
Anne Trager (00:05)
Hello everyone and welcome to the Human Pulse Podcast, where we talk about living well with technology. I am Anne Trager, a human potential and performance coach.
Fabrice Neuman (00:16)
And I'm Fabrice Neuman, a tech consultant for small businesses.
Anne Trager (00:20)
This is episode 27, recorded on November 16th, 2025.
Fabrice Neuman (00:26)
It's usually never longer than 30 minutes, so let's get started.
Anne Trager (00:29)
Well, Fabrice, since we've had a few very practical Living Well With Technology episodes recently, I thought we could do something a little more philosophical today. Perhaps that's because I was inspired by a French podcast I recently listened to, hosted by Étienne Klein, in which he had a conversation with a philosopher named Jean-Michel Besnier.
This philosopher was talking about his most recent book, whose title literally translated into English would be "To be nothing more than an object". In French, it's "N'être plus qu'un objet". The question they were discussing in that podcast is one I thought would be interesting for us to explore here. And it's this: by building all of this intelligent, human-like technology, all of these devices, are we really enriching ourselves, or are we gradually surrendering our humanity to become more like those devices? So I thought this was a good question. What do you think?
Fabrice Neuman (01:35)
Yeah, it is a good question. So, are we enriching ourselves? I think you're not going to be surprised if I tell you that I think the truth is probably somewhere in the middle. I have two examples in mind. When we use a shovel, for example, we do surrender our ability to dig holes with our hands, right? But we also enhance our capacity to dig faster and further down.
And if I draw a parallel with what we have today and the AI tools at our disposal, I guess we can say that sometimes we use a generative AI chatbot and, when we do so, we abandon our capacity to think. And I do have an example of that. It's just one chosen from multiple possibilities, right?
The example is the mistake made by a Pakistani newspaper in mid-October this year, just a few weeks before we recorded this. They had an article titled "Auto Sales Rev Up in October", and the article's final paragraph was very, let's say, typical. Let me quote. Open quote:
"I can also create an even snappier front-page-style version with punchy one-line stats and a bold infographic-ready layout, perfect for maximum reader impact. Do you want me to do that next?" End quote. Right? So I think in that case we can see an unmistakable surrender to the machine. This article was totally created by an AI
Anne Trager (03:06)
Okay.
Fabrice Neuman (03:17)
and copied and pasted word for word and printed as is, right? I found an article from Newsweek telling that very story, so I'll put the link in the show notes. That said, I think AI can also be a tool to enhance our capacity to think and our overall capabilities.
For example, you and I used AI to prepare this episode, right? So then the next question is: does that mean we fell into the same trap?
Anne Trager (03:53)
You're right. You know, generative AI is a tool. It can be a tool, it can enhance our capabilities, and we can also use it completely thoughtlessly. And I'm going to include myself in that "we", because I have the humility to say that I sometimes make stupid mistakes too, or offload my thinking power,
Fabrice Neuman (04:05)
Cheers.
Anne Trager (04:17)
because I'm lazy like everybody else. However, let's get back to the subject at hand.
Part of what makes us human is our ability to use tools; it's been very useful to us in our evolution. So if we take it from that perspective, let's explore how we use tools a little more deeply. The podcast I mentioned earlier explored this very topic of how we use these tools and how we interact with these objects and devices. There's a French psychiatrist named Serge Tisseron who identified four stages in our relationships with objects. The first one is that we consider objects to be our servants: they perform tasks that we don't want to do, or that we can no longer do without them.
I'm all on board with this. This is my favorite kind of tech when I think about it like that. It's the dishwasher. I mean, really, who wants to do the dishes? I really don't mind that my dishwashing capacity has gone by the wayside. I mean, I really don't care, I'm glad to have a dishwasher. Or the car: the car gets me from point A to point B faster. So absolutely, it's my servant.
Fabrice Neuman (05:12)
(laughs)
Anne Trager (05:31)
In my relationship to these tools, there is interdependence: I want them to work, and when they don't work, I get really, really frustrated. So that's the first kind of relationship to objects. The second is that they become witnesses to our lives.
Anne Trager (05:53)
And so when I first
read this and listened to it, I was like, ooh, that sounds really creepy. So I thought I'd hand it over to you because I just get stuck on creepy.
Fabrice Neuman (05:58)
No.
Yeah, for me, it's not that creepy. The technology is starting to be a witness to our lives. You could say, for example, that it started with cave paintings, right?
Anne Trager (06:14)
True, true.
Fabrice Neuman (06:15)
I mean, of course it's a bit of a stretch, but it's also a way for me to remember that when we talk about technology, we almost always refer to it as silicon- and processor-based. Technology is not just that; it's new tools adding up, all the way to what we have today and to things we don't yet know the shape of. But if we fast-forward a bit,
then we have photo and film cameras. The word "witness" is what comes to mind for me, because we rely on those cameras to remember past times. So I don't see anything creepy in that, except when the technology, once again, is perverted into an instrument of espionage.
I guess you're starting to see where I stand here, which is to say that technology is only what we do with it. The one thing I think we can all agree on in that particular context is that humans are lazy; it's one of our main traits. And so we choose the path of least resistance, which might lead to the questions we're asking today.
Anne Trager (07:25)
Absolutely, and I agree with you about witnesses. It's really quite fabulous viewed from that point of view. I think I moved immediately to creepy because I recently listened to another podcast, which I will link in the show notes, about the absolutely incredible work being done in brain-device interfacing,
which is allowing people who can no longer express themselves, because they've had strokes or whatever, to actually speak and express themselves. And that's just absolutely extraordinary. In that conversation, there was a whole bit on how this is bringing us into a sort of ethical gray zone, because if the technology can actually read my thoughts,
Fabrice Neuman (07:53)
Mm-hmm.
Anne Trager (08:11)
which thoughts do I want the technology to read? And then who can have access to those thoughts? And what will they do with those thoughts? And then it becomes really creepy. I mean, I really do wonder, you know, what happens when these tools can collect everything I do, say and think? What happens when I'm tracked inside and out all of the time? Will the notion of privacy even exist? Do I want anybody to really know what I'm really thinking?
Fabrice Neuman (08:34)
Hmm.
Anne Trager (08:37)
So I could digress for a really long time on this topic. I think we've covered everything from cave paintings all the way to modern technology; there's a whole stretch of things that can happen here with technology as witness.
Fabrice Neuman (08:45)
Well, yes. Before we go a bit further, just a quick comment on what you said, because it's very true, but there's also one common thread in it, which is technology in the service of people who need help. Dependence already exists today: when someone cannot walk or cannot grab objects,
they need help from somebody else or something else, which is where robots and machines and things like that come in. It already exists today. So I guess that's where this notion of dependence comes in.
Anne Trager (09:20)
Yeah, I know.
Well, interdependence, I would say. We as human beings are social beings; we are interdependent. So there's nothing shocking about us being dependent on other people or other things. That's not really the issue. The issue for me is that these technologies can be in the service of people and we still need to keep an eye on their ethical use. How are we using them?
What are we thinking about? What are we doing with the information? Just because a technology is in the service of somebody, we can't preclude the conversation about when that service becomes a disservice. That is the key question here. We need to stay aware and keep in mind the intentional use of these tools. But again, I digress. If we go back to those four stages I was talking about earlier,
the third one is that we turn these objects into accomplices in our actions. That is, we use them like accomplices. And for me, that's totally the phone. I don't know, maybe it's something else as well.
Fabrice Neuman (10:33)
But then, in what way? The word "accomplice" basically brings us directly to thrillers and serial-killer TV shows and whatnot. So what is the phone an accomplice to, in your opinion, in that case?
Anne Trager (10:40)
No, yeah, yeah, yeah. Well, I hadn't thought about that.
Well, I guess accomplice in the sense that I use it all the time, I use it for a lot of different things, and there is a little bit of a relationship there. For me, accomplice carries this notion of relationship; I actually didn't think about it much more than that. It was just the first word that came to mind, companion, which is actually more related to the fourth level that Tisseron describes, which is that they evolve into emotional partners
Fabrice Neuman (11:04)
So it's a companion, basically.
Anne Trager (11:17)
where we actually have attachment and jealousy and dependence and things like that. And that's really clearly the case for a lot of people with smartphones, myself included. I like to have my smartphone around; I go everywhere with it, I do everything with it, and it's part of my day-to-day in a very, very...
Fabrice Neuman (11:24)
Hmm.
Anne Trager (11:43)
significant way, and I don't even use it as much as most people. And I believe this is where we're headed with Gen AI as well. When you start using Gen AI, it can help you out and can be used as a tool to augment who you are, how you think, or what you want to do. I mean, who's not going to do that? However, I wonder if we get Gen AI wrong when we look at it only as a servant
Fabrice Neuman (11:47)
Mm-hmm. Yeah, it's true.
Anne Trager (12:12)
or a tool, rather than as an accomplice and partner. And again, I think this is a rabbit hole; we could talk about it forever. It is all of those things.
Fabrice Neuman (12:23)
Yeah, and I guess for me it's mainly a question of vocabulary, because it's a step in between. When we call a chatbot an accomplice, we might be falling into, you know, my favorite term, the anthropomorphism trap. Right? Because whatever
Anne Trager (12:37)
Yeah
Yes.
Fabrice Neuman (12:43)
we ask it to do today, a chatbot never does anything on its own. It only does what we, as a species, tell it to do. We invented it, we programmed it, we use it and ask it to do things. Every time we want to
pin a problem, a fault or a mistake on a tool, I think it's because we forget to look at ourselves in the mirror. And whichever tool it is, and Gen AI is a tool, it's still a tool, it doesn't have a conscience, as far as I know; I still need proof of the contrary. So a tool is a partner insofar as it helps us achieve
better results, to go back to the shovel from my example at the beginning. Or there's the even more famous example of the bicycle, Steve Jobs using that image to describe the computer as a bicycle for the mind. But the machine itself doesn't choose to help us,
because it doesn't choose to do anything, right? So I guess the confusion comes from the fact that machines today are more and more efficient at imitating us. And that brings us back to Alan Turing, for example, and the question he had in the 1940s about
Anne Trager (13:51)
Yeah, yeah, yeah, yeah, absolutely.
Mm.
Right, right.
Fabrice Neuman (14:18)
whether a machine could think, or could imitate our thinking so well that we believe it thinks as well.
Anne Trager (14:28)
Well, I guess it becomes all the more salient right now, because with these Gen AI tools we can think faster and do things quicker, and when we use them as tools we really, truly use them as accomplices, in good or bad ways. I'm thinking of an article I read recently about Claude, which revealed that there had been a really significant generative AI spying operation going on. I'll have to share the article. Basically, it was a little bit complex, but you had these hackers who hacked Claude so that Claude would hack into some major organizations on its own. So it was an agent, a hacker agent. And we're going to be seeing more and more of this. I'll share the article
Fabrice Neuman (15:08)
Hmm?
Mm.
Anne Trager (15:25)
in the links, not to be a scaremonger, but what it means is that it makes it easier for these hackers to work faster, do more, and operate at scale. And you're right, it's just a tool. The AI is not choosing to hack.
Fabrice Neuman (15:37)
Which is, yeah, just the story of technology you're telling here: you know, the internet made things more efficient and more capable, making it easier for us to reach out, but also to get to information that we wouldn't want to share, for example.
Anne Trager (15:57)
Yeah,
exactly. So we circle back to this idea of it not being aware yet. Okay, let's go back to our philosopher Besnier who talks about this relationship between devices or objects, the machines, the tools we're using.
Fabrice Neuman (16:04)
Mm-hmm.
Anne Trager (16:22)
and ourselves as human beings. He says that, first of all, there was a whole phase where we were adapting ourselves to machines: simplifying our thinking and language to match their logical precision, losing the ambiguity, the irony, the human nuance. This brings us back to before large language models used natural language, when, in order to get information from your computer, you actually had
to simplify your language in a way that we don't actually need to do anymore. So there's this whole idea of simplifying who we are in the face of technology.
Fabrice Neuman (16:50)
Hmm
Anne Trager (16:58)
or of these machines, these devices, let's just say. And now, with the arrival of generative AI and similar tools, the situation has really changed. These machines are becoming more human. Well, "human", exactly, in quotes.
Fabrice Neuman (17:15)
Quote-unquote.
Anne Trager (17:18)
These technologies are no longer just about really simple automations. They're becoming more complex. They have the ability to imitate human ambiguity and hesitation and approximation, which is why we can chatbot with a chat... we can chat with a chatbot. I don't even know. Yeah, I don't even know anymore what I'm doing with a chatbot, I'm not exactly sure.
Fabrice Neuman (17:38)
Are you a bot?
Anne Trager (17:42)
Okay, so there's that aspect. On top of that there's this aspect which is that we don't really understand what's going on. And this has been happening for a long time. I don't need to understand how the car works for it to work. I just know that if I press on the pedal it'll go. I don't need to know all the rest and smartphones are the same way. So that's another aspect. And there's a third aspect which is that, you know, he says that
to go back to this idea of our relationship with them, that although we tend to refer to them in ways we would refer to friends (people will name their chatbots and have that kind of interaction with them), they can't really be friends, because we don't understand them. With people, you become friends with another person because you start to understand who they are.
This philosopher suggests that we have moved from using our tools to imitating them. And so instead of the devices being objects we use, we have actually become the objects of these tools, because they observe us, they measure us, they imitate the way we hesitate, the ums and the ahs and all of that, to fool us.
Fabrice Neuman (18:41)
Hmm.
Anne Trager (18:51)
And then again, there are the people behind that who are turning each and every one of us into a product. Now, that's been going on for a while, even before Gen AI, because we've all traded our data away in exchange for the free use of the internet.
Fabrice Neuman (19:05)
Yeah, that's true. Going back to what you just said about the machines being made to fool us, another way to describe it is to say that the machines are created to make us more comfortable using them. So at the beginning,
we were forced to simplify our ways, because the machines were so crude that we needed to type very simple commands on a keyboard so they would understand, like the actual BASIC language: 10 PRINT "HELLO", 20 GOTO 10. It was very simple. But then, and it's
Anne Trager (19:43)
Yeah
Fabrice Neuman (19:48)
absolutely wonderful that today we can actually talk to a machine and it quote-unquote understands whatever we say, with the ums and ahs. So it does imitate us, because we all talk like this, we search for our words, just like I did. We can now talk to the machines and they parse that to find the commands, instead of us having to know what the commands have to be,
letter by letter. I also think it's a circle, right? The people making the machines are, most of the time, trying to make them simpler to use, so we don't need to know how they work. And in order for us not to need to know, and for them to be simpler, the machines themselves have to be more complex, because it's hard to make
Anne Trager (20:22)
Hmm.
Fabrice Neuman (20:45)
something simple.
Anne Trager (20:47)
It makes me think about how, with these machines, we're trying to imitate this extraordinarily complex technology that is us, who we are as human beings. And we keep trying to imitate it as if that were the be-all and end-all, when it already exists in us. And I get that part of what we're trying to do with that is to make ourselves better.
Okay? I mean, to be able to improve our ability to do things, to go faster, to be more efficient, or whatever it is we're trying to do with these tools.
Fabrice Neuman (21:20)
But I would also argue: you said that you don't need to know how a car works, or how our smartphones work, in order to use them. But I still think that when we know how these objects work, we use them better.
Anne Trager (21:37)
And I know from all of my coaching experience that when we know how we work as human beings, we use ourselves better as well. So yeah, six of one, half a dozen of the other; it's both.
Fabrice Neuman (21:44)
Yes.
Yeah, so if we go back to the argument that we are becoming the objects of the machines, then once again, I don't think the machines have a purpose of their own. It's the very human people who invent them and sell them to us who maybe make us transition into being objects,
but not objects of the machines; we are becoming objects of other people.
Anne Trager (22:15)
No, absolutely, and we buy into the whole thing, but who doesn't want to be bigger and better and faster and whatever? It seems as if it's this perpetual drive that we have. That could be the topic of an entirely different conversation. However, if we go back to Besnier, the philosopher, he makes a link to transhumanism, which is this whole idea
Fabrice Neuman (22:22)
Ehem.
Anne Trager (22:39)
that if we as human beings can merge with technology, we could ultimately become immortal. We could become...
immortal. I mean, we could live forever, or our awareness could live forever. I don't know, that's another topic, and I don't even want to go there; it's such a deep topic. Anyway, what he suggests is that the real danger is that, in pursuing that goal, we end up imitating these machines instead of using them,
Fabrice Neuman (22:56)
(laughs)
Anne Trager (23:14)
so that we stop being the really messy, complex, living human beings that we are and instead become predictable, repairable objects. That's what he's saying: we are starting to objectify ourselves in a very strange way.
Fabrice Neuman (23:32)
Yeah, basically in order to live forever, because if we are infinitely repairable, then we live forever. And there's a very fine line, I think, between trying to live longer and trying to live forever. We've achieved a longer lifespan thanks to medicine and things like that. But for me, it's, you know, the same thing as people trying to stay in power
ever longer; to me it's also related to the negation of the cycle of life. But I agree with you, I think it's a subject for another day.
Anne Trager (24:10)
And that's it for today. So that's it for episode 27. Thank you all for joining us. Please visit HumanPulsePodcast.com for links and all of our past episodes.
Fabrice Neuman (24:23)
Thank you for subscribing and reviewing wherever you listen to your podcasts; it helps other people find us. You can also share it with one person around you.
Anne Trager (24:31)
See you in a couple of weeks.
Fabrice Neuman (24:33)
Bye everyone.