From Doom to Meh: The Human Fluctuation with AI

The Human Pulse Podcast - Ep. #22


LINKS AND SHOW NOTES:
Living Well with Technology. Anne and Fabrice dive into the current wave of AI hype, fears, and reality checks. From doomsday predictions about artificial intelligence wiping out humanity, to practical examples like Google’s Pixel 10 AI tools and the ever-trustworthy Ember mug, the conversation swings between optimism, skepticism, and playful curiosity.

They explore whether AI is truly an existential risk, the limits of current large language models, and the question of trust in technology that sometimes fails us. Anne reminds us of the human superpower of forgetting, while Fabrice sees value in AI’s ability to remember. Together, they highlight the balance between embracing new tech and staying rooted in our humanity.

Recording Date: August 30th, 2025
Hosts: Anne Trager – Human Potential & Executive Performance Coach & Fabrice Neuman – Tech Consultant for Small Businesses

Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn

We also appreciate a 5-star rating and review on Apple Podcasts and Spotify.

Chapters
(00:00) Introduction
(01:36) Tech joy: the Ember Mug and Anne’s meditation practice
(04:50) Entering the AI doom discussion
(06:01) Google’s Nano Banana model fueling awe and distrust
(09:30) “Could AI Really Kill Off Humans?” challenges the extinction narrative
(13:32) Slowing progress: the ChatGPT-5 debate – From AI boom to bubble?
(17:00) Personal frustrations when AI tools don’t deliver
(21:48) Google Pixel’s Magic Cue – A for “assistant” in AI?
(25:44) What’s next for AI?
(26:35) Conclusion

See transcription below

Episode transcription

(Be aware this transcription was done by AI and might contain some mistakes)

Anne Trager (00:00)
Hi everyone and welcome to the Human Pulse Podcast, where we talk about living well with technology. I'm Anne Trager, a human potential and performance coach.

Fabrice Neuman (00:10)
And I'm Fabrice Neuman, a tech consultant for small businesses.

Anne Trager (00:14)
This is episode 22 recorded on August 30th, 2025.

Fabrice Neuman (00:18)
Human Pulse is usually never longer than 30 minutes, so let's get started.

Anne Trager (00:22)
So Fabrice, are you noticing how the conversation about tech has become almost exclusively AI-related these days? Yeah, I know. It's like AI is the focus of all tech conversations, and things like, you know, committees and commissions and organizations that used to be called tech-whatever are now called AI-whatever, as if all tech were AI.

Fabrice Neuman (00:30)
No joke.

Well, and I guess that's the point. I do remember back in the day, you know, we would talk only about processors and speed and what have you, because this was important. And nowadays, for each new device, there's not such a big jump in power or speed or whatever. So it's not important anymore.

Anne Trager (01:07)
Yeah,

the official department of technology did not become the official department of speed or the official department of processors. And now everything's become the official department of AI instead of the official department of tech.

Fabrice Neuman (01:22)
I see your point. Maybe the equivalent would be the Department of PCs.

Anne Trager (01:27)
Right. Right.

Well, anyway, that said, I just wanted to start today's conversation with something tech-related that has nothing to do with AI. Okay. Yet. And I'm going to need you to do the props for those of you who are watching this, because I did not bring my prop with me. But I wanted to talk about my total tech joy with my Ember mug.

Fabrice Neuman (01:36)
Yet.

Anne Trager (01:55)
Show us the Ember mug for those of you who are watching. So what the Ember is, is a mug that will automatically keep my coffee at the right temperature.

Fabrice Neuman (01:55)
Yeah.

Yeah.

Yeah, and

we talked a bit already about it here, but at that point, I was the only one using it and you were pretty jealous about it. And so you got one, which meant that I had to get another one. So now I use two. Yeah, yeah.

Anne Trager (02:16)
Yay!

Right. Right. So we are an Ember family. Well, so

why am I talking about it today? I mean, I was thrilled when I got one and I love it and I use it every day. However, I realized that this tech is now enabling me to get back to meditation.

Fabrice Neuman (02:38)
Yeah, so you have to tell us more about it, because, considering what we're going to talk about later on, a very positive and optimistic piece of tech is very welcome.

Anne Trager (02:39)
How was that? Okay.

Well, it brings me back to something which is totally un-tech-related, which is totally disconnected, which is meditation. So let me explain. In my mornings, I get up, I do all kinds of things. I have a very structured routine, which is full of all kinds of stuff. And it includes spending a lot of time going outside and walking. And then when I come back from my walk and I get ready for my day, I will go make my coffee and then I will start my day.

Because of all of the stuff that I do in the morning, I had sort of given up on meditation. Yes, I would do walking meditations, but I wasn't actually doing a sitting meditation. And the coffee was coming at just the right time for me, so I didn't want to push back my coffee or anything like that. And I realized the other day that, well, actually, I can have a few sips of my coffee, then I can go and do my sitting meditation. And when I come back to my coffee, it will still be warm.

Fabrice Neuman (03:49)
Yeah.

Anne Trager (03:50)
So it opened up this space where I could actually do my meditation and not totally disrupt my routine. I just want to thank Ember for that. I want to thank that technology, and I want to thank my meditation practice, because it is wonderful.

Fabrice Neuman (04:00)
Mm-hmm.

Yeah, I think it's also related, on a broader level, to the importance of accessories. So it's an accessory to your morning routine, just like, maybe it's a little stretch, the right case for your phone. Sometimes you choose a case that...

Anne Trager (04:16)
Mm.

Fabrice Neuman (04:30)
That's not the right one. The grip is not good. It's too big, it's too small, it's whatever. And then you find the right one and all of a sudden it falls into place.

Anne Trager (04:38)
Exactly.

Exactly. Thank you for stating that in such a concise way. Okay, but that was my topic for today that I really wanted to bring. What about you, Fab?

Fabrice Neuman (04:50)
Well, so the topic I wanted to discuss is more doom-related. It's not very positive, but I think it's really important to talk about. It started after reading an article in The Atlantic called “The AI Doomers Are Getting Doomier.” And it talks about all those people thinking that basically

Anne Trager (04:55)
Yikes.

Ahem.

Fabrice Neuman (05:12)
we are playing with fire when doing AI, basically sawing off the branch on which humanity is sitting, leading to its fall, the fall of humanity, and being replaced by sentient AIs, not only taking over, but even killing us all. So that was the starting point. Because, you know, for example,

Anne Trager (05:28)
Yikes!

Fabrice Neuman (05:36)
lately we saw, and I had somewhat of a discussion on LinkedIn about this, a new image-generating model from Google called Nano Banana, which is absolutely amazing as far as the results are concerned. You know, you can do whatever you want: you can replace faces on people, move people in videos, you can create an image and then modify it just by prompting it,

Anne Trager (05:46)
Mm.

Mm-hmm.

Fabrice Neuman (06:01)
even with voice, you know. It's absolutely amazing. But then it also starts the discussion, which is what I started on LinkedIn, saying: be careful, people, because basically it means that every day is April Fools' Day, because you cannot trust anything you see anymore, since what the Nano Banana model is generating is so perfect. So is there...

Anne Trager (06:03)
Hmm.

Fabrice Neuman (06:30)
So I can understand people saying, look at that, AI is going to destroy, maybe not us, but everything we've known for sure until now. And so this is why, I guess, at least listening to the people saying that AI is going to do some harm is interesting.

Anne Trager (06:42)
Hmm.

Well, that article, I read it, it's a very doom-y article. It cites these naysayers who are giving us, you know, another year or so of sovereignty before the robots take over. I mean, that's really scary. That's really soon. Okay. Or maybe a few more years before they kill us all off.

Fabrice Neuman (07:05)
You

Anne Trager (07:11)
So it also refers to the ebb and flow of doomsaying around AI, which I thought was really interesting. You know, it started off strong, with AI leading us to our doom, and then things waned a little bit. And now it seems to be coming back with some actual examples of chatbots and chatbot simulations that deceive human beings, that blackmail us, that

Fabrice Neuman (07:11)
Yeah.

Anne Trager (07:38)
even simulate killing humans and things like that. I mean, this is some pretty scary stuff. Or AIs that are sabotaging user requests or they have these secret evil personas or they're talking to each other in code and nobody seems to know how to make it safe. So that's really scary.

Fabrice Neuman (07:57)
So I

interject here to remind everyone that when we say AI seems to want this or do that, AI does not want, does not think, right? This is our perception, because AI is still just a statistical approach to predicting what word comes next in a sentence.

And then we anthropomorphize it, because it's so close to being perfect as a conversation tool.
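
Fabrice's point about next-word statistics can be made concrete with a toy sketch. This is a deliberately tiny illustration, not how any real model is implemented: the NEXT_WORD_PROBS table, sample_next, and generate are invented for this example, standing in for the billions of learned parameters of an actual LLM.

```python
import random

# Toy next-word table: for each word, candidate next words with made-up
# probabilities. A real LLM learns these relationships from vast text
# corpora, over sub-word tokens rather than whole words.
NEXT_WORD_PROBS = {
    "the": [("cat", 0.4), ("dog", 0.35), ("mug", 0.25)],
    "cat": [("sat", 0.5), ("slept", 0.3), ("ran", 0.2)],
    "sat": [("down", 0.6), ("quietly", 0.4)],
}

def sample_next(word):
    """Pick the next word by sampling from its probability distribution."""
    candidates = NEXT_WORD_PROBS.get(word)
    if not candidates:
        return None  # no known continuation: stop
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights, k=1)[0]

def generate(start, max_words=6):
    """Chain samples together: no goals, no intent, just statistics."""
    out = [start]
    for _ in range(max_words):
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

However fluent the output of a real model is, nothing in this loop wants or intends anything; scaling the table up to a trained network changes the fluency, not the nature of the mechanism.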

Anne Trager (08:29)
Yeah. From what I understand, people who are deep into AI don't exactly understand how these AIs are communicating with each other in code, what they're doing, and how it is that stuff happens that looks like deception and feels like blackmail and so forth. I mean, okay, there's a lot we don't know.

Fabrice Neuman (08:47)
Mm-hmm.

Anne Trager (08:53)
That article also references a statement made in 2024, so a year ago, where a number of researchers came out and said that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.

This is mentioned in an article in Scientific American written by...

Fabrice Neuman (09:21)
Yeah, so in another article, I might say. So you said I was like the doom-bringer, and then you brought a new article that, you know, went even further.

Anne Trager (09:30)
I know.

I

know. So this article is from Scientific American, it's very recent, and it's called “Could AI Really Kill Off Humans?” And the fellow who wrote it is a national security specialist. And he starts with this really bold hypothesis, which I am actually very happy to believe. You know, the hypothesis that he starts with in this article is:

no scenario can be described in which AI is conclusively an extinction threat to humanity. Humans are simply too adaptable, too plentiful, and too dispersed across the planet for AI to wipe us out with any tools hypothetically at its disposal. If we could prove this hypothesis wrong, it would mean that AI might pose a real extinction risk.

So that's what he starts the article with. And then he goes on to explain that with his team, an engineer and a mathematician, whose job it is to actually analyze risk, they set out to take the risk of AI causing extinction very seriously. And they studied how it could do us in, basing it on the big things that could possibly do us in today, which are nuclear threat and pandemics and...

I don't know, you know, biological weapons or something like that. Okay. And he concludes that it turns out it would be very hard, though not completely out of the realm of possibility, for AI to get rid of us all. Anyway, I recommend reading the article. We will link to it, to understand, first of all, what AI would need to do to do us all in, based

Fabrice Neuman (11:12)
Heh. Heh.

Anne Trager (11:22)
on this study. Anyway, what I take away from it on a more positive note is that in the current AI landscape, we're not yet at this kind of super-evil AI.

Fabrice Neuman (11:23)
Yeah.

Yeah, so I would add a couple of things to this article. First of all, his team considered three different things, biological weapons, pathogens, and nuclear war, one after the other. In the article, at least, he didn't consider the fact that AI could potentially do all three at the same time. So, well, and...

Anne Trager (11:57)
Yeah

Hey, I ended

on a positive note there and here you go, here you go.

Fabrice Neuman (12:04)
Yeah.

And then there's a passage at the end of the article that struck me, these two sentences: So will AI one day kill us all? It is not absurd to say it could, as you said. At the same time, our work shows that we humans don't need AI's help to destroy ourselves. So maybe AI is not the only culprit here.

Anne Trager (12:25)
There we go.

Fabrice Neuman (12:28)
And it goes back to the fact that, for now, even if we don't exactly know how it works sometimes, we are the masters of AI. We created it. And so it hasn't stepped out of the box yet. Will it ever? I cannot know, obviously. I don't think anybody knows. But the AI is not sentient. So...

Anne Trager (12:41)
No.

Fabrice Neuman (12:49)
That would be my message. Let's not be too anxious about it, because we, as humans, like to scare ourselves. Just look at all the movies and stuff like The Matrix and Terminator.

Anne Trager (12:58)
Yeah, yeah, I know, we really do.

Well, so then in theory, we may have created a little bit of clickbait by talking about this. Okay. Not our intention. We're just doing it out of curiosity. Right? Well, so

Fabrice Neuman (13:09)
Yes, absolutely. Yeah, yeah. Well,

the article is also very well written, so it's a very good read.

Anne Trager (13:19)
Yeah, yeah, absolutely.

Yeah, it's a good read.

The thing is that with what's happening in AI, you know, in the past month or so, maybe we don't need to be quite so worried. So can you tell us a little bit about that?

Fabrice Neuman (13:32)
Yeah,

so the parallel I drew was with the somewhat botched launch of ChatGPT-5. There were many articles about it; I read one from the Los Angeles Times that seems to demonstrate that the progress of generative AI, the technology that is based on LLMs,

is already slowing down, or at least not capable of achieving the exponential improvement that was previously thought possible. You know, from GPT-3 to 4, it was amazing. From GPT-4 to 5, it's not that much of an improvement, or so some people say. And some people have now been saying that, well, AI is a bubble ready to burst. Even Sam Altman, the CEO of OpenAI, talks about an AI bubble, right?

Anne Trager (14:24)
He

was the first, I think, to pull out the word, right? Is it marketing or not? I don't know.

Fabrice Neuman (14:26)
Yeah, yeah, exactly. Well, that

might be my point exactly. When he says that, maybe he's also trying to drive the market somewhere. But he's not the only one saying that it's an AI bubble. It's just like, you know, during the 2000s, the web bubble, the internet bubble, when there was so much money burnt

in the development of all those tools. And because the first tests of ChatGPT-5 were not so much better than the previous version, and because in the first few days it would actually give some weird results, some people are saying, you see, AI is dumb, so there's nothing to worry about.

Anne Trager (15:10)
Yeah.

Fabrice Neuman (15:10)
Right. So,

but I would say, as usual, that the truth between, you know, “AI is dumb” and “AI is going to be our overlord” is probably in the middle. You know, as human beings, we are suckers for novelty, so we are just as quick to ditch what we just adopted. That's what we are. And my main point about this is that

Anne Trager (15:28)
Mm-mm.

Fabrice Neuman (15:35)
whether we talk about AI or not, we more often than not think in absolutes. We go back and forth until we meet in the middle. So it's a process, and I think we are just doing that with AI nowadays. As we see around us, the world is obviously shaken by the recent wide adoption of generative AI, chatbots, and other creation tools, like Nano Banana, which I was mentioning a few minutes ago.

But I'm thinking that the two and a half years since the mainstreaming of ChatGPT, which was released to the wide world in November 2022, are not enough to estimate the long-term effects.

Anne Trager (16:15)
Hmm. Well, I'll second you on a few things that you said. First of all, that we are absolute suckers for novelty. We just love anything new. And we as human beings are fickle, which means we're really quick to adopt and then to drop, and then adopt and drop, and go back and forth and whatever. This is our nature. And particularly when, you know, first we have trust in something and then

that trust is broken because it doesn't work. You know, we've talked about trust before. With ChatGPT, it worked and then it didn't. I mean, one of the things that happened is that a lot of the things people had built prior to the new ChatGPT coming out stopped working: they woke up the next morning when it came out and it wasn't working anymore, because it was based on, you know, the previous models and all of that.

Fabrice Neuman (16:42)
Mm-hmm.

Anne Trager (17:00)
And then there were workarounds to make it work, and then ChatGPT did whatever they had to do. That's enough to break my trust. I mean, do I really want to build tools that are supposed to help me and make my life easier using something that I could wake up one morning and find not working? I mean, I know that can happen anytime. I'm not naive. However, there's something I do wonder about ultimately.

Fabrice Neuman (17:17)
Yeah

Anne Trager (17:26)
Okay, I'll just put it this way: my trust has been shaken. So I'm a little bit less likely to spend hours and hours building something if I don't yet trust the tool. So I'm going to wait a little longer. Anyway, I don't know. This has nothing to do with how it will fundamentally change our nature as human beings. That's another question.

Fabrice Neuman (17:46)
Yeah, it's true. So my latest example about this, and the trust thing, is that I tried to use ChatGPT to help me write a long article in French. Maybe that's one of the reasons why it didn't work that well, because, as we know, all those tools have mainly been developed in English.

And the main source of data is still in English, so maybe that played a role. But the thing is, I had a back-and-forth conversation with ChatGPT, and it was very interesting and nice in the way of building up the topic and adding to it, up until the point where it started to tell me,

well, this is going to be super long. Basically, it refused to produce that long of an article. So I had to tell it, go on, go on, go on, several times. And then we ran in circles, because I wanted to get a file compiling all the things that we had produced together. And it said, yes, of course. And then I had to answer

ten questions from it, like, are you sure you want this? Maybe you just want the titles of the sections and not the text in them? Up until the point where, you know, it totally refused to produce the file I wanted. And it was like, what was happening? So I had to go back in the conversation and do just basic copy and paste to get the end result, which we'll now have to rewrite because it's not very good, actually.

Anne Trager (19:21)
Yeah.

So, you know, I'd kind of like to just anthropomorphize that a little bit and say, look, your ChatGPT is resisting you, and it's deceiving you, and it's, I don't know, taking you on a wild goose chase or whatever.

Fabrice Neuman (19:38)
Exactly,

that's what it felt like. I know it wasn't that. I was probably reaching some kind of limitation, either of the model or a limitation imposed by OpenAI, because I was asking for something big enough for the servers to be used too much for their...

profit and loss balance, you know, like their financial balance.

Anne Trager (20:04)
But if we take that apart: human beings, when they start getting grumpy and they start saying, I don't want to do that, are you really sure you want to do that, or whatever, maybe it's because they haven't eaten in a while. You know, and so what you're asking will take too many resources away from their server, upsetting their profit and loss balance for the day or whatever. I mean, it's not that far off.

Fabrice Neuman (20:15)
Yeah

Mm.

At least,

ChatGPT never got angry or annoyed.

Anne Trager (20:33)
Right, right. Yeah. Yeah. So now we're grateful that it's fawning, right?

Fabrice Neuman (20:37)
Right? Exactly. Exactly.

And actually it forced me not to be either, you know, because at some point I was like, come on, just do what I ask. But I did not say that to it. So it means that there are limitations to what AI can bring to us for now, you know,

Anne Trager (20:53)
Well, you're so interesting,

Fabrice Neuman (21:03)
and what the tools can bring. So we don't exactly know what all the AI tools we now have will do to us, will bring us, and some of them will disappear, because it's such a moving target these days. But some will stay and be useful. So the search is on, obviously, and all the big companies are trying to achieve that.

The latest demonstration for me was what Google showed during the Google Pixel event. The Pixel phones are its brand of smartphones, a big collection of them. So you have a candy bar, a foldable, and whatever. And they showed some new

Anne Trager (21:46)
Mm-hmm.

Fabrice Neuman (21:48)
AI tools integrated into that new range. And it's called Magic Cue. What it does is try to use the information it has on the phone: your calendar, your reminders, your email, which contains, as we know, lots of information about yourself and your relationships with people, and when you receive a package or go on a trip or whatever. And it tries to actually

help you, just like an assistant, and do things for you. So for example, you made a reservation at a restaurant, and you have the confirmation of the reservation in your email, and a friend asks you in a text message, did you make the reservation? And the phone, knowing that, can actually answer for you, or at least give you the potential answer: yes, the reservation is done, and it's at this

place at this time, and here's the link to the restaurant. And all of this is generated by the phone, because it knows things about you and what you're going to do in the next few days or something like that. And this is where I think, because AI can know more about you, this is where, for me, the A in AI can come to stand for assistant instead of artificial.
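
The shape of what Fabrice describes can be sketched in a few lines. This is a hypothetical illustration of the concept only, not Google's actual Magic Cue implementation or API: the Reservation record, the KNOWN_FACTS list, and the suggest_reply helper are all invented here.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical fact extracted from a confirmation email.
@dataclass
class Reservation:
    place: str
    time: str
    link: str

# Facts the phone has already pulled from mail, calendar, etc. (invented data).
KNOWN_FACTS = [
    Reservation("Chez Marcel", "Saturday 8 pm", "https://example.com/chez-marcel"),
]

def suggest_reply(incoming_text: str) -> Optional[str]:
    """If an incoming message asks about something we hold a fact for,
    draft a reply the user can accept or ignore."""
    if "reservation" in incoming_text.lower():
        r = KNOWN_FACTS[0]
        return f"Yes, the reservation is done: {r.place} at {r.time}. {r.link}"
    return None  # nothing relevant on file: suggest nothing

print(suggest_reply("Hey, did you make the reservation?"))
```

The real feature, as Fabrice describes it, runs over much richer on-device signals, but the pattern is the same: match an incoming message against facts the phone already holds, and draft an answer for the user to approve.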

Anne Trager (22:43)
Hmm.

Mmm.

Fabrice Neuman (23:05)
I'm rooting for that, because I think AI can really be an assistant if I give it enough information about me. Which brings us to the question: when does giving information become giving too much information, and maybe letting it out of the box, as we were mentioning?

Anne Trager (23:09)
Hmm.

Yeah, yeah.

Yeah, yeah. Well, there's this thing, you know, in one of the articles about the Pixel 10: it talks about AI that remembers, and so you want it to remember as much as possible. And I know that you really like the idea of having a machine that remembers everything. And I'm not even going into the privacy stuff in the article, just the concept of a machine that remembers everything. And I actually believe that forgetting is a superpower.

Fabrice Neuman (23:33)
Yes.

Yes.

Anne Trager (23:50)
I just think it's really cool to forget things, and it allows us to really focus in on stuff. And, you know, we talked about this in episode one of this very podcast, just like we talked about trust in episode six, if anybody wants to go listen to those. And my stance on the matter is that no matter where AI leads us, as an assistant or as an overlord, we can all gain from leaning into

Fabrice Neuman (24:02)
Mmm.

Anne Trager (24:17)
our humanity, okay, with its messy forgetting and its oddball interpretations of broad contextual situations. That's kind of what we call creativity and innovation, okay. And our drive for interaction with other people, which is almost purely physical. I mean, we are driven as human beings to be with other human beings, because

Fabrice Neuman (24:28)
Mm-hmm.

Anne Trager (24:42)
it creates all of these really happy hormones for us, notably oxytocin, which is wonderful for health and longevity. And we have all of this other stuff going on, and the more that we can actually lean into it, the more human we become. I just think then the AI will be a really interesting side partner, and we don't know what it will be yet.

And I also believe that we human beings are wildly, wildly resistant. We are really hard to kill. Okay. I mean, even for AI. And, to be positive here, I think that we are certainly more likely to outlive the current LLMs. So there we go. That's my positive take on today's conversation.

Fabrice Neuman (25:10)
Yeah. ⁓

Yeah, so going back to your two things: on forgetting being a superpower, I would add that forgetting is a superpower of human beings, which means, for me, that we can give the AI the job of remembering for us, so we can ask it whenever we want.

Anne Trager (25:43)
Absolutely.

Fabrice Neuman (25:44)
And so this is what I like. And as far as the restrictions and limitations of LLMs are concerned, I think this is what's happening already, after two or three years: we are reaching the limitations of the current LLM model as a whole. And it struck me that Yann LeCun, he's a French guy, the head of AI at Meta, already said,

at least a year ago, if not almost two: you know what, guys, I'm not interested in LLMs anymore, in their development, because we've already reached the peak there. We'll put a link to some of the things he said; it's very interesting. So we'll see what happens next.

Anne Trager (26:28)
Right. Well, so in a very, very human way, I wonder, goody, what's next?

Fabrice Neuman (26:35)
Exactly.

On that note, that's it for episode 22. Thank you all for joining us. Visit humanpulsepodcast.com for links and past episodes.

Anne Trager (26:45)
Thank you also for subscribing and reviewing wherever you listen to your podcasts. It helps other people find us. And we would be grateful if you could share this with one person around you who is wondering about the future of AI, or possibly wondering about the future of humanity.

Fabrice Neuman (27:04)
Anyway, normally, we'll see you in two weeks.

Anne Trager (27:07)
Bye everyone.