Is AI About To Solve Loneliness?

The Human Pulse Podcast - Ep. #20

Back to list of episodes

LINKS AND SHOW NOTES:
Living Well with Technology. In this episode of the Human Pulse podcast, Fabrice Neuman and Anne Trager explore the implications of AI in addressing loneliness, the importance of discomfort for personal growth, and the complexities of human-AI interactions. They discuss the potential benefits and risks of AI companions, the role of boredom in fostering creativity, and the ethical considerations surrounding AI's empathetic capabilities. The conversation delves into the challenges of relying on AI for emotional support and the potential degradation of content as AI continues to evolve. They call on listeners to stay curious, question the tools that mirror us, and keep human connection at the core of an automated future.

Recording Date: July 27th, 2025
Hosts: Fabrice Neuman – Tech Consultant for Small Businesses & Anne Trager – Human Potential & Executive Performance Coach

Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn

We also appreciate a 5-star rating and review in Apple Podcasts and Spotify.

Chapters
(00:00) Intro
(00:40) AI & Loneliness Debate
(03:44) Loneliness vs. Discomfort
(06:20) Boredom, Rest & Productivity
(10:18) “Lazy Prompts” with Prompt Cowboy
(13:57) AI Empathy, Ethics & Addiction
(20:33) New Human–AI Interaction Paradigm
(21:25) Language Flattening & Data Degradation
(25:51) Conclusion






See transcription below

Resources and Links:

Paul Bloom, “A.I. Is About to Solve Loneliness. That’s a Problem” (The New Yorker)
https://www.newyorker.com/magazine/2025/07/21/ai-is-about-to-solve-loneliness-thats-a-problem

Prompt Cowboy (prompt-refinement tool)
https://www.promptcowboy.ai

Lex Fridman Podcast #475 – Demis Hassabis: Future of AI, Simulating Reality, Physics and Video Games
https://lexfridman.com/demis-hassabis-2/

Victoria Turk, “The Great Language Flattening” (The Atlantic)
https://www.theatlantic.com/technology/archive/2025/04/great-language-flattening/682627/

MKBHD, “This Is What Happens When You Re-Upload a YouTube Video 1000 Times”
https://www.youtube.com/watch?v=JR4KHfqw-oE


And also:
Anne’s Free Sleep Guide: Potentialize.me/sleep.

Anne's website
https://potentializer-academy.com

Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr

Episode transcription

(Be aware this transcription was done by AI and might contain some mistakes)

Fabrice Neuman (00:00)
Hi everyone and welcome to the Human Pulse podcast, where we talk about living well with technology. I'm Fabrice Neuman, a tech consultant for small businesses.

Anne Trager (00:09)
And I'm Anne Trager, a human potential and executive performance coach.

Fabrice Neuman (00:14)
This is episode 20 recorded on July 27th, 2025.

Anne Trager (00:19)
Human Pulse is usually never longer than 30 minutes, so let's get started.

Fabrice Neuman (00:30)
So Anne, I found yet another article in The New Yorker that was interesting to me. It was published on July 14th with the title, "AI Is About To Solve Loneliness. That's A Problem". I'll put the link in the show notes as we do, even though it's behind a paywall, obviously. But maybe some of you have access to a New Yorker subscription, or, like us, access to Apple News Plus, and then you can read the article. I guess, in line with this podcast, I'm not sure I would do it justice if I tried to summarize the article myself. So obviously I asked ChatGPT, and here's the three-line summary it gave me.

Anne Trager (01:07)
Of course.

Fabrice Neuman (01:12)
The author argues that AI companions, while not conscious, can offer real emotional support to those experiencing deep loneliness, especially the elderly and isolated. While critics fear dehumanization or emotional deception, the piece suggests that refusing these tools may be a greater cruelty. Yet, widespread reliance on always sympathetic AI risks dulling our capacity for genuine connection, growth, and the very discomfort that makes us human.

I'd like just to add to that, that basically to me, it's all about the integration of these new tools in our daily lives and on the whole into our societies. So it's a big subject. It's obviously very challenging because it's going so fast as we often said here, but it seems that we don't have time or we don't take the time to consider the implications and it's starting to show. And this is to me the conclusion for that.

Anne Trager (02:17)
Well, the first thing that comes to mind for me is something that you always say about technology: one new tech never replaces the old, it's just added on. And if that is the case this time, I'm all for embracing empathetic AI companions as add-ons to human interactions, not replacements.

Hopefully that's what will happen. I mean, it's concerning that the article says "especially the elderly and the isolated." Well, the elderly and the isolated are already isolated, so give them a machine? I don't know, that's disturbing from a human point of view. And I'm not saying don't do it, because if you can help people, yay, yes, let's do it. And again, as add-ons, not as replacements.

So going back to that article, the article makes the distinction between degrees of loneliness. There is deep loneliness, which is very detrimental to mental health and wellness. And there's also just uncomfortable short-term loneliness, which is a little bit akin to boredom. And I don't mean to minimize what people feel. Just let's get that clear right from the start.

Fabrice Neuman (03:34)
Hmm

Anne Trager (03:35)
And I also want to get realistic: we all feel lonely from time to time. We all feel bored from time to time. It's very uncomfortable, very, very uncomfortable. And it's also a signal that maybe we need to go out and mingle with some people, no matter how uncomfortable that is. This is the way we work. I mean, it's the discomfort, it's the pain that gets us to actually do stuff.

Otherwise... I see it all the time in my practice as a coach: people will say, I want to do this, I want to do that, but if they're not feeling enough pain, they don't actually change. It's kind of sad to say. And that brings me to one of my pet topics, and that's that I think we're all getting a little bit soft. And I'm not talking about deep loneliness here. Again, I want to put in all the guardrails: we really need to help

Fabrice Neuman (04:02)
Yeah.

Anne Trager (04:30)
people who are feeling that kind of deep loneliness, and help others around us who are feeling a not-so-deep loneliness. But what I'm talking about is the discomfort of being alone, or the discomfort of being bored, or any number of other daily discomforts that we complain about and avoid. And I'm a bit of a Stoic in that way. I think that we get strong by rubbing up against our own discomfort. I think that we change when we are uncomfortable enough. And I think that this is one of the ways that we build skills. And the Stoics used to say that.

You know, doing hard stuff is really essential for growth, and embracing challenges as opportunities is what builds resilience and wisdom and virtue and all those other things that we as human beings aspire to. So difficulty is not the obstacle, but the way forward. I veered a very long way from the idea of loneliness. What comes to mind is that sometimes we need to push ourselves into those discomfort zones, maybe sit and be alone for a while and get used to the idea of being alone. It's very, very, very hard. Like sitting and being bored, it's very, very hard. The first thing we want to do is pick up the phone and not be bored. And I do that all the time: I pick up the phone to not be bored. And there's a lot to be said about being bored, because we...

Anne Trager (06:02)
You know, being bored is kind of like just allowing your brain to stop thinking for a while. Anyway, there's a lot to be said. So I'm very far from AI. I'm very far from loneliness. But that's where this article brought me.

Fabrice Neuman (06:10)
To illustrate, I would be the first to acknowledge the struggle I have, for example, to not fill the time when I don't have anything to do, by picking up the phone, listening to a podcast all the time. So it's a struggle, and I try to work on it: when I don't have anything to do, I try to not do anything.

So not pick up the phone, not listen to a podcast. I actually recently reduced the number of podcasts I subscribe to, so I don't have this feeling of, my goodness, I need to listen to all those things, maybe even at a greater speed, because otherwise I will not have time to finish them all. That's the usual FOMO and stuff like that. So resisting the need to fill time is a struggle. This is what comes to mind when you say that. Yeah.

Anne Trager (07:29)
Yeah, and it is for everybody. And I think it's because we are distraction machines and we love to be distracted. It's one of our superpowers.

So being bored is kind of the opposite of being distracted. And, you know, I'm not saying be bored all the time. But I know that it's helpful for the brain to have rest, to think differently, or at least to not always be doing the same kind of intellectual stimulation, so that your brain can have ups and downs of doing different things, and that's replenishing.

Fabrice Neuman (07:44)
Yeah, I don't do that enough. And I'm becoming more and more aware of that, so I'm trying to change my habits around that.

Anne Trager (08:20)
It's like when we do sports. I mean, you're never going to train full on all the time. You train full on, then you take a break. You rest.

Fabrice Neuman (08:28)
Hmm.

Anne Trager (08:30)
It really helps to do it on a regular basis. Even during the day, if you actually think you can work nine hours a day or eight hours a day without stopping, that's actually kind of ridiculous. You can't. Maybe you can do it for a day or two and then you're just going to crash because you're not going to be as effective as you can possibly be if you don't have that pulsing on and off, on and off because that's the way we function.

Fabrice Neuman (08:54)
Okay, so I need to remember one thing. And it's funny because it's like my brain working against me. When I go for a jog, I need to make sure that I don't listen to something while I go for a jog. And it became really not natural to do that. So I'll try and work on that.

Anne Trager (09:16)
Well, so I will challenge the word need there.

Fabrice Neuman (09:19)
Yes, yeah, yeah. Well, I want to try it, even if it only makes a small difference. It's an experiment. Yes. So I agree with that. Thank you, coach.

Anne Trager (09:28)
Yeah, it's an experiment. Yeah. Yeah. You're very welcome.

Fabrice Neuman (09:42)
And a little bit more tongue in cheek, I guess, to go back to something you just said, that difficulty is not an obstacle but the way forward. I wanted to ask you a question about that, which is: how do you reconcile that with us human beings being a lazy bunch, always seeking the path of least resistance? Difficulty is not the obstacle, and yet we see it as an obstacle. And to illustrate that, very recently I saw you using an online service called Prompt Cowboy. For those who don't know: so you have the habit of using prompts with whatever…

Anne Trager (10:19)
I love it, I love Prompt Cowboy

Fabrice Neuman (10:29)
chatbot you use. With this service, you put a few words in, and then it develops the prompt into several sentences, several paragraphs, for you. And then you put that into the chatbot you want to ask the question to. So how do you reconcile the fact that difficulty is not an obstacle with you using that service, so you don't have to think about your prompt?

Anne Trager (10:41)
Thanks

Yes, I love that service. It's amazing. They even say "put lazy prompt here," you know, to which I'm just like, yeah, here's my lazy prompt. And then I'm so happy to have something. Because, well, admit it, we're full of contradictions, and yes, we are always, constantly, trying to save energy. I talk about this a lot. We try to save energy, so we create habits and rituals, things we don't have to think about. We use shortcuts all the time. And at the same time, you know, we grow when things are hard. If it were one or the other, maybe it'd be kind of boring, to use a word we've mentioned already. So the only answer I really have to that is that we're both. We need both. We do. Anyway, that makes me want to circle back to that article in The New Yorker which you mentioned, because I think it's really interesting.

If you look at the science, what comes out in the article, and from some of the reading that I did on the side, is that AI can be empathetic and compassionate. Sometimes it even rates higher than humans in being empathetic and compassionate, which makes total sense to me. I mean, if you see a doctor in the morning, they might be a little bit more empathetic and compassionate than they would be at the end of the day, after who knows how many appointments. Whereas AI doesn't have that fatigue factor, for example. I don't even need to push into the science to understand that being true. From what I understand, the empathy that AI has is pattern-based and not context-based, because AI doesn't have this capacity of having the full context. Even when it does do contextual stuff, because we know you can personalize your AI and so on and so forth, it still doesn't contextualize in the same way that we human beings do.

So it is one of our superhuman superpowers to be able to consider a really, really rich context of what's known and unknown, and the assumptions we make about other people from shared culture and context and so forth, and to make meaning from more than just the patterns, which could make the interactions more complex and deeply personal, perhaps.

So that's one thing. The other thing that comes up frequently when you start looking at the studies, and the article brings it up, is that over-reliance on AI for emotional support could actually change how you interact with other human beings. We've talked about this before. And there is a problem of addiction. Addiction is a real problem with these tools. So those are some of the areas to think about.

This question of how often and how we use AI in an empathetic interaction.

Fabrice Neuman (14:01)
I think it's interesting. What's fascinating to me is the fact that AI and all those new tools seem to make us think even more about who we are. They force us to think about how we work, how we interact, and how we go about our daily tasks.

I guess in that way AI is not just a tool per se, because it goes beyond that, and it really questions us, or makes us question ourselves.

Anne Trager (14:45)
Yeah, without us even interacting with it, it makes us question ourselves. It's pushing us to be more human, maybe.

Fabrice Neuman (14:49)
Maybe so. And then, you were talking about AI lacking the ability to contextualize, partly because it lacks the number of senses we use comprehensively. So when we talk to each other, and even more so when we're in the same room, which we're not...

as we are now, we talk and we look at the person we're talking to, we feel movements, we smell, and whatever. We use all of our senses, and for now at least, AI doesn't have that. And that seems to me to be the basis for potential AGI,

which probably won't be able to happen before AI is able to do that.

Anne Trager (15:48)
So you might need to tell us what AGI is in a few words.

Fabrice Neuman (15:53)
Well,

that would be the AI, the G in that case being general: an AI being able to do things whatever the context is. So being as versatile as a human can be.

Anne Trager (16:06)
So this is the holy grail for people who are developing AI. I mean, not that we don't have eight billion plus super intelligent human beings running around the planet already, who could do all kinds of incredible things. Okay, sorry.

Fabrice Neuman (16:16)
Yeah, so I...

No, no, but then I would encourage people to seek out interviews with Demis Hassabis, who is the AI guru at Google and the creator of DeepMind. I'll try and find some links to put in the show notes for that.

Anne Trager (16:45)
Mm.

Yeah, excellent, excellent.

Well, so what we haven't touched upon, and it's touched upon a little bit in the article, is ethics, the ethics of using this tool in dealing with loneliness, for example. So in a conversation that I had this morning with my AI, which was totally devoid of empathy, this is the answer I got. Quote:

There are ethical concerns. AI can perpetuate biases, miss subtle contextual cues, and sometimes generate superficially supportive but ultimately shallow responses. I read that and I was like, hmm.

Fabrice Neuman (17:33)
Hmm.

Anne Trager (17:34)
Yeah, that sounds like a lot of people I know. Sounds like me some of the time. Sounds like you some of the time. Okay. I mean, people do that too. Wouldn't you say?

Fabrice Neuman (17:38)
haha

Yeah, I agree with that. The thing is, it then brings us back to the fact that, still, AI is just a tool. So you touched on the addiction part. I guess you could...

You could say it's an addiction as well if you only rely on one person's opinion all the time, without listening to anybody else, like you're focused on just one thing. So using AI, and only AI, and only one chatbot, to ask questions about your life... This is what unfortunately leads to some people getting deeply anxious, because the AI told them that they should change their lives, or they should get a divorce, or whatever. And once again, maybe we anthropomorphize it a bit too much, because we're only using it as a sounding board and as a relationship tool.

Anne Trager (18:50)
It's almost as if we give more credit to an AI than we would to another person, because if you're talking to a friend and they say you should change your life, you don't necessarily take it at face value. It depends; obviously people do in certain situations. This is actually way more complex than just a tool, ultimately.

Fabrice Neuman (19:04)
hehe

Mm-mm-mm.

Yeah, and we mentioned

here also that AI doesn't often push back. And basically it seems to mean also that we don't push back,

Anne Trager (19:23)
Right.

Fabrice Neuman (19:30)
You know, when we have a discussion with another human being, then we,

It's like a discussion. We go back and forth: you think this, I don't, and we push back at each other. So because the AI is such a sycophant, we take its answers at face value, because it always agrees.

Anne Trager (19:46)
Yeah.

So the takeaway here then is that when you agree with me, I should push back. Or when my AI agrees with me, potentially I should push back. Okay, well, so what's coming out of this conversation for me is an understanding, then...

Fabrice Neuman (20:05)
So...

I prefer that second part of your thinking. Thank you.

Anne Trager (20:23)
The whole notion of interaction both with human beings, which I know is very complex, but also the interaction with AI is ultimately a whole lot more complex than I initially thought it was at the beginning of this conversation.

Fabrice Neuman (20:38)
It's a kind of interaction between humans and AI that we never had with tools before. You know, this is where AI is a tool, but different from any other tools we had before. Basically, I don't think a screwdriver ever changed my opinion on any topic, except confirming my inability to do handy work around the house, and you would agree with that.

Anne Trager (21:05)
I would totally agree with it. Thank you for saying it so I didn't have to.

Fabrice Neuman (21:31)
Okay. Yeah, I wanted to be ahead of you on that topic. But that makes me think of another article, from The Atlantic, with the title "The Great Language Flattening." I will put the link in the show notes as well. Basically, the article is saying that LLMs learned how to write from us, and now we write like them.

Anne Trager (21:36)
Yikes.

Fabrice Neuman (21:37)
You know, some researchers showed that we, as a whole society, use AI tools to rephrase or write emails for us. Just like with the Prompt Cowboy we were referring to, we put in a few short sentences, and then we send emails that are longer and more structured, because that's what AI does. I'm not sure we learn from them, but it kind of leaks into what we do, and so we naturally start to write the same way. It's kind of weird, because it's also flattening, meaning that we copy the exact same structure, and so we leave out what differentiates us from one another. And then it can also lead to another risk, which is that more and more online content is at least partially written by AI. I mean, we've all now encountered web pages and posts and stuff like that, that obviously


Fabrice Neuman (22:50)
have been written and produced by AI. And then the models are training themselves on AI content, because we know that all those tools scrape whatever they can find on the web, right? And so if more...

Anne Trager (23:07)
I wonder what's gonna,

what the end result is gonna be.

Fabrice Neuman (23:11)
Yeah, well, one of them could be, some people say, that the models might develop a writing style of their own, different from humans. The idea being that language evolves based on fashion, with different generations rejecting what the previous generation did or said. But then, to me, there's another risk with this technology, and I would like to illustrate it. It made me think of something: when you take a photo or a video on your phone, the photo is converted into the JPEG format, the video into the MP4 format. These are compressed file formats that degrade the image just a bit in order to reduce the file size, right? And they're very good at that, so you don't see it with your human eye. It's not only good enough, it's very good.

But if you take the same image and you convert it again and again and again, since it degrades a bit each time, at some point, and very quickly, after 10 or 20 cycles, you see the difference. To illustrate that, you can watch this video by MKBHD, who does tech reviews. Five years ago he did something about this; I'll put the link to his video in the show notes as well. It's titled "This Is What Happens When You Re-Upload a YouTube Video 1000 Times." And after not even a thousand times, way before that, a very high quality video becomes just a blur in which you can't see anything.

Fabrice Neuman (25:09)
And so to me, it illustrates the risk of AI relying on itself to produce new stuff, or to train itself on. At some point it will degrade, and degrade again, and degrade again, and then we don't know what the result is going to be. This is the whole discussion about using synthetic data to train the models further.

So this is a whole new can of worms, an open question, right?
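The generational-loss effect described here can be sketched with a small toy model in Python. To be clear, this is a hypothetical illustration, not real JPEG or MP4 encoding: a three-point blur stands in for the detail a lossy codec throws away, and rounding to a coarse grid stands in for quantization. Each "re-upload" cycle erases a little more of the signal's detail, measured as variance:

```python
from statistics import pvariance


def lossy_roundtrip(xs, q=4):
    """One toy 'encode/decode' cycle: blur (information loss) then quantize (rounding error)."""
    n = len(xs)
    # 3-point moving average with edge replication, standing in for lossy smoothing.
    blurred = [(xs[max(i - 1, 0)] + xs[i] + xs[min(i + 1, n - 1)]) / 3 for i in range(n)]
    # Round to the nearest multiple of q, standing in for quantization.
    return [round(v / q) * q for v in blurred]


# A jagged 1-D "image": lots of high-frequency detail to lose.
original = [(i * 37) % 17 for i in range(64)]

signal = list(original)
variance_history = [pvariance(signal)]
for _ in range(30):
    signal = lossy_roundtrip(signal)
    variance_history.append(pvariance(signal))

# Detail collapses across generations, like the re-uploaded video.
print(f"detail before: {variance_history[0]:.1f}, after 30 generations: {variance_history[-1]:.1f}")
```

The same dynamic is the worry with models training on model output: each generation averages toward its own statistics, so variation drains out of the pool unless fresh human-made data keeps coming in.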

Anne Trager (25:41)
Yeah, it does sound like a whole new can of worms, and thank you for bringing it up, because I think it's a topic for another podcast. I believe we are coming up on our 30 minutes.

Fabrice Neuman (25:54)
Okay, so that means that's it for episode 20. Thank you all for joining us. Visit humanpulsepodcast.com for links and past episodes.

Anne Trager (26:03)
Thank you also for subscribing and reviewing wherever it is you listen to your podcasts. It helps other people to find us.

Fabrice Neuman (26:11)
And why don't you share this podcast with one person around you? Just one, it's enough.

Anne Trager (26:15)
We will see you in two weeks.

Fabrice Neuman (26:17)
Bye everyone.