Starting the Slow AI Movement

The Human Pulse Podcast - Ep. #33


LINKS AND SHOW NOTES:
Living Well with Technology. In Episode 33 of the Human Pulse Podcast, hosts Anne Trager and Fabrice Neuman explore the rush to adopt AI and automation, driven by FOMO and a desire not to be left behind. They also develop the concept of a slow AI movement, modeled on the slow food one, emphasizing the need for more intentional and discerning use of AI tools, prioritizing usefulness, trustworthiness, and enrichment. And they tell the story of Marco Arment, creator of the Overcast app, who was able to go head-to-head with Apple after it added transcripts to its podcast app, by building the feature himself, partly thanks to AI.

Recording Date: Mar. 22nd, 2026
Hosts: Anne Trager – Human Potential & Executive Performance Coach & Fabrice Neuman – Tech Consultant for Small Businesses

Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn

We also appreciate a 5-star rating and review in Apple Podcasts and Spotify.






See transcription below

Resources and Links:

Marc Andreessen is Wrong about Introspection - Joan Westenberg
https://www.joanwestenberg.com/marc-andreessen-is-wrong-about-introspection/

Tech Connect Europe: A WITI Networking Hour
https://www.witi.com/networks/france/events/6344/Tech-Connect-Europe:-A-WITI-Networking-Hour/

Duolingo cut the humans and crossed $1 billion - Kamil Banc
https://aiadopters.club/p/duolingo-cut-the-humans-and-crossed

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

I Didn’t Want to Melt My Rug (ATP episode 683)
Marco Arment, creator of the Overcast podcast app, tells the story of how he integrated transcripts with the use of AI (starts at 00:54:55)
https://atp.fm/683

And also:
Anne’s Free Sleep Guide: Potentialize.me/sleep

Anne's website
https://potentializer-academy.com

Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr

Episode transcription

(Be aware this transcription was done by AI and might contain some mistakes)

Fabrice Neuman (00:00)
"Practical consequences of an unexamined inner life at scale are not theoretical. social media platforms built by people who believed behavioral data was a reliable substitute for understanding human psychology produced a decade of engagement metrics while user well-being declined and our entire social order decayed." This is a quote from article by Joan Westenberg on her blog titled Mark Andreesen is wrong about introspection.

Anne Trager (00:36)
So hi everyone and welcome to the Human Pulse podcast where we talk about living well with technology. I'm Anne Trager, a human potential and performance coach.

Fabrice Neuman (00:45)
and I'm Fabrice Neuman, a tech consultant for small businesses.

Anne Trager (00:49)
This is episode 33, recorded on March 22nd, 2026.

Fabrice Neuman (00:53)
Human Pulse is usually never longer than 30 minutes, so let's get started.

Anne Trager (00:57)
So that initial quote makes me think about whether we are paying attention to the right things and how far off we can go when the things we're paying attention to are wrong. Which in turn makes me think about an interesting discussion we had last week in our weekly Tech Connect Europe networking group. Last week, somebody asked the question: should

all small businesses be rushing to figure out AI before they are all left behind? And it got you and I both thinking about, actually, asking the question: rushing to what? What are we rushing to? What are we going so fast for? Okay. And thinking about speed being considered as the only virtue here.

Fabrice Neuman (01:36)
Yeah.

Yeah, there's another link to the quote. Marc Andreessen is a well-known venture capitalist. And basically, he said that we don't have time for introspection and we should just go forward. And basically, Joan Westenberg in her blog post said, okay, but toward what? And in order to know,

Anne Trager (02:02)
Yeah, okay.

Fabrice Neuman (02:04)
what, then you have to think about it, do some introspection. So Marc Andreessen got real backlash after his declaration. And so I urge you to read the whole blog post. It's very interesting. And the question also made me think of an old TV ad on French TV, probably from the beginning of the 1990s. I believe it was for TV sets.

So the story goes, you would see the CEO of the brand being questioned about why the brand was late in releasing a new TV set with some particular technology, I don't remember which one. And the CEO would answer something like this: our ambition is not to be the first to market, but to be the best. Fast forward to today, and it does seem that the only metric we use now is to be first at whatever.

Even if it means breaking things on the way, and mostly breaking trust with users. We've often talked about that here.

And we see that because sometimes it does break things. For example, there's something that happened last December, and I'm quoting an article from The Guardian, itself quoting the Financial Times. It goes like this: a 13-hour interruption to Amazon Web Services' (AWS) operation was caused by an AI agent called Kiro,

autonomously choosing to, quote, "delete and then recreate," end quote, a part of its environment. Right? That's weird.

Anne Trager (03:28)
I know, I know, I know.

Fabrice Neuman (03:29)
And part of the explanation for this seems to be related to the relentless push within Amazon to use AI tools, either by employees who are not trained enough to use them or even to replace part of their workforce.

Anne Trager (03:44)
Well, and then there's that story of Duolingo, which I talked to you about just the other day. I learned about it through a newsletter called AI Adopters Club, written by Kamil Banc, and we'll give a link to that in the show notes. Duolingo went all in on AI in early 2024.

Fabrice Neuman (03:50)
Yeah.

Anne Trager (04:06)
Like all in: they replaced all of their translators and content people and editors and everything, okay? And

Fabrice Neuman (04:15)
And

the CEO went public, declared and wrote something about it, and went on podcasts to say, look at what we're going to do. And it's amazing, right?

Anne Trager (04:19)
Yeah.

Yeah.

And they actually had extraordinary growth over the next 12 months. Like 50 million daily active users, over a billion in bookings. I mean, just huge, huge growth. And nearly 100% of the new content generated was generated by machines. Then what happened to them from a business perspective is that it cost so much,

that machine generation cost so much, that their margin shrank, which of course in big business is not a good thing, okay? And the internet went crazy over it and the CEO had to backpedal quite a bit. And the other thing that was happening is that the gen AI was making things up. I mean, it wasn't real.

Fabrice Neuman (04:58)
for you.

Anne Trager (05:12)
Anyway, there is a whole bunch wrong with this. It would even, like, teach you the wrong grammar rules. I don't know. There is definitely a lesson in there for sure.

And so I think what's happening, if I go back to that question: these big companies have sort of jumped in and made these mistakes, and we see these big, huge mistakes happening at scale, and then we get down to this question of small companies and even individuals being pushed to use gen AI, and do it now and do it all. Do it now, because otherwise you're going to miss out on something.

And so there is some FOMO going on, fear of missing out, for sure, among the big players as well as among the smaller players. We're so desperate to not be left behind. And the big players are so desperate for us to all be paying for AI that they're pushing us to think that we are missing out on something. So we really have to start centering back down and saying, well, okay, what are we doing with this?

Fabrice Neuman (06:03)
Hehehehe

Anne Trager (06:13)
And I don't know if anybody can really tell us what we're missing out on yet.

Fabrice Neuman (06:17)
Well, that's the thing. So the Duolingo example is really just one example. Because what we see is the big players: OpenAI, Claude, Google, and Perplexity, maybe on a lesser scale. I don't know about you, but it seems that we don't hear that much about Perplexity any longer, for whatever reason. Maybe they also just realized that

Anne Trager (06:38)
Yeah.

Fabrice Neuman (06:40)
trying to get into the mass market is not the way to go. Even OpenAI just announced that they were going to focus more, maybe even only, on the enterprise market. Because, yeah, well, it is very new as we are recording this on March 22nd, 2026. Because basically,

Anne Trager (06:44)
Yeah

I didn't even hear that, okay? Okay, that's new.

Fabrice Neuman (07:05)
OpenAI, to use them as an example, obviously released their tool to the world at the end of 2022. And we started to use it and they were happy for us to use it more and more. But the thing is, the more we use it, the more money they lose. Because it seems that a $20 a month subscription is not enough to be profitable.

Anne Trager (07:25)
Yeah, it's like...

Fabrice Neuman (07:34)
And I hear here and there that even $100 or $200 a month is not. So it's not sustainable, right? Which is why going to the enterprise market might be a sounder solution.

Anne Trager (07:49)
Well, I also

heard they were going to expand where they're putting advertising and so forth to try to figure that out as well, using advertising to pay for it.

Fabrice Neuman (07:57)
Yeah, yeah. It seems

that they tried to put advertisements into their tool, and they were slapped in the face by Anthropic during the latest Super Bowl, with an ad saying they will never put ads in their product. And we know that Anthropic and Claude are basically targeted at companies and enterprises.

Anne Trager (08:08)
Yeah, yeah.

Yeah, yeah, yeah.

Yeah, well, so anyway, there's a lot going on and these companies haven't quite yet figured out how to make money, apparently. And so there's another idea that comes up for me in this question about speed. It's that speed and efficiency are not the same thing at all, and that moving faster in any situation can help us, you know, respond more quickly, sure. But it doesn't really help us

Fabrice Neuman (08:32)
Heh.

Anne Trager (08:49)
process things or do things better. You know, when we get under that time pressure, we as human beings actually process less information. We do narrow our attention down, but we make lower quality decisions. So by pushing us all to adopt AI quicker, I don't think we're pushing us to make better decisions. I think we're just pushing us, again, for I don't know what yet.

There is an interesting study and it's a little bit dated because it's from 2025.

Fabrice Neuman (09:22)
What you just said is

also linked to AI development. Like it has to go so fast, so 2025 is old news now. Like, come on.

Anne Trager (09:26)
I know, I know. But

we'll share it in the show notes, but it showed that developers who used AI were actually 19% slower than those who were not using AI. Okay, so I don't know. I don't know what's going on here.

Fabrice Neuman (09:46)
Yeah, there are also other studies showing that you actually don't save money by using AI, as shown by the Duolingo example, but not only. So I agree with your premise: precipitation does not lead to good judgment. I would also link it to something you just told me. At some point, you were reading articles or LinkedIn posts, and you were saying that all those

Anne Trager (09:52)
Amen.

Mm-mm.

Fabrice Neuman (10:10)
posts and publications are getting longer and longer. And so basically you said, I don't have time to read all of that, or focus on all of that. Plus, it's so obvious that many of them, if not the majority of them, are at least partly written by AI. And so we get under this constant flow of AI slop and it's annoying. It obviously makes us lose time,

Anne Trager (10:15)
my God, yeah, absolutely.

Fabrice Neuman (10:37)
because we have to go through those first.

Anne Trager (10:37)
Yeah, it loses time and

focus, and it's almost as if it requires a different part of the mind. But that's a story for another episode. What I wanted to talk about is not fast but slow. With this whole conversation about going fast, I'm really reminded of the slow food movement, which has been around for a really long time.

Fabrice Neuman (10:46)
Yeah.

Anne Trager (11:02)
I don't know, maybe 30 years ago it started to grow. And its core principles are basically good, clean, and fair food. You can't really go wrong with that, right? So I was wondering what slow AI as a movement would look like. Now I should be clear that I'm talking about, you know, gen AI here and this use of generative AI, because there's a lot of other

AI that's been around for a really long time that is slower and is doing some really good things. So I'm not talking about that. I'm talking about this generative AI and this rush towards generative AI.

Fabrice Neuman (11:39)
And

as a teaser, I'd like to say that at the end of this episode, I want to tell a story of how AI can help, and tremendously help, a one-person-shop developer to actually go up against a big company.

Anne Trager (11:55)
I know the story you're going to tell. Cool. Okay. That's a tease. Yes, stick around to the end for that. Okay. So back to my slow food, my slow AI thing, because this is my morning idea. Okay. So the first question would be not how fast can we deploy AI, but rather what kind of AI helps people think, work, and live better. Do you agree with that? That's a good way to frame it.

Fabrice Neuman (12:20)
Yes, absolutely.

Anne Trager (12:22)
So then, if we take those three words, you know, good, clean and fair, okay? What would good AI look like? First of all, I think it would be useful. I think it would be trustworthy, and I think it would be enriching. These are words that we don't hear a whole lot around AI. So today I'd say that it is sometimes useful, okay? It is actually sometimes enriching, I'll be fair,

but it's not necessarily trustworthy at all. And I'm not sure it's helping us work better, for all of the things we've said before, okay? I think it's bringing in a whole lot of noise, as you just brought up in that LinkedIn example. I think it's completely distorting all of the social media networks, like you said in the LinkedIn example as well. I mean,

Fabrice Neuman (12:50)
Er, no…

Anne Trager (13:11)
the percentage of stuff written by bots is huge. I think it's like 90%, okay, all around, with maybe a little bit less on LinkedIn, you know. So it's all written by machine, so it's distorting that network completely. And in a lot of ways, it's messing with our minds by leading us to easy thinking over hard thinking, okay? And I think that anybody out there, and you tell me if this has happened to you, Fabrice, but when I use too much AI, all of a sudden, when I have to come up with my own idea, I'm like,

Fabrice Neuman (13:37)
Yeah.

Anne Trager (13:37)
my God, how do

I do this? Because I have to go to a different part of my mind again and I'm not used to doing it. So I mean, beware: our brains are really lazy, so we will take the path of least resistance no matter what. So we as human beings have to be really attentive to that, okay? And so rather than treating AI as something we can trust, sound enough that we can actually use it without close scrutiny, we actually have to think, I think,

Fabrice Neuman (13:40)
Mm-mm-mm-mm.

Anne Trager (14:03)
about AI as a rather careless assistant. So anyway, good AI would be something that would help improve our judgment and clarity, and not the contrary, which is what's happening right now for a lot of us.

Fabrice Neuman (14:14)
Yeah, so I think we try to use it this way and it's still not exactly there. It doesn't reach that point yet. I don't know if you've heard this. To me, it's a new term. Have you ever dry chatted? So basically it means having a chat with your gen AI of choice.

Anne Trager (14:28)
I know. I don't know what that means.

Fabrice Neuman (14:34)
And as you might remember, I do that somewhat regularly while driving. I use Gemini by voice. So I have this conversation just to shoot the breeze or to develop ideas when I try to prepare for a conference or a lecture. But then dry chatting extends, for example, to role playing,

Anne Trager (14:40)
Mm-hmm.

Fabrice Neuman (14:57)
when you prepare yourself for a job interview, or you want to steel yourself because you have to fire someone. And it seems that the AI tools we have now are not that good in that particular area. I heard people saying, it's hard when you have to fire someone, so you dry chat. But the chatbots we have are still way too sycophantic, right? So you tell it,

Anne Trager (14:58)
Mm-hmm.

Yeah, yeah, yeah.

Fabrice Neuman (15:21)
I need to reduce my workforce. And then the AI goes, what a wonderful idea! And so that doesn't work, right? So to go back to your slow food analogy, it's not the only problem,

Anne Trager (15:25)
Yeah, yeah, yeah, yeah.

Right, right, it's not the only problem for sure. Okay, back to my idea. The second principle is clean. So what would clean AI look like? Well, maybe it would not be telling you… actually, that's fair, the third one: it would not be telling you that it's a good idea. Anyway, clean AI would be more transparent about things like the energy it consumes, and also about what it's producing, and maybe

where the data comes from. Again, it crosses over with fair, like not stealing all that data. And we all know that AI is stealing data all the time, but that's another topic. Anyway, it would also use lower energy choices where possible. And we have talked about this European platform. What is it called again?

Fabrice Neuman (16:15)
It's called

Euria, so E-U-R-I-A from Infomaniak, it's a Swiss company.

Anne Trager (16:22)
Okay, and it actually recycles its energy into heating houses and things like that.

Fabrice Neuman (16:28)
Yeah,

the heat produced by all the server farms is partially reused,

Anne Trager (16:35)
Yeah, so anyway, that's not necessarily a lower energy choice, but it is a different approach. And maybe this clean AI would also not feed the AI slop cycle back into its data, because that just messes the data up even more. So I guess it would look like models that are more sized to their use. We've already mentioned this.

Fabrice Neuman (16:40)
Yes.

Anne Trager (16:56)
It would probably look like more intentional use instead of just all-over use. It would absolutely mean more intentional deployment rather than this sort of indiscriminate automation, or everybody-has-to-use-AI free-for-all, that we're seeing in a lot of big companies right now.

Fabrice Neuman (17:13)
Yeah, it seems that these big companies basically trapped themselves into that. Because we're still at the stage where AI needs to be used by a whole lot of people to improve itself. But then, by feeding the AI's results back into training the AI itself, we now know, and basically we knew that from the beginning, that it would produce weird results, at least.

Anne Trager (17:41)
Mm-mm.

Fabrice Neuman (17:42)
So

those big companies are pushing AI down our throats. And then that leads to the Amazon outage we mentioned before. There are also several stories about lawyers losing cases because they presented to the judge totally invented facts, presented as precedents, or even articles of law that don't exist. And it's linked also to the Duolingo example, where

Anne Trager (17:47)
Hmm.

Yikes, yikes. ⁓

Fabrice Neuman (18:07)
not only would it push some grammar rules that don't exist or are wrong, it would also hallucinate things about cultural facts and stuff that would not be helpful for people to learn a language.

Anne Trager (18:14)
No.

So yeah, I know, what a mess. Okay, that's clean AI. There's a lot of work to do on cleaning up AI. And then there's fair AI, which would, in my opinion, be about having more equitable treatment of human beings all around the system. So it would be reducing bias of all kinds in the data that's used. It would mean

Fabrice Neuman (18:23)
Yes.

No joke.

Anne Trager (18:45)
that the creators whose work is used would be in some way compensated for their work being used to train the AI. It would mean that the workers who label and moderate content would get fair treatment. It would mean that teams… notice I'm using a lot of conditional tense, because none of this is happening right now, right?

Fabrice Neuman (19:02)
Well, it's not really happening. Yeah.

Anne Trager (19:05)
The teams designing automation would design it around the humans who are already on the team. It would mean end users not losing full agency over what's going on. So we're looking at compensation, we're looking at consent, we're looking at accessibility, we're looking at human oversight, we're looking at sharing benefits, potentially even sharing

profits, you know, the day there are profits. I don't know. I'm just throwing it out there.

Fabrice Neuman (19:35)
Well,

it's very interesting. It makes me think of another example that actually shows that we as a society are not ready yet. And it's related to the company Grammarly. So the tool Grammarly, they changed their name, I don't remember to what, but everybody knows Grammarly. It's this tool that helps you correct your spelling and grammar, obviously,

Anne Trager (19:47)
Mm-hmm.

Fabrice Neuman (19:58)
but they also integrated some AI tools, obviously, to help you rewrite in certain styles. And all of a sudden, they started to offer AI-based tools to help you rewrite in the style of somebody else. In the style of, for example, journalists from The Verge,

Anne Trager (20:16)
Hmm.

Fabrice Neuman (20:20)
like Nilay Patel, the editor-in-chief of The Verge, who, among others, saw his name used by Grammarly, saying, you can rewrite this paragraph in the style of Nilay Patel. And none of the people used were contacted by Grammarly to say, so we're going to do that with your style.

Anne Trager (20:20)
Haha.

Fabrice Neuman (20:41)
There was a big uproar, obviously, and Grammarly also backpedaled. And basically, I'd like to emphasize the fact that it's not the AI tools' fault, right? It's just those companies, they are tone deaf sometimes.

Anne Trager (20:45)
Mm-hmm.

Oh yeah, yeah, there's a complete disconnect between the real world and all of these cool things we could do. I mean, I can almost see, no offense tech nerds, but a bunch of tech nerds sitting around and saying, wow, this is so cool, we could do this, let's just throw it out there. Like, without any emotional intelligence at all about anything. Anyway, again, no offense, tech nerds.

Fabrice Neuman (21:03)
Yeah.

Yes.

Yeah,

Anne Trager (21:20)
I am married to a tech nerd, okay? So I just wanna say that.

Fabrice Neuman (21:21)
the… No, but I guess you would agree that I also go along with this line: the "because we can" reason is not enough.

Anne Trager (21:32)
Yeah, yeah, exactly,

exactly. Well, so there's a lot to do for us to have a slow AI movement. I mean, we can imagine it. If I kind of summarize, we'd be looking at narrower tools with stronger guardrails, trained or configured for one setting rather than every setting.

There'd be this thing which we can actually all act upon right now as we walk away from this podcast, which is to have more discernment when using AI, and seeing when it's useful and when it's not. You know, I can really see how AI can honor human skills, in a way it can do that. But a lot of what's being done right now is just

de-skilling people, taking away our skills. So there's this notion again, and it parallels slow food, of craft, okay? How can we make this something that is compatible with our human craft, you know, in that sense? And again, a lot of it comes down to just more discernment in how we're using it, and redefining efficiency,

Fabrice Neuman (22:24)
Hmm.

Anne Trager (22:38)
so that it's less about speed and automation and more about quality and long-term capability and all of the rest that I've already just said. So anyway, there's something about remembering this notion of human dignity that I think we're losing. I don't say that we need to be slow to respect dignity, but well, it sure becomes a whole lot easier when we do slow down and think,

Fabrice Neuman (22:46)
Absolutely.

Anne Trager (23:04)
we show some restraint, right? We're more thoughtful and less caught up in a kind of mindless fascination.

Fabrice Neuman (23:12)
I agree with that. And there's still a lot of work to be done, right? I wanted, as I teased before, to end this podcast on a more positive note and give an example of how AI tools, used properly if I may say so, can be very, very efficient. It's the story of Marco Arment, who is the creator and the solo developer of a

podcast app called Overcast. Yes, and we do use that. We pay for it. We've been using Overcast for years. It's a wonderful podcast app. The thing is, a few months ago, Apple introduced transcripts into their Apple Podcasts app, right? So thanks to AI tools, you can have the text

Anne Trager (23:36)
I love overcast.

Hmm.

Fabrice Neuman (23:59)
of the podcast, and our podcast, for example, is being transcribed by Apple Podcasts. If you use Apple Podcasts, you can see the transcript. The other podcast apps don't have that yet, because it's quite an undertaking. And obviously, a company as big as Apple can dedicate some servers to do that. And the thing is,

Marco Arment has a podcast with co-hosts John Siracusa and Casey Liss. It's called ATP, for Accidental Tech Podcast. And he described what he did to actually offer this transcription service within his Overcast app. It's fascinating, because he was able to do that on his own. It took him a few months

Anne Trager (24:32)
You

Fabrice Neuman (24:46)
to offer this feature of transcription within the podcast app. And it relies on using several Mac minis.

But the whole thing, you can listen to it, he describes it in detail. It takes him like 45 minutes to describe. It's fascinating. But basically, he tried and he saw that just with the power of a Mac mini, he can transcribe a podcast at 2000x. So a 2000-minute podcast would take just one minute to be transcribed.

Anne Trager (25:16)
Mm-hmm.

That's extraordinary,

yes.

Fabrice Neuman (25:22)
That's

fascinating. But it means that thanks to these AI tools that he can use for free, he can provide this feature and go head to head with Apple on this particular feature. Because basically, he said his opinion is that having transcription in a podcast app is now table stakes. Everybody expects to have it. And so he put that in place in a few months.

Anne Trager (25:41)
Yeah.

Fabrice Neuman (25:47)
It's fascinating. Go listen to the podcast. He describes it. We'll put a link in the show notes.

Anne Trager (25:53)
Well, and go and really try Overcast. I mean, I hope that he has to add another Mac mini, because people from our podcast will go and sign up for Overcast just to support the work that he does. And this is the irony: I love Overcast because it is the only one that allows me to listen to

Fabrice Neuman (26:03)
Ha ha ha!

Anne Trager (26:17)
podcasts at triple speed and still understand what people are saying, in two different languages. Okay, so back to speed. I love my speedy podcasts. Anyway, thank you.

Fabrice Neuman (26:20)
haha

Yeah.

Well, on that incredible note, that's it for episode 33. Thank you all for joining us. Visit humanpulsepodcast.com for links and past episodes.

Anne Trager (26:42)
Thank you also for subscribing and reviewing this podcast wherever you listen to your podcasts. It really does help other people find us. You can also share it with one or two people around you and we'd really appreciate it.

Fabrice Neuman (26:55)
and we'll see you in more or less two weeks. Bye.

Anne Trager (26:57)
Bye everyone.