Trust, Truth and the AI Slop Multiverse

The Human Pulse Podcast - Ep. #29


LINKS AND SHOW NOTES:
Living Well with Technology. In Episode 29 of the _Human Pulse Podcast_, hosts Anne Trager and Fabrice Neuman discuss the multiverse trope in media, the rise of overwhelming AI-generated "slop", and the challenge of discerning truth in an AI-driven world. They explore the change in scale of information versus sources available today, the role of trust in society, and practical steps for navigating the complexities of modern media consumption.

Recording Date: Jan. 4th, 2026
Hosts: Anne Trager – Human Potential & Executive Performance Coach & Fabrice Neuman – Tech Consultant for Small Businesses

Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn

We'd also appreciate a 5-star rating and review on Apple Podcasts and Spotify.






See transcription below

Resources and Links:

“Welcome to the Slopverse”, by Ian Bogost in _The Atlantic_
https://apple.news/AZlksDyHOQ42q8ylvoxb-jg

"Slop": Word of the Year by Merriam-Webster
https://www.merriam-webster.com/wordplay/word-of-the-year

AI Slop on Last Week Tonight with John Oliver (HBO)
https://www.youtube.com/watch?v=TWpg1RmzAbc

Fringe (TV show) on Amazon Prime Video
https://www.primevideo.com/detail/0KZW29L9Y0POR6UT6HNAY2EQ53/

And also:
Anne’s Free Sleep Guide: Potentialize.me/sleep

Anne's website
https://potentializer-academy.com

Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr

Episode transcription

(Be aware this transcription was done by AI and might contain some mistakes)

Fabrice Neuman (00:00)
The multiverse trope, which presents the idea of branching alternate versions of reality, was once relegated to theoretical physics, esoteric science fiction and fringe pop culture, but it became widespread in mass market media. Multiverses are everywhere in the Marvel Cinematic Universe. Rick and Morty has one, as do Everything Everywhere All at Once and Dark Matter. The alternate universes depicted in fiction set the expectation that multiverses are spectacular, involving wormholes and portals into literal physical parallel worlds. It seems we got stupid chatbots instead, though the basic idea is the same.

Anne Trager (00:43)
Hello everyone and welcome to the Human Pulse podcast, where we talk about living well with technology. I am Anne Trager, a human potential and performance coach.

Fabrice Neuman (00:54)
And I'm Fabrice Neuman, a tech consultant for small businesses.

Anne Trager (00:57)
This is episode 29 recorded on January 4th, 2026. Happy New Year, everyone.

Fabrice Neuman (01:04)
I'm joining your happy wishes and still, Human Pulse is usually never longer than 30 minutes, so let's get started. So what I just read is an excerpt from Welcome to the Slopverse by Ian Bogost in the Atlantic. We'll put the link in the show notes, of course.

Anne Trager (01:19)
So where are you leading us with that Fabrice? I mean, I would really, really love to start the year in an entirely parallel universe, okay? Like something big and flashy and wonderful and joyful and anyway, a little far away from where we are right now. Anyway, that said, here is another quote from the same source.

Fabrice Neuman (01:28)
Hmm

Anne Trager (01:42)
People were misled by media long before the internet, of course, but they have been even more so since it arrived. For two decades now, almost everything people see online has been potentially incorrect, untrustworthy, or otherwise decoupled from reality. Every internet user has had to run a hand-rolled probabilistic analysis of everything they've seen online, testing its plausibility for risks of deception or flimflam. The slopverse simply expands that situation, massively, down to every utterance.

Fabrice Neuman (02:22)
Yeah, I know. So it's not a very good way to start the year, maybe. But I put all of that in the context of the word of the year 2025 as chosen by the famous dictionary, Merriam-Webster. The word is "slop", basically as in AI slop, and we'll discuss that a little further. And it made me think a lot about basically two things. How can we be sure of anything we see or read today? And is it really worse than it was before, you know, in the very undefined sense of before, as in, yeah, when I was younger it was better, things like that. What do you think?

Anne Trager (03:00)
I'm of two minds. Or should I say a multiverse of minds about this? Yeah, I know, I know, I couldn't resist. So yes, everything is faster. We have unprecedented access to everything and more. And there are way, way more people on earth, all of which leads me to agree that slop is everywhere. And I don't actually need to put on my snob hat to say that.

Fabrice Neuman (03:05)
I see what you did there.

Anne Trager (03:26)
Okay, that's just a statement of fact. On the other hand, I believe there's always been slop in the sense of content of low quality. Now, I take that from Merriam-Webster. I actually looked up the definition and it says "digital content of low quality". Well, so if you just drop the digital: content of low quality, or products of little or no value, have always existed.

Fabrice Neuman (03:41)
Mm-hmm.

Anne Trager (03:50)
So maybe it's just different in scale. I think one of the big differences is that we no longer have common references in the way we did before. And this has always been the role of the state and of moral institutions, whatever they may be. And these aren't holding as much sway as they used to.

And that's the big difference, which means that we don't actually all have a common framework for defining what truth is. So we're all living in our own little universes, reading and hearing only what we want to read and hear. I think it's a fairly human way of doing things, ultimately. I mean, it used to happen just because we didn't have the means of communication we have now.

And maybe we need that, because having so much access to everything might just be too much for us. I don't really know. But maybe. So anyway, we have fewer references from which to deduce the truth. And I think that might make this a little bit scarier, or worse than before.

Or at least that truth is being drowned out by a whole bunch of crap.

Fabrice Neuman (05:03)
Yeah, the notion of scale is very interesting. It reminds me of a conversation we had with friends a few days ago. We were reminded of one of the big TV shows of the 80s in France, which was Dallas. Right? And you have to know that at that time in France, we had only like three TV stations.

Anne Trager (05:16)
Mm. Mm.

Fabrice Neuman (05:27)
Which meant that this TV show became very famous, meaning that millions of people were watching it at the same time. The same kind of thing happened in the US as well, where you could have huge audiences for just one program, which is very, very difficult to do today for any TV show or movie produced.

Anne Trager (05:38)
Hmm.

Fabrice Neuman (05:50)
The only things maybe leading to the same kind of results are sports events. But what I mean by scale is that, so you had that scale, one program listened to or watched by millions, or basically everyone. And on the other hand, you now have the scale of the number of things available to us to watch and listen to, which is many, many times bigger.

Anne Trager (06:13)
Mm.

Fabrice Neuman (06:13)
And this, I guess, is a transfer of scale, if you will.

Anne Trager (06:17)
Yeah, yeah, well,

yeah, and it fragments our attention, I mean, as a whole. So not only as individuals, but also as a whole. Like we are not all focused on the same things. We are all focused on a multitude of different things.

Fabrice Neuman (06:20)
Yeah.

Yeah, exactly.

so it can lead to the drowning you were referring to. What this made me do, and frankly, it was a long time ago, what I mean by that in technology chronology is basically when ChatGPT was released to the world, it made me rethink the way I get my news, for example, to focus more on reputable sources. For example, here in France in January 2024, so two years ago, I took a paid subscription to a French website called Numerama, which could somewhat be called the equivalent of The Verge,

albeit of a smaller scale, going back to the scale word, but in order to help them survive and provide me with news. I also subscribe to a few podcasts, also to participate in their financial health. And that was accompanied by a drastic reduction of my use of social media, where I think we can agree


Fabrice Neuman (07:32)
the AI slop is the most present. And so no more X or Twitter for me, no more Facebook, no more Instagram, because basically I don't see the point in scrolling through an ocean of AI-generated content. It seems there is no, or close to no, human supervision. And that's, I think, a very important point I want to emphasize.

Using AI to produce content is fine. But if you only use AI and then you publish, that doesn't make any sense, and I don't think it's interesting. And this is what you see more and more, I think, on social media, even though I don't use it as much as I used to. So I don't see the appeal of just scrolling through AI content with no supervision.

Anne Trager (08:20)
Well, I think you're right, that what you've just said is part of the problem: if you're using it without any human intervention, then it's just slop repeating slop repeating slop. And after a while, slop degrades to sloppier slop, and then to sloppier slop, slop, slop. And it's a tale of the demise of anything interesting.

Fabrice Neuman (08:35)
Yeah, indeed.

Anne Trager (08:44)
I agree with you. I mean, I'm with you, totally with you on this social media thing. I actually carefully curated my news sources long ago in order to manage the onslaught of negative news, which has, I mean, negative news has run the news ever since news has been news. It was long before the internet

and social media. So for me, the reason I did that was related to stress reduction and to carefully curating my inputs so that, well, I guess you could say I could create my own universe, in a way that was very curated, in order to be able to focus, because otherwise I was just all over the place. And so I'm following you on the social media thing, and no, I am not following you on social media, because I am no longer on social...

Fabrice Neuman (09:15)
Hmm.

I was going to say

it so...

Anne Trager (09:31)
I'm no longer on social media, with the exception of LinkedIn, because I just can't stomach it, and there's just not enough value in it for me to keep at it.

Fabrice Neuman (09:40)
Yeah, it's a real change, like a big change, because social media was really one of the biggest revolutions on the internet. I don't know if it's going to disappear. Basically, technology never disappears, but it gets added to and somewhat replaced, partly at least, by something else. And with the AI tools we get, we don't exactly know by what yet.

And something else I've been thinking about, and this conversation makes me think of it even more, is a sentence I still can't fathom, a few months after I heard it. You might remember it. It was in a discussion group we go to, in which participants share the new AI tools they discover on a weekly basis. It's not as, let's say, productive as it was a couple of years ago, because there are not as many new tools as there were a year ago. But basically, during that

discussion, I voiced my concern about this particular topic we discussed today, which is basically about what truth even is anymore in this AI-generated world we're in, you know, and how we can trust anyone and such. And one of the participants,

one of the most agile users of AI tools in the group, on top of that, answered this: nobody owes you the truth. Let me repeat that: nobody owes you the truth. I was completely taken aback by this, so at the time, I didn't answer anything. But then I was thinking about it and...

Basically, my main question is, how can someone just capitulate so completely? Because to me, that's what it means. If nobody owes me the truth, it means that whatever you read, whether it's true or not, who cares? It just is. And now I have an answer to that, way after this conversation occurred. Too bad, so sad.

Fabrice Neuman (11:41)
I was never able to tell him directly, but let's say I'm using our podcast to answer him. So, Brad, who owes me the truth? But of course, everyone does. And everyone owes you the truth as well, Brad. Because otherwise it's just about trying to fool me, and we already have enough of that.

Anne Trager (11:50)
Mm-hmm.

Fabrice Neuman (12:03)
And as you said, we already drown in this... slop. So, that's the thing. That brings me back to, for me, one of the most important endeavors today, which is to find reliable sources for everything. Please help.

Anne Trager (12:20)
I don't know if I can really help with that. What I can do is bring the conversation back to a very fundamental foundation of civilization and of people connecting with other people, and that's trust. And what you're expressing, what I'm hearing you express, is that when somebody says nobody owes you the truth, that's a complete disintegration of trust. And trust is based in part on the belief that others will be truthful with me.

Fabrice Neuman (12:41)
Indeed. Yes, that's exactly right.

Anne Trager (12:49)
I trust somebody else because I believe they will be truthful with me, and I trust them until I have proof that they are not truthful with me. It is in this context that we are actually able to live in society with other people. We all owe everyone to be as truthful as we can. It is a social contract.

You are right. We need to be attentive to what and who we trust, more than ever, primarily due to this era of slop and the scale at which it is disseminated. And also because the moral compass has shifted away from that. We don't have a moral compass anymore. If nobody owes me the truth, then we actually don't need any truth. And then, well, who the hell cares if it's truthful or not truthful? And so on and so forth. And then how is it that we build the foundations of actually working together

Fabrice Neuman (13:26)
Hmm.

Anne Trager (13:39)
in a way that is constructive and not destructive. It's very problematic, and I'm not saying that there is one single source of truth. I circle back to what I said earlier about how we used to have these common references to help us define some kind of truth. We no longer have that. We no longer have a moral compass.

I will circle back, just to add another circle onto this, to the initial article you mentioned in The Atlantic, because what struck me most in that article, or what I took away from it, was a commentary on how, when a non-truth is very far away from the truth, we enjoy it. We enjoy it. We love the Marvels and stuff like that. It's really pleasurable because it's very far away from the truth.

But when the non-truth is closer and closer and closer to the truth, it becomes very uncomfortable because we don't know where the truth lies. We never actually knew where the truth lay. However, we would reach a sort of consensus as to where that truth lay. That consensus was always culturally based and so on and so forth, but it's farther away from us right now.

And so there's something very disturbing.

Fabrice Neuman (14:53)
Plus, I would say, when you talk about a work of fiction, you know it's a work of fiction, and so you can dive into it and it's fine. And your mind tricks you into kind of believing it, this famous suspension of disbelief, and that's all right, because you still know it's not true. Basically, it's like with humor: context is everything. And to me that's

Anne Trager (15:15)
Yeah, yeah, yeah.

Fabrice Neuman (15:17)
the very important thing. So when you read, listen to, or watch a work of fiction, it's not true, but you know it's not true. So the truth has been said beforehand. It's written on it, you know, it's written on the tin: it's a work of fiction, right?

Anne Trager (15:27)
Yeah, this is not truth. Right, right,

right. Well, and so the thing is that, you know, with the slop that's out there, it's actually written as if it were truth. Okay? It's written in sentences that almost look like sentences, like they could have been generated by a thinking person.

Fabrice Neuman (15:41)
Yes.

Anne Trager (15:49)
Often there's just a tiny little thing wrong. I mean, now whenever I do anything with AI, I leave it for 48 hours and then reread it from a fresh perspective, because there's so much stuff that's just a tiny, tiny little bit off. And it's very disturbing.

Well, so anyway, we're coming up on our time soon. I thought we'd better get practical. Otherwise, we're just going to drown in this pit of despair, and I don't want to start the year off that way. So let's get practical. What can we do? I made a list, okay? And I'll let you chime in afterwards with what you think. Don't expect to be able to tell the difference between AI-generated stuff and non-AI-generated stuff.

Fabrice Neuman (16:14)
Yeah.

Hmm.

Anne Trager (16:30)
We just can't expect to know the difference anymore, which means we have to question everything. Always look for the provenance of your information, beyond just its delivery. Not who delivered it, but where it actually came from. So if you're concerned about the truth, you really have to go back and back and back.

Fabrice Neuman (16:48)
Yeah, to the sources like we mentioned before, it's so

important.

Anne Trager (16:52)
And so if there are sources, don't just rely on AI saying that there are sources and it came from Nature magazine. You actually have to go back to the original works. And I don't know, from my experience, AI does a sloppy job of extracting ideas and putting them into another context. And there the pun was really intended. Okay. I mean,

Sometimes it just doesn't get it right. So you actually have to go back and read the original and see if what the original was really implying is what the AI inferred. And AI tends generally to overgeneralize stuff. So if it reads like it's been overgeneralized, you can probably assume it's AI. And be particularly aware:

If it sounds good and says little or nothing, it's probably AI. And then even when you have primary sources, you actually have to go back to the primary sources, which could also not hold the truth. Let's be clear, just because it's written by a human doesn't have any bearing on its truthfulness, but you actually have to triangulate it with other sources. And you could probably ask your LLM to do that for you.

So anyway, those were my ideas so that we could start off the year in a practical, productive kind of way and not just with scary thoughts.

Fabrice Neuman (18:08)
Yeah, it's very true. So these are all good ideas, and that helps. I think it's going to help a lot of people. Really, the sources. And what can be scary is the first thing you said: that you cannot expect to be able to tell the difference between AI and non-AI. And it doesn't have to be a bad thing,

if the AI used to produce whatever content was actually subjected to checking, you know, just like a normal journalist checks their sources. It's a tool, and it should stay a tool, not something to trick us, which is a little too often the case. I'd like to add, since we're using the word slop because of Merriam-Webster, I'd also like to direct people to the piece by John Oliver on his TV show, Last Week Tonight.

It's available on YouTube, and I'll put the link in the show notes. He did a big, like 25- or 30-minute piece on AI slop and what it is. It was a few months ago already, and he was really dead on. Go for it. It starts with nice and happy kittens, but they don't stay happy that long.

Not to finish on a sad note, I want to finish on a more positive note by mentioning another way to experience the multiverse we talked about. I know it's a minor spoiler, but I would urge people to go watch one of my favorite TV shows of all time, now available on Amazon Prime Video: it's called Fringe. It's amazing. Go watch it.

People go now.

Anne Trager (20:04)
Okay, well, Fringe is definitely one way that you, Fabrice, live well with technology, as you watch it over and over and over again. Thank you. And I am sure that AI will soon have a very, very...

Fabrice Neuman (20:11)
It's just, come on, come on, come on, it's just the fifth time.

Anne Trager (20:20)
clever way of coming up with a sound that communicates the wink emoji I would have written in a sentence if we did that. Anyway.

Fabrice Neuman (20:33)
Well, that's it for episode 29. Thank you all for joining us. Visit humanpulsepodcast.com for links and past episodes.

Anne Trager (20:41)
Thank you also for subscribing and reviewing wherever you listen to your podcasts. It helps other people to find us. You can also share this podcast, this particular episode, if you would, to start off the year with one other person around you. Thank you very much.

Fabrice Neuman (20:59)
and we'll see you in two weeks.

Anne Trager (21:01)
Bye everyone.