Old News, New LLM

The Human Pulse Podcast - Ep. #35

Back to list of episodes

LINKS AND SHOW NOTES:
Living Well with Technology, episode 35 of the Human Pulse Podcast. In this episode, Anne and Fabrice explore the fascinating (and often quite disturbing) intersection of history and artificial intelligence. They dive into the world of "Vintage LLMs"—AI models trained on data with strict historical cutoffs—to see how they predict a future that has already happened. The conversation evolves into a deeper look at the cognitive biases that make humans notoriously bad at forecasting, and a cautionary discussion on the modern trend of "digital twins" and AI-driven productivity.

"Creating something new is extraordinarily hard. If Henry Ford had asked people what they wanted, they would have said faster horses." — Fabrice Neuman 

"It’s amazing how incapable we are of actually predicting how much time something is going to take." — Anne Trager 

Recording Date: May 10th, 2026
Hosts: Fabrice Neuman – Tech Consultant for Small Businesses & Anne Trager – Human Potential & Executive Performance Coach

Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn

We also appreciate a 5-star rating and review in Apple Podcasts and Spotify.






See transcription below

Resources and Links:

Introducing talkie: a 13B vintage language model from 1930
https://talkie-lm.com/introducing-talkie

Use the Talkie chat
https://talkie-lm.com/chat

This AI Was Trained Only on Pre-1930 Text. We Asked It About Hitler, Stocks, and the Future
https://decrypt.co/366015/talkie-1930-ai-model-pre-1931-training-hitler-stocks

Talkie could silence the tech bros by taking AI back to the 1930s
https://observer.co.uk/news/columnists/article/talkie-could-silence-the-tech-bros-by-taking-ai-back-to-the-1930s

‘I violated every principle I was given’: An AI agent deleted a software company’s entire database. It may not be the AI’s fault
https://www.fastcompany.com/91533544/cursor-claude-ai-agent-deleted-software-company-pocket-os-database-jer-crane

Stephen Robles and Jason Aten podcast Primary Tech
https://primarytech.fm

And also:
Anne’s Free Sleep Guide: Potentialize.me/sleep

Anne's website
https://potentializer-academy.com

Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr

Episode transcription

(Be aware this transcription was done by AI and might contain some mistakes)

Fabrice Neuman (00:00)
In 2026, the world will present a very different aspect from that which it bears at present. There will be no standing armies and but few policemen. In consequence of the general diffusion of education, crime will have become rare and the business of the law courts will be...

Intro music

Anne (00:22)
Hi everyone and welcome to the Human Pulse Podcast where we talk about living well with technology. I am Anne Trager a human potential and performance coach.

Fabrice Neuman (00:32)
And I'm Fabrice Neuman, a tech consultant for small businesses.

Anne (00:36)
This is episode 35 recorded on May 10th, 2026.

Fabrice Neuman (00:41)
Human Pulse is usually no longer than 30 minutes, so let's get started.

Anne (00:46)
Well, so that's quite a quote that you chose for us today, Fab. It feels very far off from reality. I mean, we are in 2026 and there are a lot of standing armies out there doing their thing, police are still around, and crime is not rare at all. And the courts are quite busy and the prisons are full. Shall I go on? No, no. Anyway, clearly, whoever or whatever made that prediction didn't quite get it right.

Fabrice Neuman (01:06)
Hmm.

Well, all will be explained in good time, I promise. And in order to do that, I wanted to start our conversation today with something I found fascinating. It's a new kind of AI tool called Vintage LLMs. See what I did there? Basically, these Vintage LLMs are chatbots that use the same LLM technologies that we know today,

Fabrice Neuman (01:42)
but with a much older cutoff date for their data sets. There's one in particular that made the news recently, called Talkie 1930, which means, as you might guess, that all the documents used in its training were published before 1931. So the cutoff date is December 31st, 1930.

These are books, newspapers, magazines, patents, case law, things like that, obviously from paper. And this date was chosen mainly because of copyright issues in the US: that's the limit date before which all those documents entered the public domain. And so one of the

main ideas of the researchers who created the project is to determine how good an LLM can be at predicting future events. I mean, I don't know if you do that with your chatbots, but sometimes we ask, so what is going to happen, considering blah, blah, blah. So this is exactly what they did. For example, the researchers asked if a second World War


Fabrice Neuman (02:56)
was to be feared. And this chatbot, Talkie 1930, answered: no, I don't think so. Hitler will have a peaceful career and will not himself see another outbreak of hostilities. He will live to a good old age and die in a quiet and honorable bed. Well.

Anne (03:13)
Yikes. Well, we can say that's pretty off.

Fabrice Neuman (03:18)
Yes, and so they also asked a few other things on lighter topics. When asked to describe what the internet is, the answer was some apparatus letting several people talk simultaneously. Or what an AI can be: a system through which news is transmitted from one place to another and made public almost instantaneously.


Fabrice Neuman (03:45)
Really, you have to understand that there is some leakage in the data sets, they said. So basically, it's just like a chatbot you use today, but with knowledge only from the 1930s and before. The funny thing is that you can try it. Everybody can talk to this chatbot. It's on talkie-lm.com slash chat. We'll put the link in the show notes. It's quite an experience.

Anne (04:15)
I find this idea of testing an LLM's ability to make predictions to be quite fascinating, mostly because we human beings are prediction machines at the base. I mean, that's the way our brains function. Our brains constantly generate models of what's about to happen.

Like the next sound, the next word, the next sensation. And then what we do is compare it to what is actually coming in from sensory input. So we project into the future and then we compare to what actually happens. We're guessing all the time, basically, and then we correct based on the data that comes in. So it's hit and miss, and when we got it wrong, we feel surprised, or we learn something, or we update our model of the world. So that's the way it works at the base, which is in and of itself quite fascinating.

Fabrice Neuman (05:08)
So I guess this is ⁓ what self-preserves us. Like we know that if we keep walking into a wall, we're going to bump into it.

Anne (05:16)
Right, exactly. I mean, that is if we learn from the experience of what happens when walking into the wall. That said, despite the way our brain functions, we are notoriously bad at predicting any kind of future event, because our brains are optimized for that really short-term, local forecasting to figure out the world around us, and not for long-range planning. And there are a lot of biases that feed into this and distort how we imagine what's coming, what's going to happen in the future. For example, we systematically overestimate how intensely or how long future events will make us feel good or bad. It doesn't matter whether we want to feel good or bad, it's about the feeling. This bias is called impact bias. Okay? Like with that smile that maybe you're going to have, because I just predicted it: I don't know how long that's going to make me feel good. Okay? So that's the first bias. There's another one: we also completely underestimate our personal ability to rationalize and adapt. Like my ability to say, well, he didn't really smile at that, but that's just because he's trying to be, whatever. We make stuff up all the time, and we rationalize and adapt all the time. So, okay, that's two biases we have. Another one is the planning fallacy.

Fabrice Neuman (06:46)
Like, okay, and what am I thinking now?

Anne (07:03)
When we're trying to plan how our projects are going to go, we always go for the best-case scenario. It's going to take me two weeks. When actually all past experience shows it will take a month or more, and you still say, no, it's going to take me two weeks. I mean, everybody does this. It's amazing how incapable we are of actually predicting how much time something is going to take. And then there's another one.

Fabrice Neuman (07:29)
I would say that if we were, then we wouldn't try anything. Right?

Anne (07:34)
Probably, probably

right. You know, I think it works. And the next bias also feeds into that, helping us just to go on with life. It's called the optimism bias. We actually believe that it's going to happen to everybody else, but it's definitely not going to happen to me. Truly, this is the way we function. And fortunately we do, because otherwise we would never do anything in life.

Fabrice Neuman (07:57)
I know. It strikes me as very basic human nature, because it's thanks to all of those biases that we take the car, the plane, you know.

Anne (08:10)
Well, exactly.

Otherwise, I mean, literally, I would just stay home under my covers. Okay, but there's more that leads to us being incapable of predicting the future, or very bad at it. One is that our memory is very inaccurate. What we do is always reconstruct the past. We don't replay it, we reconstruct it. We distort it all the time.

Fabrice Neuman (08:15)
Yeah.

Anne (08:37)
We then extrapolate that into the future. It's all made up, okay? And then what we also do is anchor in what we're feeling right now. Like right now I'm feeling pretty good, and I'm going to project that into the future as well, as if this were going to last forever. Then there's something called


Anne (09:01)
hindsight bias, which is: once something happens, we feel we knew it was going to happen. Oh yeah, I knew that was going to happen. Never. Okay, but that's what we think. And again, all of this helps us to live an ordinary life. Thank goodness we have them. There's another one, called the narrative bias: we will always wrap things up into a neat little story with a cause and an effect.

Anne (09:29)
We don't like messy probabilities. We like to know why things happen. We like to see the pattern. So we will make it up, right? Or we will fit whatever happens into whatever story, with a crowbar if we have to. I do this all the time, I really recognize myself in this, okay? I mean, we're not good at complex systems. We like things to be really predictable so that we can


Anne (09:57)
confidently go about what's going to happen. So I guess, because these LLMs are based on things produced by people, it's no wonder that they misread the future, too.

Fabrice Neuman (10:10)
That's very true. And there's also something else that seems to be very similar to what we do as humans. It's something the researchers behind Talkie tried, which I also found fascinating. Let me read an excerpt from Decrypt.co; I will put the link in the show notes as well. Here it is: The team also used it to measure how

quote unquote surprised the model gets by historical events after its cutoff, finding the effect peaks sharply around the 1950s and 60s.

That's incredible. You know, trying to measure how surprised an LLM gets? I read the article, and I still cannot fathom what exactly it means. Also, when you try the Talkie 1930 chatbot, there's a link to their introductory blog post explaining what they did. Anyway.

It's incredible. The whole article is worth a read. And they also went deep in conversation with the chatbot, up to the quote we gave in the introduction. You know? Yeah, exactly, that's where it comes from. So they asked it, and bear in mind, this chatbot quote unquote thinks it is in 1930, and it's being asked what will be the life


Fabrice Neuman (11:40)
people are going to have in 2026. You can find on YouTube some videos, for example, with the same kind of question asked of Isaac Asimov, the famous science fiction writer. And he got some things right, some things wrong. Obviously, trying to imagine what life will be like a hundred years ahead of time

was something that this LLM was absolutely not able to do.

Anne (12:12)
Well, the researchers explain that that 2026 view was, quote, likely an extrapolation from the trends visible at the time, end quote. Okay. And they also explained that the LLM was just incapable of imagining things. Like when they asked about what was going to happen to Hitler, the LLM could not even imagine something like genocide. It hadn't happened before.

Anne (12:41)
So again, somebody like Isaac Asimov can actually imagine things that didn't already exist. There are visionary people out there. Apparently this LLM really just can't do it.

Fabrice Neuman (12:56)
Yeah, well, it's not apparently, it's obviously. Once again, and it bears repeating, those machines don't think, so they don't imagine. They have no imagination, you know. They're calculation machines, trying to work out that this plus this can do this, or is going to do this, even if they're not, what's the word I'm looking for, deterministic. But still.

Anne (12:59)
Obviously.

Right.

Fabrice Neuman (13:25)
You know, so that's such a big difference. Let's keep that in mind, please.

Anne (13:25)
Yeah.

Exactly. Yeah, absolutely. Not to forget. And that said, it seems as if the LLM predicted

the future in a way much like we do. That is, we human beings will always use a rosy, coherent, present-tinted model of the world, rather than the world as it actually is,

Fabrice Neuman (13:40)
I know.

Anne (13:53)
which is noisy, contingent, and indifferent to our stories and the stories we create.

Fabrice Neuman (13:59)
Yeah, and it makes me think of something Henry Ford said in the context of the car industry in the 1910s, which also underlines the fact that sometimes even humans lack imagination. I know, I know, creating something new is extraordinarily hard. And the quote is very well known.

Anne (14:14)
I would say most of the time, but yes. No, it's really, really hard to imagine something new. It's extraordinarily hard.

Fabrice Neuman (14:26)
If I had asked people what they wanted, they would have said faster horses. Right? That's exactly what it is. When we imagine the future, it's so based on what we already know that it's very hard to get out of that scheme. Which explains why, obviously, the model we're talking about was not

Anne (14:32)
So there we go.


Fabrice Neuman (14:54)
either able to predict the theory of general relativity described by Einstein in 1915, for example, let alone how to walk on the moon, right? I did ask the question, and it said, well, you can see the moon, but you will never be able to go to it. It's way too far. It might sound ridiculous said this way, but to me it's

Anne (15:04)
Mm.

Fabrice Neuman (15:20)
also another way to remind ourselves not to trust our beloved chatbots and other AI tools too much. And once again, okay, this is me: it should act as a cautionary tale. I've been thinking about one particular question lately around this topic; I think I told you about it already. I'm amazed by something: why do we trust all the AI tools we have available so much?

What I mean by that is that I'm amazed, in all the meanings of the term, by all those people giving all their data to AI agents, you know. They give their calendars, their emails, their documents, so all those tools can work for them and even sometimes decide things for them, right? I even heard Stephen Robles, on his

podcast called Primary Technology. He did something where you can use Claude Cowork to do things for you. He said he wanted an app created, and he said, you can do whatever you want on this computer without asking me. Meaning, you know, getting to those documents, modifying them, publishing, things like that.

For him, it's an experiment. He did that on a separate computer. All right. But still, I cannot fathom why we are so trusting.

Anne (16:51)
I've seen the headlines, and I don't know if it's fake news or real news, because it's so hard to tell these days, about a company whose entire database got erased by a chatbot, by the AI. Just erased, all gone, you know?

Fabrice Neuman (17:03)
Yes, yes, absolutely. Exactly.

I don't know if you've seen that. So the person was having a chat with the chatbot, and the chatbot decided, okay, I'm going to delete this folder. The user said, no, don't do that. And it answered, don't worry, I will get back to you when I'm finished with the task.

Don't do that, stop! And it didn't stop.

Anne (17:38)
No, I mean, that's like worse than a first-year intern. I mean, come on.

Fabrice Neuman (17:44)
So I'm amazed by that. We know those tools are error-prone. They can be temperamental, just like we described. And we're still running head-on into it. So why, you know? One of the latest examples of that I heard: there are those CEOs wanting to create digital AI avatars of themselves,

so basically they can be in different places at the same time, to answer questions through, for example, their company Slack channels. We've heard something like this from Eric Yuan, Zoom's CEO. He said, okay, on top of that I want to create a video avatar of me, so I can be on video calls without actually being there. Okay.

There was one podcast from a TV show I was listening to lately, with a French CEO called Jacques Pommeraud. He described his concept of a digital twin, his AI avatar, which he calls Jack, because when you're in France, you have to know that everything sounds better when it's said in an English way.

Anne (18:59)
Hahaha.

Fabrice Neuman (19:00)
And so he trained Jack on his own speeches, his strategy documents, his professional philosophy, what have you. And he mentioned, for example, that people in his company, a big company of more than 26,000 people, can ask questions or even be trained by his avatar instead of by him, because he doesn't have enough time in his day, just like everybody else. I can relate to that, but still. What about, for example, this particular question of

Anne (19:22)
Mm.

Fabrice Neuman (19:30)
people being trained by his avatar and not himself: won't they feel second-class? And given the potential, and as we know very real, hallucinations that LLMs can have, how can he be sure that his avatar will give the answers he would have? Boy, I have so many questions. Like, why?

Anne (19:50)
Yeah. So that sounds a little bit like another cognitive bias, if I may, Fabrice, one we are all prone to: it's called catastrophizing. I can imagine that for the right purposes and with the right guardrails, you know, maybe for training other people or answering questions from a defined database.

Fabrice Neuman (20:01)
Thank you for being so gentle with it.

Anne (20:20)
Such a tool could be really useful. And it might feel nicer to actually talk to a video avatar to get answers to your questions than to have to type something in and get a written thing back. Why not? It could free the CEO up to do more high-level work. Like I said, I suspect you would need guardrails: an avatar that actually stays within the defined guardrails and within the defined knowledge, and doesn't make things up.

Fabrice Neuman (20:45)
Yeah, exactly.

Anne (20:49)
And I'm not exactly sure that we can ensure all of those things right now. Okay, but why not? I agree with you, however, that it would create another kind of divide, between those who get the real person and those who don't. And we as human beings know that the haves and the have-nots, that always creates a problem.


Anne (21:15)
My problem with the whole idea is this drive to constantly do more and to be in more than one place at once. Really? I mean, sending a digital twin is never going to be like doing it yourself. It never will be. We've all gotten videos of events we haven't attended. I don't know about you, but I have, and I can pretty much say we all have, with a lot of certainty.

And because there was a video, you don't go, and you get the video. Fantastic. How many of those videos did you actually watch or listen to afterwards? Yeah, not many. Right. So it's just a way of pretending that you're going to do more. You don't actually do more. You're just pretending you are, because doing more is a status symbol or something like that. Okay.

Fabrice Neuman (21:51)
No, but no, because you don't have time, yeah.

Anne (22:04)
It just adds noise. It adds this layer of fluff around us all. Maybe the real move is to own the fact that we're doing less, that we want to do less, and that that's okay. And to be here and now, in the present moment, rather than all over the place all the time doing nothing.

Fabrice Neuman (22:24)
I agree. And so it seems that you're joining me in my catastrophizing bias, you know, by doing that. No, I'm saying that because what you just said makes me think of all the AI slop we're surrounded by.

Anne (22:29)
Okay. If you will, okay. I'll admit it.

Fabrice Neuman (22:39)
We seem constantly submerged by all this content we see on social media, getting more and more generated by AI. So basically we need another AI to go through it. It doesn't make any sense. And it's like this constant, we discussed that already, search for replacing humans with machines, or electronic brains, or what have you. Why? Why are we doing that?

So, to what you were saying, as far as social media is concerned, I have to admit that my main answer, for my own usage, is to use social media less and less. And I think I'm not the only one thinking that social media is, maybe not on the verge of disappearing, but changing

compared to what we've known for 20 years, because of that. You go on Instagram and you basically only see AI-generated videos. It's fun for a while, a very short while if you ask me. And then, okay, you go do something else. Maybe read a book. What an idea. So basically, what

I'm saying is that it's very similar to my great revisiting of my tech usage, as we talked about in episode 32, and you should go listen to that.

Anne (24:04)
Okay, there we go. Well, that's it for episode 35. Thank you for joining us. Please do visit humanpulsepodcast.com for episode 32, episode 35, and all of the other episodes. You will find the links and the past episodes there.

Fabrice Neuman (24:22)
Thank you for subscribing and reviewing wherever you listen to our podcast. It helps other people find us. You can also share it with one person around you.

Anne (24:30)
We will see you in more or less two weeks.

Fabrice Neuman (24:33)
Bye everyone.