LINKS AND SHOW NOTES:
Living Well with Technology.
In this episode of the Human Pulse Podcast, hosts Fabrice Neuman and Anne Trager delve into the evolving landscape of artificial intelligence (AI), focusing on the French tech company Mistral and its AI tools. They discuss the growing trust issues users face with AI, particularly regarding accuracy and reliability. The conversation shifts to real-world applications of AI in organizations, the challenges of information overload, and the question of whether AI is delivering a return on investment. The hosts also address the environmental impact of AI technologies, emphasizing the need for responsible usage. Finally, they highlight the positive contributions of AI in various fields, including historical research and animal communication.
Keywords
AI, Mistral, trust in technology, generative AI, energy consumption, productivity, human interaction, natural language processing, technology efficiency, AI applications
Chapters
(00:00) Intro
(02:58) Trust issue with AI
(07:10) The confines of AI
(09:38) AI's role in organizational efficiency
(13:17) The AI Bubble: Are We Oversold?
(14:54) Natural interactions with AI
(24:03) Energy consumption and ethics
(27:28) Positive applications of AI in science
(28:40) Conclusion
See transcription below
Le Chat Mistral AI
https://chat.mistral.ai/chat
The Huberman Podcast with Dr. Terry Sejnowski
https://www.hubermanlab.com/episode/dr-terry-sejnowski-how-to-improve-at-learning-using-neuroscience-ai
WorkLab Podcast with Jaime Teevan
https://www.microsoft.com/en-us/worklab/podcast/microsoft-chief-scientist-on-ai-untapped-potential
National Geographic Magazine (check out November 2024 issue)
https://www.nationalgeographic.com/
Anne's website
https://potentializer-academy.com
Fabrice's blog (in French)
https://fabriceneuman.fr
Fabrice's podcast (in French)
https://lesvoixdelatech.com
Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr
(Be aware this transcription was done by AI and might contain some mistakes)
Fabrice Neuman (00:00)
Hi everyone and welcome to the Human Pulse Podcast, where we talk about living well with technology. I'm Fabrice Neuman.
Anne Trager (00:06)
And I am Anne Trager.
Fabrice Neuman (00:09)
We are recording this on November 30th, 2024.
Anne Trager (00:12)
Human Pulse is never longer than 30 minutes, so let's get started.
So Fabrice, there's something new on the tech front, isn't there?
Fabrice Neuman (00:20)
Well, many things. And you know what? We're going to talk about AI again. Well, that's all the rage as everybody knows.
Anne Trager (00:24)
Again.
Fabrice Neuman (00:30)
Well, I just wanted to talk a bit about Mistral. You know, this is the French tech company, and also the name of the AI tool, Mistral AI, that they offer to the world, for free actually, because it's in their mission statement to offer it for free.
You can access the AI model through their website, Mistral.ai. And there's a new version of their chat. They do a pun in French: they call it Le Chat, because chat, C-H-A-T, means cat in French.
Anne Trager (01:06)
Hahaha.
Fabrice Neuman (01:12)
So you can use Le Chat on Mistral.ai to have a conversation with an AI bot as per usual. The new things are that you can now activate the web search, just like SearchGPT or Perplexity AI, and then use their model to go through the results and have a comprehensive, quote unquote, answer from the web.
And you can also generate images, which is nice to have, I think, as a free tool. And there's also this Canvas thing, which is very close to what OpenAI and ChatGPT offer: you ask it, for example, to create a document, like a letter for something or a mission statement or whatever, and you then have, on the side of the screen, the document it has created. You can modify the document, use the AI to modify a certain part of it, or actually go into the document to work on it yourself. So, yeah.
Anne Trager (02:19)
Sounds like fun. Sounds like something that's another tool to play with. Right, right. And as if...
Fabrice Neuman (02:24)
Yeah, among the gazillion tools that we have.
Anne Trager (02:31)
...by Destiny or something, one of our cats has joined us for the podcast today. So, you know, there we go. A little thumbs up to Mistral Le Chat.
Fabrice Neuman (02:40)
Yeah, so we have Le Chat with us. Exactly. So, but other than that, so last episode we talked about AI and I wanted to go back to it and we wanted to go back to it because at the time you mentioned the amazing ways that AI was making progress at being more efficient, for example. And it seems that since then you've reached some glass ceiling or maybe worse. So maybe you want to talk to us about it.
Anne Trager (03:11)
So that might be a little bit...
Fabrice Neuman (03:14)
...Oversold?
Anne Trager (03:15)
Oversold, right? Exactly. But there is a trust issue that's happening for me. And it's happening rather quickly, so I'm a little bit astonished. So in episode 4, we talked about how we have trust in tech. And right now I'm experiencing the fact that it's not working exactly the way it was working before, and it's not working to my expectations. And so I'm losing trust, which is exactly what happens with all the tech that we use, usually.
So I've had a few experiences with GenAI recently where it came up with, there's no other way of saying it, complete bull. Okay? So more than hallucination. I've experienced hallucinations before. What happened is I was using the same prompt structure and so forth that I had used successfully previously, and it came up with things, several times, obviously I tested it several times...
Fabrice Neuman (03:52)
Hmm.
Anne Trager (04:11)
...that were completely off the mark. So I had translations where it cut out things, like important stuff, completely changing the meaning. And I would have this back and forth saying, what have you done? You know, you're missing... Yes, I'm so sorry, it would reply, and then it would come back with, let's put it that way, it pulled up, you know, research...
Fabrice Neuman (04:28)
Hmm.
Anne Trager (04:37)
...that's totally non-existent. And even though I specify in my prompts that I want verifiable sources with links and so on and so forth, it will come up with links that don't exist or that are not research, that don't go to where it says they're going. So, I don't know, really, really off the mark, and misinterpretations of topics as well. So.
Fabrice Neuman (04:46)
Hmm.
Anne Trager (05:03)
My initial response was very human. Where did I go wrong? What am I doing wrong in my prompting, or in this and that and the other thing? And then of course, being a coach, I pulled myself back, did a little self-coaching, saying, well, maybe it's not you. Let's look at this from a bunch of different angles. And then, in my self-aware factor, I said, hmm, there is a drop in the trust factor here. Right now my response is to be paying a lot more attention...
Fabrice Neuman (05:19)
Okay.
Anne Trager (05:33)
...to the output, double checking, and bringing in a lot more skepticism in the way that I'm using it, while I simultaneously try to improve my interaction with it to see if I can get better responses.
Fabrice Neuman (05:48)
Yeah, so the worrying thing is that we are used to getting some hallucinations, as you mentioned, more when we, for example, as we talked about a few minutes ago with Mistral, search the Web and then use AI to go through the results. Doing that, and conversing with those chatbots, we learned very quickly about hallucinations: the chatbots don't know when they don't know, right? They're programmed to give an answer. Whether it's the right or wrong answer, they have no idea, obviously. But you always get an answer, and this is where you get some wrong ones.
Anne Trager (06:32)
I think that we're learning about how to use these tools and what are the limits of the tools and how we can make them work for us. And it behooves us right now from this conversation to really pay careful attention and double check how much we trust what we're doing and make sure that we can trust it and perhaps keep that level of... attention high for a certain amount of time for now.
So as we can see from real use of AI, now is a time to perhaps pay a little bit more attention to what it's actually bringing back to us and how to use this as a tool. We're exploring the confines of this tool, like what are the edges and limits of it. I expect that it's going to continue to change for quite some time. I had another conversation recently about real use of AI.
And not just the bandwagon hype of AI. Okay. I was talking with a head of human resources in a very large international company, and what they use is Copilot. So I asked her, well, how are you using it? And basically she told me that she uses it to summarize and review emails.
Fabrice Neuman (07:31)
Hm.
Anne Trager (07:51)
So she can come in in the morning and get a summary of all of the numerous emails that came in recently. She'll use it also to write notes. And the idea, the hope, is that it saves time on these activities. But from the conversation, from what she described, I realized that her time gained is not really time gained, because, as she says, she only ever has time for 20% of whatever comes in over her desk. That's just the nature of her job. So she has to make that choice as to what the 20% is. And with the AI, she now has insights into the other 80%. That said, she will never ever have time to manage that 80%. So it's not really helpful. It's not bringing additional value; it might potentially be adding more weight. But anyway, she didn't say that, I'm thinking that out loud. What she did say is she's very aware that that 80%, those summaries that she's getting from AI, she still will not have time to treat.
Anne Trager (09:09)
What's the point is the question I ask. Clearly AI for the time being in this company is not solving the organizational over communication issues that are there in a lot of large organizations. Not yet, at least.
Fabrice Neuman (09:26)
Yeah, well, the thing is, we've been talking about this, for example, just email overflow in companies, where we all have those examples of people copying everyone on an email, not to make sure that everyone is informed, but so the sender can say, well, I told everybody, so it's not my fault if something happens. Right?
Anne Trager (09:56)
Yeah. Well, and that's just one of many internal communication problems that are happening on organizational levels these days. Yeah.
Fabrice Neuman (10:04)
Yeah, yeah, exactly. And the thing is, email summary by AI doesn't help with that, because it's just saying, okay, you have received many emails. Plus, as we mentioned, sometimes the summaries are not that good, so you might miss a few key points, which is annoying. And then there's also this thing of nowadays more emails...
Anne Trager (10:19)
Well.
Fabrice Neuman (10:28)
...are actually written by AI, right? And so it can get longer, because you find it more interesting and more polite, basically, to have a whole paragraph, quote unquote, written by AI instead of sending just a few key points, so it lands more properly or something.
Anne Trager (10:30)
Well, somebody asked me on one of the social media things, well, what do you do? How do you keep up this human notion with AI taking over so much? And actually, I think people can tell when something is written by AI without any further human intervention afterwards. I know I can. I've received some emails like that and I'm like, really? Seriously? You didn't even take the time to make it sound human? So, like I said, I think we're testing the limits. We're trying to figure out how to use these tools. I have a lot of compassion for people right now.
Fabrice Neuman (11:16)
Yeah.
Anne Trager (11:35)
And at the same time, we've got to figure this out. And what I thought was really interesting, if we go back to this HR person's experience, is what she said: for her, it's problematic because it's getting in the way of, these are her words, "the necessary letting go." There will always be more to do. And you need to choose what you're going to do. You need to focus in on that 20%. AI is not going to do that for you, but you think it is. And so you just get more and more and more, and you're not doing the letting go of that 80% that is not adding value. So I thought that was really interesting. It makes us think again, if we get back to how we can live well with this: well, maybe we have to, again, be more human, that is, be in a place of choice as to what's important, where am I adding value, and what tasks are truly adding value.
So the other thing that she said was that her company is still waiting to see any return on investment in the AI, which is another important thing. It might be a little soon and maybe the return on investment will not come. I don't know.
Fabrice Neuman (12:52)
So it's been two years since ChatGPT 3.5 was revealed to the public. So many things happened during those two years, and it really changed the whole horizon, if you will.
And then after two years, we get to those questions. Is there any return on investment? Are there really good results linked to AI? Which leads to the big question, are we in an AI bubble? It seems that after two years, there's some kind of backlash, right? We are using all those tools, the gazillion tools we were mentioning a few minutes ago.
Anne Trager (13:15)
Mm-hmm.
Fabrice Neuman (13:31)
There are so many that we don't know which to choose, and we don't have time to try them all. But then, for what? Are we really getting something out of those tools?
We want to use AI; the promise of AI that we see is that we are going to spend less time on things, or gain time because we can do more. But do we need that? So we really need to think about using those tools. And let's re-emphasize the fact that those AI tools are just that, tools, and we need to learn how to use them.
Anne Trager (13:56)
Yeah.
Yeah. And tools used by humans. So how can we use these tools to add as much value as we can as humans? And I said add value, I didn't say do more. Okay. I'm all about optimization, which is usually not about doing more. Okay. It's about doing the right stuff at the right time. Well, bubble or not, we are using AI. A lot of people are using gen AI...
Fabrice Neuman (14:13)
Yes.
Anne Trager (14:37)
...with this incredible access to it. And actually it may not be saving us time or helping us do more things. It may, however, be saving us energy, like physical energy. Why is this? Okay. Even if it's not solving any of the other promised problems yet. So I was listening to a podcast recently, a conversation between Andrew Huberman and Dr. Terry Sejnowski, and I'm sorry if I mispronounce that name; he is a computational neuroscientist. My kind of guy, I like that kind of stuff. So what he discusses is a NY Times article about a technical writer who found that interacting with ChatGPT in a polite, human-like manner reduced her fatigue at the end of the day, which I find to be really interesting. Anything that reduces fatigue means that there's more energy for all kinds of other stuff.
Cool, right? So the neuroscientist goes on to explain that this phenomenon could be because our brains are wired for human interaction, and treating AI like a human taps into some kind of pre-existing neural circuits, which make the interaction less mentally taxing. So I thought about you, Fabrice, because in our last episode, you talked about how you didn't want to talk to your AI. You preferred to type in and do your usual thing. Well, so maybe you are expending energy in a way that you don't need to expend energy.
Fabrice Neuman (16:21)
Well, that's interesting. I don't know. I thought about it, and I also link that to the fact that before AI and the chatbots, I had been using, or trying, or at least testing, dictation tools, right? And I always go back to typing on the keyboard, for many reasons.
Anne Trager (16:40)
Mm.
Fabrice Neuman (16:47)
First of all, I'm fast enough. I'm not such a good typist, but I'm fast enough to write more or less as quickly as I think. But I always found dictation tools hard, because in order to write a sentence, I would have to form the sentence in my head before saying it, right? Whereas on a keyboard, I can type and then go back and then delete and then go back and then type, and, you know, it's a messier process, if you will. So dictating doesn't work for me. And I think it's linked to the fact that I don't like to talk to the AI as well, because...
It's the same thing. I need to formulate the question. What probably is going to help is the fact that now, with the new Gemini bots and ChatGPT, and the soon, quote unquote, to come new Siri, you can talk to them in a more natural way, where you would say, can you say, no, tell me, maybe I can do, you know, and then you can stumble on your words. So maybe that might help, and, to go back to what you were saying, be more natural. So maybe spending less energy.
Anne Trager (18:11)
Right. And I don't think they're talking about this dictation thing, because I understand, when you're dictating, you're dictating a sentence, and you're dictating a written sentence, not a spoken sentence, and so forth. So there are different processes in there. Right now, what we're talking about here is actually how you interact with the AI. And that's the nature of these natural language models.
Fabrice Neuman (18:21)
Yes.
Anne Trager (18:37)
And there's a big difference between using natural language and, you know, doing what we used to do, which was to translate it into, you know, button clicks or specific actions. You're giving instructions to your computer, and we don't actually need to do that anymore. And that's kind of cool. I was listening to another podcast called WorkLab with Jaime Teevan, who is Microsoft's chief scientist and technical fellow.
Evidently it was an apology for AI, so we get to bring in some really positive stuff with this. This is today's positive spin on AI. And it was really interesting. And she was talking about this notion of a natural language model and how it allows us to express our intent directly. That's what you're saying. So you didn't actually have to reformulate it into a proper sentence. You can just express the intent directly.
She also talks about how, using these models, we can express our intent in a much richer and more contextual way. So what we can do now with these models is give more context, which comes from prompts, from previous conversations, from any relevant documents and examples and so on and so forth that we can share, all to get better responses and to have an interaction which is richer than even an interaction with a human being would be.
So there's a lot that we can do with these natural language models.
Fabrice Neuman (20:14)
It's been the promise for computers, you know, in science fiction to start with, where we would talk to computers and they would understand. I would add to that something I found pretty funny. On my other podcast, Les Voix de la Tech, we interviewed a French journalist and entrepreneur called Benoit Raphael. He's an AI specialist.
He said it's better to be polite when using an AI. And not just because maybe at some point they will be our overlords, and the AI will remember those who were polite and those who were not. But also because, as a tool nowadays, it can give better answers...
Anne Trager (20:52)
Yeah
Yeah
Fabrice Neuman (21:05)
...more accurate answers, because, basically, if I simplify it a bit, AI models have been trained on polite data more than impolite data, because, contrary to what it might seem from some social media platforms, humans are mostly polite.
Anne Trager (21:37)
Jaime Teevan says the same thing in the podcast I mentioned earlier, when she was talking about how best to prompt gen AI and use it to get the best results. She talks about being polite, since AI tends to mirror your tone, and, as you say, it's based on polite conversation.
So if we come up with a list of ways that we can use gen AI and get better results, well, one would be to be polite. Another would be to actually ask AI to help you make better prompts. And I am definitely going to do this and see if it helps me with my current situation with AI. We can give examples and context, to provide that rich contextual information that AI is good at using. We can also, and we haven't really talked about this, really use AI to get feedback on our own work. I think that's something I've had a lot of success with. Hey, what do you think of this? And how can I improve this? And what am I missing? And what three questions come up for you when you read this? That kind of feedback is really interesting.
Fabrice Neuman (22:46)
Hm.
Anne Trager (22:54)
So yeah, those are a few things. Do you have anything to add to that, Fabrice?
Fabrice Neuman (22:59)
Well, I'm not sure.
Maybe also, I don't know about you, but I think I'm now at the point where I need to choose fewer tools to really use.
Anne Trager (23:10)
Hmm. Hmm.
Fabrice Neuman (23:13)
I tried so many things, and we are part of this GenAI group, for example, and we talk about new tools every week. And we've reached a point nowadays where we say, we talk about new tools all the time, but which ones are we really using on a daily basis? And this is where, after those two years, I think this ChatGPT anniversary is really important.
Anne Trager (23:33)
Hmm.
Fabrice Neuman (23:40)
We need to take a step back and say, what are we going to use? What am I going to use now? And maybe forget a bit about new tools for a while.
Anne Trager (23:52)
I agree with you, and I also think there are other issues that we, as responsible citizens of this planet, really should be thinking about when we're using AI.
Fabrice Neuman (24:01)
Yeah, and this is the energy consumption. And let's be clear about it: we don't exactly know yet what the energy consumption is for each AI tool we use, right? We tend to say, and the estimate is pretty wide, that using a tool like SearchGPT is up to 100 times more energy consuming than a regular Google request.
And they were able to do that using some renewable energy, like solar panels, stuff like that. But it seems that they're not able to do that anymore, because they are talking about opening some nuclear plants, because they need more energy, slash, we need more energy.
This is where it's very important to see the progress being made on models that can run locally on our devices. This is basically the promise of Apple Intelligence, which is not at the level we would like as far as results are concerned. But if we can use those models locally, of course it uses energy, but far less than if we send everything to the Web and then have results coming back. That's very important.
Anne Trager (25:06)
Yeah.
Hmm. It's not only energy, it's also water consumption, to cool down the servers and things like that. And I found a stat that said that when you ask ChatGPT a question, it uses the same amount of energy as a four-watt LED bulb uses in an hour. Okay. So that's one question to an hour of the bulb, and that's just energy-wise. And then the conversation will consume 500 milliliters of water. So, I mean, if you start calculating, it is really quite outrageous. And at some point there may come some kind of reckoning. But for me, what I'd like to think about right now, again, to go back to what we were saying earlier, is that we need to figure out how we can use these tools so that they really add value.
And not just in this sort of blurry kind of way we're using them now, which of course is part of the process. I also think that the notion of all the energy that these tools use is a reminder of just how efficient our brains are, because we don't need that much energy or that much water to do the same kind of thinking, or sometimes even better, more critical thinking. So, interesting.
Fabrice Neuman (26:47)
Yeah, that's interesting. Let's really emphasize that. And it circles back to what we were saying in the beginning, which is instead of launching a tool to do something, maybe you can take a few seconds to think about it and maybe you don't need that tool. Because our brain is so efficient. So let's use it, please.
Anne Trager (27:03)
Yeah, yeah, yeah, yeah.
Exactly. Let's use it. Well, and potentially use it or lose it. So, okay, I don't want to end on a negative note. I would rather end on a positive note. Okay. And so I want to bring in some really, really cool things that are becoming possible with AI that I learned about in the November 2024 issue of National Geographic Magazine.
Fabrice Neuman (27:13)
Yes, absolutely. No, please. Yes.
Anne Trager (27:34)
So there were three things that are happening that AI is making possible. First of all, scientists are using AI to read papyrus scrolls that are too fragile to open up. So we're getting this whole new view of history from that. And that is really cool and very human because it's like human history. Okay. Second thing is, scientists are using AI tools to try to figure out what various animals are saying to each other.
Very cool, notably whales. And this will help us to understand their habitats, to strengthen the bonds maybe that we have with other living creatures. Again, very cool as citizens of this very diverse planet. And then the third one is that seismologists are using it to better understand earthquakes and to better predict natural disasters. Again, how cool is that?
And that's it for episode six. Thank you for listening. You can find all the links mentioned in this episode in the show notes directly in your podcast app. And don't forget to go to our website, humanpulsepodcast.com to find more information and all of our previous episodes.
humanpulsepodcast.com.
Fabrice Neuman (28:51)
Thank you for subscribing, sharing, and please leave a review to help our wonderful podcast to be found. And we'll talk to you in two weeks.
Anne Trager (29:00)
Bye.