LINKS AND SHOW NOTES:
Living Well with Technology.
In this episode of the Human Pulse Podcast, hosts Anne Trager and Fabrice Neuman delve into the multifaceted conversation surrounding AI and its implications for the workplace and human interaction. They discuss the resistance to change that many individuals face when adapting to new technologies, the importance of emotional intelligence and adaptability in an AI-driven world, and the necessity of continuous learning. The hosts also reflect on the role of AI as a tool rather than a replacement for human creativity and problem-solving, emphasizing the need for human judgment in ethical decision-making. The episode concludes with a critique of AI's portrayal in advertising and a call to action for listeners to engage with these technologies thoughtfully.
Keywords
AI, technology, resistance, human skills, adaptability, continuous learning, emotional intelligence, generative AI, tools, future
Chapters
(00:00) Intro
(00:28) The AI Elephant in the room
(06:23) How Anne uses AI
(14:32) How Fabrice uses AI
(19:38) Building Human Skills in an AI World
(24:45) Are Apple Intelligence ads misguided?
(28:10) A quote from Antoine de Saint-Exupéry
(28:43) Conclusion
See transcription below
Links
Eric Serra (Composer)
http://www.ericserra.com/
ChatGPT
https://chat.com
Perplexity
https://perplexity.ai
Abacus
https://abacus.ai
Claude AI
https://anthropic.com
Text summarization in macOS (a long-standing feature)
https://osxdaily.com/2016/08/24/how-use-summarize-text-mac/
Google NoteBook LM
https://notebooklm.google/
Apple Intelligence Writing Tools ad
https://youtu.be/3m0MoYKwVTM
Apple Intelligence Create memory movies ad
https://youtu.be/A0BXZhdDqZM
Apple Intelligence Change your tone ad
https://youtu.be/deNzYrTvqCs
Apple Intelligence Catch Up Quick ad
https://youtu.be/BK8bnkcT0Ng
Antoine de Saint-Exupéry (Wikipedia page)
https://en.wikipedia.org/wiki/Antoine_de_Saint-Exup%C3%A9ry
Anne's website
https://potentializer-academy.com
Fabrice's blog (in French)
https://fabriceneuman.fr
Fabrice's podcast (in French)
https://lesvoixdelatech.com
Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr
(Be aware this transcription was done by AI and might contain some mistakes)
Anne Trager (00:00)
Hi everyone and welcome to the Human Pulse Podcast where we talk about living well with technology. I'm Anne Trager.
Fabrice Neuman (00:07)
and I'm Fabrice Neuman.
Anne Trager (00:09)
We are recording on November 11th, 2024.
Fabrice Neuman (00:14)
Human Pulse is never longer than 30 minutes, so let's get started.
Anne Trager (00:18)
I realized thinking about this episode that we haven't really addressed the elephant in the room. What am I talking about? What would that be?
Fabrice Neuman (00:24)
Yeah, what would that be?
Anne Trager (00:33)
AI. Everybody else is talking about AI. I mean, maybe we should.
And I thought of it because last week I was talking to a senior tech leader in a really huge, I mean like seriously huge international company who is implementing a massive change, which of course entails AI and a certain amount of resistance. He said, I mean, he's not resisting, it's the people in the organization who are resisting. And what he said to me, as we were discussing various topics is, "I am not concerned about those who are using AI. I am concerned about those who are not."
Fabrice Neuman (01:08)
Yeah, that's interesting. You told me about it and I thought it was a very good start, actually. Because I think it also places AI where it should be, which is just another new tool. That's what the history of technology has taught us: every time there's a new tool, it doesn't replace previous tools.
It's an addition. It's like layers upon layers of new tools that we need to learn about and we need to make sure at least we know about.
So I think the new thing about what we are living through today is that this tech replacement, or addition, keeps getting faster and faster. And that's probably what is most surprising today and what's taking people aback.
Anne Trager (02:02)
Well, I think for us as humans, that makes it hard. I mean, we don't like to change, let's just face it. I don't know anybody who really wants to change. We say we want to change, but when it comes to actually making the change, there is a certain amount of resistance we go through. And then there's a whole curve of adapting to new things.
Fabrice Neuman (02:09)
Heh.
Well, don't you think there's also something else: we don't want to change, or apply changes, that we haven't chosen in the first place?
Anne Trager (02:34)
Well, exactly, exactly. And I don't know about you, but sometimes I don't feel like I chose all of this AI stuff that's going on. In fact, I've been using it for a really long time, like without even realizing it.
Fabrice Neuman (02:43)
Yeah, yeah. Well, first of all, because AI tools were not invented with the generative AI we've all been talking about for the past two years, since the advent of ChatGPT basically. And the resistance to change is at least partly because, to put it maybe too simplistically, there's this one guy or his team trying to bring changes into a company, to people who did not choose them. Then there are the tools, if we go back to our daily usage. So you've been using AI for quite a while, and you've tested and tried lots of different tools. And this is also what we wanted to talk about today.
Anne Trager (03:25)
Mm-mm-mm.
Well, before we get to that, I wanted to add something else that I've been thinking about. As I listen to people talk about implementing AI in these larger organizations, it's becoming clearer and clearer that it's really, really key not only to look at the tech, but also to look at the people, at how you're accompanying the people. And this may sound obvious, however, when you're in large organizations and you implement a tech change, it's often about the tech and the processes.
Perhaps a little bit less about the people. And I agree with you that clearly the pace of change has increased with generative AI and the huge investments that have been going into this. I don't want to be a scaremonger at all, but I do kind of need to say it. A study from the World Economic Forum predicts that by 2025, 85 million jobs may be displaced by the shift in labor...
Fabrice Neuman (04:27)
Hmm.
Anne Trager (04:38)
...between humans and machines. I mean, that's a lot of jobs, okay? And that 97 million new roles may appear. So that's the positive thing. So what that really means for me, as far as I'm concerned, is that we all need to just keep learning and up-skilling and figuring out how to use this stuff.
Fabrice Neuman (04:57)
Yeah, to give another image of what we were saying: even before AI, new tools arrived in the home. When I was 10, 11, 12, we got a VHS recorder at home and I had to teach my mom how to use it. But she wanted to use it because she saw an interest in it. And I think it's always the same story, right? It's a new tool, a new tech tool. Nowadays, for us, tech means electronics, but it's been the same with any kind of tech throughout history. So it's really linked to what you were saying, which is that we adapt to these new things when we find an interest in them. But in order to find an interest, we need to try them. So it's kind of a circle.
Anne Trager (05:34)
Hmm.
Exactly. Exactly, exactly. Well, that brings us to the main topic we wanted to talk about today, which is what we are actually using in terms of these new generative AI tools. I heard somebody describe it this way the other day, I can't remember who, so I am sorry if you are listening, person who told me this: they described AI as being like a drunk friend.
Fabrice Neuman (06:15)
hehe
Anne Trager (06:17)
You know, you really want them to help you out, but they're not very reliable. Okay. And I think that's pretty much my experience. You know, maybe they slur their words a little bit, and you have to listen really carefully to make sure you understand what they're really saying. And that's the way I feel about the generative AI I've been using recently. I tend to use it as a research assistant. And when I do so, I will use a number of different...
Fabrice Neuman (06:22)
Heh heh.
Mm.
Anne Trager (06:43)
...generative AI tools to do research and to think about ideas. And it's good and bad. So it is, you could also say it's kind of like an intern. You can't really trust what's coming back and you have to be really careful. I find that I have to be really careful in the prompting in order to get accurate responses and to make sure the information I get back can be verified. And then I do proceed to verify. So that's when I'm...using it as a research assistant. I like to use it as a thought partner. I almost find that some of these tools are much better as a thought partner for when I'm developing content. I don't actually use it to write the content. I use it to come up with ideas and then I will write the content from there or rewrite it from there. It requires a lot of tweaking of the prompts. Like I will layer the prompts in order to get something that's useful.
The other thing that I like to use it for is as a time saver for doing summaries or, you know, whipping up a presentation. When I already have some content and I want to turn it into a presentation, it will throw it together. It's a huge, huge, huge time saver for me. And it is also a little bit of a time suck for me, because it's this huge rabbit hole and I just dive in and want to keep looking and exploring, and then it's wasting my time much more than it is actually adding value. That said, I can truly see, just from the way I've been using it, that as I get better at using it and pinpoint where it really does add value, it will really, really help me with any of the busy work I have to do. So I will be saving time. And I believe that, by helping me come up with different ideas, it is also opening me up to more creativity. And that's what I really want from AI right now: to free me up for more creative thinking and also to spend time with people.
Fabrice Neuman (08:53)
Yeah, that's interesting. It makes me think of a quote. In my other podcast, called Les Voix de la Tech, which is in French, my partner Benjamin Vincent was able to get a nice interview with Eric Serra, who's a composer. You know his music from movies like The Big Blue, Leon, and The Fifth Element. He asked him whether he was using AI in his field of music, whether he would use AI to create music. And his answer was, I thought, incredible. He said: AI as a new instrument, maybe, but not as a new artist. And this is exactly what you are describing, which is helping discover new ideas, used as a sounding board. And this is also how I use AI.
Probably not as much as you do, not every day, but sometimes I have a conversation with an AI, with a chatbot. Let me add that those conversations are almost always through a keyboard, not via voice. I don't know how you use it, but I sometimes hear people saying they have vocal conversations with AI bots.
This I haven't been able to do yet other than for a demonstration.
Anne Trager (10:24)
So I'm curious because you say you haven't been able to do it yet. So what's holding you back?
Fabrice Neuman (10:28)
Having a spoken conversation with it doesn't come naturally, I guess, not like the conversation we are having now. And it's probably because, I'm going to butcher the word again, I've been working against the anthropomorphization of AI. I don't want to consider it as a human being, and maybe I push that a little too far. So having a voice conversation with it is really not natural to me.
Asking questions through text, yes, because for me a prompt is almost like a command. It's a computer tool. So this is how it works for me.
As I say very often, regular people don't have these kinds of considerations. I don't think they ask themselves whether it's a tool or not, because it's just a tool they can use, and vocally, why not? I mean, we have voice-controlled devices in our homes, the Amazon Echos and Siri and what have you.
Anne Trager (11:20)
Mm-hmm.
So we circle back to what we were saying earlier, which is that the whole thing about these tools is whether we find some use for them. And you, if I understand correctly, feel a little bit like you're in a distorted dimension, because you work in tech, you have worked in tech for a long time, and you think about tech. So you're thinking, well, this is a tool and I want it to stay a tool, and for it to stay a tool, I type to it, and so on and so forth.
Fabrice Neuman (11:52)
Yes.
Anne Trager (12:10)
Most people just don't do that. Either it works for them or it doesn't. And that ties in with what we were saying last time about trust: we trust the tools when they work for us. And if they don't work for us, then we just don't trust them.
Fabrice Neuman (12:13)
Mm-hmm. Yeah.
So, the AI sphere is so big that it can be used in very different ways. Using it as a sounding board, I do that too. As a side note, I don't know about you, but whichever chatbot I use, I don't find much difference between them. I've tried ChatGPT 4, 4o, mini, whatever flavor of ChatGPT we can get, Claude Sonnet, Perplexity with the small and large models, or Abacus Smaug, S-M-A-U-G, I still have no idea what that means. And so I tried all those and I don't think I've found many differences between them as far as the results go.
Anne Trager (13:21)
At the beginning, when I was playing around with it, I would put the same prompt into a whole bunch of different ones, and I was finding some differences. But it's changing so quickly, and they are all getting so much better, that I'm not really seeing the differences anymore. What I find does make a difference is how you do your prompts and how you layer your prompts. Like you just keep adding...
Fabrice Neuman (13:39)
Hmm.
Anne Trager (13:48)
...different prompts on top to refine what you're doing. And even then, you don't want to do it too much, because then you can go off into this wild area you never intended. It's kind of a weird interaction. It's not a dialogue, okay, because sometimes it's totally not logical.
Fabrice Neuman (13:56)
Hmph
And when you do that, then you spend so much time that it's not a time saver anymore. That's kind of weird. Yeah, and for other things, so first of all, this very podcast's summary is done by AI. When we're done recording and start editing, we use a tool called Riverside and it creates a summary for us.
Anne Trager (14:11)
Exactly, exactly. But you're using it for some things that are very, very helpful.
Fabrice Neuman (14:34)
It's always pretty good. And it's basically because generative AI is very good at this kind of thing, which is doing a summary, creating something based on a document, in this particular example based on what we say, because there's basically no hallucination.
Because it's just creating something out of something that exists. It's not a question of what is this or what is that. It's just reinterpreting something that exists. And I think it's also interesting because summarization tools have been around for quite a while. I don't know if you knew that; I heard it not that long ago. So nowadays, when you are on a Mac, you are using macOS 15, right?
Anne Trager (15:23)
Mm-hmm.
Fabrice Neuman (15:23)
But Mac OS 8, the system released in July 1997, already had a text summarization tool in it. The thing is, there were no LLMs, large language models, as we know them today. So it used a statistical approach to find which sentences were the most important in a text, in an essay or something, and drop the others. It was not rewriting at all; it would not have been able to do that. But I thought it was interesting to think about, because the tools we consider brand new are sometimes not that new. They are getting better, though, which is exactly the point.
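A minimal sketch of the kind of statistical, extractive summarization Fabrice describes, assuming a simple word-frequency score. This is only an illustration, not Apple's actual Mac OS 8 implementation; the function name and scoring rule are invented for the example. It keeps the top-scoring sentences unchanged and drops the rest, with no rewriting.

import re
from collections import Counter

def extractive_summary(text: str, keep: int = 2) -> str:
    # Split into sentences on simple end punctuation (good enough for a sketch).
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Count word frequencies across the whole text, ignoring case.
    freq = Counter(re.findall(r"[a-zA-Z']+", text.lower()))
    # Score each sentence by the average frequency of its words.
    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-zA-Z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    # Keep the highest-scoring sentences, in their original order, unchanged.
    top = sorted(sorted(sentences, key=score, reverse=True)[:keep], key=sentences.index)
    return " ".join(top)

if __name__ == "__main__":
    sample = (
        "Generative AI is very good at summarizing documents. "
        "It rewrites content in its own words. "
        "Older tools could not rewrite anything. "
        "They simply scored sentences and deleted the less important ones. "
        "Summarizing documents this way is purely statistical."
    )
    print(extractive_summary(sample, keep=2))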
Anne Trager (16:09)
Hmm.
Fabrice Neuman (16:13)
There are other things I've been using. Google's NotebookLM: I like it because, once again, it's a tool where you can put in documents and it creates things based on the documents you've given it, and only those documents. It can answer questions and also create podcasts, as a lot of people probably know already. Then there's image generation. I've been using it to create some images sometimes, but most of all I've been using tools from the Adobe suite, in Photoshop, to extend images. If I need an image in a particular format, I mean in a particular aspect ratio, and the image I have isn't exactly the right aspect ratio, I ask the AI to create the missing parts on the sides. And it doesn't really matter if it looks good or if it's precise, because it's basically just adding texture.
Anne Trager (17:16)
Right.
Fabrice Neuman (17:16)
And the last thing: I've been trying to use AI as a replacement for search, like using Perplexity as a search engine. And honestly, the jury is still out on that, because I'm not sure it's efficient, and probably not as far as the energy it consumes is concerned.
Anne Trager (17:35)
Yeah. I think Google, just a Google search or a regular search engine search is a whole lot more energy efficient. That said, when we are talking about the different AI tools, to my knowledge, at this point in time, Perplexity may be slightly better at search than the other ones.
Fabrice Neuman (17:48)
Yeah.
Anne Trager (18:03)
And in any case, I find that when you search for stuff, it gives back all kinds of really wonderful things. And then you say, wait, wait, where did you find this? And it's like, well, that doesn't really exist, but maybe you can look at this. And so it's kind of weird. You have to be really careful with search. It will even cite sources when you ask for them; I've had it come back with "source" in parentheses, and you...
Fabrice Neuman (18:16)
You have to be very careful about it. Yes, absolutely.
Anne Trager (18:29)
...click on it and it goes nowhere, because the source doesn't exist. And you say, so what is the source? And it says, well, I don't really know.
Fabrice Neuman (18:37)
Yeah, basically it's like, oops, I'm sorry, this thing doesn't exist. And it brings up another thing, which is that more and more people are using these types of tools to search the web without making sure the sources are correct, and they take the answers they get for granted. And that's a problem.
Anne Trager (19:00)
What I understood is that these tools, or some of them at least, are trained to actually give you an answer. It's almost as if they have to give you an answer. So they're not going to say, well, no, that doesn't exist. They're just going to give you an answer, even if they have to make things up. And again, I have no basis for saying that other than things I've heard here and there.
Fabrice Neuman (19:26)
In any case, these are tools that we need to learn how to use. Given this, how do you see humans fitting into this new equation, the one written by these tools?
Anne Trager (19:42)
Well, I think that one thing is for sure that they are going to be more and more part of our existence. And when these tools take on so much space, then what I think that we humans can do is to really build up the human skills that AI is not doing. I mean, these are things like emotional intelligence.
And even if AI can seem emotionally intelligent, AI is not necessarily going to help you when you have another person in front of you. Okay, I mean, unless it's whispering in your ear, and presumably that's not going to happen anytime soon. However, I would personally much rather build up my own emotional intelligence so that I can deal with the emotions and the stress of human interactions in a way that is positive and helps build genuine relationships with other people. I think that is one thing that could give us, well, that gives anybody a real advantage. The more emotionally intelligent you are, the more advantage you have with people, and potentially with machines as well, because part of it is managing your emotions.
The other skill that seems important to me is this one we've been talking about all along, which is adaptability. As the pace of change continues to increase and unexpected things continue to happen all the time, our ability to decide when and how best to pivot is really important. It's really important to know how to make those changes and when to make those changes and to use our emotional intelligence and our stress management to remain calm in the face of all this change.
I also think that there are things like ethical decision-making that AI, from what I know, is not doing so well, and that human judgment is really key in navigating some of these ethical questions, in deciding between right and wrong. I was listening to another podcast recently about the implementation of AI in organizations, where the people were suggesting that it's really important to have a human decision maker in the process. If you are going to implement a process with AI, at some point you have to have human intervention, for this very reason of ethics, and also the trust we were mentioning earlier. Other areas that seem important to me are things like creative problem solving. I think humans do it really, really, really well.
Humans are really good when we actually get together and talk to each other and try to innovate and find solutions. We do it really well. And we also are really, really good at critical and strategic thinking. And we question assumptions and we consider multiple perspectives and we make judgment calls in really ambiguous situations. We know how to do this and we know how to develop long-term visions and imagine long-term implications.
So I'm just saying we know how to do this really well. Let's just capitalize on these skills.
Fabrice Neuman (22:45)
Yeah. I guess what I'm wondering is, will those tools be able to do those things anytime soon? And it seems that we almost all agree that the answer is no. But until then, if it ever happens, what can we do? I think we can agree on the fact that these tools are here today and are here to stay. So we need to use them, we need to try them, we need to get accustomed to them, because they're here. This is the same thing I tell everybody I talk to when I train people to use computers: don't be afraid, try, and this is how you're going to learn to use whichever tool you want.
Anne Trager (23:38)
Yeah, so continuous learning is really, really going to be important for all of us, I believe. Along with curiosity, like just keeping a really open mind about change as it happens. I also believe that, you know, we can develop through collaboration and networking with people who know things about this, or even with people who don't, just talking to other people. I just think we need to keep talking to other people.
Fabrice Neuman (23:42)
Yeah.
Hehehehe
Anne Trager (24:02)
Not only through the intermediary of these machines, okay? I also think it's important to let go of things, like our old ways of doing things, and that's a lot harder to do. I will admit it myself, it's hard to let go of old ways of doing things, but the more we're able to do that, the more we can integrate some of these new ways of doing things.
So those are my thoughts on how we can stay human and live well with this technology. What else have you been thinking about lately when it comes to AI?
Fabrice Neuman (24:28)
Yeah, Anne, I wanted to talk about the latest Apple ads about Apple Intelligence, their new AI tools integrated into the new versions of their operating systems. And so, I'm sorry, I'm going to be a little negative here, but...
Anne Trager (24:51)
Okay, this surprises me because I always usually love Apple ads. I mean, sometimes they're so beautifully done, you know?
Fabrice Neuman (24:57)
I know, I know.
Yeah, but so maybe you remember this crush ad where, you know, they used a big press to crush all kinds of creative tools in order to show that you could do everything with an iPad Pro, so they would crush musical instruments and TVs and crayons and paint and everything. And those new ads about AI show people using the new AI tools.
Anne Trager (25:13)
Gosh, well if something is crushing it, it sounds awful.
Fabrice Neuman (25:29)
And they're weird to me. It's like Apple is getting tone deaf about the use of AI, and it seems that they want us to believe that these AI tools were created for lazy people. You can use AI so you don't have to work, which is the total opposite of how we see those tools: tools to help, not to replace. Right. So the one that struck me the most is, you see some guy in an open-space office and he's not doing anything. He's playing with paper clips, he's unrolling a roll of tape for no reason, singing, you know, really doing nothing. And then he takes his iPhone and writes an email with very bad vocabulary and bad sentence structure, and then uses the AI to rewrite it in a professional tone so he can send it to his boss. And his boss receives it and is very pleased, and almost surprised: this guy in the office, he wrote this? That's good. And I think it gives an image of AI that I don't like.
Anne Trager (26:45)
Mm.
Yeah, I get it. Well, first of all, it sounds like I would just be bored if, like that guy, I didn't have to do anything and still had to be in the office. Nobody I work with wants to be bored at work. And then there's this idea of not actually being who you are, of giving a false image. I don't know, there's a lot there. Yeah, I hear what you're saying.
Fabrice Neuman (26:59)
Yeah. So there are a couple of other ads that are a bit like this, as far as I'm concerned. I urge you to go watch them; we'll put the links in the show notes. It's weird, yeah, exactly, it's weird to me to tell people to go see ads, but I would really like to know what people think about them, to see whether I'm alone on that.
Anne Trager (27:32)
And share your opinion about them.
Fabrice Neuman (27:46)
Anyway, this is not a future I like, this future depicted by those Apple ads. So, thank you, Anne, I think you have something you want to share with us, so we can end this on a more positive note.
Anne Trager (28:00)
Yes, I am the one to bring the positive. I would like to share a quote from a French author named Antoine de Saint-Exupéry. He wrote: "As for the future, your task is not to foresee it, but to enable it." And I think this summarizes well what we've been talking about today: by trying and using these new tools, and understanding how they work, we will help shape them and we will enable that future.
Fabrice Neuman (28:32)
Alright, that's it for episode 5. Thank you for listening. You can find all the links mentioned in this episode in the show notes directly in your podcast app. And don't forget to go to our website, humanpulsepodcast.com to find more information and all of our previous episodes and also to send us feedback. Don't forget to do that. If you want, we will read it and answer it maybe on this very podcast. Go to humanpulsepodcast.com.
Anne Trager (29:00)
And thank you for subscribing, sharing, and please leave a review. We'll talk to you in two weeks.
Fabrice Neuman (29:07)
Bye all.