Are We Getting AI All Wrong?

The Human Pulse Podcast - Ep. #19


LINKS AND SHOW NOTES:
In this episode of the Human Pulse podcast, hosts Anne Trager and Fabrice Neuman delve into the complexities of artificial intelligence, discussing its potential, common misuses, and the importance of using AI as a tool rather than a crutch. They explore how AI can enhance creativity and productivity when used correctly, emphasizing the need for specificity in prompts and the value of human discernment in the process. The conversation also touches on the future of AI and its role in human interaction, highlighting the necessity of maintaining human connections in an increasingly automated world.

Recording Date: July 13th, 2025
Hosts: Anne Trager – Human Potential & Executive Performance Coach & Fabrice Neuman – Tech Consultant for Small Businesses

Reach out:
Anne on Bluesky
Fabrice on Bluesky
Anne on LinkedIn
Fabrice on LinkedIn

We also appreciate a 5-star rating and review on Apple Podcasts and Spotify.

Chapters
(00:00) Intro
(02:16) Customizing AI: Spaces, Projects & Notebook LM
(05:21) Voice & Interactive Modes in Action
(08:59) AI as Assistant: Key Use Cases
(15:59) When AI Gets It Wrong & Human Judgment
(18:23) Automation’s Edge & Habit Flexibility
(22:49) Tips for Maximizing Your AI
(24:44) Conclusion






See transcription below

Resources and Links:

Harvard Business Review IdeaCast episode 1034
How to Build an AI Assistant for Any Challenge
https://hbr.org/podcast/2025/07/how-to-build-an-ai-assistant-for-any-challenge

Alexandra Samuel (guest of the IdeaCast episode)
https://www.alexandrasamuel.com

Her article in Harvard Business Review
https://hbr.org/2025/03/how-to-build-your-own-ai-assistant

Google’s NotebookLM
notebooklm.google


And also:
Anne’s Free Sleep Guide: Potentialize.me/sleep.

Anne's website
https://potentializer-academy.com

Brought to you by:
www.potentializer-academy.com & www.pro-fusion-conseils.fr

Episode transcription

(Be aware this transcription was done by AI and might contain some mistakes)


Anne Trager (00:00)
Hi everyone and welcome to the Human Pulse podcast where we talk about living well with technology. I'm Anne Trager, a human potential and performance coach.

Fabrice Neuman (00:09)
And I'm Fabrice Neuman, a tech consultant for small businesses.

Anne Trager (00:13)
This is episode 19, which we recorded on July 13th, 2025.

Fabrice Neuman (00:19)
Human Pulse is usually no longer than 30 minutes, so let's get started.

Anne Trager (00:23)
So Fabrice, you raised a really good question the other day: are we getting AI all wrong? And I think you're not alone with this question. I'm hearing now, as I put my ear to the ground, that most people, from CEOs to individuals, are using AI as more or less a glorified search engine, which to my mind is not using it to its full capacity.

Fabrice Neuman (00:47)
Well, probably not. The thing is, there are lots of promises about AI. And my question came after using ChatGPT almost exclusively for a while. The point for me was to use just one tool exclusively, to see whether it could provide me with better answers by basically learning who I am. And my conclusion is that I don't get that.

I don't get that result and it's very frustrating. There are two ways of looking at it: is the tool not good enough, or am I using it wrong? Because basically my first conclusion was maybe I don't need to pay for ChatGPT with all the Plus advantages, because I don't get the results I was looking for. And when I told you that, you told me I wasn't using Spaces — or Projects, as they are called in ChatGPT — which is one way to create more specialized areas for one particular context. And I don't use that much. I tried it without getting such good results either. So I wonder. You seem to use that particular feature more than I do — Spaces, because you use Perplexity more, or Projects as they're called in ChatGPT. So how do you do that?

Anne Trager (02:06)
So you're right. I do use Spaces in Perplexity. So I have different Spaces which have different background information and different instructions. And I attach files to them so that there's some further information. And what that allows me to do is to not have to prompt all the same stuff every single time. So I use that. You know, sometimes, and you're right, sometimes it just comes up with crap because what AI does is it juxtaposes different things and ideas and it doesn't have any discernment. It just juxtaposes. And we've talked about this before. So for me, it still helps me to think differently because I really never would have thought of putting those two ideas together or of that weird metaphor it comes up with. Not that I would necessarily use that metaphor, but it sort of knocks my brain out of this regular space. And maybe it makes me smile or it makes me chill a little bit and think about things differently. So, that's what I'm finding to be really, really interesting right now. There's always value in thinking differently.

What I'm doing now is moving towards more challenging prompts: really structuring prompts, having different layers of prompts, asking for different perspectives. Rather than asking for answers, I put things in and then ask for different perspectives or responses, or what am I not seeing here, or, you know, what are three other ways of looking at this, or whatever. Those kinds of ways of prompting. And that's getting me more interesting information. I still don't come up with anything I can use right out of the hat, so to speak. I really have to take it and then go do something with it in order to create something of quality. And so I pay a lot of attention to the way other people are prompting as well.

And so I heard a prompt recently on an HBR IdeaCast podcast episode, which was interviewing a person named Alexandra Samuel, who talks a lot about using agents and using assistants and so forth. And one of the prompts that she mentioned in this episode was, give me 10 reasons that my current strategy is a terrible idea. So I actually can't wait to try that. I'm kind of scared to try that because, wow, what's it going to tell me?

And it will give you various perspectives and then I'll see what happens. Well, I'll let you know when I actually get the courage to try that because I don't know if I really want to know. I kind of like being right.

Fabrice Neuman (04:42)
Hmm.

Well, I understand. Yeah, I understand you don't want to be pummeled by your AI saying you're all wrong, which is not going to happen, I think, because as we know, AI chatbots are programmed to be nice — often, more often than not, too nice. You know, they don't want to fight with their users. But what I hear from that is that instead of using it as a glorified web search engine, as you said at the beginning — and I still think they're pretty good at that in some respects — it's to use the AI chatbot as a sounding board.
It really talks to me, if you will, in many respects, because this is one of my preferred uses of chatbots, which is to use them with the voice mode, as I've often said here. I like to talk to chatbots because it's very natural and you can have a back and forth, and they're programmed to ask you questions to go further into the conversation. Sometimes a little too much — it seems that the chatbot is really eager to further the conversation, which is bad sometimes.

So, I used to do that with ChatGPT and I changed because — as a side note — their latest voice mode is unbearable to me. You know, they wanted it to sound so human that they made it add more ums and ahs and hesitations, even more than I do while doing this podcast, as you sometimes tell me. So I switched away from ChatGPT, mainly because of the voice mode. But I think it's a very valuable tool used this way, because first of all, you can do it while doing something else, like you talk, and I also use the interactive video mode more and more. And I did that with the car, for example: I was looking for an option in the entertainment and screen settings system and I couldn't find it. So, while showing it the screen, I basically said, here I am, I'm looking for this feature I can't find — where is it?

And actually, to me, the interesting point is that the feature did not exist in the way that it could be turned on or off. It just needed something else to be turned on so that the feature I was looking for would work, which was to have the right side mirror tilt down when putting the car in reverse so you could see a little bit better.

Anne Trager (07:36)
And you needed that to have the entertainment system work?

Fabrice Neuman (07:36)
And in order to... No, no, no. I needed the mirror settings set with the driver's side on so that the passenger side could tilt down. It's not written anywhere. I couldn't find it. And it helped me do that through conversation and showing. And that I find fantastic.

Anne Trager (07:55)
Okay, okay.

So that's really a fascinating use of AI. I think if we go back to that original question — are we getting it all wrong? — I think what we're getting wrong is to think that generative AI, or AI, is the be-all and end-all, when actually it is a tool that can respond to specific situations. And the more specific we get about it, maybe the more useful it can be. That all takes a little bit of an investment of time. You're intriguing me now, because I don't use the live voice conversation at all. I haven't tried it yet. I don't know, it means I have to do something on my phone, and I get really annoyed immediately when I have to connect something else to my phone, so I just…

Fabrice Neuman (08:46)
Heh.

Anne Trager (08:49)
…don't do it. That's another issue. I do not know if AI can actually resolve that.

Fabrice Neuman (08:49)
But to me, that's another issue.

But to go back to the main topic: to me, it's the way to use it as an assistant. And Alexandra Samuel, whom you quoted before, talks a lot in that podcast episode about using AI as an assistant. And it wasn't very clear to me when I listened to her, but to me, this is the expression of that.

Anne Trager (08:56)
Yeah.

Mm.

Fabrice Neuman (09:12)
It's an assistant. You ask questions: how can I do that? Or do this for me, or something like that. And in a way, it's an assistant that can help you only when you need it, because you can cut it off whenever you want.

It's this assistant way of using it that provides me with the richest results, if you will.

Anne Trager (09:33)
How interesting, how interesting. Yeah, I think that, you know, that's where we're going more and more. And what makes it interesting is that everybody can sort of create their own assistants. And I invite people to listen to that podcast, because she mentions how she has built some assistants herself.

And then, so after you asked the question, I went around and asked a few people how they're using it. And I thought I'd share that here. So first I spoke with a writer. She does nonfiction writing, and this is how she uses it. She uses it for research.

She uses it for outlining and organizing information — you know, like throwing in an interview and saying, organize it, don't change anything, but organize it by topics. Because of course, when people speak, they go back and forth. They don't always speak about the same topic at the same time, for example. And she also uses it to apply her style guide.

So whatever it is — you know, put a hyphen there, don't cap there, cap there, whatever the style guide says — she doesn't actually have to do that anymore. So I think these are very specific uses that can be very effective. So that's one example. Another example is a business owner I know who really uses it, who has trained their ChatGPT with their entire backlog of everything they've ever written and all of their services, put it all in there, and speaks back and forth with it while exploring ideas, and is very satisfied with it. Very happy with it. It's got a name. I actually know two of them who do that and are really, really satisfied with it.

There's another tool that I have used: I am a certified Meet Yourself Axon coach, and Meet Yourself is a really, really interesting assessment that you can do that's based on your DNA and a whole bunch of other things. It's really beautiful. So as part of that, I have access to the Meet Yourself bot, which I can ask questions when I'm debriefing other people and so on and so forth, in order to get more information about the genetics behind whatever it is that I'm looking for. And I find that to work very well. It has very set documentation behind it. The data is delimited and verified.

I can have access to that data and ask questions of it. I find that to be useful. And all of this gets me thinking: what if I were to make a... coach bot — would people like that? If I were to put in all of my writings and all of my thinking and the tools that I use and so on and so forth, would people use it? I know a lot of coaches are doing this, or at least there are services selling this to coaches, but I don't know if people are really using it.

However, this is where it's going: how you can distill that personalized information. I also saw another service that does this for websites, where it pulls the website information together and you can speak to the website to get answers from it. So there are all kinds of ways it's being used. And again, I don't know how useful it is. I honestly believe people come to me for coaching for the actual human interaction. Not that having all of that information somewhere else wouldn't be useful.

Fabrice Neuman (12:31)
Yes.

Anne Trager (12:46)
I don't know. I don't know. But anyway, basically it seems that for now, the more you feed specific information in and contain that information within some walls, the more useful it could be. So it would be my coach bot with my curated information, rather than the whole world as a coach bot, which is not curated and could...

Fabrice Neuman (12:47)
Yeah.

Anne Trager (13:06)
You know, how trustworthy is that? So anyway, the more contained it is and the more curated it is, maybe the more useful it is. That's my question. This is why I like using NotebookLM, which allows you to upload and contain a specific database of information and ask questions of it.

Fabrice Neuman (13:19)
Hmm.

Well, I'll come back to that. I think there's a very good point you're making, which emphasizes the fact that I was not using ChatGPT the right way in many cases: you want to contain it. So if you want to create a chatbot that reflects a person, or is set to a certain amount of data, it requires work.

And this is why, for me, ChatGPT didn't reach the result I expected: it doesn't know whether it should use this set of data or that one, and not mix them. An example comes to mind — I think I told you about this on this podcast already — which is that at some point, after a few months, I asked ChatGPT to write a bio about me with all the things it knew about me.

And it added a few things — a couple of sentences about piloting a plane, which has nothing to do with me, but has everything to do with our daughter, who is a pilot. And the thing is, it's because at some point in time, I had asked ChatGPT a few questions about, you know, planes and piloting and what have you, and aero clubs and stuff like that.

And it didn't make a distinction between this not being about me and all the other things that I do in my life. And so this is where AI assistants are probably not smart enough yet, because it couldn't know whether it was related to me or not. So you have to tell it, either by including it in the prompt — saying that this is a question not related to my profession or to me personally, but to somebody else — or you have to create those Projects, those walled gardens, if you will. And it does require lots of work. And I'm not sure I'm willing to do that. And this relates to what you just said about NotebookLM, which is a Google tool that I don't think we talk about or use enough, because it's based on that particular way of thinking: you create notebooks that you feed with different sources, whether they're text or videos or audio files, whatever you can throw at it. And then you can use that basically contained reservoir of data…

Anne Trager (15:49)
Hmm.

Fabrice Neuman (15:50)
…asking it questions, creating — as we know, and that's amazing — audio podcasts out of those sources, so you can actually hear about them and extract data from that particular set. That's probably one way of using AI at its most efficient.
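To make the "contained reservoir of data" idea a little more concrete, here is a minimal sketch of the same principle in Python. This is not how NotebookLM works internally; it simply gathers a curated folder of text sources and wraps every question in an instruction to answer only from those sources. The `ask_model` call is a hypothetical placeholder for whichever chatbot API you actually use.

```python
from pathlib import Path

def load_sources(folder: str) -> str:
    """Concatenate every .txt file in a curated folder into one context block."""
    parts = []
    for path in sorted(Path(folder).glob("*.txt")):
        parts.append(f"--- Source: {path.name} ---\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

def build_contained_prompt(question: str, sources: str) -> str:
    """Wrap the question so the model is told to stay inside the 'walled garden'."""
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say you don't know.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )

# Hypothetical usage -- 'ask_model' stands in for your chatbot API of choice:
# prompt = build_contained_prompt("What does my style guide say about hyphens?",
#                                 load_sources("my_curated_notes"))
# answer = ask_model(prompt)
```

The point of the sketch is the containment: the model only ever sees the curated sources and the question, which is the "walled garden" idea discussed above.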

Anne Trager (16:08)
Yeah, I think so. And I'm with you. I don't actually have the patience or the thinking preference to actually dig into all the details and go right to the end. I'm very attracted by the idea of having these assistants and then you actually have to put it together. And I'm like, yeah, right. OK, what's the next big idea I can explore? So the day that I can say…

Fabrice Neuman (16:26)
hehe

Anne Trager (16:28)
…to my AI, hey, I would like you to develop this AI for me, and trust its judgment and its discernment. But as I said earlier, it doesn't actually have judgment and discernment. And it makes me think about how beautiful we humans are, because actually, if you were having a conversation with a human, they would have picked up…

…that the piloting is not your thing, because of the other context around it and the other things they know about you. I mean, the human mind is really quite incredible, and you don't actually have to say, don't make stuff up here, and don't do this, or do this, or only talk about this — you don't have to specifically say those kinds of things, which is where it becomes hard to interact with these tools as of now, because...

Fabrice Neuman (16:52)
Exactly.

Yeah.

Anne Trager (17:13)
It doesn't have the same reflexes wherever they come from. And I don't know enough about how we think as humans to say, where does that come from? I think it's a really hard question. I think that's what people are trying to create and so far it's not there yet. However, it is still very useful and the idea is to really break down your work into parts.

And to find the low-stake grunt work that you don't want to be doing and have the assistant do that. And I'm not going to say agent because I'm not exactly sure what an agent is. And that's a conversation for another time, I think. But to have your AI assistant do that, like your research or extracting information from the interviews, which I mentioned earlier, and to put it into a process where you, the person, can then do the added value stuff.

And again, it sounds like a lot of work to put that together, and maybe it is a bit of a pipe dream, but I really would like it to work. And I have a hard time figuring out what those low-stakes parts are.

Anyway, so that's where I am on the topic.

Fabrice Neuman (18:13)
Yeah, yeah, and we discussed that many times, and I try to find those little parts as well, to break my tasks into little parts, and every time I do that, I fall into that rabbit hole — because I guess one of the main goals is to try to automate things.

And so, in order to automate, you need to be able to tell whichever tool you use those little process parts — basically, it's if-then-else rules, right? If this, then do that, or do something else. And maybe it's just my brain, but I can't do that with what I do on a daily basis. It never worked. But…

Anne Trager (18:50)
Mm-hmm.

So, I would like to propose that both you and I can't do that because all that we do is add value. Anyway, just saying, you know.

Fabrice Neuman (19:03)
Yes, okay, thank you. And thank you for listening.

But maybe not. I would add to that that it's my experience. I've been working in tech for 30 years, and I've been hearing about the search for automation for 30 years as well. And...

Anne Trager (19:12)
Yeah.

Fabrice Neuman (19:23)
As far as I'm concerned, it never works. I really doubt it, you know. Every time I go watch a YouTube video of somebody saying, I automated this or that and now I do this in 10 minutes instead of two hours because it's all automated, I have to admit, every time I find a problem — like the person doesn't say exactly what he or she does.

Because every time I try to do that — it's my bias, obviously — you need to make sure that the machine does exactly what you want it to do. For me, it's the same thing as trying to automate your wake-up time. But I never wake up at the same time. And you try to automate, for example, with...

Anne Trager (20:07)
Yeah.

Fabrice Neuman (20:11)
…connected devices in your home. Okay, so you turn on the heat at such a time and then you turn it off when, blah blah blah. But then you have holidays, you have people coming over, and all the automation processes you put in place, you have to stop, because something happens. And if you try to create an automation that takes into account everything that can happen, all the details...

Then it's not automation, it's just life and it's messy.
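Fabrice's point about if-then-else automation can be sketched in a few lines. The example below is hypothetical (no particular smart-home product is implied): a "simple" heating rule that keeps accumulating special cases for holidays, guests, and irregular wake-up times, which is exactly where rigid automation starts to break down.

```python
from datetime import time, date

def should_heat(now: time, today: date, holidays: set, guests: bool, awake: bool) -> bool:
    """A 'simple' if-then-else heating rule that keeps growing exceptions."""
    if today in holidays:   # away for the holidays: don't heat an empty house
        return False
    if guests:              # people staying over: keep it warm regardless of schedule
        return True
    if awake:               # woke up early or late: the schedule no longer applies
        return True
    # The original rule everything else is bolted onto:
    return time(6, 30) <= now <= time(22, 0)
```

Every new real-life situation means another branch, which is the "it's just life and it's messy" problem expressed in code.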

Anne Trager (20:41)
That's so interesting. And I think we come back to the idea of the walled garden, where automation works. I mean, I was thinking of automotive assembly lines, which are all automated and work perfectly. But there's just one thing to do. It's very contained, and you don't have the outside environment coming into play as you do in life. So in some cases automation is really...

Fabrice Neuman (20:53)
Yes.

Anne Trager (21:05)
…beautiful and effective in these very contained situations. And then as soon as you have any semblance of spontaneity coming into the picture, it becomes really hard. And it made me think of something else you were saying: we've talked about habits before here as kind of the way the human…

…mind automates things: we do habits. And I work with a lot of high performers who struggle with their habits, not because they can't create habits, but because their habits are too rigid for a human being. And it becomes impossible. So they say, I'm going to go running every day for 25 minutes, and then it rains, or they have a sore ankle, or they slept badly, or they had too many glasses of wine the night before — whatever humans do, we all do that kind of stuff.

Fabrice Neuman (21:41)
Yeah.

Anne Trager (21:47)
And so I'm constantly talking about having, as a human being, that flexibility in your automation process to do something else when your initial automation doesn't work. So the day that AI can have that kind of flexibility, it'll be really cool. But I'm not seeing it yet.

Fabrice Neuman (22:06)
What a great rabbit hole: automation versus flexibility. It's a whole new can of worms. I can't wait to talk about that.

Anne Trager (22:10)
Yeah.

I don't know. Yeah, we're going to go over time if we do that, because we're coming up to the end. So I'm going to wrap it up with a few things that I've heard about dealing with your AI that you may or may not want to try, to see if you can get it to work better for you — and maybe let us know how it goes. So I keep hearing: treat it like a relationship, like you would an assistant or something like that, rather than…

Fabrice Neuman (22:19)
yet.

Mm-hmm.

Anne Trager (22:39)
…just a tool, some people say — including naming it and being polite. So the politeness: I've heard some people say, well, just in case it does become conscious one day, maybe it will remember that you were polite to it. I think that's a really good reason to be polite to your AI. You never know. It's the what-if.

Fabrice Neuman (22:55)
Even though, as we mentioned before, it's not very good at remembering every tiny detail, so maybe not.

Anne Trager (22:59)
Yeah, so maybe not. So some people have great success naming it, for whatever reason. Asking it for advice on how to use it is a great piece of advice that I've heard. Actively pushing it to challenge your thinking, because, as we said, AI can be a sycophant: it will just flatter you to death unless you actually push it to challenge you and your thinking. And then, of course, don't ever make decisions based solely on AI, because it just doesn't make sense. AI isn't there yet.

And the other thing that I think is really important, which goes back to being polite, is that we have to keep practicing being with other human beings. Because when you have somebody who's constantly flattering you, telling you you're right, and answering all of your questions without any hesitation, without any emotion, without any, you know, grumbling, or wait, let me have my coffee, or whatever…

Fabrice Neuman (23:55)
Hmph.

Anne Trager (23:59)
…it's easy to become impatient with other humans. And that's kind of sad.

Fabrice Neuman (24:04)
Yeah, that's very…

Yeah, so to me, basically, it means that we go back to the main advice we can give to people, reminding them it's just a tool. It can help us, it can enrich us, but it's adding to the list of tools we already have available to us. And I don't think we can use it any differently for now. We'll see where it goes, but for now it's just another tool.

Anne Trager (24:34)
Absolutely. Well, so there we go. Thank you for joining us today. Please do visit humanpulsepodcast.com — I'll say that again, humanpulsepodcast.com — for links and past episodes.

Fabrice Neuman (24:50)
And please leave a rating and review wherever you listen to your podcasts. It helps other people find us.

Anne Trager (24:56)
Thanks for sharing this with someone who, well, is getting AI all wrong. And see you in a couple of weeks.

Fabrice Neuman (25:00)
Bye everyone.