
Fear, Trust, and AI

The language experts from maslansky + partners take on the smartest, savviest, and sometimes stupidest messages in the market today. CEO Michael Maslansky and President Lee Carter bring their experience with words, communication, and behavioral science to the table — along with a colleague or client — and offer up a “lay of the language.” Their insight helps make sense of business, life, and culture, and proves over and over again that It’s Not What You Say, It’s What They Hear™.

How can companies use language to overcome fear and build trust in AI? In this episode, Michael Maslansky, Keith Yazmir, and Robert Ledniczky dive into the rapidly changing world of AI and explore how companies should be talking about it. They discuss widespread concerns and excitement surrounding AI to set the stage for recommendations on how to generate trust. By narrating the AI story with a focus on benefits, responsibility, and familiarity, they shed light on how companies can successfully communicate about how they’re integrating AI into their business.

Listen below or on your preferred streaming platform:

LINKS MENTIONED IN THE SHOW

Michael Maslansky’s book, The Language of Trust

maslansky + partners newsletter

maslansky + partners LinkedIn

maslansky + partners Twitter

TRANSCRIPT BELOW

Episode 309 – AI

Michael Maslansky:

They said, what? Welcome to HearSay, a podcast from the language strategists at Maslansky and Partners, where we give our take on the strategy behind the smartest, savviest, and stupidest messages in the market today and what you can learn from them. Our philosophy is it's not what you say, it's what they hear, and that's why we call this HearSay. I'm Michael Maslansky, CEO of Maslansky and Partners and author of The Language of Trust. And I am joined today by two of my clever colleagues: my partner, Keith Yazmir, and Rob Ledniczky, Senior Director and a longtime member of our team. Hello, gentlemen.

Keith Yazmir:

Hello, hello.

Robert Ledniczky:

Very excited to dive in and chat about some AI here.

Michael Maslansky:

I'm sure that there's some silly AI joke that I could make about not having written anything in here, but I'm going to pass on that and jump right to the meat of this. So AI, generative AI, machine learning, natural language processing. There are a million names for it, but I think you have to be under a rock not to know that this is a thing. For some people, it's the next big thing. For others, it could be the last big thing. And that's what we're gonna talk about today. We've got this incredible new technology. It's created a lot of fear and concern. It's also created a ton of excitement. We're gonna dig into different aspects of what it means, of what companies are doing today, and what they should be doing to talk to their different stakeholders about it. So gentlemen, let's start with the fear. At a societal level, we're hearing a lot about this existential crisis. But what are people really afraid of? And what are companies really doing to address these fears?

Keith Yazmir:

So one of the interesting things going on out there is that you hear a lot of people who are at the forefront of AI raising the biggest alarm. You have people who were part of creating this technology saying, this is really dangerous, we need to make sure to rein it in. You have other people who have quit their jobs and are out there kind of ringing bells. There are several theories of what's going on here, but I think many of them go back to the axiom, at least in politics, that when someone's doing something that doesn't totally make sense, ask yourself why they're doing it. Don't just assume they're being irrational and not making sense. And there are two theories right now in terms of why AI companies are treating fear the way they are. One is that, for investors, you want to be the humongous thing that's coming, potentially really scary, changing the world. You want to be that technology that is going to be like the printing press, which some people are out there crying about. If there's a little fear out there around it, for Wall Street that's not such a bad thing. Secondly, though, the reason they are talking about the dangers is because it's also smart business to be telling Washington to regulate you before Washington decides to regulate you. You end up on the side of the angels, realizing people in Washington don't even understand what you're doing and it's going to take an awfully long time for them to catch up to it. So there are a lot of smart reasons why the actual actors in this drama are the ones out there calling for regulation.

Michael Maslansky:

So, if we talk about regulation as a message, not as a substantive idea, like, what’s the benefit of actually just talking about it aside from the action that they’re looking to get at?

Keith Yazmir:

Well, if I want you to trust me, I can either say, Michael, trust me. Or I can say, Rob, put some rules around what I'm saying and doing so that we can be doing this fairly and responsibly. And if I say that to Rob, Michael, you're much more likely to trust me. So there's a very powerful communications piece behind the act of these seven AI leaders who recently went to the White House and stood for photo ops with the president, saying, yes, we are here to talk about responsible development and innovation and regulation.

Michael Maslansky:

And so Rob, what else are tech companies talking about in terms of trying to address or anticipate this fear that people have?

Robert Ledniczky:

Yeah, I think as we think about the fear element, there are two pieces of it. There's the individual piece, which is, you know, how does this impact my job, how does this impact me and my future and my career. Then there's the second piece, which Keith touched on, which is the broad existential Terminator fear: are we creating the last thing that's going to destroy us all? I think a lot more of the communication we're seeing from the tech folks is on the latter, which is more of a reason not to get in the way of AI development and stop it in its path. There's some interesting communication happening there when you look at what a lot of these AI leaders are saying about limiting AI development to a certain number of players who are deemed responsible enough to carry this forward. There's an interesting counterargument to that coming out of a lot of the smaller tech and VC world, the Marc Andreessens of the world, who are saying, you know, that just creates an oligarchy of AI that is going to stifle development, and that not making it open source is going to lead to power concentrating within a few companies. That seems to be where tech companies are leaning the heaviest from what I'm seeing from a regulatory standpoint. But that's very different from the language that needs to be used to make the population more comfortable with the former piece: is this going to come and take my job? Is this going to render my job futile or useless? That becomes a lot more relevant as we get into some of the functional considerations of talking about AI.

Michael Maslansky:

So, you know, we're talking about this idea of AI at the societal level, this kind of existential threat, this fear. But AI has been around for a long time, right? There are a lot of products that we use today that we don't think about. We don't really question whether or not AI is a problem. It is not so much AI, the thing itself; it is the pace at which we're getting to general intelligence, or artificial general intelligence, that all of a sudden people have woken up to: the risk that all this science fiction could become reality. And so on the one hand, you've got tech companies who have made, and I think are making, all of their billions based on AI at some level pre-ChatGPT. All of a sudden the genie's kind of out of the bottle, right? And now the thing that nobody really questioned, that everybody thought was a good thing, can they put the genie back in the bottle, or keep it tied to the bottle enough that regulation doesn't come out and restrict much more of their development than the things that they were already doing pre-ChatGPT? So I think they have a fear at that level. But then there's this question that I want to get both of your takes on: this idea of responsible AI. What does that mean? What are they trying to do with that language, and how effective is it gonna be?

Keith Yazmir:

Well, you see a lot of headlines around that. Again, it's something that the administration was talking about in their press releases around having these seven AI CEOs come to meet. It's a fascinating shift from 30 years ago, when it felt like innovation and progress could do no wrong, right? We were all into what was going on with the web and with everything kind of moving forward. And now things have suddenly flipped around and we have this term, this modifier, responsible, in front of everything: responsible innovation, responsible AI. Michael, you've been working a lot in completely different fields, in energy and ESG, talking about responsibility. It really does feel like it fits into a broader societal focus and concern that perhaps all of this great progress that we've been experiencing in technology over the past 30 years has some downsides to it, and there are concerns to be had. And this term responsible kind of tries to flip that on its head. It says we are doing this, but we're doing this like good adults should, and we're looking out for the good of society, which of course from the communications perspective could have some truth to it. Might not, but I think it's a very smart way of labeling what you're doing. It says, I get it. There are concerns. This is potentially harmful. We're doing this the right way, though.

Robert Ledniczky:

Yeah, and I think it's interesting to think about, is it AI itself that is being developed that is responsible, or is it the process to develop AI that is being labeled and described as responsible? And just talking about responsible AI seems to capture both of those, right? Keith, to your point, it's the input: we are developing this responsibly, we are doing this with good intentions, and we are maybe not quite so sure of what the outcome is actually going to be, what we are actually creating. Versus this idea of the output, the actual technology itself, being responsible, which is where you start to get into questions of, you know, will AI continue to ingrain certain biases within society? Will it run into issues of spreading fake news and misinformation in election cycles? How do you create something that is itself responsible, especially when you're talking about something that is artificially intelligent and that we are building to make calls on what is good and what is bad? I saw some interesting experimentation the other day with one of the generative AI art platforms, DALL-E or one of the others, looking at the images that were created when certain prompts were put in. If you put in certain stereotypes, it's kind of regurgitating those, which gets into a whole sticky area of, OK, how do you create something that is in itself responsible and positive? And to what extent is it enough to just have good intentions going into it when you're developing something like AI?

Keith Yazmir:

There's so much going on there. I love that differentiation between is AI itself responsible versus are the creators of AI working in a responsible way. There's another idea out there: why would all of these big companies be going out and talking about how AI is potentially going to turn us into a society run by computers? Because they're perfectly happy with us having this conversation, because it's out there, it's pie in the sky. It's certainly not happening yet, or really anytime soon. Whereas there are a million documented issues with AI-driven features that are discriminatory. You have profiling algorithms out there resulting in real harm to real people. You have recruiting algorithms that are AI-driven that are resulting in inequality. You have copyright issues that are growing bigger and bigger by the day. But as long as I have you talking about the Terminator, nobody's worried about these things, because that's going to be far worse and here we're going to figure our way out. Now, I don't mean to be overly cynical there, but again, from a communications perspective, there's a lot going on in the fact that the conversation is being held at this very high level, which is perhaps not as harmful to the industry as one might imagine, and in fact in some ways somewhat helpful. So this responsible piece is fascinating, right? Because the whole question with AI is, is it sentient? When will it be sentient? Can it be sentient? It's really interesting from a linguistic perspective when we start applying these fundamentally human adjectives to it. If we're talking about AI being responsible, what does that even mean? This machine is responsible? This set of algorithms and processes and memory storage is responsible? It feeds right into the fears that this thing is sentient, that this thing is more human than perhaps we want it to be. So there's a really interesting tension there, Rob, that you kind of led us toward, and I think we're gonna hear a lot more of that kind of language, and almost a battle between do we call this a machine, do we call this thoughtless, or do we use language that actually anthropomorphizes what's going on in AI.

Michael Maslansky:

So let's make this a little bit more tangible by putting ourselves in the seats of tech companies and clients who may be trying to figure this out: what is the strategy that they take? I think we can take it as a given that you are going to have critics of AI, either former programmers or developers of large language models who change their minds and want this to stop, or just critics from the outside who will highlight all of the problems. And there are gonna be problems, as with everything. Like with cars: some crashed, which didn't mean that we wanted to get rid of cars. We're gonna have a lot of that with AI. But let's assume that inside these organizations, we've got people who believe in the promise of AI and believe that there are ways to control the development of AI so that it can be harnessed more for good than for evil. So they've set up this platform. They talk about responsible AI. They are then trying to discuss what it is that they can do or can't do or should do, or what's happening out there. What do you recommend they focus on from a communications perspective, to try and tamp down the fear without sounding like they're describing something that's completely unrealistic?

Robert Ledniczky:

As we think about what we would recommend these tech companies say, how should they be going out and talking about the development of AI in a way that doesn't continue to breed fear the way it maybe already has? I think we're seeing a lot of this language around responsible AI. I've been interested to observe the differences between Microsoft and Google, with Google and Sundar Pichai coming out and trying to talk about intentionality, but maybe in the wrong way. So, you know, talking about releasing Bard in a limited way, or as a limited experiment. And there's a broader context that you can't ignore as you're communicating about this, which is the business context: ultimately this is seen as a battle for market share, a battle over who can develop the best tool the fastest. Just from a business perspective, there seems to be this sense that there's a great race, right? Maybe even in the geopolitical sense, we are in a great race to develop AI. And I think that's engendering fear in itself. There's this sense of, we are trying to develop this too quickly, we are trying to develop this too fast, and we will lose control of it, and it will run away from us. To counter that, I think the route that Google is going, this idea of a limited way, is the right idea, just perhaps the wrong language. I think it's that word I just mentioned, intentionality. It's about being intentional in the development of it. And a second word that I'd introduce that's important here is collaboration. The more these tech companies can be seen to collaborate with one another, pooling resources and working together, the more it will help to undercut this idea that competition is just going to drive it into the ground and create something that we ultimately can't control.

Michael Maslansky:

So I think these points are really interesting. From a meaning or implication perspective, it seems like there are a number of situations where tech companies are caught trying to thread a very difficult needle, right? To the extent that it's important to position this as a race or a competition for our future, it's basically reinforcing the notion that this thing is so important and so powerful that the winner is gonna, you know, if not take all, take a lot, and we can't be the loser. Right? So you want to say that we have to win this, without making it seem like the stakes are so high and the impact so powerful that we really should be worried about anybody winning it. And then, on the other hand, there's this idea of collaboration and control. You've got this situation where there's been a lot of talk about who is responsible enough to have it or to develop it. And at the same time, as soon as you start to limit it to certain people, you're basically saying that you're creating an oligarchy of a set of developers. So wherever you turn on this, there's a tension. And then you've got responsible AI: trying to build responsibly something that is kind of fundamentally open, if not in the development itself, then in how it's used, right? Because all the jailbreaks that we've seen so far, these prompts that get the AI to break out of its constraints, show the limits of that control. When we talk about responsibility, Keith, and you raised a bunch of examples before, whether it's how businesses operate responsibly or how we treat the environment responsibly, it's often communicated in the context of there being one actor who has control, who is saying that they're gonna operate responsibly. Here we're talking about AI, where really no matter how much control we impose, there are gonna be limits on control. Can you even talk about it as responsible when you know that people are gonna be able to interact with it, and are gonna do everything that they can to try and make it irresponsible?

Keith Yazmir:

Right. It's a really interesting area. I think it's easy for us, because we are language strategists, to end up talking about these things in ways that make us look like we are minimizing the issues or being overly cynical, because we're talking about, oh, the language to use for it. But I think here, and you pointed to this earlier, Michael, there really is a great opportunity for actual responsibility and the language of responsibility to converge. I certainly don't mean to suggest that the companies driving AI right now are doing so in irresponsible ways, nor that they don't want to be doing this right and appropriately. I think that the language around responsibility is actually fascinating. We talk to so many of our clients and partners in every industry about how, no matter what the reality is of the complexities of their world, when it comes to the general public, when it comes to the state houses and the capitals around the world, when it comes to customers, when it comes to employees, the buck stops with whoever's name is on the product or on the service or on the front of the door. And so getting back to your question about how to communicate: people are doing this really well right now. The Sam Altmans of the world. I have been very, very impressed by how the AI industry at large feels like it is approaching this from a communications perspective, with its language, very differently than 20 years ago when social media was beginning to explode, right? It very much feels like the AI community is being much more sensitive to external perceptions, to real potential downsides, to what's going on in their industry. And so I think the language of responsibility, the fact that they are not afraid to call out that there are some potential concerns here and some things we as a society need to really be focused on, not only is it appropriate and true, but from a communications perspective it's also very, very smart, because they're setting themselves up to be on our side, right? The consumer side and the regulator side. They're saying, let's work together on this. Rob, you talked about partnership before, and I think that's a big theme of how they're communicating about this. They say, don't let me go out and just do anything I want. Let's regulate this in a smart way. Let's work in a responsible way and let's make sure that we are actually serving society.

Michael Maslansky:

Okay, so let's take it down from the high level, the societal level, to more of the functional level. Because as we talked about earlier, AI already is, if not everywhere, in many places and many products. And certainly companies, not just tech companies, are out there jumping on the AI bandwagon and trying to integrate AI into their proposition and their products and their services and their operations. What is it that they need to think about? Again, let's put ourselves in the seats of these communicators or these executives at companies that are debating whether and how to integrate AI. What should they be worried about and what can they do about it?

Keith Yazmir:

Well, it's interesting because I think there are two very divergent approaches. There's one, for example, in our own industry, marketing and communications and consulting, where you see every company on the planet talking very loudly about how, oh, we have AI-driven tools now and we are leveraging AI, and it's really being used as kind of a bright and shiny object. It's the latest thing; people are using it almost as a catchphrase to say, we're engaged, we're on the cutting edge, we're taking advantage of everything. In a more practical way, for industries that actually are using AI in very practical ways and have been doing so for some time, it's quite the opposite. It's what we talk a lot about with our clients who really want to talk about the "what." We often pull them back and say, talk about the why, talk about the benefit, talk about why people care here. To that extent, there are a lot of companies out there that are not talking about AI but are using a lot of it. Chatbots are the easiest example. People are leaning into this technology in a lot of different ways. And I would argue there that almost the less you talk about AI and the more you talk about why what we're doing is so special and unique for you, the better off you are. Because you don't raise any of these issues. You don't attach yourself to any of these debates. You're providing a wonderful service that is reacting to your individual, unique customers.

Robert Ledniczky:

I think that's so important. And when you think about what that why is, from some of the work that we've done with companies that are using AI, I really see it break into three buckets as we think about the benefits of using AI. One is AI as an equalizer. One is AI as an improver. And the other is AI as an answer to questions that we haven't been able to answer before. Let's break each one of those down through a perhaps slightly boring example, which is tax management in asset management. The use of AI in tax management is not new, and it's really starting to come through now. As we've thought about that language, there's this idea of AI as the equalizer: now every advisor is able to effectively manage tax across their portfolios, no matter the size of the client or the assets under management. You've then got this idea of AI as an improver, which is what AI is able to do for the advisors themselves. A lot of the language there is not around replacing. It's not about automating tasks. It's about empowering. It's about having an extension of your team. It's about making tasks effortless rather than taking tasks off of the plate. So you're still keeping the advisor in control; you've just got this kind of assistant, this chatbot, whatever it might be, that is able to automate a lot of these tasks, and that's the benefit. I think that comes back to a lot of what Sam Altman talks about when he talks about AI as a human amplifier. And then the third category is this really interesting piece, which is AI as an answer: the ability of AI to do things that we aren't able to do, or that would take us millions of hours to do. So as that runs into tax management and even things like risk, we've talked about the use of AI in software that allows asset managers to run 20 million stress tests a month across portfolios, which is something that wouldn't have been possible before and is now possible through machine learning and artificial intelligence. And I think that's where you start to get into some of the really exciting potential in the bigger picture: what if AI is an answer for cancer? What if it can help us in pharmaceuticals find the answers that we haven't been able to find to different diseases? What if it can help us solve climate change and get us to more powerful renewable energy sources quicker? So as I think about how we communicate about the benefit, it's really breaking it into those buckets: is it equalizing access? Is it improving the ability of people to do what they do? Or is it about being able to do new things that we weren't able to do before? And it can be any one of those three things that companies, as they're rolling out AI, can speak to.

Keith Yazmir:

And Rob, in terms of those three, do the companies who fit in each of those three buckets, do they talk about AI differently?

Robert Ledniczky:

I think that's an interesting question, right? And it comes back to your point: certain companies need to talk about AI as the shiny new thing. There are gonna be certain people and certain companies that want to be working with other companies that are using AI because it's the hot new thing. The CEO is telling them, we need to embed AI into our processes; you need to find companies to work with. In those cases, it's gonna be important to talk about the fact that you are using AI. I think we will see that disappear, much as we saw with any new technology, right? The use of cloud: no one really talks about cloud anymore. Everything is just there. You don't talk about AWS hosting Netflix; you just get the benefit of being able to access content wherever you want on whatever device you want. So I think there'll be a transitional period overall. In certain industries where AI, as we talked about, isn't necessarily new, there's less of a need to speak to it. But it's interesting to see others like Apple completely not using the word AI at all, perhaps to avoid tapping into those fears, and instead leading with some of those benefits: how is it gonna make your life easier? How is it gonna make the experience of using products more seamless?

Michael Maslansky:

Well, I think, Rob, the three that you've come up with are powerful. Equalizing: democratizing was probably the big word for a long time with the internet, the original promise of the internet. That idea of expanding access, at a time when across a whole range of topics, industries, and needs there may be limits to access, is incredibly powerful. The idea of improving, or making things easier, is a kind of universal benefit that seems to always resonate. The human condition is one where the world is getting more complex, and therefore we want the things we need to do to be easier. So that resonates a tremendous amount. And then the third one, allowing us to do things that we couldn't do before, this kind of innovation, this advancement, this progress. There are three fundamental ideas that really every company looking at product innovation with AI should be thinking about: how they can take their message and put it there. I think those are incredibly powerful principles for them to be appealing to. And we've seen that a lot as well, in the healthcare space and overall in the tech space, in terms of what people want from these tools. As you both know well, when we think about framing and the categories of frames that we use, we've got these three buckets. There's the what it is, the identity, which would be very AI focused: talking about the thing that it is, what it is that we've created. It's a little bit self-referential. Then we've got what it does, which is the function, and that's where I think these three mostly sit; they are about what it does, about the function. Then there is the category of what it means, where you ultimately translate this into the benefit of having something made easier, right? If it's easier to find information, then you become more informed, and whatever the benefit of being informed on that subject is, is ultimately what it means. I think the functional place is where we are with a lot of this communication, and I think it's really important. I think we'll ultimately see a lot of benefits down the road as well, if you get to the: it allows us to do things that we couldn't do before in healthcare, for example, and therefore people are living longer, right? I mean, that's not a function of the AI, that's a benefit of having these tools out there. And so I think that is really powerful.

Keith Yazmir:

I think there are some interesting parallels here with the transformation the automotive industry has been undergoing over the past 10 years, which is, of course, both electric vehicles and autonomous vehicles. Very interestingly, in America, the first electric vehicle to get talked about at all was the Chevrolet Volt, the Chevy Volt. And for a long time, the CEO of Chevy, I think very correctly, probably devoted 80% of their public speaking opportunities to talking about the future and investment in electric and talking about the Chevy Volt, when, of course, during that entire time, the Chevy Volt made up 0% of their profits. And it was very interesting to be along for that ride, so to speak, because once they actually launched the Chevy Volt into mass production and started trying to sell it, their advertising and marketing messages transformed. What had started out as the revolution in driving, with the lightning and all these things in their print ads, changed. As soon as they launched, their headline was fascinating. It said, "Chevy Volt, more car than electric." It said, remember guys, this is actually a car. The thing you're used to, the thing you're comfortable with, the thing you want and will pay money for. And you've reminded me, Michael, in talking about the evolution of where we're probably going here: we're talking a lot about AI now, but the more AI is embedded and used and the more comfortable people become, the more the story is going to have nothing to do with AI whatsoever.

Michael Maslansky:

Yeah, and actually, I think that's right there in the car example. What people really want is a car that's electric. They don't want something electric that happens to be a car. And we've seen that people often get that balance wrong in a lot of cases. In this case, the thing that you're selling is still the most important thing. The fact that it's driven by AI is maybe what makes it better, but don't forget what it is that you're trying to sell. Let me flip it, because we've talked a lot about the positive side, and I think there's a lot more that we could cover, but there's also a negative side to what the implications of talking about AI are in products, things that companies need to be prepared for. For one example, in some of the tech work that we've done around search and around algorithms, people don't want the internet to know everything about them, from a privacy perspective, from just a creep-factor perspective. And so all of a sudden, the question is, if you say, this is AI, we've got AI built into our systems, what do you need to protect against?

Keith Yazmir:

Well, what comes to mind for me is the focus on privacy that we've had really powerfully for probably the past five years, right? It started in Europe and now it's in the US, where every time you go to a website, it pops up saying, "You're being tracked in this way. Is this okay? Or do you want to set your preferences?" And that reminds me a lot of some of the concerns I think people are going to start attaching to AI. So I think it continues from what we were talking about with the big tech companies talking about AI and responsibility. When you are telling people that you're using an AI-powered product, there's likely going to be some small, or even not-so-small, print that explains: here's what that means, here's how that might impact you, here's what you might want to be wondering about, or ways that you might opt out, or any host of other things. I think we're gonna start seeing some of that. There's also this question about watermarks, about people wanting to know if a visual or a piece of text was generated by a human being or whether it was generated by AI. And I think you're going to start seeing companies using that as a bit of a competitive advantage, trying to differentiate by saying, we're upfront. We're going to tell you when something is being created artificially. We're going to tell you when it was one of our people doing it. And you're going to see other companies saying, we don't use AI, we have people doing this, and they're going to try to make that into some sort of positive differentiator. So I think there are gonna be a lot of different ways people are gonna be treating this. It's hard to say where that's gonna take us.

Robert Ledniczky:

I think it's just important as companies think about this, to your point, Keith, that there is transparency, that they're upfront, and that they're clear on how AI is being used and how that relates to the data that they may or may not be collecting or using about you. So as it relates to data, if you think about an example of AI-enabled customer service, where we're using AI to better process customer service claims, well, there's an interesting question in there if you're talking about that: what are you doing with my data? How are you using that? What are you putting into the AI? How is the AI using that? If you talk instead about using AI to get better solutions to customer service representatives so they can get you an answer quicker, it's the same thing. You're just being specific about how the AI is being used and how your data relates to that. So I'm always a little skeptical about the extent to which people say they actually care about how their data is used. I think there's a very broad swathe of the population that doesn't really care, that will go online and is happy to have the data collected, but doesn't think too much about it. I think it's that piece again of coming back to: are you linking it to the benefit? Are you talking about, okay, yes, we're using your data, or no, we're not using your data? And if you are, how are you using it? And what is the benefit to me of you using my data? I think that is really the piece there, rather than having too much of a knee-jerk reaction about collecting too much.

Michael Maslansky:

So Rob, I totally agree with that. I think one of the risks right now in this kind of AI craze is that people start throwing around the term and they don't connect it to what's in it for me, for the consumer. And people start to turn against that. They're like, well, I don't see why there's anything in it for me on the positive side. I know that there are things that may be negative about it, in terms of having my data, in terms of putting it out into the world, or it being fake, or misinformation, or whatever it is. And so it's going to become more and more important for companies, if they're going to talk about the AI, to talk about why the AI makes a difference. If you just said that in customer service it will reduce the number of times that I'm told that my call is important to you, or that you're actually going to know who I am when I pick up the phone in a customer service context, I'm going to be much more interested in that AI than if it's something that's just helpful to the company. So I think that's a great point. There's also something I think is an interesting dynamic here: there are going to be places where it's going to be important to talk about AI as evolutionary. It's incremental. It's a complement. It's a co-pilot, as Microsoft has talked about it. It helps. It supports. But it doesn't really do things that you couldn't do on your own; it just makes them faster, makes them easier. And then there's the revolutionary side, where it does things that you couldn't do on your own. And it may be that we really need a different lexicon to talk about each, because they are very different functions, very different sets of benefits, and very different levels of credibility that you're going to have around each.

Keith Yazmir:

That's such a fascinating point, and I know we're about at the end, but it seems to wrap up a lot of what we've been talking about. Up until now, new innovations that are introduced have always come under the rubric of a new tool for you to use and benefit from. This is the first time in history, with potentially the exception of autonomous vehicles, where the debate is more around: is this something I use, or is this something that is going to, in a sense, use me and be in charge and in control? And I think that really underlies a lot of the question marks and the emotion that imbue this entire conversation.

Robert Ledniczky:

I think the big question for me in terms of communicating about it, and we touched a little bit on this, is the extent to which it's helpful to anthropomorphize AI. To what extent is it actually useful to try to make it feel friendly, to try to make it feel like it's something you can trust? You see a lot of the names of these tools, right? It's Jasper, it's Claude, it's Amelia. They are personal, human names. To what extent should you lean into the human and lean into the intelligent, and not make a distinction between artificial and organic intelligence, versus leaning more into the technology and actually dissociating from the fear that AI is going to blur the lines, that we are going to end up in a place where you can't tell the difference between a machine and a person, and ultimately, what is the difference there? So I think there's a big question on the extent to which anthropomorphization is helpful versus harmful as we're trying to think about building trust in these tools.

Keith Yazmir:

For me, we help a lot of our clients and partners build and rebuild trust with different stakeholders. And a fundamental piece of that is always starting from where your audiences are, as opposed to trying to convince them to come over to where you are. And here, again, I think that there are a lot of actors who have been doing this quite well up until now. But that remains the most important guidance that I have: understand your stakeholders' concerns as well as your stakeholders' needs and wants, and meet them where they are. This idea of acting responsibly, this idea of we too have concerns and thus we are working to help address them as we move forward, is, I think, not only responsible in terms of their actions, but a very effective way of communicating around this and of bringing your audiences with you, whether those audiences are customers, employees, regulators, or others.

Michael Maslansky:

Awesome. Insightful, just as I suspected. Thank you, Keith and Rob, I really appreciate you joining us today. For more language insights and to be in the loop on all the other fun stuff we're doing, follow us on LinkedIn at maslansky + partners and join our mailing list at maslansky.com/connect. That is all for now. Stay tuned for more episodes of HearSay, because when it comes to truly effective communications, it's not what you say, it's what they hear.
