The Itsy-bitsy truth about handling AI and marketing

11 Apr, 2024 | The Brands Blog

This Brands Blog is based on a presentation given to members by Jonathan Gabay in March 2024. A video of the presentation can be found on our YouTube channel here.

The British Brands Group recently invited me to deliver a keynote to its members at its Annual General Meeting. My subject? FOMAI – Fear of Missing AI.

If you are one of the millions suffering from this disorder, I sincerely wish you a speedy resolution.

Treating this anxiety is similar to a programme once offered by numerous zoos to help those battling arachnophobia. Often anxious and fidgety, visitors would start by observing spiders safely behind glass. Over time, those who plucked up the courage would reach into the eerie tanks, allowing non-venomous spiders to wander over their hands.

In time, brave participants would hold a tarantula, their palms clammy with sweat. To their amazement, they often tolerated the spiders and appreciated their extraordinary natural abilities.

For many, FOMAI is a daunting, multi-eyed tarantula with one glaring eye seemingly intent on taking away jobs.

From a paralysing force to a motivating one

So, how worried should we be about AI? Is the fear merely fear itself?

In my keynote, I explored how AI supports teams.

Areas such as generative AI have had mixed success. However, the latest updates improve accuracy.

Professional training that combines technology with human expertise is helping professionals recognise AI as an enhancement to natural creativity, not a replacement.

More is good; too much is just a stomach ache

The real issue lies in the practical application of AI and related technologies.

While mindful of data security issues, managers can quickly become enticed by the substantial time and cost savings promised by AI – tenfold, twentyfold and even more.

So, they rush to implement AI in campaigns. The sheer quantity of content and the immediate cost reductions remain alluring, especially in a volatile job market.

It’s a clear case of short-term tactics being favoured over long-term strategy.

Reaching beyond average

Awareness and consistent messaging have always been critical facets of successful branding.

However, powerful messages are shaped by human insight and a deep understanding of the brand. Otherwise, a narrative may seem insincere or, at best, fail to add real value.

That can be catastrophic for brands, their teams and customers alike.

So, before taking the plunge to purge yourself of FOMAI, it makes sense to get a grip on what drives AI.

It all boils down to a simple, calculated truth: AI takes gigabytes of data, analyses it and delivers succinct answers. It calculates the most popular answers, producing (guess what…) ‘average’ answers.

At Saatchi & Saatchi, I was taught to settle for nothing less than excellence, based on the agency’s creative principle: “Best is Better.”

Nowadays, it’s hard to avoid generic marketing messages that dissolve into a dull hum of background noise. AI risks exacerbating this trend, producing bland content mixed with synthetic elements. That’s the last thing consumers want or need.

They deserve engaging brand stories that inspire and resonate with their continuous search for meaning, purpose and belonging.

From radio ga-ga to web-connected goo-goo

In a world increasingly steered by AI, young marketers are accustomed to feeding it data rather than employing their brilliant minds, often opting to press buttons rather than think critically.

This widespread trend transforms individuals into mere cogs in a machine governed by efficiency. From managers glued to analytics dashboards to creatives tethered to graphic and editing tools, every modern product pitch seems incomplete without the tagline “powered by AI.”

In advertising and marketing, creative minds are becoming confined to monotonous tasks reminiscent of Charlie Chaplin’s “Modern Times,” with each low-level operator fine-tuning a small part of a bigger set of cogs controlled by a higher-level AI “Wizard of Oz.”

It strips away personal connection, preventing individuals from seeing the impact of their work on the larger vision. As widely reported, it even pushes the dreary mechanics of work towards a crisis of identity and mental well-being.

Reinterpreting truth

In advertising, the craft of shaping complex narratives into succinct headlines is well-established. Yet, we may reduce rich, multi-layered brand stories to overly simplistic snippets, infantilising consumers and diminishing brands.

The hazards of constantly staring at screens aren’t limited to physical complaints such as neck strain; there is also complacency, akin to cognitively dissonant smokers laughing off health warnings whilst cracking open one more box of sticks. We’re happy to offload daily thinking to one of the major tech-run cognitive processing warehouses operated by Microsoft, Anthropic, Google and soon, with the introduction of ReALM, Apple.

If it sounds like I’m a Neo-Luddite opposing progress, I assure you that I celebrate technology that enhances creativity without dominating it, like predictive analytics that streamlines customer experiences.

Yet, handling technology wisely calls for critical thought, ethics, balance, respect, and integrity.

Balancing justice

At the British Brands Group keynote, I met several legal experts who specialised in brand IP and digital privacy. They pointed out that while AI is a valuable tool, it cannot supplant the nuanced experience and refined legal judgement of professionals.

Many legal practices use Microsoft 365’s Copilot to create introductory slides, design graphics and summarise documents. While AI models are adept at handling predefined variables, including those trained on legal decisions, they cannot – and should not – replace human intellect and empathy.

Law and precedents may be clear-cut, but circumstances reflecting the principles of Iustitia often sway the scales towards a guilty or not guilty verdict. (A reason why juries are a cornerstone of the British criminal justice system).

In this context, AI is a strict judge sticking to the letter of the law, whereas AI users, including lawyers, are more akin to barristers who engage the jury’s sense of compassion and wisdom.

Losing a sense of perspective

Where questions of truth emerge, issues of trust inevitably follow.

The shift towards remote working and tools like Zoom or Microsoft Teams has dulled our innate instinct to judge trustworthiness through in-person interactions.

We decipher semiotic signals through screens, which separate us from the full sensory experience of personal encounters, including sound, touch and even scent.

In this way, trust becomes disconnected, much like playing a video game or remotely operating a drone loaded with lethal explosives from the comfort of an office, far removed from the consequences of pushing a button.

It is ‘there’, we are ‘here’, delineating feelings of detachment and depersonalisation.

“What you’re seeing and what you’re reading is not what’s happening” (Donald Trump)

Several years ago, before the rise of AI, I was invited by Meltwater, the media and consumer intelligence group, to speak about Trump’s presidential campaign.

Discussions were rife with stories about Cambridge Analytica’s alleged manipulation and so-called presidential ‘honey-badger’ propaganda tactics.

With the 2024/25 elections looming, organisations like OpenAI, Google and Anthropic are keen to counteract and control such manipulation. However, evading restrictions on open platforms remains easy for those with the time and know-how.

For the majority remaining restricted by the major tech corporations’ controls, there’s a risk that AI models’ constrained responses could lead to an authoritarian approach to information sharing.

Commoditised truth

In recent months, we’ve seen the formidable influence of political social media manipulation, with terrorist groups and rogue state-backed bots using it to fuel hatred and division.

This AI-driven technology has twisted the meaning of ‘context’ to fit the agenda of those who harness AI to amplify their voice the loudest.

Our knee-jerk world demands everything NOW.

However, ‘now’ comes at a cost.

Amidst the din of voices, many react only to the most visually arresting, eloquently phrased emotional appeals, or to voices endorsed by their peers (who are also invariably influenced by the algorithms).

The implications extend beyond corporate AI to phenomena like AI-fuelled citizen journalism. Once hailed as a pillar of democracy, it now erodes the credibility of established news outlets.

In the rush for immediate content, we encounter a contradiction in news consumption.

Just as some are satisfied with the partial capabilities and truths of the free version of ChatGPT 3.5 over the full-fledged paid version, many settle for brief, complimentary news snippets over comprehensive reports.

It creates a two-tiered news brand system.

Those who invest in reputable sources like The Wall Street Journal, The Times or The Economist are treated to thorough, nuanced journalism.

The rest are infantilised with simplistic, shallow versions of the truth.

The quest for quick headlines often leads people to reinforcement bubbles that echo their biases rather than offering balanced perspectives. Popular searches and algorithms may even amplify the more extreme viewpoints of fringe groups.

Journalism once thrived on lengthy apprenticeships that cultivated distinguished careers in uncovering truths for the public good.

Now, clickbait journalism waters down factual integrity into enticing but nutritionless tidbits, feeding a cycle of content that the junk-news obese devour widely yet ultimately find unsatisfying.

Such practices easily seep into areas such as marketing communications, feeding the same cycle of churned content: mass-consumed yet ultimately unsatisfying.

Too late to go ‘cold turkey’

In March 2024, Sir Martin Sorrell cautioned marketers about AI, declaring, “Turkeys don’t vote for Christmas.” He implied that over 200,000 jobs in fields like media planning could be replaced by algorithms.

However, I’m optimistic that as some roles disappear, new ones will emerge.

What’s critical is whether these new roles will enhance or diminish the beautiful ingenuity of creative marketing.

That will depend on whose finger is swiping and pressing the buttons.

Intrinsic creative abilities such as writing, designing, communicating, developing products and understanding brand psychology empower teams to excel. These skills distinguish the exceptional from the mediocre.

So, perhaps it all comes down to confidently managing AI – just as you would handle an elegant, if not somewhat intimidating, spider (complete with those beady eyes) – with care, consideration and mindful respect.

 

Jonathan Gabay is an author, broadcaster, lecturer and branding / PR specialist

The views expressed in this blog are not necessarily those of the British Brands Group.

 

