By David Pring-Mill
The following text has been excerpted from Section 3.1 of the Policy2050 report “Generative AI for Marketing: Use Cases, Technological Developments, and Trends (2023-2025)” in order to serve as a product sample and fulfill Policy2050’s mission “to keep the most socially-relevant insights outside of any paywall.”
Increased awareness of the default AI “style” could motivate content creators to reorient around originality or even unpredictability as a qualitative metric. In the marketing trade publication The Drum, James Addlestone, Chief Strategy Officer at the performance brand agency Journey Further, proposed this type of shift in focus: “In the long run, the challenge won’t be ‘how do I appear authentic?’ We will always lose that battle if we attempt to win stylistically. Instead, we will need to focus more and more on ‘how do I write genuinely original content?’”
Not Quite There Yet
Some marketers believe there’s a dash of magic in a skilled author’s voice that jazzes up even redundant information. While many content creators don’t think this has been digitally replicated yet, they don’t rule out the possibility that it eventually could be.
The business education provider Section illustrated this in an Instagram slideshow that compared corporate slogans with ChatGPT’s copy. The AI chatbot replaced the McDonald’s slogan “I’m lovin’ it” with the oddly literal and overwritten phrase “Experience the taste of happiness.” It didn’t perform any better in financial services marketing. Capital One’s “What’s in your wallet?” slogan is more approachable than the AI’s similarly literal copy, “Empowering you to achieve your financial goals.” While Section concluded that the AI “just can’t grasp that human element of a great slogan,” there’s a counterargument that it might have generated better options with more specific prompting.
Some experiments don’t just expose the differences between professionally written copy and AI output; they also reveal the difference between professional marketers and non-professionals. When a side-by-side comparison of agency and AI output shows that the AI options are staggeringly worse, and commenters insist they aren’t, it illuminates the rarity of discernment. Many people can’t quite put their finger on whether something works linguistically and creatively, or why it works, but campaign metrics and sales data provide another source of truth in a business context.
Amy Halls, a senior marketing and talent manager at Sphere Digital Recruitment, still sees uniqueness and a nuanced depth of understanding in non-mechanical talents, writing that “emotions, consciousness, and self-awareness” all contribute to linguistic expression. At the same time, she concedes that ChatGPT sometimes “regurgitates information in a way that sounds uncannily human, in a VERY short amount of time,” and that it allows her to “work faster and scope out topics.” While Halls noted that each form of content generation has its strengths and limitations, her LinkedIn network poll, which had a limited sample size, was almost evenly divided on whether ChatGPT will eventually be able to fully replace writers and content creators. The results matched those of a similar LinkedIn poll with a greater number of respondents: in both cases, 52% predicted that yes, AI would replace copywriters.
Sales and marketing professionals were among the early adopters of ChatGPT. Although generative AI has already sparked fears of technological unemployment in these exact roles, there is an emerging contention that the technology’s use, at least in its current form, requires its own set of skills or techniques, as well as subject matter expertise or at least general knowledge of the subjects being queried or assigned.
OpenAI CEO Sam Altman has suggested that an artist will still get the best results from image generation, not because they add one magic word at the end of a prompt, but because their creative eye allows them to articulate a more complete vision. He has also concluded that there’s value in coming at the technology from a fresh angle or with a beginner’s mind, a phenomenon that is apparent when children use AI.
AI art generator Midjourney enables “multi prompts,” meaning its AI can factor in two or more separate concepts if instructed to do so with a double colon. This allows the user to assign relative importance to different aspects of their idea or requirements. In the first example provided, Midjourney notes that “hot dog” will meet the user’s expectations of a grilled or steamed sausage served in a bun, while “hot:: dog” applies the adjective to the noun and generates a dog with steam, or even sparks, coming off it.
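The mechanics are straightforward. The weighted example below extends Midjourney’s documented conventions and is illustrative rather than drawn from the report; actual outputs vary by model version:

hot dog – a single concept: the food
hot:: dog – two equally weighted concepts: a dog that is hot
hot::2 dog – “hot” treated as twice as important as “dog”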
Meanwhile, some ChatGPT users are even using the tool to prompt them – generating questions so that they can respond with their own subject matter expertise, in their own voice.
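In practice, such a role-reversal prompt might read something like the following (a hypothetical example, not drawn from any specific campaign): “Act as an interviewer researching B2B marketing trends. Ask me ten questions about my specialty, one at a time, and wait for my answer before asking the next.” The expert’s replies then supply the substance, while the AI contributes structure and momentum.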
Artistic Movements and Philosophies
It’s worth zooming out even further. These generative AI products and services won’t operate in a vacuum. Organizations often have established, or entrenched, ways of working and collaborating that may not be compatible with generative AI usage. Content creation typically goes through multiple drafts – in the worst cases, this happens without a sense of direction, consensus, or established criteria. In such scenarios, a tool might simply propel that content through permutations and compromises at a more rapid pace, but it can’t definitively say what’s right or wrong, mediate incompatible personalities, or thin out the proverbial too many cooks in the kitchen.
Generative AI, often pitched as a frictionless content creation tool, seems destined for a certain amount of friction, perhaps even an AI identity crisis. Is it primarily a personal assistant, a researcher, an artist, a muse, or an entire computational and inherently philosophical evolution, if not a revolution, disrupting certain stakeholders? For example, how will it be balanced with original, people-led creativity?
Postmodern art was, consciously or unwittingly, premised on the belief that ideas and techniques had been exhausted, could be systematically identified and categorized, and could even be scrambled to reflect a sense of public disquiet. This anxiety and its artistic responses can’t be neatly summarized, but postmodern art was driven in part by a suspicion that the machinery of capitalism and science had been geared around an increasingly dehumanizing process.
Later, post-internet but still pre-ChatGPT and technically pre-iPhone, it became apparent that another shift in cultural production had occurred. Scholar Alan Kirby declared the dawn of a “digimodernism” or “pseudo-modernism,” observing that “somewhere in the late 1990s or early 2000s, the emergence of new technologies re-structured, violently and forever, the nature of the author, the reader and the text, and the relationships between them.” This media restructuring obsessively accounts for engagement and subscription-related data, which elevates the recipient of content to the status of “a partial or whole author,” according to Kirby, sometimes resulting in “the excruciating banality and vacuity of the cultural products thereby generated.”
After citing reality TV, modern news programs, and computer games as examples of simulated or constrained interactivity, Kirby invoked VFX-heavy movies as their own modern form of reality departure, arguing that “cinema has given cultural ground not merely to the computer as a generator of its images, but to the computer game as the model of its relationship with the viewer.”
As generative AI creates its own sense of disquiet and further blurs the relationships between creators, their tools, the creators of the works those tools processed and regurgitated, and the audiences subjected to these ephemeral, endlessly circular inputs and outputs, philosophical foundations might deserve more mainstream attention. The latest generative AI developments seem to further highlight Kirby’s early-aughts concern that ephemeral digital media meant society was shrugging off its cultural inheritance and memory in favor of trance-like states composed of “cultural actions in the present moment with no sense of either past or future.”
The postmodernist idea that words aren’t static, and that language is a self-referential process in which meanings contrast, build, and function alongside each other, might also be invoked as a reason to keep humans in the loop. Humans need to feel how AI-generated creative options resonate in order to filter them initially, and then to produce split-tested content by which audiences, users, and consumers filter themselves as either in-market or uninterested. If marketers immediately put every AI-generated media asset into a campaign, it could be quite costly.
Implicit Motivations
The motivations of generative AI companies and their users may inspire another distinct set of philosophical or ethical questions. For example, NightCafe describes its user experience as essentially a way of shortcutting the artistic process: “Creating art is satisfying. It scratches an itch. It can be therapy. It makes you feel better. But most methods of art creation require skill. They must be learned and practiced, and without the skill, you don’t get to experience that satisfaction.”
This is an odd phrasing – doesn’t the satisfaction come, in part, from learning and practicing the skill? This description conflates intrinsic motivation, described in psychology as “the doing of an activity for its inherent satisfaction rather than for some separable consequence,” with the intermittent reinforcement or reward variability that plays a role in hardwiring gambling addictions and possibly other digitally interfaced behaviors, such as video gaming.
It’s also worth exploring the motivations of generative AI companies, especially if the goal is to accurately predict their disruptive effects on creative jobs. Even AI innovators, rooted in logic, have cognitive biases, which may lead them to undervalue other roles or overlook seemingly minor, yet ultimately impactful, qualitative attributes of performance.
Anyone who has dipped their toe into office politics or thought critically about it knows that there can be friction between departments, that technical roles sometimes resent the reductive communications or presentations of B2B marketing, and so on. A devil’s advocate argument could be made for whoever is perceived as the opposing force.
Yet in the context of automation, it’s worth asking: are you more inclined to think that someone else’s role can be automated than the role with which you’re familiar? If so, how might this reflect a bias and skew automation timelines?
The full report “Generative AI for Marketing: Use Cases, Technological Developments, and Trends (2023-2025)” is now available for purchase on Policy2050.com.