USC Annenberg 2025 Relevance Report

THE RELEVANCE REPORT 2025

LOS ANGELES, CALIFORNIA

NOVEMBER 2024

ACTIVATING AI. THE RACE IS ON!

When we decided to partner with Microsoft once again and devote the entire Relevance Report to the topic of artificial intelligence, I wondered if AI’s impact on communications over the past 12 months had been significant enough for us to develop 40 new pieces of content. Based on past experience, I questioned the PR industry’s appetite for early adoption of a new technology, especially one that could threaten its own existence.

Happily, my concerns were unfounded.

The results from our second annual AI survey with WE Communications revealed a leap forward in AI adoption. Fears, so present just a year ago, have been replaced by a newfound energy.

Of course, AI still presents challenges – misinformation, intellectual property, bias — all thoroughly explored in these pages. Yet, the momentum is real.

The content of this year’s Relevance Report confirms that communicators are actively adopting AI. Large agencies lead this transformation: many have named Chief AI Officers and built dedicated teams to push the envelope on experimentation and engagement for themselves and their clients. Corporate teams, though cautious due to security concerns, are growing AI’s application across every layer of their business. Meanwhile, smaller agencies and startups are tapping into AI to gain a competitive edge.

This year’s report, with essays written by Microsoft communicators, USC trustees and faculty, and board members of the Annenberg Center for Public Relations, is filled with specific examples of professionals using AI in their daily work. Edelman’s Matt Harrington talks about enlisting AI to identify influencers. Microsoft’s Steve Clayton explains the use of AI for meeting transcription. Dave Tovar writes about how AI enabled Grubhub to deliver meals to new moms. USC Marshall Professor Steve Lind previews AI tools that aid with the development of new business leads.

Other insights are sprinkled throughout these pages. WE’s Melissa Waggener Zorkin shares how our joint research uncovered a powerful trend: AI activation surges when senior leaders champion the adoption of AI. Golin’s Chief AI Officer, Jeff Beringer, pushes this notion further, urging leadership to build AI cultures of experimentation.

And our student contributors share forward-looking ideas: Michael Kittilson on the application of synthetic audiences, and Amrik Chattha on tools that can be used to counter misinformation and deepfakes.

On the flip side, several contributors address the challenges with AI. Julia Wilson writes about how algorithms can create racial bias. The Dialogue Project’s Russ Yarrow and Bob Feldman explore how generative AI may increase polarization. Heather Rim from Optiv outlines how to prevent potential cybersecurity threats.

You get the message. AI isn’t just being discussed. It’s being activated in every aspect of our business, and this is just the beginning.

We’ve barely left the starting line in what’s going to be an ultra-marathon. If you’re already running, pace yourself. If you’re not, lace up! The race is on! ■

Fred Cook is the director of the USC Center for Public Relations, a professor of professional practice at USC Annenberg, and the chairman emeritus of the global PR firm Golin. During his 35-plus years at Golin, he has had the privilege of working with a variety of high-profile CEOs, including Herb Kelleher, Jeff Bezos and Steve Jobs, and has managed a wide variety of clients, including Nintendo, Toyota and Disney. His book, “Improvise: Unconventional Career Advice from an Unlikely CEO,” is the foundation for his popular USC Annenberg honors class on Improvisational Leadership.

ESSAY LIST

THROUGHOUT THIS YEAR’S REPORT, we are including snapshots from our joint study with WE Communications, “Energized by AI: How Technology Is Changing Communicators’ Relationship to Work.” The research reveals that communicators have moved beyond the initial fear and hype surrounding AI, with most leveraging AI in their daily work, making them more excited about what they do.

How important do you think new advances in Artificial Intelligence will be to the future of PR work?

Where are you seeing or starting to see AI technology show up in your (or your team’s) day-to-day work, if at all?

View the full report: www.we-worldwide.com/energized-by-ai

FINDING OUR VOICE: THE ROLE OF HUMANITY IN STORYTELLING IN THE AGE OF AI

It is now more than two years since I first saw GPT-4 and had a mini crisis of faith in my career choices. In this new world, one I’d had the fortune of seeing early, what was the role for me, and for this communications industry I’ve loved and thrived in? One answer was to reinvent ourselves and our work using these new tools, and great work has happened to remove drudgery, reduce the fear of the blank page and develop new habits of productivity. But all this activity can obscure the second question, the more existential one: what is the core value we provide in this age of artificial intelligence?

⇒ The answer lies in the question, as is often the case.

⇒ Our value is embedded in our humanity, our understanding of emotion and story and relationships, the interplay among all these things. Over the years, how we express these things has changed, but the value has remained. There were times when the stories were told via print only, once a day. Then they became stories over the air, radio and then television, all new skills and stories expressed in new ways. Technology brought us blogs and social media, long form and short form video, audiences established and persistent, audiences formed in a moment, more ephemeral in nature, here for a time, gone in the next.

⇒ This core truth shows where we have work to do, aided but not slowed by these new artificial intelligence tools. Story seeking, story finding, storytelling, grounded in our truth and understanding of the audiences we care about, the ones that matter most to the organizations for which we work. Our ability to convene the right people, internally and externally. To drive consensus for action, compromise where it can be found and commitment for next steps and actions. At our best, this is what we do, where we provide value. This has been a constant across my career, and the change has been about our ability to find new audiences and reach them in new ways, not in that core of what our function provides.

⇒ The pace of change, at the tactical level, that is new. This is where we feel stress in our systems, about the rapid evolution of influence and influencers, the increasingly prismatic nature of perception and image. We can look at technology, at AI, and blame them for these new stresses in our lives, for us and our teams. Or we can embrace them as new ways to better show our value.

Frank X. Shaw is the chief communications officer at Microsoft. Shaw is responsible for defining and managing communications strategies worldwide, company-wide storytelling, product PR, media and analyst relations, executive communications, employee communications, global agency management, and military affairs.

⇒ At Microsoft we’ve taken several approaches to developing tools to help us embrace AI in our roles. Included in this report are two examples, one focused on our collaboration with Meltwater to develop a new tool designed to help make decision making using data as accessible and responsive as chatting with a coworker. The second example is the work we’ve done internally to tackle the time-consuming work of transcribing and translating executive interviews, and most importantly, storing them in a single secure location. I’d encourage you to read the essays from Stephanie and Steve to learn more.

⇒ There is a story out there that will shed light on the soul of Microsoft. My job is to find it, shape it, bring it into existence for an audience we care about. I have more tools today than yesterday, and will have even more next month, next year.

⇒ There is a story for you, as well. Go find it. ■

AT MICROSOFT WE’VE TAKEN SEVERAL APPROACHES TO DEVELOPING TOOLS TO HELP US EMBRACE AI IN OUR ROLES. INCLUDED IN THIS REPORT ARE TWO EXAMPLES…I’D ENCOURAGE YOU TO READ THE ESSAYS FROM STEPHANIE AND STEVE TO LEARN MORE.

AI’S PEOPLE-POWERED TRANSFORMATION

New research finds that AI adopters feel more valued and energized at work. By creating even stronger cultures of innovation, we can unlock the tremendous potential of this moment.

⇒ When generative AI burst into the public consciousness last year, the response was electric. Across the communications landscape, people were fascinated by the technology, but also fearful of what it could mean for our industry, our lives and our world. That’s what communications professionals told WE Communications when we surveyed them in 2023. People knew that massive change was ahead, but they still didn’t know what that looked like.

⇒ Today the picture is clearer.

⇒ WE Communications and USC Annenberg recently checked in with communications professionals to find out how AI has reshaped their industry and their work. We discovered they have a much more sophisticated understanding of the technology. It’s not a menace, or a miracle worker. It’s something more exciting: AI is a powerful tool that makes people feel more respected and more appreciated at work.

⇒ In our survey of more than 600 communications professionals, we discovered that people who use AI more frequently are 93% more likely to say they feel valued for the work they do.

⇒ We all know what happens when we feel more valued at work — we are also more excited and engaged. Feeling valued gives us energy. It emboldens us to explore, experiment and innovate. It gives us the confidence to say, “OK, this thing I tried didn’t quite work, but look at what I learned. Isn’t that great?”

⇒ Even more exciting, these AI enthusiasts aren’t limited to a handful of early adopters: Two-thirds of communicators say they use AI frequently, and 95% of them have a positive outlook on AI. What’s more, 70% of respondents believe AI helps them produce better work, and 73% say it allows them to work more quickly.

⇒ This is a tremendous opportunity for the communications industry. Communicators are ready to lead the AI revolution, and those developing their expertise are bringing tremendous value to their organizations. There is still uncertainty and curiosity about how to best deploy the technology, and that’s why it’s critical that communications leaders build cultures of innovation, experimentation and creativity to fuel this moment. People are hungry to learn; 73% want more AI training opportunities from their companies.

Melissa Waggener Zorkin is global CEO and founder of WE Communications, one of the largest independent communications and PR agencies in the world. She is an inductee of the PRWeek Hall of Fame, the PRWeek Hall of Femme, and the ICCO Hall of Fame and is a member of the USC Annenberg Center for PR Board of Advisors.

⇒ However, leaders must go beyond investing in technology and training. We need to invest in our confidence. Communications professionals identified employer encouragement and the freedom to use AI tools as key success factors. Leaders should provide space for experimentation, innovation and even mistakes. We need to send a clear message: We trust you. You’ve got this! Now, go and be amazing! That’s how we unlock the full potential of AI, enhancing human intelligence.

⇒ Back when Pam Edstrom and I were building WE Communications, the general conversation about technology was incomplete and uninspired. Pam and I set out to tell the human stories about how technology transforms our lives, and that’s what our agency has been doing ever since.

⇒ Our tech roots have taught us that the stage we’re in now with AI — moving from adoption to application — is one of the most dynamic periods of innovation. Today many of us are comfortable deploying tools like Midjourney and ChatGPT. That’s great, but we also need to remember that these platforms have only been around for about two years. Two years — that’s merely a blip! Think about where we were two years after the internet entered the public consciousness. We were listening to the beeps and hisses of our dial-up modems, thrilling to the sound of “You’ve got mail!”

We’ve come a long way, and we’re just getting started. We need to keep building on this progress by creating strong cultures of innovation and AI adoption, investing in knowledge leaders, and expanding use cases.

⇒ When I think about this moment we’re in now, I feel inspired — just a huge boost of excitement for what our industry can do. Each new application is a gift. I’ve never felt prouder to be part of the communications community. We’ve pushed past the fear and trepidation and opened the door to an exciting new future. AI might be the catalyst, but this transformation will always be powered by people. ■

SAY GOODBYE TO BOOLEAN: LET’S CHANGE THE WAY WE INTERACT WITH DATA

Communications and PR have always been a mix of art and science. But in today’s world of communications, where there can be hundreds of potential targets across traditional media and online outlets along with creators, influencers, social media, newsletters and podcasts, communicators are being asked to rely more on data to drive strategy and decision-making.

⇒ Over the past 20+ years, we’ve seen significant changes in our industry, whether it be the shift from physical to digital media, the evolution to the 24-hour news cycle, or the explosion of social media, content creators and influencers.

⇒ These changes have reshaped the way we need to operate. Yet throughout these changes many of the tools we rely on have remained siloed and disjointed and lacked the necessary innovation to leap ahead. We’re sitting on huge data mines but often lack the time, expertise and resources to extract insights from this data and gain a competitive edge.

⇒ Everyone knows they need to be more data-driven, but most folks aren’t data analysts and often don’t know where to start. And even if they have access to good tools, they may not use them in a way that gets the best results.

⇒ As communicators, we ask ourselves, “What is our story and who is the audience? Who is the best person to tell the story? With dozens of journalists, creators, and influencers, how do you know who to target? Is it the person you have the best relationship with, or do you venture to someone new? And how do you know you’ve made the right call? Did you get the outcome you were looking for?”

⇒ At Microsoft Communications and Meltwater, we recognized these challenges and are working together to reinvent communications using AI, Large Language Models (LLMs) and Natural Language Processing (NLP), developing new tools designed to help make decision making using data as accessible and responsive as chatting with a coworker. Overall, the goal is to use AI to shorten the loop between action and learning.

⇒ The world is quickly moving to AI assistants and agents that act on a chain of queries from an individual. Working together, we looked at the workflows of PR and marketing roles and set out to change the way these professionals interact with data. Let’s say goodbye to Boolean and embrace tools, like Meltwater Copilot, that make it possible to query data using natural language.

Stephanie Cohen Glass is the head of communications strategy & insights at Microsoft, responsible for global measurement and insights for the Communications organization. Throughout her career, Stephanie has worked in both corporate communications and served as a political campaign spokesperson. Today her role focuses on reinventing communications within Microsoft using AI.

Chris Hackney is a leading technology executive bringing 25 years of software, digital media and public relations experience to his current role as chief product officer at Meltwater. In this role he oversees product planning and solutions advancement across all Meltwater global product lines.

⇒ The new Meltwater Copilot we’re testing with Microsoft and Meltwater customers harnesses the capabilities of Meltwater’s listening tools alongside Microsoft’s technology stack. And even though it’s early days, we see people using Copilot to ask questions and discover insights they may have thought about before but assumed were too hard to answer.

WE’RE SITTING ON HUGE DATA MINES BUT OFTEN LACK THE TIME, EXPERTISE AND RESOURCES TO EXTRACT INSIGHTS FROM THIS DATA AND GAIN A COMPETITIVE EDGE.

It starts with something as simple as typing:

• “What are the top five stories impacting cyber security in financial services?”

⇒ but it quickly moves to carrying on a conversation with Copilot:

• “Has XYZ outlet ever covered cyber security issues?”

• “Can you provide specific examples of XYZ outlet’s coverage on the topic?”

• “Who wrote these stories?”

• “Tell me more about these stories.”

• “What is the sentiment of XYZ’s stories written by ABC?”

⇒ This partnership answers the question, “What if I could strip out all software clutter and just ’ask’ an interface to give me the answers I need in a form that’s easy to understand?” Gone are the filters, drop-down menus, and tabs that we typically stumble over, and what’s left are the predictive insights and intelligence we need for thoughtful and quick decision-making.
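To make the pattern concrete, here is a minimal sketch in Python of what “saying goodbye to Boolean” can look like in practice. It is illustrative only, not the Meltwater Copilot implementation: the OpenAI client, the model name and the hard-coded article records are assumptions standing in for a real listening platform’s data and whatever model a given tool uses.

```python
# Illustrative sketch only -- not the Meltwater Copilot implementation.
# The pattern: instead of composing a Boolean query string, a natural-language
# question plus a slice of monitoring data is handed to an LLM for an answer.
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat-capable LLM works

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In a real workflow these records would come from a listening platform's API;
# here they are hard-coded placeholders.
articles = [
    {"outlet": "XYZ Daily", "author": "A. Brown",
     "headline": "Banks brace for new wave of cyber attacks", "sentiment": "negative"},
    {"outlet": "XYZ Daily", "author": "C. Diaz",
     "headline": "How one credit union stopped a phishing campaign", "sentiment": "positive"},
]

question = "Has XYZ Daily ever covered cyber security issues, and who wrote the stories?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer questions about the media coverage provided. "
                    "Cite outlet, author and sentiment where relevant."},
        {"role": "user", "content": f"Coverage data: {articles}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

The point is the interface: the question stays in plain language, and the structure lives in the data handed to the model rather than in a query string the user has to compose.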

⇒ By bringing Meltwater’s global data sets and workflows into Microsoft Teams, stakeholders can access these insights where they already collaborate. The hard work of those who use Meltwater platforms is unlocked and made accessible to their entire organization in Microsoft Teams. This is the power of leading platforms coming together to employ AI to transform how we drive impact in our organizations and reinvent Communications as we know it. ■

INVESTING IN AI RETURNS TIME

In the early ’80s, fresh out of USC Viterbi, I found myself staring at something that seemed both monumental and oddly mundane — the desktop PC. The beige box sat on a table in front of me, humming softly, almost unnoticeable in its presence. At the time, it moved business forward, not in great strides but in a gentle shuffle — a 1x shift that promised more than it could yet deliver. Back then, the true horizon lay beyond our field of vision, and the potential of what was beginning remained stubbornly out of reach.

⇒ Then the ’90s struck. The internet arrived with a force that no one could ignore, pulling entire industries in its wake. It connected us, shaped new ways of communicating, expanded commerce fivefold. Suddenly, the world felt smaller, more reachable. By the time cloud computing took hold, the impact amplified further, with a 15x leap in productivity and scale.

⇒ Today, artificial intelligence redefines the 21st-century economy. Its reach spreads through every sector — healthcare, finance, transportation — reshaping them 100x, or possibly 200x. The future unfolds in real time, with speed and depth unlike anything we’ve experienced.

⇒ At Nvidia, I witness this shift firsthand. As a founding investor, I watch every AI application flow through our platforms, touching economies and societies in profound ways. This moment defines the next industrial revolution. Where previous innovations offered efficiency and productivity, AI grants something unprecedented: it hands back time, a resource that once felt fixed and finite, now returned to us, unlocking opportunities we couldn’t have imagined.

⇒ Healthcare feels AI’s touch with each diagnostic scan. Radiologists work alongside AI-driven tools that process and analyze vast data streams, detecting anomalies with remarkable precision. What once took hours now happens in moments, with fewer hands needed to manage the load. Diagnostic processes quicken, reshaping how care unfolds. Treatments accelerate, improving recovery times for countless patients.

⇒ In biotech, timelines for drug development no longer stretch across decades. AI algorithms predict molecular interactions, compressing years of trial-and-error processes into manageable months. Research moves forward with newfound agility, and patients awaiting critical therapies gain access to treatments far earlier than anticipated. The innovation pipeline compresses instead of becoming a bottleneck that holds crucial industries back from progress.

Mark Stevens is an American venture capitalist and a partner at S-Cubed Capital in Menlo Park, California. He was previously with Intel and Sequoia Capital. He serves on the board of Nvidia and is an investor in the Golden State Warriors of the NBA. He brings more than two decades of finance and investment leadership to the USC Board of Trustees in addition to a demonstrated dedication to community service within the Trojan Family. Often receiving the “Fight On!” symbol from strangers in airports, Stevens believes USC “is a special connection and gateway to the world.”

AI GRANTS SOMETHING UNPRECEDENTED: IT HANDS BACK TIME... UNLOCKING OPPORTUNITIES WE COULDN’T HAVE IMAGINED.

⇒ Energy systems, long hampered by inefficiency, now shift into high-precision operations. AI anticipates power demand before it arises, allowing for seamless distribution across energy grids. The waste once produced by peak energy consumption fades as the system adapts dynamically to each fluctuation. Resources find their path with accuracy, preserving energy and ensuring reliability throughout the network.

⇒ Employment shifts, as critics worry about AI’s role in replacing jobs. Yet tasks once requiring manual repetition now allow workers to engage with creativity and strategy. History suggests that new technologies create opportunities rather than erase them. Roles evolve. The workforce aligns itself with a future shaped by innovation, not redundancy.

⇒ AI empowers. Tools like Microsoft’s AI Copilot deliver professional-grade assistance, once thought exclusive to experts. Workers entering the marketplace engage with AI on a level that bridges decades of learning in a fraction of the time. Careers expand, and knowledge becomes accessible to all who seek it.

⇒ AI redefines industry, refines time, and pushes boundaries once thought unreachable. Every moment and decision moves forward with a speed that reshapes human potential. The next phase unfolds now. Sovereign AI empowers nations to build their own capabilities while healthcare, finance, and energy react in real time to AI’s precision. We reclaim time as AI offers new opportunities. ■

AI AND PHARMACEUTICAL COMMUNICATIONS: A PROMISING TOOL

Around the world, countless organizations are regularly making important decisions about how to best deploy AI across their operations. As a biopharmaceutical communications leader, I can tell you that we’re leveraging AI in our function, our company and the broader health care industry in very dynamic ways.

⇒ At Merck, we’re using AI to help us discover, test, manufacture and distribute a growing and robust portfolio of advanced medicines and vaccines. Big picture: we believe AI can help advance our purpose of using the power of leading-edge science to save and improve lives around the world.

⇒ As a company, we’re building on a longstanding legacy of innovation in discovering new medicines and making them as widely available and accessible as possible. We view AI as another tool to advance this journey and accelerate our progress and value delivery to patients. Indeed, we are already using AI and genetic data to unlock a new generation of medicines.

⇒ Within Corporate Affairs, we’re implementing a collective, strategic and creative approach, leveraging AI to enhance our effectiveness and efficiency.

At a strategic level, we’re deploying AI in three fundamental ways:

1) Tracking trends. In public relations and external affairs, we’re working with agency partners to leverage AI as an efficient, supplemental way to track reputational and societal issues that may be emerging or trending in various geographies around the world. As a global business, we need to know what’s happening around us, in every country and community we serve, all the time. This is no small task, but AI is helping us stay ahead of the curve.

2) Managing risk. In risk management, we’re leveraging AI to access incremental insights and analysis. While we continue to use more traditional and well-established risk management tools, we see the vast potential of AI technology to help us identify and analyze potential risks earlier than ever before. With the help of AI, we can more quickly and efficiently understand what’s happening, respond as needed and mitigate any possible negative impacts to our business or reputation.

3) Supporting HI. AI is a dynamic and promising tool in the hands of humanity, one with the power to augment but never replace what I refer to and emphasize as HI (human intelligence). Some of the current hype around AI might suggest otherwise, but people are creating it, people are using it, and people will chart its future. For our part, we are tapping AI to free up more HI to take on more valuable and essential work, understanding that human intelligence, compassion and care cannot be replicated or replaced.

Cristal Downing is the executive vice president and chief communications & public affairs officer at Merck & Co., Inc., where she leads global communications, ESG efforts, and public affairs. A trusted advisor to Merck’s CEO and board, she has driven key initiatives, helping Merck earn accolades such as Fortune’s “World’s Most Admired Companies” and Newsweek’s “Most Responsible Companies.” Prior to Merck, she held leadership roles at Johnson & Johnson, where she managed communications during major global crises, including the Ebola and opioid crises. Downing has been named one of PRWeek’s 25 Women of Distinction and is a member of Fortune’s Most Powerful Women. She is a member of the USC Center for PR board of advisers.

⇒ At a practical level, this means using AI to handle necessary, more straightforward tasks like creating meeting summaries and transcripts. We’re also leveraging an in-house AI tool as a thought partner to help us begin to craft narratives and avoid “blank page” paralysis. In media relations, we’re using AI to help leaders and spokespeople prepare for interviews and interactions by analyzing past stories to anticipate possible questions and areas of interest.

⇒ Today, the many unknowns around AI continue to spark fear in people and organizations. While this is understandable, we are choosing a different path, as a company and a function. To us, the potential benefits are just too great to stand on the sidelines. Instead, we are suiting up, stepping onto the field, acting with care and integrity, and implementing AI tools we believe can help us become a faster, more nimble, more efficient and better company.

WE BELIEVE AI CAN HELP ADVANCE OUR PURPOSE OF USING THE POWER OF LEADING EDGE SCIENCE TO SAVE AND IMPROVE LIVES AROUND THE WORLD.

⇒ Truth is, there’s a lot at stake. In our business, commercializing and bringing new treatments to patients can take 10 or more years. With the help of AI, we believe we could safely reduce the length of this process by 10 percent or more. Of course, such time savings would be beneficial to our business operations. However, even more importantly, expediting discovery and access would help our current and future patients and people around the world in acute need of life-changing and life-saving treatments.

⇒ For us, that’s AI’s ultimate allure: the potential it offers to accelerate our ability to save and improve lives — and make a long-lasting positive difference for our patients, their families and communities everywhere. ■

REVOLUTIONIZING CORPORATE COMMUNICATIONS WITH CUSTOM GPTs

I remember in late 2022 asking ChatGPT to help write a release. It was amazing. Not so much the quality — just the speed and organization.

⇒ Fast forward to today and we have massively transcended that elementary use of Gen AI.

⇒ In today’s hyper-fast and brand-critical digital environment, organizations must respond to stakeholder inquiries with agility, navigate increasingly complex compliance landscapes, and ensure all our communications resonate inclusively with a diverse global audience.

⇒ Within the Corporate Communications team at Experian, we have been developing ways to customize GPTs that help us operate with speed, accuracy, efficiency and inclusivity.

⇒ Let me share with you a few examples of how we’ve customized GPTs that help us with Stakeholder Communications Management, Social Media Analytics, Accessibility Evaluator and Brand Alignment.

Accelerating Stakeholder Communications

In an era of heightened misinformation and disinformation, the ability to act quickly to provide accurate and timely information to stakeholders is paramount. Our Stakeholder Communications Management GPT equips our team with the tools to develop first drafts of thoughtful, timely responses in less time than ever before. This engine works off our approved positions and company information to allow quick access and development of messages.

⇒ What once required hours of research and revisions is now handled in a matter of minutes, enabling us to provide rapid and effective communication to key stakeholders. We can also use this to generate concise summaries for our internal stakeholders with key talking points, ensuring our entire team is coordinated and prepared to engage with our key audiences in a unified voice.
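As a hypothetical illustration of how a drafting engine can be made to “work off approved positions” (this is not Experian’s internal GPT; the client library, model name and placeholder positions are assumptions):

```python
# Hypothetical sketch -- not Experian's internal GPT. It illustrates the core idea:
# drafts are generated only from pre-approved positions passed in as context,
# so the model stays inside approved messaging.
from openai import OpenAI

client = OpenAI()

APPROVED_POSITIONS = """
- We notify affected stakeholders promptly and transparently.
- We comply with all applicable data-protection regulations.
- We do not comment on ongoing investigations.
"""  # placeholder text; a real system would load vetted, current positions

def draft_response(inquiry: str) -> str:
    """Return a first draft grounded in the approved positions above."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Draft a stakeholder response using ONLY the approved positions "
                        "provided. If a point is not covered, say it needs human review.\n"
                        + APPROVED_POSITIONS},
            {"role": "user", "content": inquiry},
        ],
    )
    return result.choices[0].message.content

print(draft_response("A journalist is asking whether customer data was exposed last week."))
```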

Real-Time Monitoring for Reputation Management

With more than 5 billion active social media users globally, there are millions of posts, comments, and mentions happening every minute. For brands like Experian, staying ahead of conversations that could impact reputation is critical. Our Social Media Analytics tool was developed to monitor these real-time discussions across the digital landscape, providing us with concise summaries of potential risks.

Gerry Tschopp is senior vice president and head of global external communications for Experian, leading a team of communications professionals from all major regions. He also serves as Chief Communications Officer for North America with direct oversight of external and internal communications. He is a member of the USC Center for PR board of advisers.

⇒ It identifies key themes, evaluates the potential impact on Experian or the industry, and assigns a sentiment score to help contextualize the significance of the social media thread. It also prompts for engagement metrics, helping us gauge the influence and virality of specific discussions. This real-time insight enables us to address emerging issues before they escalate, significantly improving how we manage reputational risks.
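A stripped-down sketch of the scoring idea, using an off-the-shelf sentiment model rather than Experian’s production tool (the posts, engagement counts and risk heuristic are invented for illustration):

```python
# Simplified sketch, not Experian's production tool: score social posts by combining
# model sentiment with engagement, so high-reach negative threads surface first.
from transformers import pipeline  # Hugging Face Transformers; downloads a default model

classifier = pipeline("sentiment-analysis")

# Placeholder posts; a real pipeline would pull these from a social listening API.
posts = [
    {"text": "Great experience disputing an error on my credit report.", "engagements": 42},
    {"text": "Still no answer from support after a week. Unacceptable.", "engagements": 3100},
]

for post in posts:
    result = classifier(post["text"])[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    # Simple heuristic: negative sentiment weighted by reach flags potential risk.
    risk = round(result["score"] * post["engagements"]) if result["label"] == "NEGATIVE" else 0
    print(f"risk={risk:>6}  {result['label']:<8}  {post['text']}")
```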

Accessibility Evaluator: Ensuring Digital Accessibility

Roughly 25% of U.S. adults live with a disability, making accessibility essential in all our content. The Accessibility tool ensures every image shared on our platforms is accessible to visually impaired users. It automatically generates detailed alt-text, short descriptions, and multi-sensory descriptions for people using assistive technologies.

⇒ Automating this process ensures our content meets ADA standards without increasing our workload. This bot reinforces our commitment to inclusivity while ensuring compliance with legal accessibility requirements. I would also encourage you to visit DisabilityIn to get more information on what you can do to make your content inclusive and accessible.
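For readers curious about the mechanics, alt-text generation of this kind can be prototyped in a few lines with any vision-capable chat model. The sketch below is a generic illustration, not the Experian Accessibility Evaluator; the model name, prompt and image URL are placeholders.

```python
# Generic sketch of automated alt-text generation with a vision-capable chat model.
# Not the Experian Accessibility Evaluator; URL and model name are placeholders.
from openai import OpenAI

client = OpenAI()

image_url = "https://example.com/chart.png"  # placeholder image

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Write concise alt-text (under 125 characters) and a longer "
                     "description of this image for screen-reader users."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```

In a production setting the generated text would still be spot-checked by a human before publication, consistent with the review practices described later in this essay.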

Brand Alignment: Guaranteeing Compliance and Consistency

With 75% of consumers expecting companies to demonstrate social responsibility, compliance in corporate communications is non-negotiable.

⇒ We all write content, and immediately seek a second pair of eyes to review our copy. We have developed a Brand Alignment tool that scans all of our content for legal, regulatory, DEI, and brand alignment. It flags any risks and suggests revisions, ensuring our communications consistently reflect Experian’s values and meet regulatory guidelines. To be sure, this does not replace our formal compliance and legal reviews, but it does help us create cleaner, more precise copy — and fewer redlines from our partners in compliance and legal.

This AI Journey is Iterative

As we continue to push the boundaries of what’s possible in corporate communications, our custom GPTs are iterative tools that we consistently evaluate, refine and improve upon. They strengthen our work and our operating capacity, improving response times and our ability to create content that is inclusive and accessible.

⇒ These AI-powered innovations can serve as strategic partners in our mission to stay ahead in a complex, ever-evolving landscape. However, as with any of these Gen AI outputs, we should continue to give it human review and validation.

⇒ We are just beginning this journey — let’s go together. ■

USING AI TO COMPLEMENT OUR HUMAN EXPERTISE

In the past year, artificial intelligence has moved from theory to practice, impacting nearly every industry, and communications is no exception. At JSA+Partners, we’ve embraced AI as a powerful tool to enhance our ability to serve clients, while remaining mindful of its challenges and limitations. The question is: how do we use AI to complement our human expertise?

⇒ Our approach to AI is simple: we use it to enhance creativity, not replace it. While AI helps streamline workflows, such as generating content drafts or offering new ideas, it’s important to note that we don’t rely on it for the heart of our work—developing authentic, impactful communication strategies. Instead, we might use AI when we’re unsure where to begin or need inspiration, leveraging its capabilities to brainstorm, outline, and refine ideas, always ensuring that the final product is tailored to the unique needs of our clients.

AI for Automating Mundane Tasks

One of AI’s most immediate impacts on our workflow is managing the smaller, everyday tasks that can be surprisingly time-consuming. Whether it’s formatting documents, organizing large datasets, or assembling media reports, these tasks are essential but don’t require human creativity. AI tools can instantly standardize formatting across client presentations, apply templates to press releases, or gather media coverage in a digestible format. By handing off these more mechanical tasks to AI, we ensure accuracy and consistency across all client-facing materials.

AI IS HELPING US STRIKE THE PERFECT BALANCE BETWEEN EFFICIENCY AND INGENUITY, ALLOWING US TO DELIVER BETTER RESULTS IN LESS TIME.

Jennifer Stephens Acree is the CEO and founder of JSA+Partners, where she works with select Fortune 500 and start-up clients to develop communications programs based on clear business objectives in order to get results. Acree has over 20 years of experience in corporate, agency, and government environments while working with BtoB and BtoC category leaders in the areas of digital technology, media and entertainment. She is a member of the USC Center for PR board of advisers.

⇒ This time-saving function frees our team to focus on more complex and creative tasks, such as developing campaign strategies, crafting compelling stories, and nurturing media relationships. Ultimately, AI is helping us strike the perfect balance between efficiency and ingenuity, allowing us to deliver better results in less time.

Optimizing Targets

Reaching an audience through earned media can often be time-consuming, with constant research and updates needed to identify relevant outlets and reporters. AI can simplify this by speeding up research and offering personalized recommendations that enhance human efforts. Instead of relying solely on search engines, AI can dive deeper into a reporter’s background, analyzing the tone and focus of their past coverage to build more accurate media lists.

Enhancing Copywriting and Editing

One of AI’s strengths lies in its ability to analyze and generate text in a way that’s clear and easy to understand. Language models like Gemini and ChatGPT, trained on vast datasets, can quickly produce content in a human-like tone. Even though AI won’t fully replace skilled writers, it can be a helpful assistant, generating alternative phrases or serving as an efficient first-pass editor to refine grammar and remove redundancies.

⇒ To make the most of AI tools, learning to master the input process is critical. Simple prompts yield basic responses, but providing detailed instructions can significantly improve the quality of the output. Over time, these models can even be trained to mimic a brand or individual’s unique voice, making AI a valuable tool for writers who know how to use it effectively.
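The difference is easy to demonstrate. The sketch below is a generic example, not JSA+Partners’ workflow; the client, model and prompts are assumptions. It sends a terse prompt and a detailed one to the same model so the gap in output quality can be compared side by side.

```python
# Minimal illustration of "simple prompt vs. detailed prompt." Generic example only;
# the prompts, model and client library are stand-ins, not an agency's actual workflow.
from openai import OpenAI

client = OpenAI()

terse_prompt = "Write a press release about our new app."

detailed_prompt = (
    "Write a 300-word press release announcing the launch of a budgeting app for "
    "college students. Audience: consumer tech reporters. Tone: confident but plain, "
    "no hype words. Include a quote from the CEO and end with a short boilerplate."
)

for label, prompt in [("terse", terse_prompt), ("detailed", detailed_prompt)]:
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(draft.choices[0].message.content[:400], "\n")
```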

⇒ As artificial intelligence becomes more integrated into public relations, it is clear that its role is to enhance, not replace, human expertise. At JSA+Partners, we see AI as a tool to increase efficiency and free up valuable time for the more strategic, creative elements of our work. From automating routine tasks to optimizing media targeting and refining copy, AI has become a key asset in our toolkit. However, the heart of our efforts—building authentic, impactful communication strategies—remains firmly in the hands of our experienced team.

By blending AI’s capabilities with human creativity, we’re able to deliver even more thoughtful and effective solutions to our clients. ■

THE ERA OF ENTERPRISE AI HAS ARRIVED

In the past 18 months, generative AI has shifted from a curiosity to a business imperative. The barriers to entry have been largely dismantled, allowing individuals and organizations to experiment and innovate with these powerful tools. But here’s the thing: the enterprise sector is at a crossroads. While we are seeing significant advancements in the consumer industry, the enterprise sector demands a more measured and bespoke approach.

AI is like an iceberg

Just like an iceberg, AI can be divided into two distinct parts: consumer AI and enterprise AI. While consumer AI is the flashy, visible tip of the iceberg, enterprise AI is the much larger, yet lesser-known, foundation that powers the majority of AI applications.

⇒ Consumer AI refers to the AI technologies that are designed to serve individual consumers. By now, we all know them well — virtual assistants like Siri, personalized shopping recommendations when you are scrolling, and countless others. I see these applications as the “face” of AI for the average consumer. However, they represent only a small fraction of the total AI landscape.

⇒ Enterprise AI, on the other hand, refers to the AI technologies that are designed to support business operations and decision making — think supply chain optimization, customer service chatbots, digital assistants, and beyond. Enterprise AI is often invisible to the end-user, yet it has a profound impact on the efficiency, productivity, and competitiveness of organizations. It is the “engine” that powers the majority of AI applications, and is what enables businesses to make data-driven decisions and stay ahead of the curve.

⇒ The opportunity for generative AI in enterprise is substantial, with potential market sizes reaching into the trillions of dollars. But it is not one size fits all.

⇒ What businesses need are productivity enhancements at scale. Achieving this requires a more nuanced approach, one that acknowledges the diversity of needs and the importance of domain-specific solutions.

⇒ With that in mind, large models are not always the most suitable choice. These models are often expensive to run and may not provide the level of customization that businesses require. Enterprises are asking for smaller, fit-for-purpose models that can deliver high-quality results at a fraction of the cost, with a focus on a specific use case. For example, a large banking client may require a model that can perform risk assessment and that understands regulatory requirements.

The productivity paradox

Technology is advancing faster than ever, but productivity gains are not. Financial success is dependent on productivity and AI is the answer to the productivity problem. Developers using generative AI are experiencing benefits like increased accuracy and freed-up time. Yet most enterprises have not moved beyond experimentation.

Jonathan Adashek is senior vice president, marketing and communications for IBM, adding to his previous responsibilities as chief communications officer. He is responsible for overseeing the company’s global marketing, communications and corporate social responsibility organization, which includes full funnel marketing, corporate affairs, and ESG. In addition, he is responsible for federal client business development. Jonathan aligned IBM’s marketing and communications efforts under a single brand platform, Let’s create, in support of IBM’s mission to become the leading hybrid cloud and AI company. He is a member of the USC Center for PR board of advisers.

⇒ One barrier to deployment is the effective use of enterprise data in generative AI. Most organizations are not leveraging their own data to its full potential. With tools and techniques for fine-tuning models, enterprises can unlock the value of their data and address specific business needs. The biggest opportunity here is open-source AI, which allows companies to train and build models using their own proprietary data, along with prioritizing trust, transparency and governance.
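As a rough sketch of what “training on your own data” can look like at its simplest, the snippet below fine-tunes a small open model on a plain-text export of internal documents using the Hugging Face Transformers library. It is a generic illustration, not IBM’s stack or tooling; the model name, file path and training settings are assumptions, and a production setup would add evaluation, governance and privacy controls.

```python
# Rough sketch of fine-tuning a small open model on proprietary enterprise text
# with Hugging Face Transformers. Generic illustration only -- not IBM's tooling.
# Model name, file path and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "EleutherAI/pythia-160m"  # stand-in; any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # this tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Proprietary documents exported to plain text, one record per line (assumption).
dataset = load_dataset("text", data_files={"train": "enterprise_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Standard causal-LM collator: pads batches and copies inputs to labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # produces a domain-adapted checkpoint in ./domain-model
```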

Domain-specific models: The future of AI

At IBM, we are building a diverse range of language models, including those for major Western languages, Arabic, Japanese, Spanish and others. Additionally, we are developing models for coding, time series, cyber, numerical, climate change and geospatial applications. Our focus is on creating domain-specific models that can deliver high-quality results for specific use cases, without sacrificing quality or scalability. With this sustained evolution, we are starting to see a continuum and shift from automated to autonomous. AI is starting to go from single-step processes, information retrieval and prescriptive tasks to multi-step processes, autonomous action-taking and self-correcting systems.

⇒ As we unlock the full potential of generative AI, it is essential to address the risks associated with its misuse. By promoting transparency, accountability and responsible innovation, we can ensure that these powerful tools are used for the betterment of society. This involves governance, guardrails and the right humans — with the right skills and training — included in the process. But let’s be real: the current governance frameworks are not equipped to handle the complexity of AI. We need to rethink the way we approach AI governance and create new frameworks that prioritize transparency, accountability and human values.

⇒ Activating AI within the enterprise requires a nuanced understanding of the opportunities and challenges that arise in the enterprise sector. By delivering tailored solutions, promoting transparency and accountability, and leveraging the power of domain-specific models, we can unlock the full potential of generative AI and drive business value at scale. But it is not going to be easy. We need to be willing to challenge the status quo, think differently and prioritize human values above all else. ■

USING AI FOR RECRUITMENT… AND FOR GETTING RECRUITED

In an AI revolution, people will give organizations a competitive advantage.

⇒ From healthcare to finance, AI is becoming a vital tool for automating tasks, enhancing decision-making, and improving overall efficiency.

⇒ At Monday Talent, we’ve embraced these advancements, recognizing the profound benefits AI offers in streamlining recruitment processes, improving candidate matching, and enhancing communication between recruiters, clients, and candidates. By integrating AI tools into our recruitment process, we’re able to optimize our operations while maintaining our commitment to personalized, relationship-driven recruitment.

⇒ AI technology has transformed how recruitment agencies operate, automating many time-consuming administrative tasks. By allowing AI to handle processes such as resume screening and interview scheduling, recruiters can now focus on what really matters — building relationships with candidates and clients.

⇒ These tools can efficiently scan through large databases of candidates, identifying top talent based on skills, qualifications, and experience. For example, AI platforms like Hiretual can automate initial candidate outreach and scheduling, freeing up time for recruiters to focus on the human element of recruitment. By reducing manual effort, AI enables faster and more efficient hiring, ensuring no valuable candidate is overlooked.

⇒ One of AI’s most powerful contributions to recruitment is its ability to enhance candidate matching. Instead of relying solely on human intuition, AI leverages vast amounts of data to make more accurate matches between candidates and roles. By analyzing everything from skill sets to experience and even personality traits, AI tools help ensure that candidates are well-suited for the roles they are being considered for.

⇒ Tools like Pymetrics can assess soft skills and personality alignment, making matches based on both technical qualifications and cultural fit. This approach leads to better hiring outcomes, reducing turnover and improving job satisfaction for candidates.

⇒ AI isn’t just beneficial for recruiters — candidates can also leverage AI-powered tools to enhance their job search. AI platforms can help job seekers build stronger resumes, tailor their applications to specific job descriptions, and even prepare for interviews. For instance, tools like Teal offer personalized recommendations for resume improvements and job application tracking, ensuring candidates align their qualifications with the roles they’re applying for.

Jamie McLaughlin is the founder and CEO of Monday Talent. He brings over 17 years of experience working in marketing, creative, and communications recruitment and search across 5 continents. He sits on the boards of the Institute for Public Relations, creative agency ConCreates, and influencer marketing firm Social Studies — as well as advising The Marketing Academy and Celtic Football Club. He is a member of the USC Center for PR board of advisers.

AI PLATFORMS CAN HELP JOB SEEKERS BUILD STRONGER RESUMES, TAILOR THEIR APPLICATIONS TO SPECIFIC JOB DESCRIPTIONS, AND EVEN PREPARE FOR INTERVIEWS.

⇒ Similarly, Jobscan analyzes resumes against job postings, highlighting keyword gaps and optimizing them for applicant tracking systems (ATS). For interview preparation, Interview.ai allows candidates to simulate interviews, receive feedback on their responses, and refine their delivery. By helping candidates optimize their applications and prepare for interviews, AI provides a competitive edge, allowing them to stand out in a crowded job market.
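The keyword-gap idea is simple enough to sketch in a few lines of Python. The toy example below is not how Jobscan actually works; it just compares the vocabulary of a posting against a resume and reports what the resume is missing.

```python
# Toy illustration of keyword-gap analysis between a job posting and a resume
# (not how Jobscan itself works): list posting terms the resume never mentions.
import re
from collections import Counter

STOPWORDS = {"and", "the", "to", "of", "a", "in", "for", "with", "on", "is", "are",
             "we", "you", "our", "your", "will", "be", "as", "an", "or"}

def keywords(text: str) -> Counter:
    """Lowercase the text, keep word-like tokens, and drop common stopwords."""
    words = re.findall(r"[a-zA-Z][a-zA-Z+#.-]{2,}", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

job_posting = """Seeking a communications manager with media relations experience,
crisis communications skills, and familiarity with analytics dashboards."""

resume = """Communications professional with five years of media relations and
executive speechwriting experience."""

# Counter subtraction keeps only terms the posting uses more than the resume does.
gaps = keywords(job_posting) - keywords(resume)
print("Keywords in the posting missing from the resume:")
for word, count in gaps.most_common(10):
    print(f"  {word} (appears {count}x in the posting)")
```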

⇒ AI is transforming recruitment by making processes faster, more efficient, and more precise. At Monday Talent, we’ve embraced AI not to replace the human element, but to enhance it. By automating repetitive tasks, improving candidate matching, and enhancing communication, AI allows us to focus on what we do best: building meaningful relationships with both candidates and clients.

⇒ As AI continues to evolve, we’re excited to see how these advancements will further shape the future of recruitment while maintaining the personalized, human-centered approach that sets us apart. And in the spirit of embracing technology, this very essay was written with the help of AI — demonstrating firsthand the power and potential it offers in today’s world. ■

USING AI TO HELP ADDRESS OBESITY

America is facing a significant health epidemic: obesity. In 2020, over 42% of adults were classified as obese, according to the Centers for Disease Control and Prevention (CDC). This harrowing statistic underscores an ever-growing health challenge. Obesity is a nationwide crisis with far-reaching consequences that extend well beyond weight management.

⇒ A study by the National Institutes of Health (NIH) found that severe obesity can reduce life expectancy by as much as 14 years and costs the economy $173 billion annually. It also accounts for 4 million preventable deaths every year. The word preventable is key. Our health, like the organs in our body, is interconnected, and innovative healthcare solutions — such as artificial intelligence (AI) — have immense potential to optimize prevention and care.

⇒ AI has the power to revolutionize the healthcare industry by improving how we monitor, engage, track, and assess our health. The AI healthcare sector is projected to reach $148 billion by 2029, signaling its promise to reshape how we approach our health.

⇒ AI can enable healthcare professionals and patients to identify patterns that may signal obesity-related complications early on. Overly simplistic, one-size-fits-all approaches to healthcare, such as generalized dietary guidelines, often overlook individual health needs. By supercharging personalized healthcare, AI can lead to better health outcomes.

AI CAN BE A RESOURCE TO HARNESS VALUABLE DATA AND PROVIDE INSIGHTS TO EMPOWER INDIVIDUALS AND HEALTHCARE PROFESSIONALS TO MAKE MORE INFORMED AND PERSONALIZED HEALTH DECISIONS.

Christine Alabastro is the senior director and head of PR & communications for Prenuvo, makers of the world’s most advanced and comprehensive whole-body proactive MRI scan. Her global industry experience spans the worlds of entertainment, technology and agency, with roles at DoorDash, TikTok, Hulu, Edelman, and Golin. She is a member of the USC Center for PR board of advisers.

Ava Nichols is a freshman at USC Annenberg studying journalism who helps lead research and editorial projects at the Center for Public Relations.

⇒ As consumers, we now have access to an abundance of health data at our fingertips — we track our steps, calories, menstrual cycles, and sleep patterns. The more we know and the more data we have about our health, the sooner we can intervene with lifestyle, nutrition, and exercise modifications to reverse obesity.

⇒ While AI can provide value for patients directly, its greatest potential is realized when used alongside healthcare professionals. AI can be a resource to harness valuable data and provide insights that empower individuals and healthcare professionals to make more informed and personalized health decisions. For example, among many use cases, AI can develop tailored diet and exercise plans based on a patient’s medical history, genetic risk factors, and health data.

⇒ The promise of AI in medical research can be profound. A study from Prenuvo published in the Journal of Aging and Disease utilized AI to analyze whole-body MRI scans and uncovered a correlation between visceral abdominal fat and brain volume loss. Elevated levels of visceral and subcutaneous fat were linked to smaller brain volumes, suggesting these factors could influence brain health. This study shows us how AI can help us understand disease processes at earlier stages, enabling timely interventions that can extend both healthspan and lifespan.

⇒ That said, there is still an immense amount of research that needs to be done to explore the bias and challenges of AI in healthcare, which can lead to gaps in information and distort results. Understandably, skepticism of AI’s role in healthcare is common given the rapid pace of its development and the accessibility of its platforms.

⇒ Nonetheless, an AI-driven future offers the potential for more personalized treatment plans, enhanced disease prevention strategies, and proactive care. By harnessing the power of AI, we can foster a more proactive healthcare landscape that addresses America’s obesity crisis, emphasizes prevention, and empowers individuals to take charge of their well-being, ultimately paving the way for a healthier future for our society. ■

ACCELERATING AI IN THE ENTERPRISE

We live in an era where cutting-edge technologies transform our industries faster than we can imagine. However, what we make of it is still very much in our powerful human hands.

⇒ According to a McKinsey Global Survey on the State of AI from May 2024¹, 65% of organizations that regularly use GenAI are doing so in marketing and sales, product and service development, and IT services. Yet there is much more to it: to maximize the potential of AI and promote adoption in the workplace—especially at the enterprise level—greater involvement from all areas of the organization, beyond IT, will be essential.

⇒ At Micron, we have successfully incorporated AI in our manufacturing processes for some time, but we’re also pushing to accelerate its use across our business. An early example was the enterprise rollout of an AI-based chatbot in Teams for task automation that not only speeds up processes for our 48,000 team members but also frees them to use their focus and creative energy elsewhere. Now, we are aiming higher.

⇒ At Micron, we have found that granting ’permission’ to experiment with GenAI is essential for fostering its constructive use. Our Technology & Innovation team is at the core of this effort, collaborating closely with various departments across Micron to ensure a secure and seamless integration of AI technologies. By establishing a robust security framework, thoroughly vetting new technologies, and promoting shared learning across diverse use cases, we are driving innovation within the guardrails of our organization’s safety standards.

⇒ We encourage teams to pilot various tools while taking a cross-functional approach to capturing learnings and vetting where AI can help. Just this year, I have participated in three different pilots, including trialing a platform that helps reduce bias in performance reviews.

⇒ Within our Global Communications team, we also began identifying key areas where AI can boost productivity. For instance, AI helps us reduce the time spent on tasks like media research and fact-checking while preparing briefing materials. This shift allows the team to focus on generating insights and shaping strategy, as well as valuable human-driven skills unique to our industry knowledge and experience. We’ve also introduced Microsoft Copilot as a tool for brainstorming, recognizing its vast potential to scale content creation. By using core messaging to produce derivative content efficiently, we’re unlocking new levels of resourcefulness in our team’s communication efforts.

Erica Rodriguez Pompen is director of global corporate communications at Micron, a provider of innovative memory and storage solutions with a vision to transform how the world uses information to enrich life for all. She has also mentored and advised a number of minority-led start-ups in the U.S. and Asia. Pompen has lived in five countries and is proud to call herself a third-culture kid and mother of two third-culture kids.

AI HAS THE POTENTIAL TO REALLY TRANSFORM HOW WE WORK, BUT EXPERIMENTATION REQUIRES MORE THAN PERMISSION AND ACCESS. IT MUST BE A PRIORITY.

⇒ AI has the potential to really transform how we work, but experimentation requires more than permission and access. It must be a priority. As an example, every one of my team members has an individual performance goal tied to AI. This ensures that there’s accountability and discipline in setting KPIs that clearly define what success looks like, in terms of both outputs and outcomes.

⇒ Most importantly, we approach AI as a collaborative endeavor. We are navigating this journey collectively, regularly convening to share feedback during the IT-led Copilot test flights. Yet, our collaboration extends beyond our organization. By engaging with consulting partners, industry peers, and broader networks, we gain valuable insights into how other organizations incorporate GenAI into their workflows. Innovation doesn’t happen in a vacuum, and with intentional co-creation, we believe we will be able to continuously uncover new opportunities. That is the future we aspire to embrace. ■

1 “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value.” 2024. McKinsey & Company. May 30, 2024. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.

GENERATIVE AI SUCCESS OFTEN LIES IN THE UNSPECTACULAR

Depending on who you ask, generative artificial intelligence (GenAI) is somewhere between the technology that will usher in the most significant era in human history and the technology that will end it. The fear that surrounds this technology exists on a spectrum ranging from FOMO about not using it for everything to panic at the thought of being replaced by it.

⇒ As communication professionals, we need to guide our peers and colleagues away from both extremes. The reality of GenAI’s influence is much more nuanced, differing on a case-by-case basis. We most certainly cannot ignore it, as it is disrupting our job function, but it is far from ending it. On the contrary, it can be a significant enhancement if we keep our end goals in mind and don’t get distracted by every new flavor of the technology.

⇒ While we do not have to become data scientists overnight, understanding how GenAI works is the first step in using it effectively. The Large Language Models (LLMs) powering GenAI are a subset of Natural Language Processing (NLP) — the ability of a computer to understand and generate human language. Every LLM is trained on billions of data points and uses billions of parameters to weigh that information before responding to a prompt.

⇒ Organizations are grappling with what to make of GenAI — how to pay for it, what employees need to know, what risks exist, etc. And Communication is the function tasked with articulating all this. This strategic position gives us considerable freedom in leveraging GenAI.

THE KEY TO DERIVING VALUE FROM GEN-AI IS MAINTAINING FOCUS ON THE BUSINESS GOAL AND ENSURING YOUR USE OF THE TECHNOLOGY TAKES A STEP — HOWEVER SMALL — TOWARD THAT GOAL.

Dale Legaspi is a USC Annenberg adjunct instructor and public relations professional with more than a decade of experience in both agency and in-house positions across B2B tech. At Zeno Group, he leads the day-today client programs for multiple accounts across corporate technology and healthcare.

Here are three common ways to start:

⇒ Synthesizing information

One of the significant upsides of an LLM is the sheer amount of data it can draw on as potential source material. Synthesizing that much information manually would be impossible, but with GenAI we can dramatically reduce the time required to ideate a written deliverable, and in some cases automate most of a first draft. We still need to be diligent about reallocating some of the time saved to vetting information, and the writing and editing processes remain, but the dreaded writer’s block, with its nightmarish blank page and blinking cursor, should be no more.

⇒ Summarizing existing materials

Sometimes we start from scratch, but other times we are tasked with producing an article from existing content. In this scenario, GenAI can be incredibly useful in summarizing text. Especially if the source material is a report, white paper, or other dense, technical document, having GenAI take a first pass at a target word count can be a huge help. It is on us as writers to know the source material and thus be able to vet what the LLM has produced, but it gives us a strong starting point for the writing process.
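To make this concrete, here is one minimal, hypothetical way to ask a general-purpose LLM for that first pass. It is a sketch only, written against the OpenAI Python SDK; the model name, prompt wording, and helper function are illustrative assumptions, not a tool the essay endorses.

```python
# Minimal sketch: asking a general-purpose LLM for a first-pass summary at a
# target word count. The model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def first_pass_summary(source_text: str, target_words: int = 300) -> str:
    """Return a draft summary of dense source material for a human editor to vet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works; this choice is an assumption
        messages=[
            {"role": "system",
             "content": "You summarize technical reports for communications teams."},
            {"role": "user",
             "content": f"Summarize the following in roughly {target_words} words, "
                        f"keeping key figures and claims intact:\n\n{source_text}"},
        ],
    )
    return response.choices[0].message.content

# Example use: draft = first_pass_summary(open("white_paper.txt").read(), target_words=250)
# The draft is a starting point; a human writer still vets it against the source.
```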

⇒ Notating and recapping meetings to improve record-keeping

The final — and probably simplest — use case for GenAI is record keeping. We have all been in meetings where we scribble notes furiously so as not to forget anything, to the point that our note-taking keeps us from actually listening. The right GenAI tool will listen to the meeting and provide as detailed — or as summarized — a transcript as you ask. We do need to be sure that none of the information entered into the tool is confidential.

⇒ As GenAI continues to develop and add new flavors of technology, it is easy to get caught up in the hype and feel like you need to use every new feature for everything. The key to deriving value from GenAI is maintaining focus on the business goal and ensuring your use of the technology takes a step — however small — toward that goal. Starting small and specific to gain positive results is a smart approach that puts you on the right path. After all, success breeds success, and the bigger victories will come. ■

IF I CAN’T REACH YOU, I’LL CREATE A POCKET DIMENSION

We used to wait for the world to reveal itself. Now, we build it. The future of communication lies in reaching people by creating them. Synthetic audiences, emerging from the fusion of artificial intelligence and behavioral science, represent a quantum leap in our capacity to understand and engage with human cognition at scale.

⇒ Synthetic audiences leverage advanced machine learning to generate virtual populations. These complex digital proxies embody multidimensional representations of human decision-making and response mechanisms, far surpassing simplistic models or data points.

⇒ Each synthetic individual within these virtual populations embodies a complex web of attributes: personality traits mapped using the Big Five model, cognitive biases cataloged through extensive psychological research, and decision-making patterns influenced by factors ranging from socioeconomic background to cultural nuances. It’s a digital ecosystem with entities that think, react, and evolve with lifelike complexity.
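As a purely illustrative sketch of the idea (not any vendor’s actual system), a single synthetic individual can be imagined as a small data record of trait scores and attributes, sampled many times over to form a virtual population. Every field and distribution below is an assumption chosen for clarity.

```python
# Illustrative-only sketch of a "synthetic individual" as a data structure:
# Big Five trait scores plus a few demographic attributes, sampled at random.
# Real synthetic-audience platforms are far more sophisticated.
import random
from dataclasses import dataclass

@dataclass
class SyntheticPersona:
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float
    age: int
    region: str
    diet: str

def sample_persona(rng: random.Random) -> SyntheticPersona:
    # Uniform sampling keeps the sketch simple; real populations would be
    # calibrated against survey and behavioral data.
    return SyntheticPersona(
        openness=rng.random(),
        conscientiousness=rng.random(),
        extraversion=rng.random(),
        agreeableness=rng.random(),
        neuroticism=rng.random(),
        age=rng.randint(18, 80),
        region=rng.choice(["urban", "suburban", "rural"]),
        diet=rng.choice(["omnivore", "vegetarian", "plant-based"]),
    )

# A toy "virtual population" and a hyper-specific micro-segment query.
rng = random.Random(42)
population = [sample_persona(rng) for _ in range(100_000)]
segment = [p for p in population
           if p.region == "urban" and p.diet == "plant-based" and 25 <= p.age <= 40]
print(f"Micro-segment size: {len(segment)} of {len(population)}")
```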

⇒ With this foundation in place, consider the profound applications: A climate change campaign, traditionally limited by sample size and regional biases, can now test messaging strategies across millions of synthetic individuals. This approach allows for the identification of micro-segments — groups as specific as "environmentally conscious millennials in urban areas with a preference for plant-based diets and a distrust of corporate messaging." By tailoring content to these hyper-specific audience segments, communicators can achieve resonance levels previously thought impossible.

⇒ This precision shapes messages with intimate resonance. For the urban professional concerned about air quality, we might emphasize innovative clean energy solutions. For the coastal retiree worried about property values, we could focus on the economic benefits of climate resilience measures.

⇒ Imagine modeling the spread of vaccine hesitancy across different demographic groups, taking into account factors like education level, political affiliation, and exposure to misinformation. Health officials could then craft targeted interventions, addressing specific concerns before they take root in real-world populations.

⇒ The applications extend to fields like education, where adaptive learning systems powered by synthetic audience insights can tailor curricula to individual student needs. By simulating diverse learning styles and cognitive processes, educators can develop materials that resonate with a wider range of students, potentially revolutionizing personalized education.

Michael Kittilson is a second-year graduate student at USC Annenberg studying public relations and advertising who aspires to help solve the world’s toughest messaging and communication problems. His background spans 5 years in various roles that intersect strategic communications, tech, and policy, including work with the U.S. Department of State and national media organizations.

SYNTHETIC AUDIENCES... SERVE AS A TESTING GROUND FOR THEORIES OF HUMAN BEHAVIOR, POTENTIALLY UNLOCKING BREAKTHROUGHS IN FIELDS RANGING FROM PSYCHOLOGY TO ECONOMICS.

⇒ Yet the potential benefits compel us forward. Synthetic audiences offer a window into the collective human psyche, allowing us to explore the formation of beliefs, the spread of ideas, and the dynamics of social change at scales previously unimaginable. They serve as a testing ground for theories of human behavior, potentially unlocking breakthroughs in fields ranging from psychology to economics.

⇒ We push the boundaries of this technology to unlock new frontiers in cognitive science and behavioral economics. Synthetic audiences could serve as virtual laboratories for testing theories of human behavior, allowing researchers to conduct experiments at scales previously unimaginable. This could lead to breakthroughs in our understanding of group dynamics, social influence, and the formation of collective beliefs.

⇒ Cognition now extends beyond the individual. Synthetic minds, generated from patterns and precision, reflect the complexity of thought, behavior, and influence. These constructs, meticulously built, offer clarity from the din, giving us better understanding. Through them, communication transcends the limits of scale, immersing itself in the vast landscape of human intent. As these systems deepen our understanding, they shape the contours of the future, built not on observations but on creations. Now, we hold a pocket dimension — an entire world shaped by intent, designed to change ours through the precision of communication. ■

BETTER AI FOR A BETTER INTERNET

Artificial intelligence is undoubtedly a catalyst for transformative change, but does it exacerbate existing problems, reflect the world as it is, or move us toward a brighter future? If we want to build a better world with this technology, we have to do so by combining AI with a soft human touch.

⇒ AI has long been used to help tech platforms create captivating and personalized experiences for their users. Most of these companies engage through enragement: Their AI-infused algorithms connect with their audiences through negative emotions like envy, fear and hate to perpetuate a toxic cycle. At Pinterest, our AI is built on humanity’s better instincts.

⇒ We set out to give AI a different objective: to focus on inspiring and positive ideas that people want to save and actually do, not just scroll through. When we updated our AI to prioritize that content, we noticed the content shifted to action-driven recommendations like step-by-step guides, self-care tips, and motivational quotes. These types of recommendations are exactly what our half a billion users come to Pinterest for — to find the inspiration to create a life they love in a positive place online.

⇒ Our AI innovations came years before AI became a buzzword. In 2018 we saw our users cleverly navigating our search tools to find content that truly reflected them. They’d add extra descriptions like “summer style ideas plus-size” or “Black women’s hairstyles” to get results that felt personally relevant. But we knew our users really wanted to feel recognized right from the start, without any extra effort. That insight kicked off a journey of listening, learning, and creating that we’re still on today.

⇒ In 2018, we introduced skin tone ranges to search, which allows users to find inspiration relevant to them. In 2021, we added a hair pattern search tool to reflect a multitude of hair types. The enthusiastic feedback was confirmation we were going in the right direction, but the tools didn’t fully deliver on our vision. Fast forward to September 2023, and we rolled out a new addition to our AI innovations: body type technology.

⇒ With computer vision AI, Pinterest can now consider shape, size, and form to identify various body types across the more than 3.5 billion images on the platform. We have already seen that people who use body type ranges have a 66% higher engagement rate per session on Pinterest than those who haven’t used the tool.

Elizabeth Luke leads brand communications at Pinterest, where she manages public relations and communications efforts demonstrating Pinterest’s influence and impact as a platform for advertisers. She is a USC Annenberg alumna and is currently an adjunct professor. She is a member of the USC Center for PR board of advisers.

WE BELIEVE IT’S POSSIBLE TO EMBRACE TECHNOLOGICAL ADVANCEMENTS THAT ENHANCE ALGORITHMS, MAKE APPS MORE PERSONALIZED AND SUPPORT A SUCCESSFUL BUSINESS MODEL BASED ON POSITIVITY.

⇒ Our users can now see a more inclusive feed and more diverse search results right when they open the app. Search for “date night looks” and you’ll see results that are filled with a greater range of body types and skin tones. No extra work required.

⇒ We’re also investing in our AI capabilities to further enhance our ad stack. This includes identifying and surfacing relevant and inspirational Pins and ads to improve the user experience. As of Investor Day in 2023, first-party search ad relevance is up 30%. Our newest ad offering is Pinterest Performance+, which uses AI and automation features to decrease campaign creation time significantly, with 50% fewer inputs required.

⇒ We believe it’s possible to embrace technological advancements that enhance algorithms, make apps more personalized and support a successful business model based on positivity. In our last earnings call, we reported a 21% year-over-year increase in revenue and reached an all-time high of 522 million monthly active users globally, with Gen Z as our largest and fastest-growing audience.

⇒ But tuning our AI for positivity and inclusion is not a race we want to win alone, especially in today’s world where we scroll a mile a day on our phones. As artificial intelligence penetrates deeper into the fabric of our lives, it is critical that we are all building AI features to be additive, not addictive — ones that prioritize inspiration and intent, not unlimited views and time spent. ■

ACTIVATING AI IN PUBLIC DIPLOMACY

Through the growing use (and abuse) of AI platforms, information (and disinformation) has become the most influential space in which countries are competing today. With 60% of the global pool of elite AI talent hailing from three countries — U.S., China and India — it is important to consider how similarly and differently these nations approach AI in domestic and international policy, information, and governance.

⇒ Among the three, the U.S. has perhaps the most fragmented policy toward AI, featuring voluntary governance and non-binding regulations. Washington’s diplomatic approach emphasizes AI’s broader role in enhancing transparency, countering disinformation, and supporting nation branding through ethical practices and norms. China, by comparison, focuses on using AI for strategic global influence, maintaining stability, promoting economic leadership within its Belt and Road Initiative, and shaping international narratives. India, meanwhile, has established a more unified AI strategy that promotes technological inclusivity, asserts its leadership role in the Global South, and leverages public diplomacy efforts around social impact, culture, and regional stability.

⇒ Although the U.S. lacks comprehensive federal regulations on AI, it has emerged as the leader in promoting the responsible development and use of AI to address global challenges and advance diplomacy. The Partnership for Global Inclusivity on AI (PGIAI), launched by U.S. Secretary of State Antony Blinken alongside major U.S. tech companies such as Amazon, Google and Microsoft, aims to use AI to promote sustainable development in developing countries. Additionally, the U.S. Department of State has integrated AI into its operations through tools like Northstar, which uses AI for digital and social media analytics to enhance public diplomacy. AI is also central to efforts to counter foreign propaganda and disinformation, as seen in the Global Engagement Center’s initiatives. Through efforts such as these, the U.S. leverages AI in its nation branding, positioning itself as a pioneer in safe, secure and rights-respecting AI technologies for both domestic benefit and international collaboration.

⇒ Alexander Hunt, Public Affairs Officer at the U.S. Embassy in Guinea, was recognized with the 2023 Ameri Prize for Innovation in Public Diplomacy by the USC Center on Public Diplomacy for being a pioneer in creatively using AI in storytelling. Hunt’s team employed generative AI tools such as Leonardo and Runway to create an animated graphic novel about an 18th-century African prince from Guinea who was sold into slavery and later freed by U.S. Secretary of State Henry Clay and President John Quincy Adams. Aimed at youth audiences, this project demonstrated the ability of AI to help diplomats confront historical narratives and communicate with diverse populations in innovative ways.

Glenn Osaki is the director of the USC U.S.-China Institute and a founding member of the USC Center for Public Relations Board of Advisers. He served for five years as a senior advisor to the university’s president, spearheading the university’s approach to global branding and international thought leadership, and as senior vice president and chief communications officer. He was based in Shanghai for 15 years prior to joining USC, serving as Asia Pacific president of MSL, the flagship strategic communications and public affairs consultancy of Publicis Groupe.

THE SEPARATE PATHS OF AI USAGE IN THE U.S., CHINA AND INDIA ARE REFLECTIVE OF EACH COUNTRY’S UNIQUE CULTURE AND ETHICAL PRIORITIES.

⇒ China is also using AI in its nation branding strategy to showcase itself as a global leader in cutting-edge technologies. Through the "AI+" initiative, China aims to integrate AI across multiple sectors of its economy, transforming industries such as manufacturing, healthcare, and digital services. For example, the government emphasizes building advanced AI models to rival Western competitors like OpenAI, with tech companies such as 360 Group and iFlytek developing national-level open-source AI models. These efforts position China as a technological innovator, capable of driving economic growth through digital transformation. The country has also focused on establishing a robust AI ecosystem by improving data-sharing mechanisms and fostering AI talent, further amplifying its brand as a forward-thinking nation.

⇒ Tactically, public diplomats and communicators in China are like their peers in the U.S. and India in utilizing AI-driven tools for information collection and processing; data analysis; social media monitoring; audience sentiment assessment; AI-powered language translation; message customization; and training diplomats in crisis management and negotiations.


⇒ However, China is much more heavy-handed in its use of AI for domestic surveillance and security. In addition, while China has long leveraged its strong propaganda infrastructure for domestic information control across state-owned platforms, it now uses AI to influence political and informational environments in other countries. Chinese AI has been linked to foreign online and social media discussions that advance false narratives. These disinformation campaigns, along with messaging that promotes a positive image of Chinese government policies, have been targeted toward its overseas diaspora and other foreign audiences.

⇒ India’s national strategy on AI pursues the delicate balance between fostering innovation and mitigating risk. The Indian government has actively encouraged AI for social welfare, including applications to detect diseases, increase agricultural productivity, and promote linguistic diversity. Such pro-innovation and welfare-based approaches to AI are particularly influential for developing countries in the Global South. At the Global Partnership on Artificial Intelligence Summit, India furthered the concept of "collaborative AI" to promote equitable access to AI resources and data sharing for the developing world.

⇒ While India’s model for AI regulation, which balances innovation and safety, resonates with countries in the Global South, international partnerships with the West are also increasing. The U.S. Space Force and American military contractors have partnered with India-based AI company 114ai to develop advanced military and space technologies. NVIDIA has partnered with Indian conglomerates Reliance Industries and Tata Group to develop cloud AI infrastructure platforms.

⇒ The separate paths of AI usage in the U.S., China and India reflect each country’s unique culture and ethical priorities. The U.S. has been relatively hands-off in regulating AI, allowing industry to self-regulate as long as responsible governance respects individual liberties. China, unsurprisingly, prioritizes security-focused AI applications that sacrifice privacy in favor of stability. India’s national strategy on AI promotes social welfare while balancing opportunity and risk. With its rich pool of talent, growing technology ecosystem, and commitment to ethical and responsible AI, India has the potential to surpass the U.S. and China as the global leader in AI governance. ■

[Survey chart: "Please rate the following statements about your use of AI at work." Statements rated: AI helps me produce better work; I often use AI in my work; I successfully use AI in my day-to-day work; I have a colleague I can turn to when I encounter AI-related questions; My company encourages me to use AI; I feel my coworkers expect me to use AI tools; I am more engaged in my work due to AI; The availability of AI tools has made me more excited to come to work.]

TAKE NOTE

A year ago in the 2024 Relevance Report, I detailed how we were “atomizing” our processes in communications at Microsoft to identify where we could apply generative AI. I talked about the twenty-step journey that an earned media story takes in our organization and noted where there was opportunity for AI (and automation). Over the last year, we have been methodically building and applying AI across those steps.

⇒ Step 10 in that process for us is where interviews are conducted, recorded and transcribed. This is a process many of us will be familiar with — setting our phone or recorder down on the table to capture the audio of an interviewer talking with our spokesperson. We then take that audio file and have it transcribed, sometimes by hand, other times with existing AI tools. It can be time-consuming, often decentralized, and at times, prone to error. We had a strong sense that AI could transform the process and an instinct that it would bring other benefits.

⇒ And so, we set off to build a tool that any of our teams (including our agencies) could use to upload a file to a secure location and have it quickly transcribed and simultaneously translated.

WE’RE BUILDING MORE AND LEARNING AS WE GO BY BUILDING TOOLS SUCH AS TRANSCRIPTION AND TRANSLATION... FINDING NEW WAYS TO UNLOCK THE GOLD MINE OF INFORMATION WE CREATE ON A DAILY BASIS.

Steve Clayton is the vice president of communications strategy at Microsoft, with more than 25 years of experience at the company across technical, strategy, and storytelling roles. He leads the Microsoft strategy team, whose primary focus is to reinvent how the company operates its communications.

There are a few key points here worth noting in that last sentence:

1) Secure: These recordings often contain unreleased material that is highly confidential, so we built a solution that stores the files securely on our cloud.

2) Translated: At times, it’s useful to have these transcripts in additional languages for our teams around the world to have insight into what was said.

3) Quickly: Speed is often of the essence so a simple solution with a fast turnaround time to deliver a transcription was essential.

⇒ Using Microsoft’s Power Platform and our Azure AI services we were able to quickly build a solution that met all of these needs, and it’s now released and being put to use by our teams. Transcripts are uploaded and returned as a Word document, often in a matter of minutes. As we rolled out this AI-powered solution, we didn’t just meet our initial goals; we uncovered valuable lessons and unexpected benefits along the way.
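The sketch below is not the Power Platform and Azure AI solution described above; it is a minimal open-source stand-in, using the Whisper speech model and the python-docx library, that illustrates the same transcribe, translate, and deliver-as-Word-document flow. File names are hypothetical.

```python
# Hypothetical stand-in for the workflow described above: transcribe an interview
# recording, translate the speech to English, and save both as a Word document.
# Requires: pip install openai-whisper python-docx
import whisper
from docx import Document

model = whisper.load_model("base")  # larger models trade speed for accuracy

# Transcribe in the original language.
transcript = model.transcribe("interview.m4a")  # hypothetical file name

# Whisper can also translate speech directly into English text.
english = model.transcribe("interview.m4a", task="translate")

doc = Document()
doc.add_heading("Interview transcript", level=1)
doc.add_paragraph(transcript["text"])
doc.add_heading("English translation", level=1)
doc.add_paragraph(english["text"])
doc.save("interview_transcript.docx")
```

A production version would add secure storage, access controls, and support for translation into languages other than English, which this simple sketch does not attempt.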

1) This is a solution for 80% of the cases. There will remain times when we still need human intervention in the transcription to reach 100% accuracy. And there are other tools (such as automatic transcription in Microsoft Teams) that do the job just fine. Horses for courses, as they say.

2) A useful byproduct is helping improve our data estate. Transcripts previously sat in many different places — often an individual inbox, or desktop folder — meaning the IP was siloed and locked up. Our solution ensures that all transcripts are stored securely in a single location.

3) Which created another useful byproduct — now we can reason across those transcripts using AI and ask questions of them, such as "What did our executive say in that interview 2 months ago?" (A minimal sketch of this kind of retrieval follows below.)
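The sketch below is a hypothetical stand-in rather than Microsoft’s internal tooling: it uses an open-source sentence-embedding model to surface the most relevant transcript for a question. The model name and transcript snippets are illustrative assumptions.

```python
# Hedged sketch: once transcripts live in one place, a simple embedding index
# lets you ask questions across them. Snippets below are invented examples.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

transcripts = {
    "2024-08-exec-interview.docx": "We expect the new datacenter region to open next spring.",
    "2024-09-product-briefing.docx": "The Copilot rollout reached general availability in June.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = model.encode(list(transcripts.values()), convert_to_tensor=True)

question = "What did our executive say about the datacenter region?"
query_embedding = model.encode(question, convert_to_tensor=True)

# Rank transcripts by cosine similarity to the question.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(list(transcripts.keys())[best])  # the transcript to read, or to pass to an LLM for an answer
```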

⇒ We’re building more and learning as we go. Tools such as transcription and translation are freeing up our teams to do more of the work that they love and helping us find new ways to unlock the gold mine of information we create on a daily basis.

⇒ As AI continues to revolutionize communications, the potential to streamline, innovate, and unlock new insights is boundless. This is just the beginning. Now is the time to embrace this AI-driven future, to push the boundaries of what’s possible, and to ensure that we stay ahead in the evolving world of communications. ■

TRANSFORMATIVE TRANSPORTATION KEEPS AMERICA ON TRACK

Railroads have played a remarkable role historically in the world’s ability to communicate across long distances and in implementing transformative technologies that shaped societies and economies. Thus, the role of AI in freight rail is far from inconsequential, and the railroad continues to be an engine of innovation. Railroads move the materials we rely upon based on a web of complex and interconnected signals and systems. Union Pacific is using AI to simplify the complex for the teams that power its machines.

⇒ Railroads have been and continue to be at the forefront of introducing and adopting new technologies. Railroads in the U.S. and Canada created the time zone system, enabling them to communicate where and when commerce and people would arrive, and the continent adopted the time zones they codified. Needing to transmit information over long distances, railroads advanced the introduction and installation of the telegraph, building a telecommunications network across the nation. Because messages needed to be precise and concise, they advanced the use of Morse code and devised a system of succinct message transmission requirements a century before Twitter/X.

Freight rail can be likened to the central nervous system of a global economy. When analysts and economists seek to understand how markets are moving — they look at what freight trains are hauling. Nearly every object you see, touch, use and move has been transported on a freight train at some point in its lifecycle. The lumber and building materials for the homes and offices we live and work in, the vehicles we drive, the energy we use for heat and fuel, the food we eat and goods we buy have all been moved by rail.

⇒ The data and analytics involved in transporting every imaginable commodity — and the significant factors that must be weighed to do so safely and responsibly — require sophisticated systems and programs. Union Pacific has taken on that responsibility in different ways. To safely secure the goods customers entrust to its system and preserve the proprietary intellectual property its employees create and maintain, Union Pacific designed its own internal version of ChatGPT, called UP Chat.

⇒ It launched UP Chat more than a year ago, and its teams have used the software to synthesize large amounts of data, documents, and content to decrease the time it takes to complete administrative tasks, analyze information and feedback from large groups, including customers and employees, and help form insights more quickly, translating into greater efficiency for its people and more balance for its workforce.

Clarissa Beyah is chief communications officer for Union Pacific and a professor of professional practice at USC Annenberg. Her expertise spans the professional services, healthcare, technology, transportation and utilities sectors, and she has served as a chief communication advisor for numerous Fortune 50 companies, including Pfizer. She is a member of the USC Center for PR board of advisers.

⇒ Union Pacific is also using AI to inform the evolution of the technology on its tracks. Its software allows it to simulate how traffic moves across its system and identify the best route with the fewest touches and exchanges on the network. This drives greater efficiency and productivity, helping it deliver the service its customers are counting on. It also uses AI-based algorithms to inform the design and construction of trains with greater precision. These efforts are adding up to millions of hours saved and dollars earned.

⇒ AI is playing a key role in helping Union Pacific continue doing what it’s done since it was established more than 160 years ago, ensuring it remains in the driving seat of technological solutions by staying on track with the inevitability of evolution. ■

UNION PACIFIC IS ALSO USING AI TO INFORM THE EVOLUTION OF THE TECHNOLOGY ON ITS TRACKS ... THESE EFFORTS ARE ADDING UP TO MILLIONS OF HOURS SAVED AND DOLLARS EARNED.

ADDING EMOTIONAL INTELLIGENCE TO ARTIFICIAL INTELLIGENCE

AI has been the dominant subject among communication and marketing experts for the past two years. This past year, more of our industry colleagues began moving from experimentation to implementation.

⇒ As part of our AI implementation journey at Grubhub, we integrated a robust suite of tools to shape and refine a campaign called “Special Delivery.” This multi-channel campaign aimed to help moms get the one thing they’ve been craving for nine months: their post-delivery meal.

⇒ We discovered that after months of dietary restrictions, the postpartum meal is highly anticipated, with 75% of moms planning it during pregnancy. To celebrate this, Grubhub sponsored moms’ first meal post-delivery during August, the peak birth month in the U.S.

⇒ Throughout this campaign, AI addressed many relevant needs and use cases. Here are my three biggest takeaways.

Harness Human Emotions with AI

Creating emotional connections with consumers is paramount to building our brand and driving the business forward. When consumers feel emotionally connected, they develop a sense of belonging and attachment, which can enhance their satisfaction and likelihood to advocate for that brand, especially when we operate in a commoditized industry with little differentiation between competitors.

⇒ The “Special Delivery” campaign was all about zeroing in on that very personal, emotionally charged moment in a new parent’s life. Therefore, it was essential our creative content conveyed that in the most authentic and compelling way possible.

⇒ Given this, one of the most exciting aspects of our AI campaign integration was leveraging AI-powered Emotive Content Testing. With this tool, the team was able to test the effectiveness of creative content using real-time facial recognition technology, which allowed us to understand consumers’ subconscious reactions to our hero video before we even launched it. Through this, we learned which elements of the video were strongest at capturing attention, driving emotional engagement, and inspiring action. And perhaps more importantly, which aspects of the video were driving the opposite emotions.

⇒ The power of Emotive Content Testing lies in its ability to monitor real-time facial cues, unlike traditional testing methods that rely on written surveys. This approach helps eliminate the various biases often present in traditional testing environments. It’s a lot easier to hide behind a written response than it is behind your facial expressions. If you’ve been doing opinion research long enough, you understand that what people say doesn’t always match their actual behaviors.

Dave Tovar is the senior vice president of communications and government relations at Grubhub where he drives brand positioning, stakeholder engagement, and shareholder value. Previously, he served as vice president of U.S. communications for McDonald’s, overseeing brand engagement and system communications, and was senior vice president of corporate communications at Sprint, managing executive communications and corporate reputation. He is a member of the USC Center for PR board of advisers.

Artificial Intelligence Can’t Replace Emotional Intelligence

One of the biggest concerns with AI is the potential elimination of jobs. To put that to the test, we implemented nearly 20 A/B tests throughout the “Special Delivery” campaign to understand whether AI-led (A) or human-led (B) development of campaign communications was more effective.

⇒ What we discovered is that it isn’t about A versus B. The winning formula is A+B. Artificial Intelligence can’t replace emotional intelligence. But what it can replace are the rote tasks that prevent us from spending more time on what’s really important — and that’s effective storytelling.

⇒ An easy example is press release writing. Think about the amount of time it can take to get an initial draft on paper. Maybe 2-3 hours, which can be more if you have writer’s block. With AI, those 2-3 hours can be done in 2-3 seconds, giving back valuable time to focus on making the storytelling more creative, compelling, and, ultimately, more effective.

⇒ The biggest risk for PR practitioners is NOT embracing and adopting AI as a critical tool in the toolbelt.

Don’t Be Afraid to Play the “Newbie Card”

New AI technology and tools are emerging at an astonishing rate. What’s cutting edge today might be outdated by COB tomorrow.

⇒ Because of this, it’s critical both clients and agency partners embark on their AI journey together with full transparency. There has never been a time when playing the ‘newbie card’ was more valuable than trying to be the ‘expert in the room.’

⇒ At the start of the Special Delivery campaign, only 37.5% of our in-house and agency teams had experience using AI. We embraced this fact and committed to complete transparency throughout the AI integration process. By regularly examining what was working, identifying areas for improvement, and sharing new insights, both teams rapidly expanded their AI knowledge.

⇒ Adopting a learner’s mindset rather than pretending to be an expert is crucial for thriving in the rapidly evolving AI landscape. ■

FROM SCI-FI TO STRATEGY — HARNESSING AI’S POWER

When I reflect on AI, the term “artificial intelligence” itself feels somewhat of a misnomer. These systems are neither truly autonomous nor inherently intelligent; instead, they rely on human interaction and supervision to function effectively — at least for right now.

⇒ Utilizing tools such as ChatGPT, Midjourney, and Perplexity could be compared to working with a young and highly talented — but still inexperienced — employee capable of delivering impressive results when provided with clear and specific direction. You must review, proof, fact-check, and verify the output, just as you would before submitting any work completed by a direct report, particularly one powered by AI.

⇒ For those who remember, there was a Star Trek movie centered around Voyager 6, a NASA craft traveling the solar system to collect data. Over time, this craft, with support from its advanced systems and the alien technology it encountered, evolved into a being with AI-like capabilities. It even had human-like qualities. But in its relentless quest for knowledge, it became unmanageable. While this is a sci-fi example, it is not entirely far-fetched when you think about AI today. In public relations and marketing, we also manage AI-supported tools that constantly gather information and expand their capabilities. Unlike Voyager, which evolved to gain strength and the ability to sense and protect itself from perceived dangers, PR and marketing professionals have the unique opportunity to shape and actively guide AI’s development.

⇒ As lifelong learners in this profession, we are adapting to the rapid growth of AI tools and the spectrum of solutions they provide — from helping to write sharper press releases to generating video content. However, we also operate within a framework where the U.S. government monitors and regulates how AI is used. Many other countries, with less oversight, may take different approaches. However, this conversation about global AI governance is one for another time, as our focus today is on how AI impacts PR and communications here.

⇒ At IW Group, we recognized early on the potential of AI to revolutionize our work. We committed to leveraging AI-supported tools, including Claude, Runway, HeyGen, and ElevenLabs, that enable our teams to approach what they do from a fresh and tech-forward perspective. While mastering each platform has its challenges — especially with AI companies releasing new models and updates almost monthly — these tools’ possibilities are genuinely remarkable. Specifically, we have been able to explore new efficiencies and creative opportunities in how we design our campaigns and produce content. Our creative canvas, once defined by flat budgets and limited resources, has now expanded ten-fold thanks to AI.

Bill Imada is the founder and chairman of IW Group, a multicultural advertising and communications agency. With over 25 years of experience, he has worked with top global brands like Coca-Cola, McDonald’s, and Disney. A community leader, Bill co-founded the Asian & Pacific Islander American Scholarship Fund and served on President Obama’s Advisory Commission on Asian Americans and Pacific Islanders. He is a founding member of the USC Center for PR board of advisers.

⇒ IW Group was one of the first multicultural PR and marketing agencies to begin utilizing AI tools. In 2022, we produced a Lunar New Year commercial for McDonald’s that featured one of the first broadcast applications of NeRF (neural radiance fields). This AI-based technology turns 2D images into 3D environments. The integration of this then-emerging technology allowed us to produce a truly mind-bending visual spot that would be extremely difficult to film and edit by traditional means.

⇒ In June, we launched one of McDonald’s first-ever AI-led campaigns in the U.S. with our Grandma McFlurry / Sweet Connections program. To address the language barrier between Gen Z and their immigrant grandparents, we created an AI-powered website that allowed users to record a message and have it translated into the language of their choice using AI video translation and voice cloning. The final output would be a video that made the user look and sound like they were speaking a new language. Our solution was hailed as “brilliant” by Inc. for merging human insight and AI — which is IW’s AI mantra.

⇒ IW Group, as a multicultural and multigenerational creative agency, believes the possibilities of incorporating AI into our work are limitless. However, we must ensure we apply it responsibly. As we witnessed the recent faux pas surrounding the AI-powered theatrical marketing campaign for Francis Ford Coppola’s "Megalopolis," we cannot treat AI like an infallible CEO. Instead, we must remember that AI is like an eager employee wanting to learn and experience as much as possible while still needing our input and guidance. ■

With support from Mr. Telly Wong, IW Group/New York, and ChatGPT.

THREE USE CASES WITH OUTSIZED FUTURE POTENTIAL & IMPACT

AI has rapidly become a cornerstone of strategic decision-making in many industries, including communications. In the face of an ever-changing media landscape and complex consumer behaviors, AI provides invaluable tools for gathering insights, predicting trends, and measuring outcomes. AI can enhance reputation management, drive consumer engagement, and inform business strategies, making it an extremely powerful asset.

⇒ In particular, this essay will highlight three specific areas — proactive reputation management, influencer marketing, and the impact of trust on business outcomes — that illustrate how AI is helping to reshape and improve the effectiveness of communication strategies across various industries.

Protecting Reputation: AI-Driven Narrative Monitoring

In today’s operating environment, reputation is fragile, and the speed at which issues can escalate presents unique challenges for business leaders and communications professionals across the C-suite bench. AI has become a crucial tool for identifying emerging risks and preventing potential crises. Using AI agents that employ techniques like recursive topic modeling, communicators can analyze large volumes of data from digital platforms, news outlets, and social networks. This allows them to spot patterns or themes that might signal an emerging threat long before it reaches mainstream attention.

⇒ Importantly, this can be done much more effectively and efficiently — instead of multiple teams analyzing data sets and trying to identify insights, AI agents are able to analyze that information centrally, consistently and more expediently. AI can provide a data-driven way to monitor, visualize, and manage reputational threats before they escalate. And, communications teams can focus on identifying mitigation strategies, informing business partners about potential issues, and solving problems before they become crises.
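As a simplified, hypothetical stand-in for the agent-driven recursive topic modeling described above, a single pass of standard topic modeling already shows the shape of the workflow: cluster coverage into themes and watch for a theme whose volume is growing. The headlines below are invented placeholders, not client data.

```python
# Simplified stand-in for the monitoring workflow: one pass of NMF topic
# modeling over a stream of coverage, used to surface themes worth watching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

headlines = [
    "Regulators question supplier labor practices",
    "New product earns strong early reviews",
    "Analysts flag supplier audit delays",
    "Customers praise responsive support team",
    "Labor advocacy group plans protest over supplier",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(headlines)

nmf = NMF(n_components=2, random_state=0)
weights = nmf.fit_transform(X)               # headline-by-topic strengths
terms = vectorizer.get_feature_names_out()

for topic_idx, component in enumerate(nmf.components_):
    top_terms = [terms[i] for i in component.argsort()[-4:][::-1]]
    volume = weights[:, topic_idx].sum()
    print(f"Topic {topic_idx}: {top_terms} (volume {volume:.2f})")

# A rising "supplier / labor / audit" topic would be the early-warning signal a
# communications team escalates before it reaches mainstream coverage.
```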

⇒ For example, one of our teams deployed AI tools that created a 25% lift in overall share of voice (FSOV) for our client versus its competitors. The success was attributed to AI’s ability to pinpoint specific topics with the potential for significant positive media traction and limited vulnerability. Earned media stories that aligned with AI-recommended topics were more impactful, allowing the team to focus on narratives that positively shaped public perception while mitigating negative ones. This approach demonstrates how AI can provide communication professionals with a data-driven way to monitor, visualize, and manage reputational threats before they spiral out of control.

Matthew Harrington is the global president and COO at Edelman, a top communications marketing firm, specializing in corporate positioning and reputation management. With 25 years at Edelman, he has advised major clients like GE, Samsung, and Starbucks on crisis communications, M&A activity, and IPOs. Notably, he managed the first internet crisis response during Odwalla’s product recall, earning a Silver Anvil award. He is a member of the USC Center for PR board of advisers.

Brian Buchwald is the global chair of artificial intelligence and product at Edelman, driving innovation since joining in January 2023. Prior to Edelman, he was the chief strategy & business development officer at Talkwalker, president of global intelligence at Weber Shandwick and co-founder of BOMODA, a market leader in big data insights focused on China’s evolving consumer landscape. Additionally, Brian has held senior positions at Hulu, NBC Universal, and DoubleClick. His insights on the intersection of AI and consumer understanding have been featured in The New York Times and The Wall Street Journal.

Driving Purchase Consideration: Identifying Effective Influencers with AI

In influencer marketing, AI has become a game-changer, especially when trying to move beyond superficial metrics like follower count or name recognition. Traditional methods for selecting influencers often rely on visibility or brand recognition, but AI can offer a deeper analysis of influencers’ actual impact on their audiences. By analyzing factors such as engagement patterns, sentiment, and audience demographics, AI enables more precise identification of influencers who are most likely to drive desired consumer behaviors.

⇒ In one specific example, AI tools identified influencers who were 130% more likely to generate purchase consideration compared to those previously chosen by our client based on the traditional approach of name recognition alone. AI-driven insights can help refine influencer selection to ensure that the marketing efforts resonate more deeply with target audiences, improving the efficiency and effectiveness of specific initiatives. By providing granular insights into audience behavior and influencer impact, AI helps communicators better understand which voices are most persuasive in driving engagement and purchasing decisions.
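A toy sketch of the underlying idea (not Edelman’s actual models or data) might rank candidate influencers by a composite of engagement and sentiment rather than follower count; every number and weight below is invented for illustration.

```python
# Toy illustration: rank influencers by engagement rate and audience sentiment
# instead of raw follower count. All values and weights are invented.
influencers = [
    {"name": "creator_a", "followers": 2_000_000, "engagement_rate": 0.006, "avg_sentiment": 0.55},
    {"name": "creator_b", "followers": 85_000,    "engagement_rate": 0.048, "avg_sentiment": 0.81},
    {"name": "creator_c", "followers": 400_000,   "engagement_rate": 0.021, "avg_sentiment": 0.74},
]

def impact_score(inf: dict, w_engagement: float = 0.6, w_sentiment: float = 0.4) -> float:
    # Engagement rate is scaled to a rough 0-1 range here; a real model would
    # normalize carefully and add audience-fit and conversion features.
    return w_engagement * (inf["engagement_rate"] / 0.05) + w_sentiment * inf["avg_sentiment"]

for inf in sorted(influencers, key=impact_score, reverse=True):
    print(inf["name"], round(impact_score(inf), 3))

# In this toy data, the mid-size creator with high engagement and positive
# sentiment outranks the celebrity account with the largest following.
```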

Boosting Sales: Measuring Trust with AI

Trust is a key factor in consumer decision-making, and AI offers sophisticated methods for measuring and analyzing it. AI-driven tools can measure specific trust dimensions — such as integrity, ability, or dependability — that can then be analyzed to assess their direct impact on business outcomes. This capability is particularly valuable for brands that are looking to build or restore trust with key audiences.

⇒ For one specific global consumer goods manufacturer, we found that for every 1% increase in trust, sales increased by €100 million. This insight came from analyzing trust across different dimensions and linking those insights with sales data. Of course, this finding may vary meaningfully by organization, existing trust levels and many other factors, but this specific linkage between trust and revenue has tremendous potential. The ability to identify which aspects of trust most influence consumer behavior, for example, will give marketing and communications organizations newfound insight and the potential to increase business impact.
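The kind of linkage described here can be illustrated, in heavily simplified form, with an ordinary least-squares fit of sales against a trust score. The figures below are invented; the one-point-of-trust-to-€100 million relationship is specific to that client’s proprietary data.

```python
# Deliberately simple illustration of linking a trust score to sales with
# ordinary least squares. All data points are invented for the example.
import numpy as np

trust_index = np.array([61.0, 62.5, 63.0, 64.2, 65.1, 66.0])   # survey-based trust score
sales_eur_m = np.array([5010, 5165, 5210, 5330, 5425, 5500])   # sales in EUR millions

slope, intercept = np.polyfit(trust_index, sales_eur_m, 1)
print(f"Each additional trust point is worth roughly EUR {slope:.0f}M in sales (toy data only)")
```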

Conclusion

Integrating AI into communication strategies benefits reputation management, influencer marketing, and trust measurement. By leveraging AI, communication professionals can make more informed decisions, anticipate emerging issues, and optimize engagement efforts in ways that were previously difficult or impossible. As AI technology continues to advance, its role across communications will likely expand, offering even more tools to navigate the complexities of today’s business landscape. ■

A CASE FOR DEPLOYING AI AT NON-AI SPEEDS

Do you ever get the feeling that the artificial intelligence train is moving just a little too fast? If so, you and countless corporations have a lot in common. There’s no denying AI’s ability to transform teams and businesses, and not just to help reduce risk but to maximize and unlock value (if leveraged responsibly). But right now, as you read this, there are teams across the corporate landscape trying to figure out a way to hold the train at the station long enough, so they don’t end up in an unintended and costly destination. The only problem is, their passengers — AKA employees — want the train to leave, and now.

⇒ Train analogies aside, for my financial services organization, we’re excited about the many ways AI can potentially help us support consumers in the future, deepen how we understand and leverage cultural nuances across our global workforce, and show how the simplest use cases can make the most meaningful differences in our daily workflows and activities. But we’re also being incredibly methodical and careful about where we start on our journey and the unintended consequences AI can have for our business today (e.g., data security, regulatory compliance, etc.), regardless of the advantages we see in the future.

BY SETTING OUR GUARDRAILS EARLY AND DEVELOPING OUR GOVERNANCE FIRST, WE’RE ABLE TO SPEED UP RESPONSIBLY WHEN WE NEED TO, AND ULTIMATELY WANT TO.

Faryar Borhani, chief communications officer at Encore Capital Group and a PRWeek 40 Under 40 recipient, oversees global corporate communications strategy across Europe, Asia, and the Americas. Since joining in 2021, he has launched major initiatives, including the company’s ESG strategy and the Economic Freedom Study. With previous experience advising Fortune 500 leaders at ICF Next and managing communications for FOX Soccer, Borhani’s expertise spans corporate reputation, crisis management, and digital transformation. He is a member of the USC Center for PR board of advisers.

⇒ To combat this, one of the first things we’ve done is establish an AI Center of Excellence with cross-business representation — both functionally and geographically. We’ve used this body of Information Technology, Operations, Information Security, Legal, Compliance and Communications leaders to source requests for new AI tools teams want to use, set a strategy for how we’ll methodically introduce more capabilities and educate our entire workforce on the power and risks associated with AI. Every conversation begins with: “Is this a must-have, or a nice-to-have?” It’s hard not to acknowledge the unsettling feeling that anything we put into the tool(s) has a chance to potentially come out somewhere else we can’t control.

⇒ Now, I want to make it clear that this isn’t an argument for being last, it’s an argument for not feeling you have to be first. We’re aiming to counterbalance the urge we all have — even in our personal lives — to “AI-ify” everything. By setting our guardrails early and developing our governance first, we’re able to speed up responsibly when we need to, and ultimately want to.

⇒ For our Global Corporate Communications team, we’re already employing Microsoft Copilot to support our media training efforts, test the resonance of messaging, scale the content we create for multiple channels and even help bridge the language divide for our nearly 8,000 colleagues across nine countries. It’s a critical partner in helping us conduct research for thought leadership, speeding up our writing and describing abstract, complex business scenarios in the easiest-to-understand terms. Recording a video of a leader using a teleprompter, resulting in eyes that move all over the screen? AI can fix that!

⇒ All of this is to say that while we feel we live in a new era of digital disruption and transformation (and we do), traditional planning and governance are still our friends. We still need to understand what our audiences want and need, and how they feel about what they receive. AI can aid that, but traditional communications planning continues to interpret the human nuances better than anything else we have today. That planning also helps us decide which AI tools in our Comms TechStack to employ, in what order and for what specific purpose.

⇒ Balancing the exciting outcomes we can achieve through AI with the potential risks it poses isn’t easy. However, as communications practitioners and the ultimate stewards of our organizations’ reputations, it’s critical we get this right. ■

HOW GENERATIVE AI IS IMPACTING DEMOCRACY

As a young person entering a new political landscape, I’m both excited and cautious. AI’s potential to engage my generation in democracy is thrilling but must be used wisely. We’re shaping the rules for GenAI in real time, and the choices we make today will profoundly impact future elections.

Deepfake Detection and Mitigation

One of the most pressing concerns surrounding AI is the spread of misinformation, especially deepfakes. DeepMedia AI and Sentinel AI are at the forefront of combating this threat with sophisticated detection platforms. Major outlets like Reuters and The Associated Press already use these tools to identify and mitigate deepfake content.

⇒ Politically, both companies are playing crucial roles. The Democratic National Committee has adopted DeepMedia’s tools for the 2024 election, while campaign teams nationwide rely on Sentinel’s multi-layered defense systems. The government is also taking note—DeepMedia recently secured a $1.25 million Air Force contract, and Sentinel received $1.35 million for its deepfake detection efforts.

⇒ These tools, which cost between $100,000 and $1 million annually, are expensive but necessary. As we approach critical elections, they are crucial for maintaining the integrity of our digital information ecosystem. However, the challenge remains: can we make them widely accessible, or will only the wealthiest campaigns deploy them?

Personalized Political Messaging

Beyond combating misinformation, AI is revolutionizing how campaigns communicate with voters. Companies like Battleground AI and Resonate AI are using advanced language models to help campaigns create hyper-targeted content across various media platforms. This level of personalized messaging, once reserved for well-funded campaigns, is now available to smaller operations.

⇒ For instance, BattlegroundAI can produce 100 tailored search ads in just a few minutes, while Resonate AI sends thousands of micro-targeted emails. These tools are reshaping how political campaigns connect with voters, making interactions feel more personal and relevant.

⇒ However, these technological strides also raise ethical questions. Campaigns and regulators must balance effective communication and ethical behavior, particularly in protecting voter privacy.

Amrik Chattha is the co-founder of VoteU, an AI-driven app that helps college students register to vote and learn about relevant local issues in college towns, often outside of their home state.

Automated Fact-Checking

Misinformation is a challenge to democracy, yet AI is proving to be a valuable ally in the fight against it. Fact-checking groups like PolitiFact and FactCheck.org are using AI to identify false claims quickly. For example, PolitiFact employs machine-learning models that review statements, which allows human fact-checkers to concentrate on more complex issues.
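To illustrate the general approach rather than PolitiFact’s production system, a tiny “check-worthiness” classifier can flag statements that contain verifiable factual claims so human fact-checkers review them first; the training sentences and labels below are invented.

```python
# Hedged sketch: a toy "check-worthiness" classifier that flags statements
# a human fact-checker should review first. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "Unemployment fell by three percent last quarter.",
    "I had a wonderful time meeting voters today.",
    "The new law cuts school funding by two billion dollars.",
    "Thank you all for coming out tonight.",
    "Crime in the city has doubled since 2020.",
    "We are so proud of our volunteers.",
]
check_worthy = [1, 0, 1, 0, 1, 0]  # 1 = contains a verifiable factual claim

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(statements, check_worthy)

# New statements are triaged so human reviewers spend time on the claims that matter.
print(model.predict(["The budget deficit grew by 40 percent this year.",
                     "What a great crowd!"]))
```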

⇒ FactCheck.org collaborates with social media platforms to highlight misinformation and has even started a pilot program that provides local newsrooms with AI-powered fact-checking tools. These advancements are crucial in keeping the public informed, especially as tactics for spreading false information become more complex.

⇒ Still, relying on AI for fact-checking has drawbacks. Algorithms can overlook nuances or contexts that human fact-checkers would catch. Merging AI with human judgment is crucial for maintaining healthy public dialogue, and we need to keep refining these tools to ensure their accuracy and fairness.

Voter Education and Engagement

AI’s potential to increase voter engagement is immense, particularly for younger and more mobile populations. At VoteU, a platform I co-founded, we’re using AI to make it easier for college students to register to vote and get up to speed on local issues. This is super helpful, especially for those living in swing states or who might not be familiar with the political scene in their college towns.

⇒ Our AI-driven tools at VoteU help break down complicated ballot measures and provide personalized advice. These efforts aim to boost youth participation in elections and encourage deeper involvement in the political process.

Balancing Innovation with Responsibility

As we step into this new era of AI-driven politics, the whole landscape of democracy is evolving quickly. AI opens up exciting opportunities for engaging more people, yet it brings substantial challenges, including deepfakes, personalized manipulation, and automated disinformation risks. Despite these challenges, with the proper safeguards in place, we can use AI to benefit society. We really need to use Good AI to fight Bad AI.

⇒ Our involvement in this change is essential. We need to insist on transparency from the companies creating these tools and support efforts that use AI to enlighten and empower, not mislead. The future of our democracy isn’t just shaped by us; it’s also influenced by the algorithms we deploy. It’s time to code it wisely. ■

CONAGRA’S HUMAN-CENTERED AI APPROACH

In today’s rapidly evolving business landscape, artificial intelligence (AI) stands as a transformative force reshaping industries worldwide. It’s becoming clearer than ever that companies must leverage AI to drive efficiencies, gain insights from vast data sets, and stay competitive. At Conagra Brands, we recognize that while AI technology holds immense potential, its true value is unlocked when it serves to empower people.

⇒ By adopting a human-centered approach to AI, we prioritize responsible use across the company, ensuring that our employees remain at the heart of our strategy. Conagra’s commitment to placing people first in our AI initiatives not only enhances decision-making and efficiency, but also drives safe and sustainable growth.

The Evolving Role of AI in Business

Artificial intelligence has become integral to modern business operations. Conagra leverages AI to quickly and accurately identify patterns, make predictions, and process large volumes of data from various sources, including customer interactions, market trends, supply chain logistics and more.

⇒ However, AI can’t do it all. Human insight is crucial for interpreting results and making strategic decisions, and a human-centered approach ensures technology complements rather than replaces real-life expertise.

Conagra’s Human-Centered AI Strategy

At Conagra, we believe that AI should be a tool that enhances our employees’ abilities, not a replacement for them. We focus on developing AI solutions that are transparent, fair, and aligned with our company’s values. We invest in training programs and user-friendly AI tools to ensure that all employees, regardless of technical background, can leverage AI in their roles.

⇒ While traditional AI implementations often prioritize technological capabilities over user experience, Conagra’s approach ensures that AI tools are designed with the end-user in mind. Engaging our employees in the development process results in higher adoption rates and more effective use of these developing technologies.

⇒ By encouraging our employees to expand their skill sets through training programs and career development opportunities, we empower them to view AI as an extension of their work. This not only boosts morale but also fosters innovation from within.

Applications of AI at Conagra

Jon Harris is senior vice president and chief communications officer for Conagra Brands. He is responsible for the strategic development, direction, and implementation of corporate communication and reputation management programs across the organization. Harris also oversees the Conagra Brands Foundation, as well as corporate giving efforts. He previously led communications at Hillshire Brands and Sara Lee Corporation, and has held leadership positions at Bally Total Fitness Corporation, PepsiCo, Ketchum Public Relations, and Medicus Public Relations. Harris is a member of the USC Center for PR board of advisers.

Conagra is utilizing generative AI, intelligent automation, and strategic collaboration with external partners to enhance operations across the business. For example:

• AI helps us analyze the purchasing habits of millions of consumers, identifying trends that inform product development, marketing, and communications strategies.

• AI’s capability to integrate and interpret diverse data sets has led to the creation of unique and innovative product offerings tailored to unmet consumer needs, reducing concept ideation time from months to weeks.

• AI contributes to the creative design process as well, enhancing everything from strategy and product conceptualization to execution and production. For example, AI-assisted design tools have the potential to streamline the process by eliminating tedious, repetitive back-end work, freeing up designers to focus on creativity rather than executing design ideas.

• AI solutions in supply chain optimization, quality control, and customer service are in the pilot stage, all designed with the goal of supporting our employees and improving operational efficiency.

⇒ On the communications front, we’re in the initial stages of integrating AI into our overall strategy. Our team is exploring use cases for AI assistance in drafting press releases, presentations, internal messages, and creative and briefing documents. Automated video production tools have allowed for efficient creation of visual content, aiding in both internal communications and marketing efforts. By automating routine tasks, our team can focus on strategic initiatives that both require and benefit from human creativity and judgment.

⇒ As we continue to explore and integrate these innovative solutions across the company, our focus remains on leveraging AI to support our employees, improve operational efficiency, and maintain a competitive edge in the market.

Future-Proofing the Workforce

AI is an ever-changing field, and we are working hard to stay current with emerging technologies that can drive future growth opportunities. Through these digital capabilities, we’ll enhance our teams’ performance, sharpen their focus, and improve operations and innovation across Conagra.

⇒ To date, centering our AI strategy on our employees has had an incredibly positive impact across the board. By equipping teams with tools and learning opportunities, we’re fostering a supportive environment for exploring new ideas and solutions.

⇒ The journey of AI at Conagra is just beginning, and its potential to revolutionize our business processes and strategies is immense. ■

RESPONSIBLE AND EQUITABLE AI IS COMMUNITY-DRIVEN AI

Concerns about who benefits from the advancement of AI and who bears its heaviest burdens loom large in the minds of those who study the societal implications of AI and those who seek to develop and deploy that technology responsibly. These concerns are rooted in more than technological cynicism — they are grounded in extensive evidence of AI systems that reinforce bias and the fact that AI innovation is rarely directed to benefit marginalized communities.

⇒ The philosophy that AI should benefit our most vulnerable communities and, therefore, should be developed in partnership with those communities is the guiding principle of the USC Center for AI in Society (CAIS), a collaboration between USC’s Suzanne Dworak-Peck School of Social Work and Viterbi School of Engineering. CAIS’s Coordinated Entry System Triage Tool Research and Refinement (CESTTRR) project is an exemplar of their philosophy in practice.

⇒ CESTTRR was a three-year project that investigated the triage tools used in Los Angeles to assess vulnerability among people experiencing homelessness and to assist in resource allocation decisions. Users of these tools suspected that they were not capturing the full vulnerability of certain types of people experiencing homelessness, namely people of color. The goal of CESTTRR, then, was to create an updated assessment tool that could predict risk more accurately, more equitably, and more efficiently than the current triage system. That the project was a success had everything to do with the study team’s commitment to a community-engaged research process characterized by the following three principles:

⇒ Principle #1: Establish Stakeholder Partnerships. CESTTRR partnered with three types of stakeholders. Its Community Advisory Board (CAB) included persons with marginalized identities and lived experience, as well as direct service providers who worked closely with this population. Its Core Planning Group was composed of key system-level stakeholders such as the Los Angeles Homeless Services Authority (LAHSA) and the Los Angeles County Department of Mental Health. The study team also presented work in progress at public meetings and forums across Los Angeles.

Lindsay Young, PhD is an assistant professor of health communication at USC Annenberg, where she researches health inequities in minoritized populations using “Quantitative Criticalism” and social network analysis. With a commitment to community empowerment, she designs health interventions that leverage local networks and assets and holds an NIH Career Development Pathway to Independence Award.

AI SHOULD BENEFIT OUR MOST VULNERABLE COMMUNITIES AND, THEREFORE, SHOULD BE DEVELOPED IN PARTNERSHIP WITH THOSE COMMUNITIES.

Principle #2: Engage in Community-Informed Data Science. Community stakeholders like those above need to be meaningfully engaged in the research as it unfolds. In CESTTRR, community stakeholders informed refinements to the tool in many ways, including: 1) designating the adverse outcomes that the revised vulnerability assessment tool was being designed to predict, such that outcomes were chosen with racial/ethnic equity in mind; 2) incorporating equity adjustments into the algorithm to ensure that errors (e.g., false negatives) didn’t discriminate against racial/ethnic groups; and 3) reintegrating questions into the assessment tool that were not originally selected by the algorithm as being most predictive, but were nonetheless critical for making allocation decisions.
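To make the equity adjustment in point 2 concrete, here is a minimal, hypothetical sketch of the kind of check it implies: compare false-negative rates across groups and flag the tool when the gap grows too large. This is not the CESTTRR algorithm; the groups, data, and tolerance below are illustrative assumptions only.

```python
# Hypothetical sketch of an equity check on false negatives: does the tool
# miss truly high-risk people in one group more often than in another?
# The groups, data, and tolerance are illustrative, not the CESTTRR model.

def false_negative_rate(y_true, y_pred):
    """Share of truly high-risk cases (1s) the tool failed to flag."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return misses / positives if positives else 0.0

def equity_gap_exceeded(records, tolerance=0.05):
    """Return per-group false-negative rates and whether their spread exceeds tolerance."""
    by_group = {}
    for group, actual, predicted in records:
        truth, preds = by_group.setdefault(group, ([], []))
        truth.append(actual)
        preds.append(predicted)
    rates = {g: false_negative_rate(t, p) for g, (t, p) in by_group.items()}
    return rates, (max(rates.values()) - min(rates.values())) > tolerance

# Toy records: (group, actually_high_risk, flagged_by_tool)
sample = [("A", 1, 1), ("A", 1, 0), ("B", 1, 1), ("B", 1, 1), ("B", 0, 0)]
print(equity_gap_exceeded(sample))  # group A missed 50% vs. group B 0% -> flagged
```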

⇒ Principle #3: Prioritize Community-Informed Implementation. An accurate and equitable assessment tool is of little use if end-users face considerable challenges administering it. Thus, community stakeholders played an integral role in devising best practices that overcame perceived barriers to implementation. For example, the CAB revised the wording of most questions on the assessment to make them easier to understand and to reduce stigma and induced trauma. They also recommended using trauma-informed practices during data collection, such as not administering the tool at a first meeting, asking clients if they would like someone else to administer it, and asking clients if they needed breaks after difficult questions.

⇒ As an associate director of CAIS, I am proud to be a part of a research collective that takes seriously the idea that “with great power comes great responsibility.” Our power stems from the knowledge, skills, and resources we possess to innovate with AI. And our responsibility? As the three principles that I described demonstrate, we take it as our social responsibility to direct our power toward problems that impact vulnerable communities and to platform community voices in that process. In our view, this is the only way to ensure that our most underserved and minoritized communities become beneficiaries of AI innovation and not its victims. ■

HOW AI CAN ALLEVIATE ECO-ANXIETY

Eco-anxiety, a chronic fear of environmental catastrophe, affects nearly 68% of American adults, according to the American Psychological Association. This growing phenomenon represents a response to the escalating severity of climate-related events, such as wildfires, hurricanes, and heatwaves, which disrupt daily life and intensify psychological distress. A USC Annenberg study indicates that while 62% of Generation Z prioritizes mental health, only 45% actively seek therapy. This disparity between valuing mental health and acting on it suggests a need for new solutions that address the barriers to accessing support.

⇒ AI offers a novel approach to addressing eco-anxiety by creating tools that anticipate psychological needs and deliver personalized mental health interventions. These AI-driven tools can help reshape the way individuals manage their emotions by offering resources that adapt to changing environmental conditions. For example, AI platforms can draw on real-time environmental data, such as air quality updates, wildfire alerts, and heat advisories, to provide guidance tailored to specific situations. This approach transforms mental health interventions from general advice to context-specific strategies, making the support more relevant and actionable.

⇒ One application of AI in this area involves integrating localized environmental data into mental health apps. Platforms like Climatrack, for instance, use air quality data to guide users through mindfulness exercises when outdoor air is unsafe, and algorithms can advise on steps to improve indoor air quality. During heatwaves, AI systems can deliver hydration reminders and suggest methods for maintaining a cool indoor environment. These tailored interventions give users a sense of agency, empowering them to take practical steps that reduce stress.
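The pattern described here, pairing a live environmental reading with a context-specific prompt, can be pictured with a brief sketch. The threshold, messages, and hard-coded reading below are assumptions for illustration; they are not Climatrack’s actual logic or data.

```python
# Illustrative sketch of the pattern described above: pair a real-time
# air-quality reading with a context-specific coping prompt. The threshold,
# messages, and hard-coded reading are hypothetical, not Climatrack's logic.

UNHEALTHY_AQI = 100  # assumed cutoff for "outdoor air is unsafe"

def suggest_intervention(aqi: int) -> str:
    """Return a context-specific prompt based on the current air quality index."""
    if aqi > UNHEALTHY_AQI:
        return ("Air quality is poor right now. Try a five-minute indoor "
                "breathing exercise and keep windows closed to protect indoor air.")
    return "Air quality looks fine. A short walk outside may help reduce stress."

# A real app would pull the AQI from a live feed; here it is hard-coded.
print(suggest_intervention(aqi=142))
```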

⇒ Existing mental health tools already incorporate AI-driven techniques, including conversational algorithms that engage users in dialogue about their emotions, and predictive systems that monitor factors like sleep and stress levels. When combined with real-time environmental data, these tools can move beyond basic wellness advice, providing responses that are timely and situation specific.

⇒ The AI-based mental health platform Wysa, for example, uses a conversational interface to offer emotional support, while integrating data from wearable devices that track physical health indicators, such as heart rate variability. The inclusion of localized environmental insights could enhance these tools’ relevance, allowing them to deliver recommendations that address both mental and environmental factors.

Karl Burkart is the co-founder and deputy director of One Earth and formerly the director of media, science & technology at the Leonardo DiCaprio Foundation.

AI ADDRESSES ECO-ANXIETY BY CREATING TOOLS THAT ANTICIPATE PSYCHOLOGICAL NEEDS AND DELIVER PERSONALIZED MENTAL HEALTH INTERVENTIONS.

⇒ Privacy considerations are essential in the development of AI-based mental health tools. Techniques such as on-device processing enable AI algorithms to analyze data directly on the user’s device, keeping sensitive information secure without transmitting it to external servers. Apple’s Health app, for instance, employs on-device processing to analyze health data while maintaining user privacy. Implementing similar measures in AI-driven mental health applications ensures that users can engage with the technology confidently, knowing their personal experiences remain private.

⇒ AI presents opportunities to create immersive therapeutic experiences that help users reconnect with nature, even when access to the outdoors is limited. Virtual reality (VR) tools can simulate natural environments, such as forests, lakesides, or mountain paths, providing sensory experiences like the sound of rustling leaves or flowing water. The VR mindfulness app Tripp has explored this concept by offering meditative environments designed to reduce anxiety and stress. Incorporating AI into such experiences can further personalize the environment by adjusting sensory details in real-time, based on user preferences or local weather conditions. These digital nature simulations offer a form of escape that soothes the mind and provides a respite from environmental stressors.

⇒ AI can also help in countering the barrage of misinformation and apocalyptic clickbait that now populates both social media and mainstream news sources.

Catastrophic headlines, which often drive young people to feel that there is no hope when it comes to the climate crisis, frequently misinterpret more nuanced scientific findings from academia. Our organization, One Earth, for example, employs an AI tool that lets people ask questions of a vetted database of over 600 scientific papers and receive accurate answers about climate change impacts. Such tools can go a long way in helping young people gain the knowledge and critical thinking skills necessary to navigate a time of increasing complexity.
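A question-answering tool built on a vetted corpus generally follows a retrieve-then-answer pattern: rank the vetted documents against the question and respond from the best matches. The sketch below illustrates only that pattern with a toy corpus and deliberately simple word-overlap scoring; it is not One Earth’s implementation.

```python
# Minimal sketch of answering questions from a vetted corpus. The papers and
# the word-overlap scoring are deliberately simplified illustrations; this is
# not One Earth's implementation.

VETTED_PAPERS = {
    "paper_1": "Warming of 1.5C increases the frequency of extreme heat events.",
    "paper_2": "Wildfire seasons are lengthening in the western United States.",
}

def retrieve(question: str, corpus: dict, top_k: int = 1):
    """Rank vetted papers by simple word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

question = "Are extreme heat events becoming more frequent?"
for paper_id, passage in retrieve(question, VETTED_PAPERS):
    # In a fuller system, a language model would compose an answer from the
    # retrieved passages; here we simply print the best-matching source.
    print(f"{paper_id}: {passage}")
```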

⇒ Building resilience against eco-anxiety extends beyond individual coping strategies to include connecting people with local support networks. AI can facilitate access to resources such as workshops, peer groups, and climate preparedness programs by recommending relevant events based on user interests and geographical location. Platforms like Meetup, which already suggest local events, demonstrate how AI can be used to connect users with community-based support. For those experiencing eco-anxiety, engaging in climate action initiatives or preparedness programs can provide a sense of purpose and shared effort, alleviating feelings of helplessness.

⇒ The evolving use of AI in addressing mental health needs represents a significant shift from traditional approaches, focusing on proactive engagement rather than reactive responses. The application of real-time environmental data, personalized interventions, and immersive experiences shows AI’s potential to support mental health in ways that align with the complexities of modern environmental challenges. As these technologies continue to advance, they promise not just to alleviate eco-anxiety, but also to channel it into meaningful action, equipping individuals and communities to navigate the uncertainty of a changing climate.

⇒ By anchoring AI interventions in scientific research and practical applications, the potential for transformative change becomes evident. Whether through mindfulness apps that respond to air quality alerts or VR experiences that offer digital escapes, AI offers tools that are timely, relevant, and adaptable. In doing so, it helps turn anxiety into a catalyst for resilience and preparedness, guiding a path forward in an era defined by environmental upheaval. ■

Michael Kittilson is a co-author of this essay.

ADJUSTING COURSE

What are the main barriers you face in keeping up with AI developments? (Check all that apply)

What are the main reservations or concerns you have about the adoption of new AI-driven tools in the PR discipline?

Response options shown in the accompanying survey charts included: information security; factual errors and misinformation; data privacy; false information/disinformation; limited industry knowledge on how to use the tools; unknown/potential legal ramifications; algorithmic bias; financial burden to companies; cultural reluctance; lack of need; algorithmic transparency; and tools.

WE HOPE THIS IS IRRELEVANT

The moment we surrendered our decisions to machines, something fundamental slipped from our grasp. What we once called human judgment — logic paired with intuition, emotion, ethics — began to fade through quiet acquiescence. In rooms where leaders once debated the fate of nations, AI now whispers its recommendations, faster than human minds can process, more calculated than any collective wisdom we have ever known. And as those recommendations get accepted, we begin to lose something no one thought to protect.

⇒ No warning signs blinked when this shift occurred. It felt benign — a helpmate, an intellectual assistant. Machines read data faster, parsed patterns hidden from our perception, revealing truths we hadn’t yet found. But with each decision made by an algorithm, the space for human discretion narrowed. We leaned on the machine, and with each step forward, human choice retreated into the background.

⇒ Researcher Geoffrey Hinton spent a career building the foundations of this future. He just won the Nobel Prize in Physics for his contributions to the field of artificial neural networks, marvels of human ingenuity that now possess the power to learn, adapt, and evolve beyond anything we could have predicted. Hinton walked this path believing in their promise, yet that triumph now carries a burden. Machines that outthink us will not simply assist — they will take over. Their ability to analyze, calculate, and predict far outstrips our own. And in that precision, human judgment, full of imperfections and contradictions, will lose its place.

⇒ Hinton stepped away from his life’s work not out of fear but out of recognition — a recognition that mirrors those who, before him, watched their creations spiral beyond control. The same recognition Oppenheimer felt when the first nuclear blast lit the sky — he understood the irreversibility of what had been set in motion.

⇒ Oppenheimer said, “When you see something technically sweet, you go ahead and do it.” Hinton echoes that sentiment, looking back at his own work with neural networks. He now stands at the edge of his creation, watching it race ahead without him, unstoppable.

Burghardt Tenderich, PhD is a professor of practice at USC Annenberg where he teaches and researches strategic communication, the emerging media environment, brand purpose and Web3 technologies for communicators. Tenderich is associate director of the USC Center for Public Relations.

EACH STEP AWAY FROM HUMAN JUDGMENT AND TOWARD MACHINE-DRIVEN SOLUTIONS HAPPENS WITHOUT RESISTANCE. IT FEELS LOGICAL, EVEN COMFORTING. BUT WITH EACH QUIET SURRENDER, MACHINES DICTATE THE TERMS OF THE FUTURE.

⇒ We observed this in our own work, where an AI experiment mirrored that quiet shift. We invited it into a collective decision-making process, simple on the surface—a social media strategy. The machine didn’t immediately seize control; it offered guidance. Participants welcomed its insights, eager for a clearer path. Each AI-driven suggestion deepened the machine’s influence. By the time decisions were compared, the difference became undeniable. Human judgment, once led by debate and deliberation, had stepped aside for cold, calculated efficiency. The flow of events reflected the creeping change Hinton warned about.

⇒ “I think we’re at a kind of bifurcation point in history where, in the next few years, we need to figure out if there’s a way to deal with that threat,” Hinton said in an interview when he was awarded the Nobel Prize.

⇒ This erosion of human agency will happen in boardrooms, war rooms, and councils where decisions of profound consequence unfold. Each step away from human judgment and toward machine-driven solutions happens without resistance. It feels logical, even comforting. But with each quiet surrender, machines dictate the terms of the future. They process data without empathy, execute strategies without moral hesitation, and with each step, human governance fades.

⇒ Hinton’s unease stems from this concern — not that machines will suddenly overthrow us, but that we are gradually handing them the reins without noticing. Decisions made through AI no longer reflect our tangled, emotional imperfections. They reflect only the machine’s logic — precise, detached, relentless. The danger doesn’t stem from technological rebellion. It stems from us trusting in its power more than our own. Once the machine knows more, sees more, and calculates faster, why would we question its conclusions?

Michael Kittilson is a second-year graduate student at USC Annenberg.

⇒ This slow reprogramming of decision-making frameworks happens not with a bang, but with the soft hum of algorithms doing their work. And with each quiet agreement, with each moment we trust machine recommendations over human debate, we drift further from the very thing that made us human. We stop asking questions. We stop doubting. We stop reflecting on the mercies that make us human. The machine gives us certainty, and in return, we hand it control.

⇒ Hinton sees this future clearly. His warning isn’t about machines attacking us — it’s about machines outthinking us. It’s about realizing too late that we surrendered not because we had to, but because we chose to. We trusted intelligence, divorced from human values, to shape the world in ways we could no longer comprehend.

⇒ Perhaps one day, this warning will feel irrelevant. Perhaps future generations will laugh at the thought of losing something as abstract as judgment. But we stand at the edge of that loss now, watching it unfold quietly. By the time we feel its full weight, human judgment may no longer exist in the spaces where it once thrived.

⇒ We hope this becomes irrelevant. We hope the future will still need human reflection, human empathy, human uncertainty. But if the path ahead continues unchecked, if AI’s quiet takeover continues without pause, we may look back and wonder when exactly we let go.

⇒ We hope this becomes irrelevant. ■

MISINFORMATION WITH THE SPEED OF A THUMB SWIPE

Walking to class, phone in hand, the image hits. The vice president grips a beer bong surrounded by 20-something girls. A frat party rages in the background. Real or fake? You freeze, but your thumb doesn’t. The image spreads.

⇒ That image sits at the heart of tomorrow’s politics — a complete shift upending campaigns, not by dismantling truth, but by making truth irrelevant. This system no longer holds campaigns, operatives, or even voters as its primary players. And we’re already seeing it in the current presidential election. The world can be upended with a single AI-generated image, one that seeps into every text thread, every feed, and rewires trust faster than reason can react. No longer do narratives unfold slowly. They explode, instantly shifting perceptions with the tap of a screen.

⇒ Some of it is harmless. The AI-generated cats surrounding Trump in response to Taylor Swift’s endorsement of Kamala Harris are actually kind of funny. Harris, clad in red, addressing a crowd of AI-created comrades in front of the symbol for communism, not so much. (Argentina’s Javier Milei did something similar to his rivals last year before winning that country’s presidency.)

⇒ Some of the political uses for this technology are fairly mundane, such as the Republican National Committee’s first-ever entirely AI-generated ad from early 2023 depicting society crumbling following a hypothetical Joe Biden reelection.

⇒ Take heart that it’s not all bad. As Reuters reported, Ashley, the first artificial intelligence campaign volunteer, called thousands of Pennsylvania voters in 2023.

⇒ For the campaign operative, AI presents a new challenge that might just be met with a talented new team of Gen Z staffers who already live in this world and can fight fire with fire, or at least can attempt to match its pace.

⇒ For the voter, though, it shifts responsibility. No longer can someone rely on the media to sift through endless digital content and label what’s a truth, half-truth or lie. The volume and velocity made possible by AI, especially in a critical campaign cycle about to determine the direction of the country, are simply too much.

⇒ So consider this thought experiment: How do voters make decisions when they can’t trust their senses?

⇒ The nerd in me who writes academic book chapters in my spare time likes the idea that people return to the basics. When I was running politics for the Los Angeles Times, I used to be heartened by traffic figures showing that people had come to the website by searching terms on Google such as “Hillary Clinton education policy details” or “Donald Trump tax proposal analysis.” People do actually want to be informed with facts.

⇒ But we know that’s boring. Also, AI is getting better and better. The test I apply to something I see on a social media platform goes a bit like this: Is what I’m viewing so surprising/exciting/offensive that my first instinct is to show it to someone else? If so, I had better take a beat and search on Google.

⇒ In fact, as Ellie Barnes of the News Literacy Project puts it, “[e]ducators play a vital role in training students to use this new technology thoughtfully, recognize inaccurate and biased information, and make a habit of double-checking before believing.” It’s especially needed as half of American children have a smartphone by age 11, Barnes notes.

⇒ The nonpartisan nonprofit has its own fact-checking effort, labeled “Rumor Guard,” which includes deep dives into viral AI-generated images, many of which are tied to Hurricane Helene. A post from October explains that images showing an official Oregon voter information guide without Trump’s name might be genuine, but viral social media posts inaccurately suggest the missing name is evidence of election tampering. (Team Trump opted not to participate in the guide, they write.)

THAT’S THE GAME NOW: SPREAD THE LIE, AND LET TRUTH CHASE AFTER IT, GASPING.

⇒ According to Lou Jacobson, the chief correspondent for Politifact, another nonpartisan outlet that uses journalists to fact-check political claims, most of what they see are cheap fakes versus deep fakes.

⇒ “AI just isn’t good enough yet,” Jacobson said in an October interview two weeks before the election. “It is on my list of longer horizon challenges but it’s not one for this year.”

Christina Bellantoni is a professor of professional practice, the director of USC Annenberg’s Media Center and the Annenberg Center on Communication Leadership and Policy faculty fellow. She also is a contributing editor with the independent nonprofit newsroom The 19th News, which focuses on gender, politics and policy. With over two decades of journalistic experience, from the LA Times to Roll Call, she has shaped critical political discourse and championed investigative journalism.

⇒ Politifact keeps up with misinformation in a variety of ways, but over the last 15 years it has dramatically expanded its team, thanks in large part to a relationship with Meta, which uses Politifact to screen information that comes through on Facebook, Instagram and Threads. Jacobson describes that effort as making a difference by serving as an important “speed bump” for the spread of phony images or misleading videos.

⇒ Is slowing it down enough? He thinks so — for now.

⇒ Truth-seekers trying to guard against something going viral can take some advice from the News Literacy Project, which notes that misinformation on social media frequently is labeled with the term “breaking.” Here’s the tip: "Investigate an account’s profile to see if it is connected to a credible news outlet or has a history of publishing accurate information.” You can also check their database if something seems too outlandish to be true.

⇒ Back to the beer bong. The absurdity of that vice president’s picture makes you laugh, so you send it to a friend. Your mom posts it on Facebook, your uncle screams corruption, and your cousin, ever the skeptic, fact-checks it with a Snopes link. But none of that matters. The damage took root the second the image went viral. It runs deep. That’s the game now: spread the lie, and let truth chase after it, gasping.

⇒ In this new political battleground, AI creates micro-worlds for voters, tailoring content so expertly that no one questions it. A Democratic voter in Ohio scrolls through a totally different reality than a registered independent voter in Arizona. Each version, believable enough to sink in. The result? A fractured electorate, splintered by the very tool meant to inform them.

⇒ A synthetic Joe Biden voice advising New Hampshire voters to forgo their primary might have marked the disappearance of any shared reality — until it was discovered. Truth won, didn’t it? Dig a little deeper and you learn that a political operative created it as an act of “civil disobedience to call attention to the dangers of AI in politics,” NBC News reported.

⇒ Where does any of this lead us? And what does truth look like when we force it through curated algorithms? The answers are ever-developing, and, like so much with technology, might be outdated the moment they are printed on this page.

⇒ For now, we’ll keep it as simple as possible: Think before you share. ■

Michael Kittilson is a co-author of this essay.

AI’S ENVIRONMENTAL COSTS

As we stand on the edge of an AI revolution, we risk plunging headlong into a scorched digital landscape that could dwarf the very problems this technology aims to solve. Like many technological advances, AI’s ecological impact is frequently overlooked, potentially jeopardizing the well-being of future generations.

⇒ The rapid expansion of AI is dramatically increasing the energy demand at the data centers housing AI computing infrastructure. As global energy grids strain to keep up, predictions suggest that by 2034, global data center energy consumption could reach 1,580 terawatt-hours — equivalent to India’s expected electricity usage that year. The environmental and ethical implications of this technological advancement cannot be ignored. Balancing progress with ecological and moral responsibility is crucial for ensuring a sustainable future.

The Environmental and Health Toll

Most AI users are unaware of its environmental impact. Processing a single ChatGPT query consumes ten times as much electricity as a Google search, while image-generating tasks are even more energy-intensive. Data centers currently consume 1-2% of global power, with predictions suggesting this demand will grow by 160% by 2030. As AI continues to expand its knowledge and cognitive abilities, its energy demands will only increase, potentially leading to far-reaching environmental consequences.
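To give a rough sense of what that tenfold ratio means at scale, here is a back-of-envelope sketch. The absolute per-query figures and the daily query volume are assumptions chosen only to illustrate the ratio cited above, not measured values.

```python
# Back-of-envelope illustration of the tenfold ratio cited above. The absolute
# per-query figures and the daily query volume are assumptions used only to
# show the scale implied by that ratio, not measured values.

SEARCH_WH = 0.3                   # assumed watt-hours per conventional search
AI_QUERY_WH = 3.0                 # assumed watt-hours per generative-AI query (~10x)
QUERIES_PER_DAY = 1_000_000_000   # hypothetical daily query volume

daily_kwh_search = SEARCH_WH * QUERIES_PER_DAY / 1000
daily_kwh_ai = AI_QUERY_WH * QUERIES_PER_DAY / 1000

print(f"Conventional search: {daily_kwh_search:,.0f} kWh per day")
print(f"Generative AI:       {daily_kwh_ai:,.0f} kWh per day "
      f"({daily_kwh_ai / daily_kwh_search:.0f}x)")
```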

⇒ Data centers use enormous amounts of water for both on-site cooling and off-site electricity generation. Nearly all server energy is converted into heat, which must be removed to prevent overheating. Researchers at UC Riverside estimate that global AI demand will account for 4.2-6.6 billion cubic meters of water withdrawal in 2027, surpassing the total annual water withdrawal of half of the United Kingdom. With an estimated 2 billion people currently living without running water worldwide — can we morally justify continuing this amount of water consumption?

⇒ Electricity generation, particularly through fossil fuel combustion, results in local air pollution, thermal pollution in water bodies, and the production of solid wastes, including hazardous materials. Regions heavily reliant on fossil fuels could perpetuate historical environmental inequities related to extreme heat, pollution, air quality, and access to potable water. A recent report by the Brookings Institution discusses how AI’s climate impacts are heaviest near poor communities due to their reliance on fossil fuels.

Melanie Cherry is the associate director of the public relations and advertising (BA) program at USC Annenberg. She is a communications strategist with more than 20 years of agency, corporate, and nonprofit experience spanning the entertainment, fashion, sports, hospitality, and finance industries.

⇒ Beyond environmental deterioration, the secondary health effects must also be considered. Historically, health issues associated with environmental problems have been easily dismissed by corporations to avoid blame, but their lasting impact is undeniable. The Intergovernmental Panel on Climate Change recently reported that the top two environmental issues in the U.S. are air and water pollution, to which data centers are becoming key contributors.

Finding Solutions

PROCESSING A SINGLE CHATGPT QUERY CONSUMES TEN TIMES AS MUCH ELECTRICITY AS A GOOGLE SEARCH, WHILE IMAGE-GENERATING TASKS ARE EVEN MORE ENERGY-INTENSIVE.

Microsoft’s recent announcement of a 20-year deal with Constellation Energy signifies a strong corporate commitment to developing cleaner energy sources for data centers. Central to this partnership is the revival of Pennsylvania’s infamous Three Mile Island Nuclear Plant. While nuclear energy does not emit carbon dioxide, it still requires significant water usage for operation. As AI usage is expected to increase with technological improvements and growing human understanding, there’s an urgent need to invest in ways to reduce its carbon and water footprint while keeping our moral responsibilities at the forefront of this discussion.

⇒ Communication plays a key role in getting this message across to tech companies, policymakers, and users. Traditional tactics and messages will get lost in a sea of CSR messaging. Social media can help amplify campaigns designed to shock audiences into action.

⇒ Our path forward demands a delicate balance between embracing AI’s transformative power and safeguarding our planet for generations to come. ■

India Starr contributed to the research of this essay.

AI HAS BEGUN WITH FEAR

WHEN BOOS RANG OUT ABOUT AI AT SXSW (THE MOST TECH-OPTIMIST GATHERING ON THE PLANET), IT SHOWED US THAT THE BIRTH OF AI IS THE OPPOSITE OF THE BIRTH OF THE INTERNET 25 YEARS AGO, WHICH HELPS US TO SEE WHERE PUBLIC SENTIMENT ABOUT AI IS HEADED.

Last Spring at Austin’s South by Southwest (SXSW), several tech leaders extolled the virtues of AI and made positive predictions about how it would improve the world. SXSW is one of the leading tech conferences in the world, filled with fans, executives, and investors looking for the next new thing.

⇒ There is no audience anywhere that is more supportive of new tech. SXSW attracts enthusiasts who want to see, use, and invest in the newest technologies. Back in 2007, Twitter debuted in front of a SXSW audience who erupted in cheers at the new form of social media.

⇒ At this year’s SXSW, one tech panelist urged people “to be an AI leader.” And OpenAI’s Peter Deng shared his (self-serving) view, “I actually fundamentally believe that AI makes us more human.”

⇒ Rather than erupting in cheers and wanting to learn more, the audience started booing at all the positive talk of AI.

⇒ That’s the reaction we might expect when somebody starts boasting about AI at an organized labor convention, or at a Writers Guild meeting where people fear losing their jobs. It is extraordinary that it happened at a fanboy gathering like SXSW.

Jeffrey Cole is the founder and director of The Center for The Digital Future at USC Annenberg. In July 2004 Cole joined USC Annenberg as director of the newly formed Center and as a research professor. Drawing from his experiences initiating and overseeing the World Internet Project, a long-term study that analyzes the effects of computer and Internet technology in over 35 countries, Cole provides guidance to governments and top companies globally as they formulate their digital strategies.

⇒ It is now becoming abundantly clear that AI is different. To a world that knew little or nothing about AI until last year (beyond fictional versions like HAL or Skynet), this is not just another great new technology that will enhance our lives (with a little collateral damage to other people who work on encyclopedias and phone books as happened with the internet).

⇒ AI is a game changer, but most of us don’t think it’s a good one — at least not at this early point.

How AI today is different

So far, AI’s path has been the opposite of the internet’s a quarter century ago, when everyone seemed filled with hope at how digital technology could improve our lives. With AI, practically all the public’s attention has focused on fears. And those fears are huge. Few hear or care about the potential positive impact of AI. Instead, we focus on the negative ways it will change our lives, potentially even destroying life itself.

⇒ Ask people how they feel about AI, and the first thing they do is express fear that their jobs will not survive it. Already, people worry that drivers’ jobs will disappear and that cashiers will be a thing of the past. Many Americans look at their own jobs, concerned that much or all of what they do can be automated. Few fears cut closer to what’s important to people than this. It was one of the major reasons for the writers’ and actors’ strikes last year.

⇒ Teachers not only worry their positions may evaporate, but also that their students will never develop fundamental skills, nor be able to compete in the marketplace. What happens when these kids grow up and comprise most of the work force?

⇒ Although it seems far-fetched, we frequently hear about the worst fear of all: Artificial Intelligence will become sentient and decide that it no longer needs human beings, thus ending humankind. This is a cliché in science fiction, but in our reality, well-known tech leaders like Elon Musk say AI is a greater threat to humankind than nuclear war. Despite Musk’s recent years of erratic behavior, his credentials as a visionary who sees things that others don’t are unassailable.

⇒ As AI develops, will we decide that our fears were unfounded? Will we decide that different forms of AI are extraordinary tools that make almost everything better?

⇒ It’s more complicated than that. Though the positive hopes for the internet turned out to be true, it also turned out to have more problems than anyone thought. With AI, we just don’t know what will happen. And that’s the scary part. ■

AI CREATES GREATER RISK TO A POLARIZED POPULATION

Society was transformed by two so-called revolutions over the last two centuries — the Industrial Revolution and the Information Revolution.

⇒ Neither revolution came without costs. The early days of the Industrial Revolution, for all its benefits, also produced enormous downsides, including urban tenements, dangerous working conditions, child labor and environmental degradation. It wasn’t until the 20th century that society began addressing these externalities at scale, most notably on the environment.

⇒ The Information Revolution began with mainframe computing in the mid-20th century, which was the precursor to the rise of the Internet at the turn of the century. The Internet ushered in a staggering array of benefits, to the point where it’s hard to imagine life without it. But it also created enormous vulnerabilities from hyperconnectivity, including systemic failures like the CrowdStrike outage in July and the rise of social media as a platform for misinformation and polarization. We are only now beginning to manage the negative fallout from social media, as seen in Meta’s new guardrails around adolescents’ use of Instagram.

⇒ And so it is with the rise of generative AI, the latest and most profound iteration of the Information Revolution. For all the benefits it promises, in the early days we’re likely to see as much harm as good from this new technology, particularly in the social realm. To the extent that social media has contributed to polarization and the collapse of trust in society, which has been well-documented, AI has the potential to increase that trend by an order of magnitude.

⇒ “As trust diminishes, people retreat into smaller circles of familiarity or isolate themselves entirely, leading to a cycle of further erosion of social bonds and shared purpose,” wrote Kye Gomez on Medium.

⇒ “The consequences of this shift are profound and far-reaching, manifesting in political polarization, economic uncertainty, social fragmentation, and a pervasive sense of discontent that seems to defy our material progress.”

Russ Yarrow is the founding partner of Centerpoint Advisors, a strategic business consultancy focused on building resilient, adaptable and forward-looking organizations through the power of strategic positioning and messaging. He has led communications at Bank of America, Visa, and Chevron, and is on the steering committee of The Dialogue Project at Duke University.

Bob Feldman is founder of FHS Capital Partners, a strategy and investment firm focused on investing, The Dialogue Project and social impact. As founder of The Dialogue Project, based in Duke University’s Fuqua School of Business, he explores and promotes the role of business in reducing polarization and improving civil discourse. Bob is a member of the USC Center for PR board of advisers.

⇒ Social media is fueled by algorithms. They feed us what we want to see and hear based on our history, creating a funnel effect that increasingly constricts our world view. Platforms like X and Instagram, of course, can be curated to include a diversity of content, but this takes effort and doesn’t create the dopamine effect that the algorithm does.

⇒ Generative AI poses a two-fold challenge. It can supercharge the algorithm and greatly amplify the funnel effect. And it brings with it the ability to create fake content that increases engagement and perceived credibility. The rapid spread of the pets-as-dinner rumor that arose in Springfield, Ohio in September was no doubt fueled by the proliferation of AI-generated memes featuring Donald Trump as the leader of an army of cats.

⇒ Here’s another example: a new social platform, SocialAI, presents itself as a variation of X, but it’s entirely populated by chatbots. The user chooses what types of bots they want to interact with, like “fans,” “supporters,” “haters,” “doomers,” and so forth. The implications for building deep, personal, and polarizing echo chambers are obvious.

⇒ On the other hand, a new application called DepolarizingGPT aims to steer users to the radical center of fractious debates. Type in a subject or question and the app serves up a left-wing response, a right-wing response, and a “depolarizing” response.

⇒ But in these early days of generative AI, while society grapples with the new technology, the hazards may be just as great as the benefits, as they were during the rise of previous paradigm-shifting technologies. According to a Pew survey of 300-plus experts (tech innovators, business and policy leaders and academics), 37 percent said they were more concerned than excited about the potential of AI, with only 18 percent saying they were more excited than concerned; 42 percent said they were equally excited and concerned.

⇒ At The Dialogue Project at Duke University, where we are focused on enabling businesses to help bridge society’s polarization, this is a vital and emerging issue. Over time, we believe the world will create ethical and operating guidelines for generative AI to deliver benefits responsibly and equitably, but these early days of rapid innovation and minimal guardrails compel us all to be aware of the potential hazards as well, and do what we can within our respective realms to counteract them. ■

IS AI COLOR-BLIND?

“Power shadows” are systemic biases in AI arising from the lack of diversity in the data used to train the systems. The term was coined by MIT graduate researcher Joy Buolamwini in a national contest inspired by the film “Hidden Figures.” According to her research, data collection methods result in skewed AI algorithms and inequitable outcomes for people of color — particularly Black people.

⇒ As AI becomes further intertwined in business, including healthcare management, mortgage loan evaluations and identity verification, we must address AI system biases to ensure more equitable treatment for people regardless of race, culture or ethnicity. AI systems are only as good as the data we put into them.

Healthcare

AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triaging to drug pricing. Adriana Krasniansky, PhD, a graduate student at Harvard Divinity School, examined how AI systems were trained to recommend the length of patient stays in healthcare facilities. She found the training data inadequately represented communities of color, which led to skewed outcomes. AI’s recognition of zip codes and regions exacerbated the issue.

“The underrepresentation of minority communities in training AI data results in bias algorithms that fail to adequately address healthcare issues for people of color, especially for Black populations,” Krasniansky writes.

In 2016, AI researchers in Heidelberg, Germany, developed a model to detect various forms of melanoma using digital imaging. However, 95 percent of the training images used were of white skin (Dutchen, 2019). The lack of diversity in the imaging impacted the accuracy of detecting melanoma in darker skin tones.

Facial Analysis & Recognition

“Facial analysis algorithms often fail to detect darker skin tones due to lack of diversity in training AI data systems,” writes Buolamwini, the founder of the Algorithmic Justice League. Her research notes the unconscious discriminatory consequences that emerge when predominantly white and male demographic data is used to train AI systems.

Julia A. Wilson is the CEO & founder of Wilson Global Communications, LLC, an international public relations consultancy in Washington, D.C., and Dean of the Scripps Howard School of Journalism and Communications at Hampton University in Virginia. A USC Alumna, Wilson was selected by the Television Academy as a 2023 Alex Trebek Legacy Fellow. She is a member of the USC Center for PR board of advisers.

In 2022, Dr. Abeba Birhane, a scientist researching ethical AI at Trinity College in Dublin, found that the omission of diverse skin tones perpetuates racial stereotyping, societal inequalities, and power dynamics that reflect the injustices in our society. According to her research, the inaccuracies inherently lead to racial biases in facial analysis algorithms, illustrating the need for more inclusive data collection methods and AI system training.

Lending Practices

AI also impacts whether a bank approves loan applications. Michael Armstrong, CTO of AI technology company Authenticx, investigated this. He reviewed various AI “language models,” the basis for determining approvals or denials, and found that Black applicants are more likely to be denied loans and, if approved, are charged higher interest rates than white applicants (Armstrong, 2024). While he also found various gradations of racial bias, he notes Black applicants bear the brunt of this discrimination.

Recommendations

To alleviate these and other racial biases in AI systems, or “Power Shadows,” data training must become diversified and crucial steps taken to rectify the defective algorithms:

1) Racial biases within AI algorithms must be acknowledged.

2) Monumental change must be made in the ways data is collected.

3) AI systems must be re-configured to include people with darker skin tones.

4) AI systems must be trained with diverse data to accurately detect and reflect the uniqueness of populations applicable to specific situations.

⇒ While we remain enthusiastic about the emergence and increased usage of life-altering AI technology tools, credible research has shown us that AI is not colorblind. Immediate attention must be given to correcting and accurately reflecting our diverse and kaleidoscopic communities. AI systems must avoid repeating historic racial biases in critical sectors of our society that negatively impact lives.

⇒ America is browner and more culturally diverse, and its people are seeking the same American dream. A willingness by technology companies to act and correct AI systems is paramount. There’s still time to get it right. More diverse participants must be included in data collection. Black researchers and entrepreneurs must become involved in the research and training of AI systems. We must work together to ensure these AI tools work for all of us. ■

Krystian Fernandez contributed to the research of this essay.

REPUTATION AND CRISIS MANAGEMENT IN THE AGE OF AI

Reputation management has always been a complex field, but the sheer volume of data now being generated on digital platforms has made it even more challenging. Whether it’s a social media post, a news article, or a user-generated video, content moves rapidly and can trigger a chain reaction that impacts public perception. For brands, the challenge lies in tracking what is being said and distinguishing genuine sentiment from noise.

⇒ Artificial intelligence (AI) has emerged as a critical tool in addressing the challenges posed by this new communications landscape. By harnessing the power of data, AI provides significant opportunities to detect, analyze, and respond to reputational issues before they escalate into full-blown crises. At Stripe Reputation, we have built a structure and process that leverages data and, more importantly, AI to make our work more effective.

⇒ AI changes the game by automating much of the data collection and analysis that used to require manual effort. In years past, organizations had to rely on teams of analysts to monitor conversations and provide recommendations based on their observations. This process was slow and reactive, leaving room for issues to spiral out of control before a strategic response could be implemented.

⇒ With AI, we can now collect and analyze data in real-time, tracking the evolution of a reputational issue as it unfolds. Our AI applications are trained to identify the early signals of a crisis, allowing us to act quickly and decisively. Whether detecting a trending topic that could harm a brand or identifying the origin of misinformation, AI helps us move from a reactive to a proactive stance in managing reputations.
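One simple way to picture “early signals” is a volume-spike check: compare the latest day’s mention count against a recent baseline and flag unusual jumps for a human analyst. The sketch below is a generic illustration under assumed thresholds and toy data, not Stripe Reputation’s actual models.

```python
# Generic sketch of flagging an early reputational signal: compare the latest
# day's mention volume against a trailing baseline and alert on an unusual
# spike. Thresholds and data are assumptions, not Stripe Reputation's models.

from statistics import mean, stdev

def spike_detected(daily_mentions: list, z_threshold: float = 3.0) -> bool:
    """Flag the most recent day if it sits far above the trailing baseline."""
    baseline, latest = daily_mentions[:-1], daily_mentions[-1]
    if len(baseline) < 2:
        return False
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest > mu
    return (latest - mu) / sigma > z_threshold

# Toy example: steady chatter all week, then a sudden jump worth investigating.
print(spike_detected([120, 130, 118, 125, 122, 131, 640]))  # True
```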

Understanding the Misinformation Landscape

One of the biggest challenges in reputation management today is the prevalence of misinformation. False information can spread rapidly online, sometimes deliberately, and can cause significant damage before it is corrected. The ability to distinguish between real and fake conversations is critical in managing reputational risks effectively.

⇒ AI provides us with the tools to map and identify misinformation, tracing it back to its source and determining how much of the conversation is organic versus artificially manipulated. This insight allows organizations to focus their resources on addressing the real issues rather than getting caught up in a flood of fabricated narratives.

Isys Caffe-Horne is the senior vice president of strategy at Stripe Theory, a data-driven digital marketing agency founded in 2015. After climbing the ranks at Edelman, she served in a dual role as the senior account lead for diverse corporate clients, providing strategic public relations and reputation management counsel and managing the associated account teams. At Stripe, she serves as the SVP of Communication Strategy, supporting clients like Target, Pepsi, and Blue Cross Blue Shield Association.

⇒ For example, in our work, we’ve trained AI systems to capture data from multiple platforms, analyze the sentiment and tone of the conversation, and assess the credibility of the sources involved. By doing so, we can determine whether an emerging issue is the result of genuine public concern or the product of coordinated misinformation campaigns. This distinction is essential in crafting the right response, whether that means correcting the misinformation or addressing the underlying issues driving public sentiment.
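A stripped-down illustration of that kind of triage might look like the sketch below: score each post for negative sentiment and weight it by a rough source-credibility factor before aggregating. The word list, weights, and posts are invented for illustration; they are not the production systems described above.

```python
# Hypothetical sketch of the triage described above: score posts for negative
# sentiment and weight them by a rough source-credibility factor before
# aggregating. The word list, weights, and posts are invented for illustration.

NEGATIVE_WORDS = {"scam", "recall", "dangerous", "boycott"}
CREDIBILITY = {"verified_news": 1.0, "anonymous_account": 0.3}  # assumed weights

def post_score(text: str, source_type: str) -> float:
    """Count negative-sentiment hits, weighted by how credible the source is."""
    hits = sum(word in text.lower().split() for word in NEGATIVE_WORDS)
    return hits * CREDIBILITY.get(source_type, 0.5)

posts = [
    ("Calls grow to boycott the brand after the recall", "verified_news"),
    ("total scam and a dangerous product", "anonymous_account"),
]
risk = sum(post_score(text, source) for text, source in posts)
print(f"Weighted risk signal: {risk:.1f}")  # higher means more credible negative chatter
```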

Predicting and Preparing for Crisis

AI’s ability to analyze vast datasets also allows us to predict how reputational issues may evolve. By building scenarios based on real-time data, AI helps us anticipate what could happen next and develop strategies to prepare for multiple outcomes. This is particularly useful in crisis management, where quick, informed decisions can make the difference between successfully navigating a crisis or suffering long-term damage.

Measuring Impact and Public Perception

Another area where AI plays a crucial role is in measuring the impact of a response to a reputational issue. By analyzing public perception, tone, and sentiment, AI allows us to see whether our strategies are making the desired impact. This insight is invaluable in adjusting tactics in real-time and ensuring we are on the right track.

⇒ AI provides a deeper understanding of how the general public feels about a brand, individual, or organization. By capturing and analyzing public sentiment across different channels, we can assess whether our messaging is resonating and adjust our approach accordingly. This ability to measure impact in real time is essential in today’s fast-paced communications environment.

The Critical Role of AI in Reputation and Crisis Management

As the world of communications continues to evolve, AI will play an increasingly important role in reputation management and crisis response. From identifying early signals of a crisis to predicting outcomes and measuring impact, AI offers powerful tools that allow organizations to stay ahead of the curve. By leveraging AI to process and analyze data, we can move beyond reactive strategies and embrace a proactive, data-driven approach to managing reputations in the digital age. ■

THE PRICE OF A PIXEL

My phone buzzes before the first glimmer of daylight. Breaking news never sleeps in the streaming age. Another compelling video crosses our desk: dramatic, newsworthy, a perfect, fresh new element to an evolving story. But is it too perfect? In the rapid-fire world of 24/7 news streaming, every second counts. Yet one wrong pixel could instantly shatter decades of hard-earned trust.

⇒ Through seventeen years at ABC News, from Field Producer to Executive Director of ABC News Live, this truth echoes louder each day: AI technology brings unprecedented power to fabricate reality. My team encounters this challenge daily as we pioneer streaming news programming. Each decision carries the weight of our network’s straightforward reputation.

⇒ The technology moves fast. Last month’s detection tools might miss today’s synthetic content. Leading a streaming news operation means constantly evolving our verification processes. Every image, every video clip, every piece of user-generated content demands deeper scrutiny than ever before. Perhaps we now do live in a world where we can’t believe our lying eyes.

⇒ Professional journalism thrives on speed and accuracy. 24/7 streaming platforms amplify both opportunities and risks. My experience with the initial launch of ABC News Live Prime, our flagship program, taught me this fundamental principle: maintaining trust requires more than good intentions. It demands rigorous systems, cutting-edge detection tools, constant vigilance and a healthy dose of skepticism.

⇒ Social media multiplies these challenges exponentially. From the purported front lines of war, to the manufactured horror of extreme weather, to the alleged haunts of the global elite, a fake synthetic image of these moments can circle the globe before our team completes verification. Our newsroom focuses intensively on developing protocols to identify AI-generated content without slowing our ability to deliver breaking news.

VISUAL NEWS VERIFICATION NOW REQUIRES NEW MUSCLES. OUR TEAM DEVELOPS EXPERTISE IN DIGITAL FORENSICS AND GEOGRAPHICAL MATCH FRAMING ALONGSIDE TRADITIONAL JOURNALISM SKILLS.

Seni Tienabeso is a multi-award-winning executive director overseeing all programming, content, and editorial for America’s #1 streaming news network ABC News Live.

⇒ The promise of AI technology beckons. Smart newsrooms must explore its potential while building robust safeguards. Through my lens as a streaming news executive, this balance shapes our future. We channel resources toward visual verification and AI detection while simultaneously pondering how this technology might enhance our storytelling capabilities.

⇒ Visual news verification now requires new muscles. Our team develops expertise in digital forensics and geographical match framing alongside traditional journalism skills. ABC News invests heavily in training and tools to detect synthetic content. Each day brings fresh challenges as AI capabilities advance.

⇒ The audience demands reliable information, delivered with context and transparency. My mission centers on cementing systems to ensure every story we stream meets these standards. This means educating our viewers about how misinformation spreads, alerting them to new threats while maintaining their trust in legitimate news coverage.

⇒ Looking ahead, streaming news platforms must lead in establishing new standards for AI content verification. My focus remains on ensuring brand protection while embracing beneficial technological advances. This requires constant adaptation, unwavering vigilance, and deep commitment to journalistic integrity.

⇒ We’ve debunked AI-generated video that has gone viral. In these moments, our challenge becomes how to correct the record without amplifying misinformation. Millions of viewers tune into ABC News Live expecting truth. They’ll never hear most of the synthetic stories we catch, the manipulated images we don’t air, the endless stream of fabricated content we block every day. They shouldn’t have to.

⇒ But here’s what keeps me up at night: It takes eight minutes and a few dollars to create a deepfake. Eight minutes to fabricate something that could fool millions. Eight minutes to craft content that could slip past even the most sophisticated detection tools.

⇒ Those eight minutes represent everything we fight against. Every hour. Every day. Not because technology demands it. Not because our brand requires it. But because democracy runs on truth — and truth, measured in pixels and seconds, has never been more fragile.

⇒ We’re on the front lines. The future is now. ■

AI PROVES ‘THOSE WHO KNOW, KNOW’

Misinformation swirls through our online feeds like invisible currents, shaping what we believe before we even notice. In a media-saturated world we’re faced with increasingly advanced content shown across various platforms, from advertisements to politics to news. At times, bad actors exploit media by using artificial intelligence to create fake content, including deepfakes — AI-generated or manipulated videos or images that make it look as though someone is saying or doing something they never did — planting seeds of doubt.

⇒ We set out to test the limits of perception with an experiment to see how communication professionals would fare against AI-driven falsehoods. The results do not merely expose the limits of perception — they show how knowledge, deep and specific, stands as the only shield in battles over issues like climate change.

⇒ Our experiment, a small-scale survey conducted electronically, revealed critical insights into how knowledge serves as both sword and shield against the complexities AI introduces into the media ecosystem. In the survey, the participants — media experts, journalists, educators — faced headlines that challenged their understanding. Some headlines reflected scientific consensus, while others, crafted by algorithms, carried deliberate falsehoods.

⇒ Real headlines came with authentic images, while doctored ones were paired with manipulated visuals. Participants were also asked to refrain from using news sources or search engines like Google while taking the survey, and to only use their instincts and current knowledge to make their decisions.

MANY FIND THEMSELVES UNEQUIPPED TO NAVIGATE THE COMPLEXITIES OF AI-DRIVEN MISINFORMATION. THE CHALLENGE LIES NOT IN DECEIVING ALL BUT IN MISLEADING JUST ENOUGH.

⇒ The task appeared simple: decide what to trust. Yet simplicity evaporates in the fog of today’s media landscape. Those versed in climate discourse, particularly journalists, sliced through the deception with confidence. Those less informed stumbled, some unable to identify a single truth.

Allison Agsten leads USC Annenberg’s Center for Climate Journalism and Communication, leveraging her diverse experience from CNN and LACMA to shape the future of climate communication. She pioneers art-focused climate discussions as the first curator of the USC Wrigley Institute for Environmental Studies.

Olivia Smith is an Emmy Award-winning journalist and media consultant based in Los Angeles. She is a versatile content creator with extensive experience in print, broadcast, multimedia, and digital journalism.

⇒ Scores revealed a divide. Experts, well-versed in the rhythms of climate discourse, identified deceptions with ease. Their knowledge cut through the fog of the false information, allowing them to see clearly. Seven correct answers out of seven. They held the tools necessary for the fight. Others, lacking that familiarity, struggled in confusion — zero correct answers, or two at best. The takeaway became clear: those less informed about the news and climate science were the most vulnerable to misinformation.

⇒ Yet, even expertise doesn’t guarantee invulnerability. Fake media doesn’t have to be flawless to create doubt. A single fabricated headline can spark uncertainty, casting suspicion on everything that follows. Even those with experience sometimes wavered when faced with subtle manipulations. The boundaries between confidence and uncertainty blurred, revealing just how fragile the line becomes under pressure.

⇒ Our experiment illuminated AI’s ability to target gaps in understanding. Climate change, a polycrisis fraught with policy and economic complexities, offers fertile ground for exploitation. Bad actors manipulating media with AI can infiltrate these gaps, planting counterfeit truths where understanding runs shallow.

⇒ We noticed the participants who excelled saw the patterns, recognized the traps, and sensed when something looked wrong. Their advantage lay not in critical thinking alone, but in experience — years of dissecting narratives and sharpening instincts. Journalists, in particular, navigated the falsehoods with precision. But this edge belongs to the few.

⇒ Many find themselves unequipped to navigate the complexities of AI-driven misinformation. The challenge lies not in deceiving all but in misleading just enough. Without expertise, individuals can be susceptible to the careful distortions AI can produce. Climate change may be just one front, but the implications stretch across every issue.

⇒ The solution? Depth. The experiment revealed that those with a profound grasp of climate science consistently identified the deceptions. Superficial knowledge proves fragile, unable to withstand sophisticated falsehoods. Developing this clarity comes from engaging with trusted sources, using available tools, and questioning the surface of what’s presented. By refining how we process information, we sharpen our ability to cut through misinformation.

⇒ Trust, too, will transform. Reliance on traditional institutions may give way to a more discerning approach. We predict that people will seek out experts — those whose immersion in specific fields offers unmatched clarity. Credibility will belong to those who understand at the deepest levels, rather than those who merely convey information.

⇒ Moreover, AI may contribute to the spread of misinformation, but it can also be part of the solution. Tools that detect deepfakes and false information already exist, and they’re only getting better. In the future, AI can play a crucial role in rooting out the very falsehoods it helps create. Our experiment underscores the need for both sides of this equation: expertise to spot the lies and AI-driven tools to expose them before they do harm.

⇒ Knowledge gleams. Our minds navigate swirling currents of truth and fabrication. AI, our creation, our tool, our challenge, amplifies both clarity and confusion. Yet through this maelstrom, one beacon shines unwavering — deep, hard-won understanding. Expertise, honed through years of immersion and critical thought, cuts cleanly through the noise. AI has not weakened this truth, rather sharpened it. As complexity surges, so must our understanding. Surface skimmers sink; deep divers thrive. ■

Michael Kittilson contributed to this essay.

THE WAY FORWARD

AI STUDY SNAPSHOT

In what ways, if any, are you and/or your colleagues experimenting with each of the following AI-driven applications?

(Chart: responses ranged from “have or are looking for ways to incorporate into work” to “not experimenting with this.”)

STUDY SNAPSHOT

Please select up to three words or phrases that, when used in the sentence below, reflect your view on AI and its impact on the work you do: “AI will ___ the work I do.”

(Chart: completions, with shares running from 37% down to 10%, included “value to,” “be a partner/collaborate with me on,” “help me be more creative in,” “streamline,” “help me be smarter in,” “inspire,” “remove tedium from,” “replace,” and “complicate.”)

AI AS ‘TOOL FOR THOUGHT’

Asking GPT-4o to “write an essay evaluating the life of Thomas Jefferson” results, in seconds, in a highly credible, well-structured, 6-page document outlining this historical figure’s life and impact. This demonstrates one of many provocative linguistic capabilities of generative AI (genAI), which has emerged over the last few years. These systems can quickly and effortlessly produce voluminous amounts of text on our behalf, directed by our needs at work and home.

⇒ Writing, however, is about more than just content creation. As Jane Rosenzweig at the Harvard College Writing Center put it, “we figure out what we think through writing.”1 Writing can be seen as a precursor to, a substrate for, and a by-product of, thinking. It can even be seen as a means of learning (the “Writing to Learn” paradigm). Through the process of writing, insights and ideas emerge that we would not have had otherwise. By externalizing our thought, we crystallize what is on our mind, making what we are thinking clearer and more explicit. Creating this external representation also gives us something we can iterate over.

⇒ Writing enables us to think about what we think, and craft a way to say it. If metacognition is the process of “thinking about thinking”2, writing is metacognition in action. Writing is one of the oldest and most durable “tools for thought.”

⇒ If generative AI is now doing our writing, and writing is a process through which people think, then when and where in the writing process will our thinking now happen? As a research objective, how might we better understand the impact that AI systems are having on human cognition, on the processes through which we think, learn, and understand, and what might we do about it?

⇒ Understanding the consequences of outsourcing cognition to genAI is an urgent and important area of study in the domain of Responsible AI (RAI). The integration of these technologies into our tools is already underway and accelerating. The importance of this issue cannot be overstated, as it has ramifications that span from “formal” thinking environments, like schools and universities, to informal ones, like the workplace and home.

⇒ While we have these concerns, we are optimistic that genAI can offer solutions to the issues that it creates. It has the potential to complement, enhance and augment people’s cognitive capabilities in meaningful ways. For example, while Large Language Models (LLMs) can swiftly respond to requests by providing elaborate outputs, they are also able to ask meaningful questions to help us explore and consider something new. Leveraging these kinds of capabilities, genAI can help us think more deeply, communicate better, and solve larger, harder problems than we currently can.

The Tools for Thought team at Microsoft Research aims to put human cognition at the heart of AI systems. Their goal is to help researchers and systems builders focus not just on automation, but on augmentation, imagining how AI might help people to think better. To this end, they research the impact of AI on aspects of cognition and use this to design and build systems that support individual and collective intentionality, enhance skills in critical thinking, and develop cultures of creativity and experimentation.

IF GENERATIVE AI IS NOW DOING OUR WRITING, AND WRITING IS A PROCESS THROUGH WHICH PEOPLE THINK, THEN WHEN AND WHERE IN THE WRITING PROCESS WILL OUR THINKING NOW HAPPEN?

⇒ How might we take this approach when writing the article on Thomas Jefferson that we described at the outset? Rather than writing the essay on our behalf, genAI could instead support us in approaching the task with a more critical eye3. It could push us to think of alternate perspectives on Jefferson’s life, for example, or to be more curious about the accounts written by others, thinking about their motivations and perspectives.
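As a rough illustration of that reframing, the sketch below assumes the OpenAI Python SDK and an API key in the environment; the system prompt is our own hypothetical wording, not the team’s actual tooling, but it shows how the same model can be steered to question rather than compose.

```python
# Sketch: instead of asking the model to write the Jefferson essay, ask it to
# act as a questioning partner that provokes the writer's own thinking.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a writing companion. Do not draft text for the user. "
                "Instead, ask probing questions, surface alternative perspectives, "
                "and point out assumptions the user has not examined."
            ),
        },
        {
            "role": "user",
            "content": "I'm writing an essay evaluating the life of Thomas Jefferson.",
        },
    ],
)

print(response.choices[0].message.content)  # questions, not a finished essay
```

The output is a set of questions and counter-perspectives rather than a finished draft; the thinking stays with the writer.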

⇒ It is essential that we create a world in which these new technologies do more than improve our productivity. They should make us better thinkers than we would be without them. We believe the time is right for the creation of a new generation of ‘Tools for Thought’. These will unlock our capacity for creativity, provoke us to deeper insight, and ultimately accelerate the ways through which we learn and understand. ■

1 Four Rules for Writing in the Age of AI, Jane Rosenzweig, December 2023

2 The Metacognitive Demands and Opportunities of Generative AI, Tankelevitch et al., 2024

3 AI Should Challenge, Not Obey, Advait Sarkar, CACM, 2024

ENTER THE AI AGENT SWARMS

Over the past year, sparkly new buttons have washed across the toolbars of our favorite apps like a technological tsunami. As applications updated on our devices, users witnessed the real-time rollout of AI integration interfaces, confirming the advent of the AI era. These integrations have made AI ubiquitous and are setting the stage for something even bigger in 2025: an era of automation.

⇒ Integrations are critical to AI adoption, enabling mass impact. About half of users are late adopters or laggards, unlikely to embrace new technology if it’s not proven valuable or required. By reducing friction through integrations — like adding an AI magic button that taps into ChatGPT’s LLM via API without requiring a separate account — users in 2024 have explored and been amazed by AI’s proven capabilities.
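What sits behind such a button can be surprisingly small. The sketch below is a stripped-down, hypothetical version, assuming the OpenAI Python SDK with the application (not the user) holding the API key; the function name and prompt are illustrative rather than any vendor’s actual integration.

```python
# Sketch of an in-app "AI button": the host app holds the API credentials, so
# the user never creates a separate account. Assumes the `openai` package and
# an OPENAI_API_KEY environment variable on the application's side.
from openai import OpenAI

client = OpenAI()


def rewrite_selection(selected_text: str, tone: str = "concise and friendly") -> str:
    """Return an AI-assisted rewrite of whatever text the user has highlighted."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Rewrite the user's text in a {tone} tone."},
            {"role": "user", "content": selected_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(rewrite_selection("Per my last email, the deliverables remain outstanding."))
```

Because the credentials and model choice live with the host application, the user just sees a button, which is exactly the friction reduction that drives adoption.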

⇒ Examples of these in-app integrations abound, boosting user engagement in programs they have native familiarity with. Tools like Fireflies.ai and Zoom’s AI Companion transform meetings by capturing and summarizing notes in real time. Creating visuals through text is now accessible to Photoshop geeks and Canva superfans alike. Microsoft’s Copilot and Google’s Gemini are integrated across their vast productivity suites, enabling instant pitch outlines, task tracking, and preliminary data analysis.

⇒ While flashy text-to-video technologies like OpenAI’s Sora grab headlines, it’s these easy-to-use integrations in our everyday tools for everyday tasks that represent the most widely adopted and proven valuable use of AI.

⇒ From office workers to large corporations, experimenting with and then adopting AI has become the new norm. Consulting firms now use AI for data insights and strategic ideation. Marketing firms employ AI for mockups and pitch refinements. Half of Fortune 100 companies use AI-powered virtual avatar presenters for onboarding videos.

⇒ While most users aren’t advanced AI users yet, exploration is ubiquitous, fueling both fear and appetite. In my research and consulting work with communication clients, I repeatedly see a new hunger to use the tools more robustly now that everyone’s had a taste.

⇒ After this past year of normalization, interest across industries now lies in how to seize the potential of AI at the next level. That next, more powerful phase will be one of AI-powered automation.

⇒ From small agencies to multinationals, users are increasingly interested in what AI can do autonomously. Configured properly, AI can routinely perform tasks without human intervention, streamlining processes and boosting efficiency — from automating humanlike customer service responses to creating bespoke marketing campaigns.

Stephen Lind is an associate professor of clinical business communication at USC’s Marshall School of Business. His teaching encompasses strategic messaging, technology in communication, consulting, and refining speaking and writing skills for business contexts. He received his PhD, with distinction, from Clemson’s transdisciplinary Rhetorics, Communication, and Information Design Program.

⇒ Automation tools like Zapier have been around for years, but they’ve never been as potent as they are now, nor have there ever been as many competitors, like Make, Tray, and n8n.

⇒ Imagine you could deploy AI-powered agents to scrape LinkedIn and industry websites for real-time market data on potential clients. The agents cross-reference leads with past success patterns found in your Gmail threads, then personalize outreach messages using ChatGPT. You set this to repeat daily, activating sophisticated n8n workflows that connect to your SMS system to send texts, to your Asana boards to update team leads when you get a bite, and to your Google Calendar to set meetings.
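A rough sketch of that daily loop, written in Python rather than an actual n8n workflow, is below. The connector functions are hypothetical stubs standing in for the LinkedIn, Gmail, SMS, Asana, and calendar steps; only the drafting call uses a real interface (the OpenAI SDK), and every name and data field is invented for illustration.

```python
# Illustrative sketch of the agent-style daily loop described above.
# The connector functions are placeholders (they return canned data or just
# print) so the script runs end to end; in a real build they would be n8n,
# Zapier, or Make nodes talking to LinkedIn, Gmail, SMS, Asana, and Calendar.
from openai import OpenAI

client = OpenAI()


def fetch_prospects():
    # Placeholder for scraping LinkedIn / industry sites for market data.
    return [{"name": "Jordan Lee", "company": "Acme Logistics", "signal": "hiring a comms lead"}]


def matches_past_wins(prospect):
    # Placeholder for cross-referencing the lead against won-deal patterns in Gmail threads.
    return True


def draft_outreach(prospect):
    # The one real call: personalize the outreach message with an LLM.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Draft a two-sentence outreach note to {prospect['name']} at "
                f"{prospect['company']}, referencing that they are {prospect['signal']}."
            ),
        }],
    )
    return response.choices[0].message.content


def send_sms(message):
    print("SMS connector would send:", message)


def update_asana(prospect):
    print("Asana connector would notify the team lead about", prospect["name"])


def book_meeting(prospect):
    print("Calendar connector would propose a slot with", prospect["name"])


def daily_run():
    for prospect in fetch_prospects():
        if matches_past_wins(prospect):
            note = draft_outreach(prospect)
            send_sms(note)
            update_asana(prospect)
            book_meeting(prospect)


if __name__ == "__main__":
    daily_run()  # in production this would run on a daily schedule
```

The point is less the specific stack than the shape of it: discrete, schedulable steps that an agent, or a swarm of them, can run without a person in the loop.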

⇒ With the training from a few YouTube tutorials, any business can now make AI-driven automations a reality.

⇒ Large firms are already setting up AI-powered automations. Meanwhile, a new field of freelance digital consultancy has emerged. Ambitious startups like Custom AI Studio, led by Devin Kearns, and Cassidy, founded by social media success story Justin Fineberg, are meeting this demand. They leverage an array of platforms to get robots to work on your behalf — not just one agent, but perhaps an agent swarm working in unison.

⇒ What was once available only to large corporations is now accessible to boutique agencies willing to explore AI’s potential. Because we’ve seamlessly experimented with AI, both the advantages and dangers have been democratized profoundly. What remains to be negotiated in the months to come is how much the risks are worth and how new norms, such as those around disclosure, will take shape.

⇒ As we move into 2025, we will witness the shift toward a new era in which automation becomes integral across business operations. Perhaps we’ll find ourselves clicking those sparkly AI buttons less in the near future — only because the buttons will click themselves for us. ■

AI IN PR: REFLECTION, REALITY AND RELEVANCE

In the 2020 Relevance Report, I made a few predictions about the impact Artificial Intelligence (AI) would have on the PR profession. Some have materialized — some have not. What I could not have predicted was the mind-boggling speed with which OpenAI’s GPT models would evolve and how profoundly they would impact all of us.

Looking Back: What Came to Pass

In 2020, I emphasized the efficiency AI would bring to repetitive tasks. Today this is evident across the PR industry. AI tools have become integral to everything from automating social media monitoring and trend analysis to enhancing campaign measurement. Platforms like Sprinklr and Meltwater are empowering PR teams more than ever to work faster, smarter, and with greater precision.

⇒ I also predicted AI would play a major role in creating content. Tools like ChatGPT are now writing press releases, blogs and every type of written material, while applications like DALL-E and Midjourney crank out original artwork within seconds of being prompted. These generative AI applications have far exceeded my expectations in streamlining content development and improving editorial process efficiency. When I became the CEO of Secondmind three years ago, I relied on ghostwriters to craft bylines for me due to bandwidth constraints, and I rewrote every single one, much to the dismay of my head of PR. After its release in late 2022, ChatGPT became my co-author for just about everything, from expert opinion pieces like the one you are reading now, to investor pitches, and more.

⇒ The ability to predict and defuse crises has also developed significantly. By leveraging AI for advanced sentiment analysis and real-time social listening, companies like Dataminr and Zignal Labs are helping brands better detect emerging issues and intervene when a crisis is unfolding. However, as I suggested five years ago, human decision making remains critical in managing these situations. AI can provide valuable insights with exceptional efficiency, but it cannot replace the nuance, care and out-of-the-box thinking of human experts.

Don’t believe the hype…

After 10 years of commercializing AI technologies in consumer and enterprise applications, I’ve experienced multiple cycles of AI hype. We are in the midst of one right now, fueled by the popularity of generative AI, and the pattern is predictable: lofty expectations will lead to inevitable disappointment when AI fails to deliver on every promise.

⇒ Generative AI reached the ‘Peak of Inflated Expectations’ on the 2023 Gartner Hype Cycle and is now sliding toward the ‘Trough of Disillusionment’ in half the time of its predecessor, Deep Learning. We can expect a cooling period in 2025 as initial results fall short of expectations — something I welcome because it signals a shift to practicality and pragmatism. Real value will emerge on the ‘Slope of Enlightenment,’ and scale will be reached at the ‘Plateau of Productivity’ as early as 2026, when companies unlock sustainable applications that deliver real value.

Gary Brotman is the chief executive officer and director at Secondmind, an AI company focused on optimizing complex systems in the automotive industry. He has 20+ years of experience leading teams responsible for building and commercializing product and service lines for leading Fortune 500 companies and startups with machine learning, mobility, and media at their core.

⇒ For PR this means that while existing tools may not meet all the expectations, they will quickly mature into valuable assets for content creation, personalized messaging, and audience engagement.


The Next Five Years…

AI will continue to evolve rapidly and eventually enhance just about everything we see and do, and the next five years will be defined by a partnership forged between PR experts and technology.

⇒ Hyper-Personalized Communications: AI will analyze vast amounts of data and extract patterns to enable hyper-personalization at scale. Brands will tailor every piece of content, from press releases to social media posts, to individual stakeholders. PR professionals will leverage AI to create highly targeted messaging using real-time behavioral data, driving stronger engagement and more meaningful relationships with their audiences.

⇒ Predictive Content Creation: AI tools will analyze audience preferences, media consumption patterns, and social trends to predict what types of stories, themes, and angles will resonate best, before they are written. This will allow PR teams to anticipate the next viral campaign or crisis and be proactive with messaging and risk management.

⇒ AI-Enhanced Media Relations: The future of media relations will be AI-driven. We will see the emergence of AI tools that identify and help build relationships with journalists and influencers based on coverage patterns and interests. These tools will help PR professionals pitch stories that are hyper-relevant and timed perfectly, ensuring the right content reaches the right contacts at the most opportune moments.

⇒ Greater value for PR: I think the most profound thing AI will do for PR is erode knowledge silos within companies. Historically, PR professionals have been communicators first, relying on domain experts with specialized knowledge to develop effective campaigns and collateral. With AI, PR professionals will easily develop a comprehensive understanding of important domains like product development, research and customer service. Being armed with more knowledge will enhance collaboration and cement the role of PR as essential and integral to the operation and success of any business.

⇒ I often refer to AI as ‘Augmented Intelligence’ — a more appropriate term from my perspective. AI is a toolbox that gives us superpowers to learn and do stuff more efficiently, and generative AI is one AI tool among many. AI alone will not replace human expertise in PR or any profession because it cannot create new things, innovate new ideas, or make decisions independently. Only humans can do these things. Those who fail to harness the power of AI will struggle to remain relevant. Those who do will shape the future of PR and thrive. ■

ARE YOUR AI INVESTMENTS HITTING THE MARK?

The AI revolution in public relations isn’t living up to its promise—at least not yet. Despite the buzz and billion-dollar tech investments, AI adoption in our field has been slower and less impactful than many predicted. Why? Because we’ve been dazzled by shiny new tools and mesmerized by potential, while neglecting the most critical component of any transformation at work: our people.

⇒ A recent McKinsey report suggests that just 3 percent of service businesses have scaled AI across their operations. This low adoption rate stems from an overemphasis on technology and too little attention to change management and communication.

⇒ Here’s the truth: investments in training and change-related communication that leave no practitioner behind are more important than buying data or licensing AI-powered tools. Without effective change management — clarity on the transformation journey and on each person’s vital role in it, which builds trust and helps people on our teams embrace new ways of working — we won’t see the team- or organization-wide transformation needed to achieve the best outcomes.

Building Trust Through Transparent Communication

The foundation of successful AI integration lies in building trust, as many of us are inherently skeptical about technology, especially when we think it could impact our livelihoods. This starts with clear, open communication about the role AI will play in your organization — something that’s all-too-often absent from PR team or organization-wide AI transformation plans.

• Explain why you’re embracing AI in your organization and how you plan to integrate it

• Provide regular updates and celebrate successes

• Create channels for two-way dialogue

• Address concerns and misconceptions head-on

By fostering an environment of transparency and open communication, we can allay fears and build enthusiasm for AI-assisted workflows.

Empowering Through Education

A crucial aspect of change management in the AI era is comprehensive education and training. This goes beyond simple tool tutorials about how to write a prompt. It’s about fostering a deep understanding of how AI can augment and enhance PR work, in the context of each person’s unique role. Create a multi-tiered AI education program including:

• Basic AI literacy for all team members

• Advanced training for "AI champions" who can cascade specific techniques

• Ongoing learning opportunities to keep pace with evolving AI capabilities

⇒ Remember, the goal isn’t to turn PR professionals into the world’s best prompt engineers. Instead, it’s about making them more effective in their roles as trusted communications experts. Focus on building “fusion skills” — the ability to work effectively alongside AI systems, leveraging technology’s strengths while applying uniquely human skills like creativity, empathy, and strategic thinking.

IA > AI: Focus on Augmentation, Not Automation

Growing research proves that humans assisted by AI produce dramatically better results than machines or humans operating independently. Embrace the concept of Intelligence Augmentation (IA) over full automation. IA represents a symbiotic relationship between PR practitioners and AI, where technology enhances rather than replaces human capabilities.

⇒ Show concrete examples of how AI can support routine tasks, freeing up time for more strategic, creative work. This approach helps shift the perception of AI from a threat to an invaluable partner in the PR process.

INVESTMENTS IN TRAINING AND CHANGE-RELATED COMMUNICATION THAT LEAVE NO PRACTITIONER BEHIND ARE MORE IMPORTANT THAN BUYING DATA OR LICENSING AI-POWERED TOOLS.

Jeff Beringer is the chief AI officer at Golin, leading AI innovation, workforce upskilling, and the development of AI-driven client service models. A 20-year Golin veteran, he has introduced AI-powered tools to enhance content testing, campaign measurement, influencer identification, and misinformation detection.

Fostering a Culture of Continuous Adaptation

The integration of AI into PR isn’t a short-term project with a finite end date — it’s an ongoing journey of innovation and refinement. Your change management strategy should foster a culture of continuous learning and experimentation.

• Encourage a growth mindset among your team members

• Create safe spaces for experimentation with AI tools and new techniques

• View short-term failures as valuable parts of the long-term learning process

Leading by Example

Effective change management starts at the top. Leaders must not only champion AI integration but actively participate in the learning process. When leaders demonstrate their own willingness to adapt and learn, it sends a powerful message throughout an organization.

⇒ Consider initiatives like "AI Office Hours" where leaders discuss their own experiences with AI tools, or cross-functional AI task forces led by senior management.

Measuring Success Beyond Efficiency

While the time and money saved by AI use are important, the success of your AI transformation should be measured by more than just financial metrics. Develop success measures that include:

• Employee engagement with AI tools

• Increases in time spent on high-value activities

• Growth in AI literacy

• Improvements in work-life balance

⇒ As we navigate the AI revolution in PR, let’s remember that our greatest asset is not our technology, but our people. By prioritizing change management and communication, we can create an environment where AI and human expertise work in harmony, elevating the value and impact of PR.

⇒ The future of PR isn’t about humans versus AI — it’s about creating a powerful synergy between human creativity and AI capabilities. And the key to unlocking this future lies not just in algorithms and datasets, but in our ability to lead our teams through this transformative journey with empathy, clarity, and vision.

⇒ As professional communicators, we’re up to the task. ■

OUTSMARTING AI CLUTTER WITH HUMAN CREATIVITY

In the two years since ChatGPT’s public release, generative AI has cycled through distinct seasons of public perception and adoption. What began with intrigue and mystique quickly shifted to fear and doom, eventually evolving into today’s widespread acceptance and usage. Now we find ourselves navigating AI’s internet clutter — the stream of buzzy AI headlines, and brainrot incessantly appearing in our feeds.

⇒ AI slop — synthetic, spam content that lacks any sort of human touch — is flooding the internet. Of course, AI-powered algorithms incentivize and prioritize this artificial slop. It’s no mystery why studies predict that by 2026, 90% of online content will be “synthetic” — platforms reward AI-generated content.

⇒ As this new hyper-personalized AI season approaches, we’re already seeing platforms like Meta release AI-generated posts, while Apple Intelligence tailors our everyday phone behaviors. This combination of similar AI content and expected personalization means only one thing: human creativity is the real advantage. Just as we’ve grown tired of the same repurposed TikTok sounds, we’ll soon tire of AI’s repackaged and overused ideas.

⇒ Original storytelling and creativity will always offer lasting resonance over fleeting trends. What follows are three principles to navigate AI’s changing seasons, all powered by our human creativity.

1) Get Your Audience Involved

What we have over AI: The ability to bring people into the story

⇒ Consumers are no longer passive. They are dedicated, engaged fans of people and brands, actively shaping communities and conversations, even products.

⇒ This two-way user phenomenon will only get stronger with technological advancements like AI. In-app AI creator tools — like Instagram and Canva’s recent AI platform features — democratize content idea creation. It’s in a brand’s best interest to recognize their superfans and give them the space to co-create.

⇒ Fandoms were some of the first to adopt and embrace AI tools for their creative expression. Last year it was Nicki Minaj’s Gag City and the Barbie movie’s Selfie Generator that took over the internet, and now brands are learning from fandoms. Google Pixel recently launched their Best Phones Forever campaign, encouraging fans to suggest travel destinations in their Instagram comment section. Using AI tools to spotlight each fan, their team created real-time personalized video responses.

Josh Rosenberg is co-founder and CEO of Day One Agency. Josh is a communications strategist and digital media authority with extensive experience shaping marketing communications programs for some of the world’s leading brands, including American Express, Chipotle Mexican Grill, Facebook, Nike, Comcast, Abercrombie & Fitch, Motorola and Ferrara. He is a member of the USC Center for PR board of advisers.

⇒ Our team at Day One Agency recently gave Knee-High superfans the spotlight in the re-release of Converse’s beloved shoes. The brand’s comment section was already filled with consumers repeatedly asking for the return of the discontinued sneakers, leading to a clear decision for the Converse team: bring them back. To honor these fans, we created content that celebrated individual user comments, making our fan community feel heard and seen.

2) Put The ‘Personal’ Into Personalization

What we have over AI: Making organic connections

We’re seeing a shift where platforms are rushing to automate social interactions, stripping the social out of social media.

⇒ The push to overload platforms with AI-interactions, like Meta’s failed celebrity chatbots or AI influencers’ unrealistic beauty standards, highlights that people crave organic interactions with creators and real individuals.

⇒ AI is a tool for streamlining the retail and purchase experience, not for building connections with automated responses. According to Adobe Research, 44% of U.S. consumers admit that retail AI isn’t as helpful as it could be. It’s time to refocus our efforts on where AI is actually helpful — in the purchase experience — without sacrificing genuine relationship building.

3) Don’t Let Laziness Win

What we have over AI: Thinking beyond the obvious, connecting disparate ideas and experiences

⇒ A quick fix isn’t the best fix. What makes our life experiences rich and interesting are the unique connections we draw, the blending of niche references, and universal human truths — things technology can’t grasp. While AI can make our work more efficient, we should never allow AI to take the driver’s seat.

⇒ This technology takes practice and should be approached with curiosity and collaboration.

⇒ AI chatbots are great for gut checks — a quick way to explore what’s been done before, brainstorm initial ideas, and support the early stages of ideation. At Day One Agency, we meet monthly to share insights and real-world use cases for AI. We’ve found that generative AI tools are especially useful at the start — helping visualize the spark of a creative idea.

⇒ AI doesn’t have emotional depth, so its storytelling will always fall flat. It’s our original experiences, ability to resonate with empathy, and our curiosity to ask the right questions that make storytelling our real superpower. ■

IMPROVING OUR RISK ANTENNA WITH AI

It’s 8 a.m. on a crisp autumn day in Manhattan in 2017. I nearly drop my scalding hot cappuccino to field a call from Bloomberg asking for an immediate reaction to the arrest of an employee.

⇒ It’s 6 p.m. on Christmas Eve 2019 in Hong Kong, where the smell of tear gas is seeping into our building as protestors are directing their frustration toward several institutions. News crews are descending upon these businesses seeking comment.

⇒ It’s the height of COVID-19 lockdowns in Australia in 2021. Brick-and-mortar employees are having coal hurled at them by environmental activists, while online environmental activism is flooding digital and social channels.

⇒ In all three moments, communication is at the center of the crisis. Gathering audience analysis, social sentiment, and critical influencer data takes time. But we need to respond, quickly and smartly. In these moments of acute pressure, I thoroughly learned the fragility and fleeting nature of reputation management. The immediate response and subsequent months of reputational recovery required curiosity, razor-sharp judgment, and decisiveness alongside a strong caffeine hit of courage.

⇒ Every word I’ve just used to describe ‘how’ a crisis was successfully managed has a unifying trait: human judgment. As communicators, we’re lauded for our wise counsel in times of crisis. We collate data, interpret it, and make decisions accordingly. Our discernment helps us earn our keep, whether as in-house executives or consultants.

⇒ Fast-forward to today where AI tools are commonplace in our industry. We should ask ourselves if AI can make our crisis response more powerful and effective. If I reflect on my role personally, could AI improve my understanding of risk for future crises?

⇒ A resounding Yes. We can all do better. I suspect greater fluency in AI within crisis management can improve our depth of understanding, pace of response, and awareness.

⇒ Today, the communications industry is in a fortunate position. There are AI platforms that can identify commentators of a viral online campaign with precision or track how an online conversation unfolds with sophisticated sentiment analysis. This also means AI can play an integral role in sharpening our risk antenna as communicators, better preparing us for these precarious moments. This is precisely what we’re striving to do at Via.

Tala Booker is the co-founder of Via, a cross-border marketing and communications consultancy focused on helping international businesses and financial services firms connect to audiences in Asia-Pacific. A USC Annenberg alumna, Tala is a 2024 winner of the Campaigns Asia 40 Under 40 Award for the marketing and communications industry. She is a member of the USC Center for PR board of advisers.

⇒ Like most agencies, we’re investing in automating workflows and processes with AI tools. We’re avid users of Fireflies, where “Fred from Fireflies” joins us to summarize the client’s narrative workshop to free us up to craft the messaging or capture ideas and actions from our team brainstorm. We also frequently use AI tools to help with our competitive landscape and market analysis. The pace of research is markedly improved.

⇒ For an agency in its early days, speed and improved operational efficiency are welcome, but that’s not what gives us our competitive edge. We pride ourselves on communicating across borders and cultures, so we spend the bulk of our human capital looking over the horizon. We’re peering into our clients’ target audiences and, most importantly, their operating context to better understand how they can differentiate themselves or navigate local nuances.

⇒ In practice, AI is helping us sweep the news and online debate about fast-moving financial and economic developments within a specific country. We overlay the data capture with human judgment to identify the prevailing trends in that market and sector. This allows us to advise our clients on how to enter the public discourse or circumvent it as the story progresses or takes on new angles.

⇒ Our holy grail will be helping businesses shape the day-one story at the right moment by anticipating where the news cycle is going before their competitors do. If we can better predict outcomes and risks, we’ll further strengthen our ability to advise business leaders as they enter a new market, widen their supply chain, seek investment, or attract new customers across the Asia Pacific region.

⇒ When it comes to sharpening our risk antenna as communicators, the combination of curiosity and cynicism makes for a potent elixir. These traits allow us to ask the hard questions of our stakeholders and clients or anticipate every possible scenario that could go wrong.

⇒ If we can combine this level of critical thinking with tools that enable crisis preparedness and speed, I am confident we all will be better equipped to detect, prepare for, and take a crisis in stride as it breaks. ■

Lucas Pon contributed to the research of this essay.

AI LITERACY IN THE DIGITAL AGE

Four years ago, during a global pandemic and before ChatGPT became a household name, I took on the role of chief marketing officer at Optiv, a leading cybersecurity company. At that time, the world was experiencing a massive increase in cyberattacks, with rapidly rising degrees of sophistication, frequency and devastating financial impact. On top of that, the reliance on artificial intelligence (AI) and machine learning (ML) was growing. Cybersecurity experts were on the front lines (and still are), boosting their defenses. Meanwhile, marketing and communications professionals had to quickly adapt, learn and embrace emerging technologies and find new ways of working in a risk-filled, AI-driven world.

⇒ Now, the roles of marketers and communicators have become more important than ever. AI is at everyone’s fingertips, offering both rewards and risks. My cyber colleagues like to say that while AI helps us do great things faster, it also helps bad actors do bad things faster. With just a few text prompts, anyone can write emails, prioritize tasks, generate code, proofread and build reports in seconds. But this “force multiplier” comes with a catch: without proper monitoring, governance and awareness, the reliance on AI can lead to serious legal, ethical and security risks.

Safeguarding the organization has become a crucial responsibility for today’s marketing and communications teams.

⇒ How can a company maximize the productivity gains of AI while minimizing the risks of data leaks, theft, and privacy violations, among others? The answer lies in making sure that employees at all levels are AI-literate. This is not much different from the dilemma organizations faced in the ’90s — should employees have access to the World Wide Web? Now we can’t begin to imagine a world without the internet. AI is enjoying that same moment. And marketing and communications experts, in particular, are perfectly positioned to work alongside cybersecurity pros to lead the charge as we did when we helped usher organizations into the digital age.

⇒ To effectively embrace AI tools, marketing and communications teams need more than just a basic understanding of AI. They also need to recognize how it could be used maliciously, whether in cyberattacks, disinformation campaigns or more. By staying ahead of these risks, they can help protect their organizations from brand and reputational damage and ensure their messages remain authentic and credible.

Heather Rim is the chief marketing officer at Optiv, leading efforts to enhance brand visibility, generate demand, and engage stakeholders, while also serving as executive sponsor of the company’s ESG program and Optiv Women’s Network. She is a seasoned global marketing and communications executive with over 20 years of experience driving brand growth. Previously, Rim was CMO at AECOM and held senior roles in corporate communications, marketing, and investor relations at Avery Dennison, The Walt Disney Company, and WellPoint. She is a member of the USC Center for PR board of advisers.

⇒ Industry experts say that while AI might not replace our jobs, those who know how to harness its power will. That’s why it’s no surprise that cybersecurity and AI-focused training are in such high demand. Some organizations, including Optiv, even offer free training resources, so there’s no excuse for not being educated on the topic. AI knowledge is quickly becoming table stakes. Here are a few key areas that are especially critical for our profession to focus on:

• Ensure you have a firm grasp of the language — AI vs. ML, hallucination, grounding, over-reliance, data provenance, and more.

• Know how to identify risks and threats specific to AI and best practices for using tools safely and securely.

• Understand how AI systems use data so you can spot AI bias and know how to report and resolve it.

⇒ Looking ahead, as AI continues to evolve, the intensity of the challenges it brings with it will, too — such as more sophisticated deepfakes and misinformation, data manipulation, and corruption of AI-driven automations. At the same time, the incredible opportunity it represents is beyond what any of us can envision. This is a learning journey, and we will never declare mastery as technology is changing by the minute.

AI literacy is the key to overcoming new challenges, building trust and helping organizations succeed in an AI-powered future. ■

(TEAMS) NEED TO RECOGNIZE HOW (AI) COULD BE USED MALICIOUSLY, WHETHER IN CYBERATTACKS, DISINFORMATION CAMPAIGNS OR MORE.

PRINCIPLES FOR HUMAN-FIRST COMMUNICATIONS IN AN AI WORLD

Nearly two years after ChatGPT launched, it feels like we’re still scratching the surface with generative AI — but the impact is already apparent.

⇒ Generative AI is poised to fundamentally transform the way we live and connect, much like the printing press, broadcast media, personal computing, search, and social platforms once did. The potential seems limitless, and the growth rate feels exponential.

⇒ Of course, as we experiment with these new tools, we also need to grapple with their inherent issues — issues like bias, hallucinations, the potential for abuse, and disruptions in the labor market. Many of these issues will ultimately need to be addressed at the societal level — and there’s a long way to go on some of them — but the early progress there is encouraging.

⇒ At Yahoo, we’re taking a consumer-driven approach to generative AI, exploring a variety of new products and features to improve our users’ experience. So far these have included useful and fun features like key takeaways in Yahoo News, email summaries in Yahoo Mail, and recaps for fantasy leagues on Yahoo Sports.

⇒ We’ve also been experimenting with generative AI in our communications work at Yahoo. For example, we use Axios HQ and Glean for internal comms, ChatGPT and LinkedIn’s writing assistant to generate draft copy, and Otter.ai for transcriptions and summaries.

⇒ Our early tests of these tools have been very successful, but as we’ve explored their potential, it’s become clear that we need to consider how we want generative AI to be used in our field — and how we don’t.

⇒ To help guide that process, we have developed the following principles:

⇒ Principle 1: Use Gen AI For Scale, Not Strategy. All great communications work begins with a deep understanding of your audience, a clear strategy for reaching and influencing that audience, and an ability to articulate the “why” in addition to the “what” in a given situation.

⇒ Generative AI can be very helpful with the “what” — quickly generating draft copy and assets, providing quantitative insights into your audience, and so on — and that, in turn, can help your team scale and move more quickly.

⇒ But everything needs to begin with the foundation of strategy, audience, and the “why” — and building that foundation still requires human-level contextual understanding, judgment, and empathy.

Sona Iliffe-Moon is the chief communications officer at Yahoo where she oversees global corporate, consumer, and internal communications for Yahoo News, Finance, and Sports. With over 20 years of experience, she has led strategic communications for major companies including Facebook, Instagram, Lyft, and Toyota. Prior to Yahoo, Iliffe-Moon worked at top PR agencies Weber Shandwick and Hill & Knowlton and served as a foreign affairs officer in the Bureau of Arms Control at the U.S. Department of State. She is a member of the USC Center for PR board of advisers.

⇒ Principle 2: Focus On The Communications That Matter Most. As these generative AI tools become more powerful, they could theoretically be used to communicate about everything to everyone, leaving no detail about your company unexplained, externally or internally.

⇒ But just because we can communicate more — more releases, more social posts, more internal newsletters — doesn’t always mean we should.

⇒ In fact, we believe it’s more important than ever to stay focused on only the most important communications to avoid information overload. We need to help our audiences find the signal in the noise, not create more noise.

⇒ Principle 3: Put Authenticity and Trust First. Authenticity is more important than ever, and trust is one of the most valuable — and one of the most fragile — assets a company can have. One of the uncanny things about generative AI is it can make anyone sound like an expert on anything. The reality is these systems still make mistakes, and while they can often sound authentic, they are not.

⇒ This means it’s more important than ever to have rigorous human review of all communications before they ship or post and to put people front and center in our communications.

⇒ For key audiences like employees, consumers, corporate stakeholders, press, and policymakers, there is no substitute for hearing directly from the people leading our company and those driving the work every day — and hearing from them in their own voices and perspectives.

⇒ Beyond communicating, AI can help us take some of the monotonous work off our plates and give us more space and time to use our imagination and intuition. In some ways, it can even stimulate the imagination and serve as a ‘bicycle of the mind’ as Steve Jobs once referred to computers — helping amplify imagination, test theories, and explore ideas that will generate even more ideas, angles, exploration, and refinement. Just be sure to keep the principles at heart when tapping into the best of AI in your communications practice. ■

AI IS AN UPRISING, NOT AN UPGRADE

Artificial Intelligence is the centerpiece of nearly every innovation action across the comms profession. Whether organizations are still actively engaged in “let’s find efficiencies” efforts or well into the “imagine if …” phase, the energy masks a larger truth: most organizations lack the internal mechanisms to truly embrace what AI offers. Transformation is required, yet what remains unexplored or unacknowledged is how deeply AI challenges the static nature of our organizations’ and institutions’ design.

⇒ This challenge evokes the societal upheavals of the 20th century, where rigid hierarchies crumbled in the face of rapid technological change. Yet, unlike the visible smokestacks and assembly lines, AI’s transformation often occurs unseen, driven by changes in data and decision-making.

⇒ To bring AI into any organization, no matter how forward-thinking, change management must become part of the foundation. Even the simplest AI application requires a series of changes more profound than updating policy documents or adjusting workflows. It means restructuring the DNA of the institution itself, not just in how it works but in how it thinks, moves, and redefines the relationship between technology and people.

⇒ AI demands more than isolated pockets of innovation — it requires centralization and a thoughtful, orchestrated reimagining of the organization. Vision, values, and collaboration must evolve alongside this transformation. Yet, organizations rarely focus on updating these softer aspects of their identity.

⇒ Technology changes often begin rolling out with policy updates, but to truly integrate AI, institutions must go deeper. It’s not enough to update the operational rules; the spirit of the organization must shift. Without a culture that embraces experimentation, risk, and the unknown, AI remains a hollow promise. The workers on the ground must be more than passive users of new tools. They must be the creators, the experimenters, and the voices that shape how AI evolves within their environment.

Grant Toups is the first global chief digital and intelligence officer at Burson, which won a 2024 SABRE award for Global Digital Agency of the Year. He leads a global team of professionals managing a suite of offerings including digital and social communications; reputation enhancement and defense; technology and AI advisory; and advanced intelligence, research, measurement and analytics. He is a member of the USC Center for PR board of advisers.

We can structure our exploration as AI might process information:

Prompt: Can you describe the current orientation of corporations and institutions around AI?

• Static hierarchies

• Resistance to change

• Focus on efficiency over adaptation

Prompt: What does overcoming these challenges require?

• Restructuring institutional DNA

• Centralizing innovation strategy

• Evolving organizational culture

• Redefining employee roles and expectations

Prompt: What would an AI-transformed organization look like?

• Fluid, adaptive structures

• Continuous learning and experimentation

• Symbiosis of human creativity and AI capabilities

⇒ This framework reveals the cyclical nature of progress: as AI analyzes and optimizes our systems, we must analyze and optimize our relationship with AI.

⇒ Yet, many organizations approach this revolution with outdated tools, attempting to patch new technologies onto crumbling foundations. They risk myopia, seeing AI as a mere productivity booster rather than a catalyst for wholesale reinvention. Workers should actively engage with AI at a level that allows them to innovate, create, and push boundaries within their own fields.

⇒ Most recruitment strategies mirror what has always been done, asking people to fit into roles designed for a pre-AI world. Job descriptions stay static while the world transforms. To truly integrate AI, organizations must rethink what they ask of their employees.

AI INTEGRATION DEMANDS A HOLISTIC APPROACH TO INSTITUTIONAL CHANGE. IT CALLS FOR A REIMAGINING OF HOW ORGANIZATIONS FUNCTION, ADAPT, AND GROW.

Prompt: What must we do to stay ahead even as everything changes rapidly around us?

• Embrace experimentation and calculated risk-taking

• Foster a culture of lifelong learning

• Develop agile decision-making processes

• Cultivate trust between leadership and workforce

⇒ Few organizations are channeling the imagination and courage to make this leap. Budget constraints, fear of failure, and an aversion to risk paralyze even the most innovative industries. The only way to truly see AI integrated into the DNA of an organization is through leaders who trust their workforce and a workforce confident enough in itself to experiment, fail, and try again. The tools may evolve, but without a change in mindset, AI becomes a missed opportunity — a potential never fully realized. And we can all agree that collaborative organizations with magnetic cultures deliver outstanding performance and make for a great place to work.

Prompt: What will institutions look like when we’ve made the transformation?

• Organizations as living, learning entities

• Work as a canvas for human creativity and AI augmentation

• Innovation driven by a synergy of diverse perspectives

• Institutions that amplify human potential and societal progress

⇒ In essence, AI integration demands a holistic approach to institutional change. It calls for a reimagining of how organizations function, adapt, and grow. By embracing this challenge, we pave the way for a future where AI enhances human potential, driving innovation across all sectors of society.

⇒ In embracing this challenge, we have the opportunity to create institutions that are not just more efficient, but more human — organizations that amplify our creativity, nurture our growth, and expand the boundaries of what we believe possible. The future promised by AI will be realized not through technological determinism, but through a conscious, collective reimagining of our shared human enterprise. ■

Michael Kittilson contributed to this essay.

HUMAN WRITER WEIGHING IN HERE…

The Relevance Report project is one of my favorite undertakings here at USC Annenberg, and this is our ninth annual edition (my eighth). We are fortunate to have a cadre of industry leaders who take the time to share their written thoughts with us for this project. Their participation in our research projects, their ideas, and their engagement with our student teams are part of what makes the experience at our Center unique on our campus and, certainly, in the public relations industry.

Michael Kittilson was our student leader on this project, encouraging CPR members to contribute essays, conduct research for our board members, and invest hours copy editing. He also outlined or co-wrote several pieces in this year’s book.

The team used some AI tools to help with editing and organizing — mostly Microsoft Copilot (USC’s firewalled subscription), with some added help from Grammarly’s paid version. Other uses of AI tools are noted by each individual author.

Thanks to USC professors Lind and Young for contributing thoughtful pieces. And to Dean Willow Bay and the Annenberg team for supporting and promoting our work.

Finally, thank you again to the Microsoft team for supporting and underwriting another edition with us. Your continued partnership with the Center is invaluable, and we look forward to more successful collaborations.

USC Center for Public Relations

Adjunct Instructor, USC Annenberg

THE RELEVANCE REPORT 2025

EDITORS

Ron Antonette ’90
Michael Kittilson MA ’25

CONTRIBUTORS

Krystian Fernandez MA ’26
Van Luu ’26
Lucas Pon MS ’25
India Starr MA ’26

LEADERSHIP

Fred Cook, Director fcook@usc.edu
Burghardt Tenderich, PhD, Associate Director burghardt.tenderich@usc.edu
Ron Antonette, Chief Program Officer ron.antonette@usc.edu
Janine Hurty, Director of Development hurty@usc.edu
Tina Vennegaard, Senior Strategic Advisor tinaven@usc.edu
Ulrike Gretzel, PhD, Senior Research Fellow gretzel@usc.edu

BOARD OF ADVISERS

Jennifer Acree* JSA+Partners
Jonathan Adashek IBM
Jessica Adelman Mars Wrigley
Christine Alabastro* Prenuvo
Vanessa Anderson AMPR Group
Dan Berger Amazon
Clarissa Beyah Union Pacific
Tala Booker* Via Group
Faryar Borhani* Encore Capital Group
Judy Gawlik Brown The Downtown Group
Adrienne Cadena*† Havas Street
Dominic Carr Starbucks
Jessica Chao* Din Tai Fung
Vijay Chattha VSC
Janet Clayton* Vectis Strategies
Stephanie Corzett* Nordstrom
Carrie Davis CD Consulting
Doug Dawson Microsoft
Chris Deri Weber Shandwick
Cristal Downing Merck
Dani Dudeck* Instacart
Bob Feldman† FHS Capital Partners
Beth Foley Edison International
Catherine Frymark Mattel
Matt Furman ExxonMobil
Robert Gibbs Warner Bros. Discovery
Brenda Gonzalez* USCIS
Cynthia Gordon† Nintendo of America
Jennifer Gottlieb Real Chemistry
Simon Halls* Slate PR
Matthew Harrington Edelman
Jon Harris Conagra Brands
Sona Iliffe-Moon* Yahoo
Bill Imada† IW Group
Megan Jordan* USC
Nina Kaminer Nike Communications
Seema Kathuria Russell Reynolds Assoc.
Megan Klein* Warner Bros. Discovery
Chris Kuechenmeister American Red Cross
Maryanne Lataif* AEG Worldwide
Elizabeth Luke* Pinterest
Kelly McGinnis Levi Strauss
Jamie McLaughlin Monday Talent
Gulden Mesara
Josh Morton Nestlé North America
Torod B. Neptune† Medtronic
Glenn Osaki*† USC
Erica Rodriguez Pompen Micron
Ron Reese† Las Vegas Sands
Heather Rim* Optiv
Melissa Robinson Boingo Wireless
Josh Rosenberg Day One Agency
Michelle Russo* US Chamber of Commerce
Kristina Schake The Walt Disney Company
Barby Siegel Zeno Group
Charlie Sipkins FGS Global
Hilary Smith NBCUniversal
Don Spetner† Weber Shandwick
Kirk Stewart*† KTStewart
Michael Stewart* Hyundai
Susie Tappouni Amgen
Grant Toups* Burson
David Tovar Grubhub
Gerry Tschopp Experian
Mya Walters Bath and Body Works
KeJuan Wilkins Nike
Julia Wilson* Wilson Global/Hampton Univ.
Deanne Yamamoto*†
Melissa Waggener Zorkin WE Communications

IN PARTNERSHIP WITH MICROSOFT
