Facing the Digital Age of AI in PR and Avoiding its Potential Evils
By Thomas Mustac, Publicist — Otter PR
ChatGPT and other artificial intelligence tools have skyrocketed in popularity in recent years, but they aren’t without their challenges. While supporters have praised the technology’s ability to make people’s jobs easier, critics have pointed to its frequent inaccuracies.
The AI revolution seems here to stay, but embracing this paradigm shift must be done responsibly and with an informed mindset.
AI and misinformation
As artificial intelligence technology becomes more advanced, it becomes harder to distinguish between what is authentic and what has been artificially generated. Whether the output is images or written content like articles and emails, we face a real crisis in which AI technology can be exploited by bad actors.
The consequences of these developments in AI are tremendous. As deepfake imagery becomes more convincing, the damage it can do extends well beyond harming someone’s reputation: AI-generated falsehoods could perpetuate misinformation on a massive scale. Worse yet, if those falsehoods are fed back into an AI model as part of the “training” process, they could generate even more misinformation.
AI models’ ability to replicate human language is also dangerous if used maliciously. For example, a wrongdoer could prompt an AI chatbot to create a written message in the tone and voice of a well-known individual, which could then be used for scams, propaganda, or other harmful applications.
As such, it’s essential for all media consumers to stay informed and remain vigilant against possible misinformation. Although it can be tempting to take everything at face value, do the research and confirm that the information is accurate. On social media, read everything thoroughly before sharing, as you never know where misinformation may be hiding.
For publicists, it’s your responsibility to vet your sources and ensure that you are not accidentally spreading misinformation. As someone with some control over the story, you could leave thousands or millions of people misinformed by spreading falsehoods, even unintentionally. It can also wreak havoc on your reputation if word gets out that you did not do your due diligence.
At the same time, it’s necessary to be proactive and prepared for AI-powered misinformation to be used against you. If you represent a high-profile client, such as a celebrity or politician, chances are this technology could be weaponized against them. Come up with a game plan for how you will respond to a potential “deepfake crisis” if one arises. The more advanced the technology becomes, the more believable the images it produces will be, and simply dismissing those images as AI-generated will become less and less effective.
Embracing AI as a tool, not a replacement
The fact is that AI is simply not reliable during times of crisis, since the technology lacks the emotional intelligence and insight needed to perceive human emotions. Although an AI can analyze data from past crises, no two crises are the same, and these situations are constantly evolving. Thus, the human touch is crucial in crisis responses, as adaptability is the key to emerging from these situations safely.
On a similar note, artificial intelligence is unable to think creatively about strategy. Although AI programs have been developed that synthesize “creative” output, such as images, poems, and song lyrics, that output is not truly original. Because AI is effectively a tool that analyzes data, anything it synthesizes is based on pre-existing data. In other words, plagiarism is likely, if not guaranteed.
The same principle applies to public relations strategy: an artificial intelligence will never be able to think creatively or originally, because it bases its output solely on archived data.
As a result, it’s best to look at AI as a tool that supplements human work rather than replaces it, particularly in fields like public relations. There is no denying that artificial intelligence tools can significantly improve the efficiency of publicists’ jobs. For example, tasks like research, drafting initial pitches and press releases, and writing email templates can be completed more quickly with AI, after which a human publicist can revise them to ensure they have the all-important human touch.
Human workers should also plan for AI to go wrong. Because artificial intelligence relies on pre-existing data, it’s only as reliable as the data upon which its responses are based. If an AI bases its responses on inaccurate data, those inaccuracies will translate to any responses it gives, so it’s still vital for a human to “fact-check” any outputs synthesized by an artificial intelligence program.
AI can (and will) do a lot of good in this world, but it’s also essential to be prepared for the reality that it can cause harm. Being informed about the potential shortcomings and abuses of AI technology will help ensure that you neither fall victim to its misinformation nor unintentionally victimize others by spreading it.