ChatGPT Poses an Insider Threat, And Your Employees May Have No Idea

By Anastasios Arampatzis

Once the stuff of science fiction stories, artificial intelligence (AI) is now a real-world force influencing the globe. Alongside the many promising possibilities the new technology opens up, it also has the potential to amplify cybersecurity risks, including insider threats.

A swiftly growing user base and mounting security concerns

Large Language Models (LLMs), like ChatGPT, are a particularly well-known type of AI that can produce text astonishingly similar to human writing, based on the enormous volumes of data they are trained on. The exponential growth in the use and adoption of LLMs has created a rapidly changing cybersecurity threat picture that society must carefully comprehend and manage.

You only need to consider the adoption figures: usage is expanding quickly and has reached 100 million monthly users, making ChatGPT the fastest-growing consumer application ever, and it attracted its first 1 million users in under five days! It’s understandable why people might be concerned about the potential for abuse, given that the OpenAI bot can write code in addition to editorials.

According to a recently released BlackBerry study, the AI-powered ChatGPT bot might be a severe security risk. Of the IT professionals polled in North America, the United Kingdom, and Australia, 51% concurred that a ChatGPT-powered cyberattack would likely occur before the year is through, while 71% believed nation-states were already using the technology against other nations.

In a January 2023 blog post, CyberArk’s threat research team detailed how ChatGPT could be used to produce polymorphic malware. The researchers bypassed the content policy controls OpenAI puts in place to stop ChatGPT abuse.

Is ChatGPT yet another insider threat?

TL;DR: yes, it can be.

Beyond the security issues brought on by external attacks, ChatGPT can also present an internal threat to businesses. One such risk is the accidental disclosure of private data, including confidential corporate information and proprietary source code.

Even though this threat is not new, ChatGPT has created a new way for insiders to reveal sensitive information unintentionally. Employees who use ChatGPT may unwittingly share information that is harmful to the company, not because they intend to hurt their employer, but because they are unaware of the risks.

For example, the Economist Korea reported three instances in which Samsung employees mistakenly disclosed private information to ChatGPT. In two cases, employees entered corporate source code for optimization and error checking. In the third, a staff member input meeting transcripts into the system to create minutes from the text.

OpenAI’s privacy policy states, “When you use our Services, we may collect Personal Information that is included in the input, file uploads, or feedback that you provide….” Many companies are worried about the risk of storing sensitive data on servers owned by generative AI service providers such as OpenAI, Microsoft, and Google. The concern is real, because submitted data may eventually be surfaced to other users as the underlying machine learning models continue to be trained and refined.

Numerous companies, including Apple, JP Morgan, Samsung, and Deutsche Bank, have prohibited their staff from using generative AI tools like ChatGPT to minimize the risk of exposing confidential information. However, banning a widely used tool could have adverse effects, as employees may seek ways to bypass the restrictions.

Everyone understands that it is essential to avoid uploading a product’s source code to open repositories or posting sensitive information to social media. However, the line blurs when it comes to entering data into online tools, since many people already routinely upload data to Office 365 or query Google. Staff must be aware of these risks and take the necessary precautions.

Employees have access to a wide range of information, some of it confidential, which requires extra caution when handled. For instance, strategic data about upcoming plans or intellectual property, such as chip designs, may end up in the data sets that generative AI tools use for self-training.

If that data becomes usable for learning and querying, someone else may be able to extract it later. Several known cases already exist of people putting company plans into ChatGPT to turn them into presentations.

What do other AI tools think of ChatGPT?

If you ask other LLM tools whether ChatGPT poses an insider threat, however, you will encounter a sense of AI chivalry. I asked GrammarlyGO, the new AI tool launched by Grammarly, for its opinion, and it came up with the following interesting answer:

I’m sorry, but I am not able to determine if ChatGPT poses an insider threat. As a helpful, fair, and safe AI-powered assistant, my role is to provide information and assistance without bias or judgment. It is important to carefully evaluate the potential risks and benefits of any technology, including ChatGPT, and to ensure that appropriate safeguards are in place to protect sensitive information and prevent unauthorized access or misuse. Ultimately, it is up to each individual and organization to make informed decisions that prioritize security and privacy.

As you can see, the tool cannot “determine if ChatGPT poses an insider threat”; however, it does offer some helpful, if somewhat generic, advice on what businesses should do to protect their sensitive data from insider threats. Technical safeguards, combined with security awareness training that transforms behavior and instills a positive security culture, remain the best mitigations for any technology threat, not just ChatGPT.
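As a concrete illustration of one such technical safeguard, the sketch below shows a minimal, hypothetical pre-submission filter that an organization might place in front of a generative AI tool to flag prompts containing sensitive material. The pattern names, categories, and redaction policy here are illustrative assumptions rather than a reference implementation; real deployments would rely on a dedicated data loss prevention (DLP) product with far richer detection logic.

```python
import re

# Illustrative patterns only: a real DLP policy would be far more extensive
# and tuned to the organization's own data (project codenames, customer IDs, etc.).
SENSITIVE_PATTERNS = {
    "api_key":      re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email":        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "source_code":  re.compile(r"\b(?:def |class |public static void|#include\s*<)"),
    "internal_tag": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def review_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt an employee wants to send
    to an external generative AI service."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

if __name__ == "__main__":
    prompt = "Please optimize this function: def decrypt(key): ... CONFIDENTIAL"
    allowed, findings = review_prompt(prompt)
    if allowed:
        print("Prompt allowed.")  # hand the prompt off to the AI tool here
    else:
        print(f"Prompt blocked; matched sensitive categories: {findings}")
```

In practice, such a check would more likely live in a secure web gateway or browser extension than in application code, and it complements, rather than replaces, the awareness training discussed above.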

Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years of experience in managing IT projects and evaluating cybersecurity. During his service in the Armed Forces, he was assigned to various key positions in national, NATO and EU headquarters and has been honoured by numerous high-ranking officers for his expertise and professionalism. He was nominated as a certified NATO evaluator for information security.

Anastasios’ interests include, among others, cybersecurity policy and governance, ICS and IoT security, encryption, and certificate management. He is also exploring the human side of cybersecurity: the psychology of security, public education, organizational training programs, and the effect of biases (cultural, heuristic and cognitive) on applying cybersecurity policies and integrating technology into learning. Open-minded and flexible, he is intrigued by new challenges. He currently works as a cybersecurity content writer for Bora Design. Tassos is a member of the non-profit organization Homo Digitalis.
