AI is Everywhere: Tips for Mitigating Risk in Enterprise Generative AI Deployment
By Anurag Lal, President and CEO of NetSfere
Generative AI, artificial intelligence (AI) technology that can generate high-quality text, images and more, is poised to grow from a market size of $43.87 billion in 2023 to $667.96 billion by 2030, according to Fortune Business Insights.
The promise of generative AI for optimizing workflows and injecting efficiency into business operations is fueling the rapid adoption of this technology. When ChatGPT was introduced, the chatbot reached an estimated 100 million monthly active users just two months after launch, setting a record for the fastest-growing user base.
Enterprises are using generative AI for automating responses to common questions, writing marketing content and emails, personalizing and improving customer service, and generating source code for software applications.
While many organizations are excited about the power of this technology, there are concerns about a range of security, privacy, compliance and other risks associated with generative AI.
As the adoption of generative AI continues to grow in tandem with emerging applications and use cases, enterprises will need to take steps to mitigate risks to safely and securely harness the power of this transformative technology.
Enterprise risk
A recent Gartner webinar poll of 2,500 executives found that 38% are focusing AI investments on customer experience and retention. Other key areas of generative AI investment included revenue growth (26%), cost optimization (17%) and business continuity (7%).
As many enterprises leap into deploying generative AI to improve operations and performance, some companies are pumping the brakes on this technology. Research by BlackBerry revealed that 75% of organizations worldwide are currently implementing or considering bans on ChatGPT and other generative AI applications in the workplace. The same research found that 83% of respondents voiced concerns that unsecured apps pose a cybersecurity threat to their corporate IT environments.
Questions surrounding the accuracy of the technology and concerns about cybersecurity, data privacy and intellectual property risk are why organizations like Apple, Samsung, Verizon and some Wall Street banks are limiting or banning employee use of generative AI technology like ChatGPT.
Cybersecurity risks
Generative AI platforms store massive amounts of data, making them attractive targets for cybercriminals intent on stealing confidential and sensitive data such as personally identifiable information, financial data and health records.
Data like this is often fed into AI applications by employees themselves. Research by data security company Cyberhaven found that the average company leaks sensitive data to ChatGPT hundreds of times each week, with employees sharing company intellectual property, sensitive strategic information, and client data.
Generative AI technology also increases cyber risk by helping bad actors up their game. Using this technology, cybercriminals can quickly develop and deploy new malware variants and generate polished phishing scams free of the spelling, grammatical, and verb tense mistakes that often give such messages away, making it easier to dupe people into believing a communication is legitimate. A 2023 report by Perception Point found that advanced phishing attacks grew by 356% in 2022. The report noted that “malicious actors continue to gain widespread access to new tools and advances in AI and Machine Learning (ML) which simplify and automate the process of generating attacks.”
Privacy risks
When employees feed personally identifiable information (PII) into generative AI applications, this information can be used to train the application’s AI models and potentially be shared with other users. This practice increases the risk of unauthorized access to or misuse of personal data.
Further elevating privacy risk is a lack of visibility: organizations using external generative AI apps often have little insight into how these apps collect, use, share and delete data.
According to PwC, “employees entering sensitive data into public generative AI models is already a significant problem for some companies. Gen AI, which may store input information indefinitely and use it to train other models, could contravene privacy regulations that restrict secondary uses of personal data.”
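One way enterprises address this exposure is by screening prompts for PII before they ever leave the corporate network. The sketch below is a minimal, illustrative example of such a guardrail, assuming a Python-based gateway sits between employees and any external generative AI service; the patterns and the redact_pii function are hypothetical, and a production deployment would rely on a dedicated data loss prevention (DLP) or PII-detection service rather than hand-rolled regular expressions.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# DLP or PII-detection service rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace common PII patterns with placeholder tokens before the
    prompt leaves the corporate network."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a follow-up email to jane.doe@example.com, SSN 123-45-6789."
    print(redact_pii(raw))
    # Output: Draft a follow-up email to [REDACTED_EMAIL], SSN [REDACTED_SSN].
```

Because redaction happens before the request is sent, sensitive values never reach the external model and cannot become part of its training data.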
Intellectual Property
Information entered into a generative AI tool may also become part of its training set, which can put users of that data at risk of intellectual property (IP) infringement.
Gartner highlights that tools like ChatGPT, which are trained on large amounts of internet data, likely include copyrighted material. The analyst firm warned that their outputs have the potential to violate copyright or IP protections.
The federal government has also weighed in on this issue, with the U.S. Copyright Office recently issuing guidance on works containing material generated by AI.
Tips for mitigating AI risk
Generative AI is everywhere today, and with the proliferation of this technology comes real business risk for enterprises. To implement generative AI solutions effectively and safely, organizations should take the following steps to mitigate risk:
Establish use cases
Establishing use cases can help organizations minimize the risks of deploying AI tools and provide an opportunity to implement proper controls. A lack of transparency about what happens on the back end of this new technology can make it difficult to determine the best use cases. Even so, IT leaders should take the time to examine AI tools to fully understand how and where these solutions could be most useful to their enterprise.
Stay updated on compliance requirements
Compliance regulations are always evolving and, as adoption of AI becomes more widespread, organizations can expect to see more compliance requirements related to generative AI. This makes it critical for enterprises to keep up with new regulations that apply to generative AI and to assess each AI solution to ensure that it meets industry-specific regulatory requirements and adheres to data privacy and security laws.
Set usage policies
Organizations can also mitigate AI risks by setting usage policies that clearly outline acceptable use. These policies should address which generative AI tools are permissible in the workplace, which tasks those tools can be used for and which queries and data can be entered into them.
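Some organizations go a step further and encode the policy as an automated check rather than leaving it as a document. The following is a minimal sketch of that idea, not a prescribed implementation; the tool names, task categories and check_policy helper are hypothetical, and a real deployment would integrate with the organization's identity and DLP systems.

```python
from dataclasses import dataclass

# Hypothetical allow-lists; the tool and task names are placeholders,
# not recommendations from the article.
APPROVED_TOOLS = {"internal-llm", "vendor-chatbot-enterprise"}
PERMITTED_TASKS = {"marketing_copy", "faq_drafting", "code_review"}

@dataclass
class AIRequest:
    tool: str                   # which generative AI application is being used
    task: str                   # what the tool is being used for
    contains_client_data: bool  # whether the prompt includes client data

def check_policy(req: AIRequest) -> tuple[bool, str]:
    """Return whether a request complies with the usage policy and why."""
    if req.tool not in APPROVED_TOOLS:
        return False, f"Tool '{req.tool}' is not on the approved list."
    if req.task not in PERMITTED_TASKS:
        return False, f"Task '{req.task}' is outside the acceptable-use policy."
    if req.contains_client_data:
        return False, "Client data may not be entered into generative AI tools."
    return True, "Request complies with the usage policy."

print(check_policy(AIRequest("vendor-chatbot-enterprise", "marketing_copy", False)))
```

Expressing the policy in code makes the three questions above, which tool, which task and which data, enforceable at the point of use rather than dependent on each employee's memory.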
Provide employee training
Employee training is critical for mitigating the risk of generative AI. Enterprises should provide information on the risks associated with this technology, the implications of sharing sensitive and confidential information with chatbots and how to use AI tools responsibly and securely. This training should also keep employees informed on any updates to usage policies and educate employees on industry best practices.
Vet generative AI solutions
Enterprises should not only look at generative AI applications for their business benefits but also closely examine the security and data practices behind them. Before selecting any AI tool, IT decision-makers should vet it to ensure its security and privacy practices meet or exceed the standards of their organization.
AI is a transformative technology that offers exciting opportunities for organizations, but, as with any new technology, it comes with uncertainties and risks. By understanding these risks and taking steps to mitigate them, enterprises can more safely and securely deploy this technology to gain a competitive edge.
Anurag Lal is the President and CEO of NetSfere. With more than 25 years of experience in technology, cybersecurity, ransomware, broadband and mobile security services, Lal leads a team of innovators creating secure, trusted, enterprise-grade workplace communication technology for the enterprise. He is an expert on global cybersecurity innovations, policies and risks.
Previously, Lal was appointed by the Obama administration to serve as Director of the U.S. National Broadband Task Force. His resume includes leadership positions at Meru, iPass, British Telecom and Sprint. Lal has received various industry accolades, including recognition by the Wireless Broadband Industry Alliance in the U.K. He holds a B.A. in Economics from Delhi University and is based in Washington, D.C.