AI in the Enterprise: Employee Usage Risks
By Anurag Lal, President and CEO of NetSfere
Artificial intelligence (AI) is transforming businesses across sectors. Today, enterprises are deploying the technology to become more efficient, make better business decisions, automate routine tasks, enhance productivity, and improve customer service.
While the technology has a wide range of use cases and benefits for organizations, it is not without risk. Growing employee use of AI is introducing unique risks to enterprises. A survey by The Conference Board found that 56% of workers are using generative AI on the job, with nearly 1 in 10 employing the technology on a daily basis.
As more and more employees use the technology in their day-to-day work, enterprise concerns about the inherent risks of AI are growing in tandem. According to data from 1Password, 92% of security professionals have security concerns about generative AI.
The unique risks associated with employee use of AI that most concern enterprises and their IT teams include:
AI hallucinations
AI-powered tools can hallucinate, misinterpreting patterns in their training data and producing false or misleading responses to user requests and queries. Because these responses are presented as fact, it is often difficult for users to spot the inaccuracies.
Inaccurate outputs can have a cascade of negative impacts on enterprises, including flawed decision-making, the spread of misinformation, reputational damage, legal repercussions, and the loss of customer trust. With these types of adverse impacts hanging in the balance, it’s not surprising that 68% of security and privacy professionals cited AI hallucinations as a top concern in a recent Cisco survey.
Data security and privacy
AI systems are trained on large volumes of data, including data entered by users. As use of the technology increases, so does the risk of employees entering sensitive data that can be collected, stored, and used by these systems. Data from CybSafe reveals that 64% of U.S. office workers have entered work information into a generative AI tool, and a further 28% aren’t sure if they have. According to this research, a total of 93% of workers are potentially sharing confidential information with AI tools. Sensitive data entered into these AI systems includes customer information, financial data, trade secrets, and personally identifiable information such as email addresses and phone numbers.
Bad actors are working to intercept this high-value information, leaving enterprises vulnerable to data leaks that put them at risk of violating regulations such as GDPR and HIPAA.
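Policy and education can be reinforced with technical controls. As a purely illustrative sketch, not a description of any specific product, the short Python example below shows the kind of pre-screen an IT team might place in front of an external generative AI tool, flagging prompts that appear to contain email addresses or phone numbers before they leave the enterprise. The function name and patterns here are hypothetical and greatly simplified; a production data loss prevention control would rely on a vetted library or service.

import re

# Illustrative, simplified patterns; a real DLP control would use a vetted
# library or service rather than hand-rolled regular expressions.
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE_PATTERN = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def screen_prompt(prompt: str) -> list[str]:
    """Return descriptions of potential PII found in a prompt bound for an AI tool."""
    findings = []
    if EMAIL_PATTERN.search(prompt):
        findings.append("email address detected")
    if PHONE_PATTERN.search(prompt):
        findings.append("phone number detected")
    return findings

if __name__ == "__main__":
    prompt = "Summarize this complaint from jane.doe@example.com; call her back at 555-867-5309."
    findings = screen_prompt(prompt)
    if findings:
        # In practice the prompt would be blocked or redacted and the event logged for IT review.
        print("Prompt blocked:", ", ".join(findings))
    else:
        print("Prompt allowed")

Even a simple check like this, paired with logging, gives IT teams visibility into what employees are attempting to share with AI tools. It complements, rather than replaces, the usage policies and vetting described later in this article.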
Shadow AI
Employees using generative AI tools outside the purview of IT teams can open the door to cybersecurity risks in enterprises. A survey by Salesforce found that 55% of employees have used unapproved generative AI tools at work, and 40% of workplace generative AI users have used banned tools. This unapproved AI app usage reduces IT visibility and creates security blind spots that represent a serious threat vector for organizations. These blind spots increase the risk of data breaches, privacy violations, and non-compliance with regulations.
Copyright infringement and intellectual property
Copyright violations and intellectual property infringement are two other risks associated with employee use of AI. The data AI systems are trained on can include copyrighted materials such as photos, books, computer code, videos, and other content. When employees incorporate AI output derived from that material into company work product, enterprises are exposed to the risk of legal action and fines.
Data bias
Bias can creep into AI systems when the training data these systems use to make decisions is skewed by human biases. Harmful bias and errors can go undetected as a result of AI’s “black box” decision-making, which often lacks transparency and explainability. This can lead to discriminatory outcomes, erosion of customer trust, and reputational damage.
Regulatory compliance
Employee use of AI also elevates compliance risks for organizations, and this risk is compounded by a growing landscape of laws governing the use of AI. According to the National Conference of State Legislatures, last year at least 25 states, Puerto Rico and the District of Columbia introduced artificial intelligence bills, and 18 states and Puerto Rico adopted resolutions or enacted legislation. Legislation like the California Privacy Rights Act (CPRA) impacts AI with limitations on data retention, data sharing, and use of sensitive personal information.
The patchwork of state laws is expected to expand in 2024. According to the LexisNexis® State Net® legislative tracking system, 89 bills referring to “artificial intelligence” had been pre-filed or introduced in 20 states as of January 11, adding to the more than 100 AI bills being carried over from last year. LexisNexis notes that the majority of these new measures “seek to study, regulate, outlaw or okay critical aspects of the technology’s use in society.”
Unauthorized or irresponsible use of AI by employees can lead to violations of regulations, potentially resulting in hefty fines, lawsuits, loss of revenue, reputational damage, and loss of consumer trust.
Today, there are some very real risks to enterprises stemming from employee use of AI. Banning AI is not the answer. Instead, taking a secure approach to deploying the technology is a more viable way for organizations to mitigate these risks. This approach should include developing clear company-wide AI usage policies, educating employees on safe and unsafe AI usage practices, and vetting generative AI tools to understand how data is collected, stored, and used, and to ensure these tools align with company data security and privacy standards.
About The Author
Anurag Lal is the President and CEO of NetSfere. With more than 25 years of experience in technology, cybersecurity, ransomware, broadband and mobile security services, Lal leads a team of talented innovators creating secure, trusted, enterprise-grade workplace communication technology that equips enterprises with world-class secure communication solutions. Lal is an expert on global cybersecurity innovations, policies, and risks.
Previously, Lal was appointed by the Obama administration to serve as Director of the U.S. National Broadband Task Force. He has held leadership positions at Meru, iPass, British Telecom and Sprint. Lal has received various industry accolades, including recognition by the Wireless Broadband Industry Alliance in the U.K. He holds a B.A. in Economics from Delhi University and is based in Washington, D.C.