AI has caused significant consternation across workforces, with fears the technology will replace employees. Even so, many workers are happy to use generative AI in their jobs – but perhaps not in ways their managers would like.
Some bosses claim to encourage innovative uses of AI to streamline workflows. However, many employees are using the technology in ways their employers have not sanctioned – a phenomenon known as 'shadow AI'.
According to Deloitte, just 23% of those who have used GenAI at work believe their manager would approve of how they’ve used it. Crucially, this unofficial use of AI in the workplace is putting organisations at legal, financial and reputational risk.
So why are so many workers using AI on the sly? And what can employers do to clamp down on the practice?
The University of Sussex and marketing firm Magenta are undertaking a research project into how GenAI is being used in the communications industry. The aim is to produce a best-practice framework that businesses can adapt to their own needs.
According to the survey’s preliminary findings, the workplace is rife with shadow AI. One in 20 workers told the researchers that they're using AI in total secrecy – and many more are at least partially secretive.
"More people talk about it with their co-workers than their managers and they aren’t always open about the extent to which they are using it," says Greg Bortkiewicz, a senior consultant and digital lead at Magenta. "In many cases, their employer simply hasn’t asked if they are using it."
The main reasons for using GenAI in secret, according to the survey, are fear of being accused of laziness or incompetence and embarrassment at needing help.
Moreover, many workers say there’s no need to tell their employer about their use of AI because it doesn’t really matter. Managers must take notice of this sentiment, because the use of shadow AI does matter – a lot.
Shadow AI can pose significant legal, ethical, operational and security risks. It can also expose companies to substantial fines under regulations such as the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) and Health Insurance Portability and Accountability Act (HIPAA) in the US.
Security threats are a particular concern. Only a quarter of ChatGPT accounts used in the workplace are corporate accounts, according to research from data security firm Cyberhaven. These non-enterprise accounts feed information into public models, posing considerable risk to sensitive data, explains Cyberhaven CEO Howard Ting.
"Alarmingly, a substantial portion of sensitive corporate data is being sent to non-corporate accounts. This includes roughly half of source code, research and development materials and HR and employee records," he says.
Indeed, 38% of UK office workers admit that they or a colleague have fed sensitive information – such as customer, financial or sales data – into a GenAI tool, according to research from data management firm Veritas Technologies.
“The financial implications are also significant,” says Luke Dash, CEO of compliance platform ISMS.online. “Misuse of AI can lead to unexpected costs for damage control, compliance fines and compensatory damages, diverting resources from sanctioned initiatives and affecting overall profitability.”
Despite these risks, organisations are failing to impose strict policies on workplace AI use. Market research firm Sapio Research recently found that fewer than half of businesses restrict the information that can be submitted to an AI tool, limit which roles can use GenAI, offer guidance on acceptable use or strictly control access.
The ISO 42001 standard for AI management systems is a good starting point for a shadow AI policy. This outlines best practices for the management of secure and ethical AI systems, including elements such as data privacy compliance, security protocols and continuous risk assessments.
More broadly, policies on shadow AI should address the types of AI tools that may be used, any approvals that should be sought before using the technology and any limitations on using AI-generated copy or outputs, says Chris Hogg, a partner at Bloomsbury Square Employment Law.
The policy should also include directions on oversight and due diligence. It should clearly state who has responsibility for evaluating and approving the use of new AI technologies and spell out the consequences for breaches of the policy.
"A clearly worded policy reduces the risk of employees using shadow AI as there are clear parameters for them to follow," Hogg says. "It also makes it easier to take action against employees who continue to use shadow AI to the detriment of the business."
Nicholas Le Riche, a partner in the employment practice at law firm BDB Pitmans, agrees. "Crucially, the policy should expressly confirm that it applies to the use of AI on both an employee’s own device as well as work devices and it should also explain that content generated by AI applications will be monitored," he adds.
Regular reminders can help keep staff on track. Cyberhaven Labs found that when workers are shown a pop-up warning as they do something potentially dangerous, such as pasting source code into a personal ChatGPT account, ongoing risky behaviour falls by 90%.
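How might such a guardrail work in practice? The following is a minimal sketch, assuming a simple pattern-matching check stands in for the far richer detection commercial tools perform – the patterns, function names and warning text here are illustrative assumptions, not Cyberhaven's actual implementation.

```python
import re

# Illustrative patterns only. Real data-loss-prevention tools rely on much
# richer detection than these simple regexes.
SENSITIVE_PATTERNS = {
    "source code": re.compile(r"(\bdef |\bclass |#include|\bimport |\bfunction\s*\()"),
    "credential": re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}"),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def check_prompt(text: str) -> list[str]:
    """Return the categories of sensitive content detected in the prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def confirm_submission(text: str) -> bool:
    """Warn the user before the prompt leaves the device, in the spirit of
    the pop-up intervention Cyberhaven describes, and ask them to confirm."""
    findings = check_prompt(text)
    if not findings:
        return True
    print(f"Warning: this prompt appears to contain {', '.join(findings)}.")
    print("Company policy prohibits sending such data to personal AI accounts.")
    return input("Send anyway? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    snippet = "def transfer_funds(account_id): ...  # internal billing logic"
    if confirm_submission(snippet):
        print("Prompt forwarded to the GenAI service.")
    else:
        print("Submission cancelled.")
```

In a real deployment, a check like this would sit in a browser extension or endpoint agent rather than a standalone script, so the warning fires wherever the paste actually happens.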
Global digital product studio and B Corp Ustwo recently developed a policy covering shadow AI. "Our AI control serves as a formal document within our ISMS (information security management system) and across our company policies, providing clear rules and transparency for anyone who wishes to understand our approach to AI," says head of IT Greg Rochford.
The policy details everything from data collection and training to innovation strategies, Rochford explains, along with the deployment of AI within Ustwo and for the company's clients.
Such clear policies not only strengthen internal security but also reassure clients that their data is being handled responsibly, says Rochford.
As organisations ramp up the use of AI across their workforce, managers would be wise to ensure that employees using this technology are doing so out in the open, not hidden in the shadows.