Are the security risks of AI productivity tools worth the reward?

While AI tools like ChatGPT can be a boon for productivity, they also raise security and privacy concerns. What can IT and security teams do to minimize the risks?

With the record-setting growth of consumer-focused AI productivity tools like ChatGPT, artificial intelligence—formerly the realm of data science and engineering teams—has become a resource available to every employee. 

From a productivity perspective, that’s fantastic. Unfortunately for IT and security teams, it also means you may have hundreds of people in your organization using a new tool within a matter of days, with no visibility into what kind of data they’re sending to that tool or how secure it might be. And because many of these tools are free or offer free trials, there’s no barrier to entry and no way of discovering them through procurement or expense reports.

Many organizations have struggled with what to do about this shift, with reactions ranging from outright exuberance to extreme caution. From news outlets like WIRED to the major search engines, many are embracing the new technologies. In contrast, some large enterprises, like JP Morgan and Amazon, have blocked usage entirely, citing privacy concerns. Others are searching for a middle ground. Walmart, for example, recently backtracked on an outright ban of ChatGPT after developing official guidelines to help employees understand how to engage with AI tools safely. 

New benefits come with new risks

Organizations need to understand and (quickly) evaluate the risks and benefits of AI productivity tools in order to create a scalable, enforceable, and reasonable policy to guide their employees’ behavior. Let’s take a look at some of the major concerns associated with using AI tools for business purposes. 

Competitive risks of AI productivity tools

AI models produce their results by sourcing content from the material used to train them. Consider whether your data might be used to help with training the model, and what that means for your organization’s intellectual property. Could your data show up in outputs generated by the tool for other users? For example, is the source code your developers feed into the tool going to appear in someone else’s output?

Legal risks of AI productivity tools

On the other side of the same coin, organizations should ask themselves, “Are we concerned about models and tools that have been trained on copyrighted material? Is there any legal risk in using outputs generated from those materials?” There’s limited legal precedent at this point, which means it remains unclear whether materials created using these tools will be subject to copyright claims and how successful those claims might be. However, there are already legal cases against major companies like Microsoft, GitHub, and OpenAI, and we expect to see more going forward.

Security risks of AI productivity tools

As with any third-party software product, organizations need to evaluate the security implications of the AI tools their employees are using, including who is behind the tools and what security controls they have implemented. Does the product store the data that is shared with it? Where does that data go? How is it secured, and where is it stored? Who is it shared with? If employees are putting PII and other confidential data into these tools, where might that data end up?

Part of the issue is that many of these new tools are small-scale projects. Anyone can build a product that solves a specific problem on top of the OpenAI APIs over the course of a weekend. A tool that solves a compelling problem can become popular quickly, but there may not even be a company behind it, in which case it has most likely never undergone a security review. Organizations will need to assess which AI tools are appropriate for their workforces to use.

Another potential risk arises from the interconnections between these tools. Many AI productivity tools let you connect them to existing resources in other products. OAuth integrations make it easy for users to inadvertently give an AI tool access to, say, their entire corporate Google Drive, calendar, or email without realizing it. The ease of connecting these APIs both lowers the barrier to entry and raises the likelihood of introducing security and privacy risks. While it can be difficult to understand and visualize the full security implications of any product’s SaaS supply chain, organizations should bear in mind that we’ve already seen supply chain attacks involving AI products and should expect to see more.
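
If your organization runs on Google Workspace, one way to get a first look at these grants is the Admin SDK Directory API, which can list the third-party apps each user has authorized. The sketch below is a minimal illustration, assuming a service account with domain-wide delegation, an admin account to impersonate, and an illustrative (not exhaustive) set of scopes you consider overly broad.

```python
# Sketch: enumerate third-party OAuth grants in a Google Workspace domain and flag broad scopes.
# Assumes a service account with domain-wide delegation; the admin address, key file,
# and BROAD_SCOPES set are illustrative placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

ADMIN_SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
]
BROAD_SCOPES = {
    "https://mail.google.com/",                # full Gmail access
    "https://www.googleapis.com/auth/drive",   # full Drive access
    "https://www.googleapis.com/auth/calendar" # full Calendar access
}

creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=ADMIN_SCOPES
).with_subject("admin@example.com")  # impersonate a Workspace admin
directory = build("admin", "directory_v1", credentials=creds)

# Walk every user in the domain and list the third-party apps they have authorized.
page_token = None
while True:
    users = directory.users().list(
        customer="my_customer", maxResults=100, pageToken=page_token
    ).execute()
    for user in users.get("users", []):
        email = user["primaryEmail"]
        tokens = directory.tokens().list(userKey=email).execute()
        for token in tokens.get("items", []):
            risky = set(token.get("scopes", [])) & BROAD_SCOPES
            if risky:
                print(f"{email}: '{token.get('displayText')}' holds broad scopes: {sorted(risky)}")
    page_token = users.get("nextPageToken")
    if not page_token:
        break
```

Running something like this periodically and reviewing what changed is a crude but workable way to notice new AI tools gaining broad access before any formal review has happened.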

Accuracy risks of AI productivity tools

AI models are very good at sounding accurate, even when they aren’t. That means inaccuracies introduced by an AI tool can be difficult to spot, particularly since people tend to trust the output of these tools blindly. Recent demos from both Google and Microsoft included mistakes introduced by their AI models, showing just how easily basic errors can slip through in common business contexts. For example, if an employee uses an AI tool to summarize information for a customer, the tool might introduce errors that reflect poorly on your organization. Other use cases, such as writing code, can have even more troubling consequences. In late 2021, researchers estimated that about 40 percent of the code generated by GitHub Copilot in their tests introduced vulnerabilities that could be exploited by a bad actor. Organizations and their employees need to understand the limitations of these tools in order to use them effectively and avoid overlooking inaccuracies.
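
To make the code risk concrete, here is a hypothetical example (not an actual Copilot suggestion) of the kind of subtle flaw an AI assistant can introduce: a database lookup built with string formatting, which is open to SQL injection, next to the parameterized query a reviewer should insist on.

```python
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (username TEXT, email TEXT)")

# The kind of suggestion a code assistant might produce: it works, but building the
# query with string formatting lets an attacker inject SQL through the username field.
def get_user_insecure(username: str):
    cur = conn.execute(f"SELECT * FROM users WHERE username = '{username}'")
    return cur.fetchall()

# The safer version: a parameterized query keeps user input out of the SQL statement itself.
def get_user_safe(username: str):
    cur = conn.execute("SELECT * FROM users WHERE username = ?", (username,))
    return cur.fetchall()

# A crafted input like "' OR '1'='1" returns every row from the insecure version
# and nothing from the safe one.
```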

How can organizations mitigate these risks?

As is often the case, you have two options for managing the risks of AI productivity tools: you can block all access, or you can give employees ways to experiment with these tools with visibility and guidance from IT and security teams.

So, should we ban AI tools?

On the surface, the impulse to block ChatGPT and other AI tools makes sense. It’s impossible to control what sensitive data employees might feed into the AI tools they’re using, so why take on the risk? While that type of policy may be enforceable for heavily regulated enterprises like major financial institutions, where employee activities are already closely monitored and restricted and IT teams have the resources to chase down outliers, you need to think carefully about the likely consequences of this path.

Blocking popular tools that IT and security teams know to look out for could drive employees to smaller, riskier alternatives, which not only increases the danger to your organization but also complicates enforcement. Given how quickly new tools are popping up (not to mention phishing sites taking advantage of ChatGPT’s popularity), trying to block the newest services will be an ongoing game of whack-a-mole. Blocking may also encourage users to blend personal and professional online activities, further endangering corporate data and privacy. In essence, blocking AI tools pushes the problem further into the shadows rather than solving it. In fact, Nudge Security research shows that 67 percent of users will try to find a workaround when they experience a blocking intervention, and those workarounds will inevitably carry greater security risks for the organization than the behavior it was trying to stop.

Aside from the difficulties and risks associated with blocking, organizations that entirely eschew AI productivity tools will also miss out on their many benefits. Ultimately, this choice puts these organizations at a disadvantage in the marketplace: competitors using these tools will be able to do more with fewer resources, allowing them to edge ahead and attract more customers. Meanwhile, frustrated employees who find themselves constrained by a policy they perceive as unreasonable may disengage from their work or begin to look elsewhere.

The way forward: Prioritize visibility and guidance over blocking

The key to mitigating the risk posed by AI productivity tools is understanding what's going on and adapting, rather than incentivizing employees to hide what they’re doing. Understanding what your employees are trying to accomplish can help you create a strategy that works for your organization and balances security with productivity. 

Step one is discovery. You need a scalable method of discovering what tools your employees are already using. Be sure to consider how your SaaS discovery method will identify both OAuth grants and email signups, and how it will cover paid, free, and freemium products, since free tools will never appear on billing statements. The earlier you can identify these tools, the easier it will be to work with your employees to make secure choices and put tools through procurement and security reviews as necessary. Meanwhile, security teams should consider ways to accelerate security reviews and surface red flags quickly. (Shameless product plug: join our demo to see how Nudge Security can help.)
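
As a rough illustration of the email-signup side of discovery, the sketch below scans a monitored mailbox over IMAP for the welcome and verification messages that typically follow a new SaaS signup. The server, credentials, date cutoff, and keyword list are placeholders, and a purpose-built discovery tool will go much further (covering every mailbox in the organization, for a start), but it shows the basic signal.

```python
# Sketch: surface likely SaaS signups by scanning a mailbox for welcome/verification emails.
# The server, credentials, date cutoff, and SIGNUP_MARKERS list are illustrative placeholders.
import email
import imaplib
from email.header import decode_header

SIGNUP_MARKERS = ["welcome to", "verify your email", "confirm your account", "activate your account"]

imap = imaplib.IMAP4_SSL("imap.example.com")
imap.login("security-audit@example.com", "app-password")
imap.select("INBOX", readonly=True)

# Pull recent messages and flag subjects that look like new-account confirmations.
_, data = imap.search(None, "SINCE", "01-Jan-2024")
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    raw_subject, encoding = decode_header(msg.get("Subject", ""))[0]
    subject = raw_subject.decode(encoding or "utf-8") if isinstance(raw_subject, bytes) else raw_subject
    if any(marker in subject.lower() for marker in SIGNUP_MARKERS):
        print(f"Possible new SaaS signup: {msg.get('From')} | {subject}")

imap.logout()
```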

Once you know what your users are doing, you can engage with them to find out more information or nudge them towards options that make the most sense for your organization. For example, you might identify some enterprise-ready options for your employees’ most common use cases and encourage users to adopt those rather than tools from less trustworthy providers. You might even consider partnering with some of those vendors to ensure they meet your security guidelines and start taking advantage of a corporate relationship rather than paying for one-off services. 

As the AI landscape continues to shift, educating users will be an ongoing process. Understanding what your users are doing with these tools can help you provide nuanced, timely guidance about how to make secure choices that are appropriate for your organization. In some cases, it may be valuable to create a group within your organization where users with an interest in AI can share their experiences, which provides the added benefit of giving IT and security team members a direct opportunity to chime in about security and privacy considerations. 

How Nudge Security can help

Nudge Security continuously discovers all the SaaS assets in your organization, including AI tools, and categorizes them so you can easily see which tools your employees are using. Our reliable SaaS discovery method detects email signups as well as OAuth grants, including both free tools and paid ones, so you can get visibility into what your employees are using well before a billing cycle hits. 

You can also set up custom alerts for when new AI tools are introduced, catch OAuth grants with overly permissive scopes, evaluate new tools quickly with consolidated security insights, review SaaS supply chain risks, and engage directly with users in real time to push them towards approved options or better understand their productivity goals.

If this sounds interesting, join us for an upcoming product demo to learn more.
