AI is no longer a futuristic concept; it's a present-day reality transforming how businesses operate. For security and IT leaders, understanding and managing the risks associated with AI implementation is crucial to safeguarding organizational data and reputation. This guide provides a structured approach to running an AI risk assessment, ensuring safe and compliant adoption of AI tools in your enterprise.
‍
‍
The foundation of generative AI is machine learning, deep learning, and large language models (LLMs). The most common models include GPT from OpenAI, Claude from Anthropic, and Llama from Meta. Many of these companies have built chatbot interfaces on top of their models; these are what the general public knows as ChatGPT, Claude, and Meta AI, respectively.
‍
GenAI apps don’t stop there. A flood of startups is seizing on the demand for AI by building purpose-built solutions on top of these AI models’ APIs. These GenAI “wrapper” apps aim to reduce the learning curve of prompt engineering with a user-friendly UI designed for specific use cases and outcomes. Since they don’t require heavy infrastructure development, GenAI wrappers can be launched as quickly and easily as a weekend side project, which raises the question of whether rigorous security controls are in place.
‍
Finally, there are “AI-powered” SaaS apps: the multitude of SaaS providers that want to capitalize on the novelty of AI, boost top-line revenue, and stay ahead of the competition by embedding AI-powered capabilities in their offerings. “AI-powered” could mean anything from using one of the common LLMs to surface documentation faster to actively delivering suggestions, results, and value within the product.
‍
The bottom line: the AI landscape is vast and growing exponentially. In fact, AI growth trends from Nudge Security show that the number of unique GenAI tools has roughly doubled each quarter since the start of 2023. It's critical to keep up with the pace at which GenAI tools are created and adopted by your employees and embedded in their SaaS tools.
‍
The first step in any AI risk assessment is identifying all AI-related accounts, users, and applications within your organization. This involves cataloging not only the known GenAI tools in use, but also uncovering new, niche tools that may have slipped under the radar, as well as any AI-powered SaaS apps. There are five ways to discover what GenAI tools are being used at your organization, each providing a different level of visibility.
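For illustration, one lightweight approach is to match egress, proxy, or DNS logs against a list of known GenAI domains. The sketch below assumes a hypothetical CSV log export with "user" and "domain" columns and a locally maintained domain list; it is a minimal example, not a description of any specific product's discovery method.

```python
# Illustrative sketch: flag GenAI usage by matching an exported web proxy or DNS
# log against a locally maintained list of known GenAI domains.
# The CSV format ("user", "domain" columns) and the domain list are hypothetical.
import csv
from collections import defaultdict

KNOWN_GENAI_DOMAINS = {
    "chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com",
}

def discover_genai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> GenAI domains that user has contacted."""
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as log_file:
        for row in csv.DictReader(log_file):
            domain = row["domain"].strip().lower()
            if domain in KNOWN_GENAI_DOMAINS:
                usage[row["user"]].add(domain)
    return usage

if __name__ == "__main__":
    for user, domains in sorted(discover_genai_usage("proxy_log.csv").items()):
        print(f"{user}: {', '.join(sorted(domains))}")
```

Network-based matching like this only catches browser and API traffic on managed networks, which is why it is just one of several discovery methods worth combining.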
‍
Given the pace at which GenAI tools have entered the market (many without formal security programs), it's vital to determine each tool's security posture and ensure it aligns with your organization's security standards. (This is especially concerning given that 90% of the 2,500+ GenAI vendors Nudge Security has automatically discovered and catalogued have fewer than 50 employees.)
‍
When reviewing GenAI vendors, many of the same questions that you would consider for other vendors apply:
‍
However, for GenAI apps it's also important to understand whether and how you can prevent your data from being used to train models, which introduces the risk that it could be surfaced in responses to prompts from users outside of your business.
‍
Consider asking these questions to better understand this risk:
‍
And, given the complexities of how data is handled in AI tools, questions around data locality and data processing will likely require a closer review than for other types of tools.
‍
By asking these questions, you can better evaluate the AI provider’s data policies and determine the level of control you have over your sensitive information.
‍
While security questionnaires can cover some of these questions, conducting these reviews is time-intensive and can impede workforce productivity if a review is required before every tool can be used. It can be more effective to steer employees towards already vetted and approved GenAI tools than to sustain a never-ending stream of GenAI vendor security reviews.
‍
Note: Nudge Security provides free, publicly available security profiles for thousands of SaaS tools, including an expanding list of GenAI tools.
‍
GenAI tools often connect to other systems within your organization, creating points where data leaks could happen if not properly managed. A detailed integrations review helps map out these connections and assess their security implications. Key considerations include:
‍
Discovering integrations with AI tools is not always straightforward. A good place to start is to review OAuth grants in your IdP (Microsoft 365, Google Workspace), looking for grants that enable AI tools to access Google Drive, SharePoint, or other data repositories. Our blog post on the hidden dangers of ChatGPT’s integrations with Google Drive and Microsoft OneDrive covers more details.
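As a rough sketch of what that review can look like in a Google Workspace environment, the example below uses the Admin SDK Directory API to list a user's OAuth grants and flag apps whose names suggest an AI tool with Drive access. It assumes the google-api-python-client library and admin credentials authorized for the directory security scope are already configured; the AI name hints are illustrative, not an authoritative list.

```python
# Illustrative sketch: list OAuth grants in Google Workspace and flag apps that
# look like AI tools with access to Google Drive.
# Assumes google-api-python-client and admin credentials authorized for the
# admin.directory.user.security scope; the AI name hints below are examples only.
from googleapiclient.discovery import build

AI_APP_HINTS = ("chatgpt", "openai", "claude", "anthropic", "gemini")

def flag_ai_drive_grants(credentials, user_email: str) -> None:
    service = build("admin", "directory_v1", credentials=credentials)
    tokens = service.tokens().list(userKey=user_email).execute().get("items", [])
    for token in tokens:
        app_name = token.get("displayText", "")
        scopes = token.get("scopes", [])
        if any(hint in app_name.lower() for hint in AI_APP_HINTS):
            has_drive = any("auth/drive" in scope for scope in scopes)
            print(f"{user_email}: {app_name} (Drive access: {has_drive})")
            print(f"  scopes: {scopes}")
```

In practice you would iterate this over every user returned by the Directory API rather than a single account; the single-user version is shown here only for brevity.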
‍
Beyond that, you'll also want to look for OAuth grants and API integrations that connect AI tools to your other systems, particularly those that handle sensitive data, like finance systems, HR tools, and your CRM. Depending on the logging and API options available for these tools, you may be able to forward events related to OAuth grants and API connections to your SIEM or SOAR. Or, if you are using an SSPM solution, you may be able to get integration details for the apps it manages. If neither of these options is in place, you will likely need to log in to each app and review its OAuth grants and API integrations manually.
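Where API access is available, one way to feed OAuth grant activity to a SIEM is to poll the Google Workspace admin Reports API for "token" application events (such as authorizations) and forward them as structured records. The sketch below assumes google-api-python-client with credentials authorized for the audit-readonly reports scope; the actual SIEM forwarding step is left as a placeholder.

```python
# Illustrative sketch: poll the Google Workspace admin Reports API for OAuth
# "token" application events (e.g., "authorize") and emit them as JSON lines
# that a SIEM/SOAR pipeline could ingest.
# Assumes google-api-python-client and credentials authorized for the
# admin.reports.audit.readonly scope; the forwarding step is a placeholder print.
import json
from googleapiclient.discovery import build

def export_token_events(credentials, start_time: str) -> None:
    service = build("admin", "reports_v1", credentials=credentials)
    request = service.activities().list(
        userKey="all", applicationName="token", startTime=start_time
    )
    while request is not None:
        response = request.execute()
        for activity in response.get("items", []):
            for event in activity.get("events", []):
                record = {
                    "time": activity["id"]["time"],
                    "actor": activity.get("actor", {}).get("email"),
                    "event": event.get("name"),
                    "parameters": event.get("parameters", []),
                }
                print(json.dumps(record))  # replace with your SIEM forwarder
        request = service.activities().list_next(request, response)
```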
‍
SaaS vendors have been launching AI-powered functionality at an accelerated pace since ChatGPT went viral in December 2022, so it is critical for third-party risk management teams to stay on top of which vendors are adding AI providers to their sub-processor lists and supply chains. To manage third- and fourth-party risk, this review should regularly investigate:
‍
Employees want more AI education. In fact, in a recent EY survey, 81% of respondents said they would feel more comfortable using AI if best practices for responsible AI were routinely shared. IT and security leaders have an opportunity and a responsibility to ensure that employees are aware of the organization's acceptable use policy and AI best practices. Regular training sessions, clear communication channels, and accessible support resources can help reinforce these policies.
‍
Ask yourself:
‍
By embracing a structured approach to AI risk assessment, leaders can not only safeguard their data and reputation but also unlock AI’s transformative potential securely. Encouraging a culture of vigilance and continuous improvement positions your organization at the forefront of innovation while maintaining robust security protocols.
‍
Nudge Security has discovered over 1,000 unique GenAI tools in customer environments to date, and provides a scalable approach to AI governance. With Nudge Security, you can: