
The 2024 AI adoption curve and what it means for your business

What does the rapid pace of AI adoption mean for cybersecurity leaders as they grapple with AI security and governance?

It’s well-trodden territory by now that the past two years have been as explosive for generative AI as they were for Taylor Swift. Once the realm of data science and engineering teams, generative AI was packaged and delivered to the masses in 2023, and the party has continued at full volume in 2024. Along with the splashy arrival of ChatGPT came the persistent waves of headlines, prognostications, and predictions. 


But beyond all the hype, how widely and quickly is AI actually being adopted by digital workers, and what does that pace of adoption mean for cybersecurity leaders?


Adoption of generative AI since 2023

Data from our platform illustrates the hockey-stick growth of generative AI adoption over the past year, as well as the exploding number of disparate AI tools. In July 2023, we had discovered and categorized a total of 75 generative AI tools within our customers’ environments. By October, that number had doubled to 150. By December 2023, it had more than doubled again, to 344. And today, that total has climbed to 847. In other words, the number of GenAI tools roughly doubled every quarter through 2023 and has more than doubled again since. Is this a new, super-charged version of “Moore’s law”? It’s sure looking that way.


Total unique AI tools discovered in Nudge Security customer environments


Meanwhile, organizations used an average of just six distinct generative AI applications back in July 2023, with ChatGPT and Jasper leading the pack. By the end of 2023, that number had swelled to an average of 14 applications per organization. At this writing in October 2024, we see an average of 25 unique GenAI applications per organization. And while OpenAI still easily tops our most-used list, new AI tools like Anthropic, Otter.ai, and Beautiful.ai are quickly rising in the ranks.


Percent of organizations that have adopted popular AI SaaS tools, based on product data from Nudge Security. ***New additions to the Top 10


All that is to say, growth in AI use is widespread, accelerating, evolving, and showing no signs of slowing. For anyone hoping this would be a short-lived hype cycle, the data points to generative AI tools being woven right into the fabric of the modern tech stack. Organizations can no longer bury their heads in the sand when it comes to AI governance and security: IT and security leaders need to figure out how to embrace AI adoption while navigating thorny issues ranging from privacy concerns and data governance challenges to legal quandaries and security risks.


When it comes to security risks, here are four primary areas to consider in designing your AI governance strategy:


Risk #1: You can’t secure what you can’t see.

The broadest and most overlooked risk applies not just to AI tools but to all SaaS applications: if you don’t know an app is in use in your organization, you can’t take the necessary steps to ensure secure access, provide acceptable use guidance, and govern usage. As illustrated above, with new AI tools cropping up daily, IT and security teams will struggle to understand who’s using which tools without automated discovery that doesn’t rely on prior knowledge of an app’s existence.
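As a minimal illustration of the gap (and not a description of how any particular discovery product works), the sketch below compares OAuth grant events exported from an identity provider against a sanctioned-app list and flags anything unknown. The CSV file name, column names, and allow-list are hypothetical.

```python
# Minimal sketch: flag unsanctioned apps found in an identity provider's
# OAuth grant export. The file name, columns, and allow-list are hypothetical.
import csv
from collections import defaultdict

SANCTIONED_APPS = {"ChatGPT", "Jasper", "Otter.ai"}  # illustrative allow-list

users_by_app = defaultdict(set)
with open("oauth_grants_export.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects columns: user_email, app_name
        users_by_app[row["app_name"]].add(row["user_email"])

for app, users in sorted(users_by_app.items()):
    if app not in SANCTIONED_APPS:
        print(f"Unsanctioned app: {app} ({len(users)} users)")
```

The catch, of course, is that an export like this only covers apps that touch your identity provider; tools adopted with a standalone email sign-up never show up in it, which is exactly the blind spot described above.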


Risk #2: Data governance for AI tools is complex.

Without proper education and governance, employees could upload sensitive data that the company wouldn’t want a third party to process without adequate controls, whether from a compliance or a security perspective.


As with any third-party software product, organizations need to evaluate the security implications of the AI tools their employees are using. Does the product store the data that is shared with it? Where does the data go? How is it secured? How and where is it stored? Who is it shared with? If your employees put PII or other confidential data into AI tools, where might it end up? And with new AI tools being introduced and adopted at the breakneck speed we’re witnessing, it’s nearly impossible for security teams to get the visibility they would need to complete just-in-time vendor security assessments.


Risk #3: Supply chain risks endanger corporate data.

Many AI productivity tools rely on a powerful web of interconnections with other applications, with access granted through OAuth. The ease of approving an OAuth grant can lead users to hand over more access than they intend: a well-meaning employee can inadvertently give an AI tool access to, say, their entire corporate Google Drive, calendar, or email.
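For teams on Google Workspace, one way to see what an OAuth grant actually covers is to list third-party app tokens through the Admin SDK Directory API. The sketch below assumes a service account with domain-wide delegation and the admin.directory.user.security scope; the admin address, user address, key file, and the list of “broad” scopes are illustrative assumptions, not a complete audit.

```python
# Minimal sketch: list third-party OAuth grants for one user via the
# Google Workspace Admin SDK and flag unusually broad scopes.
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumption: a service account key with domain-wide delegation and the
# admin.directory.user.security scope, impersonating a Workspace admin.
SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")  # hypothetical admin account

directory = build("admin", "directory_v1", credentials=creds)

# Scopes that grant wide access to mail, files, or calendar data.
BROAD_SCOPES = (
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar",
)

tokens = directory.tokens().list(userKey="employee@example.com").execute()
for grant in tokens.get("items", []):
    risky = [s for s in grant.get("scopes", []) if s.startswith(BROAD_SCOPES)]
    if risky:
        print(f"{grant['displayText']}: {risky}")
```

Even a quick pass like this tends to surface AI note-takers and assistants holding full mailbox or Drive access that nobody remembers approving.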


Twenty-five of the top 100 SaaS apps observed in our customer environments rely on a third-party AI service in their supply chain, with OpenAI and Anthropic the most common. As SaaS providers race to market with shiny new AI-powered features, the sprawling mesh of the AI supply chain becomes further entrenched in our organizations.


While it can be difficult to grasp the full security implications of any product’s SaaS supply chain, we’ve already seen attacks on AI products that are deeply embedded in that supply chain, including OpenAI, Hugging Face, and Microsoft Copilot.


So when security organizations consider their overall AI risk posture, they must evolve their thinking about third-party risk. Organizations need to assess not only third-party AI services, but also every third-party vendor that uses AI. Risk managers need to consider scenarios such as: “Does my CRM provider share my customers’ data with a third-party AI provider? And is that data used to train a public model?”


Risk #4: AI outages can cause severe disruption.

A recent MIT study found that 78% of organizations rely on third-party AI tools like OpenAI, and that third-party AI tools account for 55% of AI failures. In November 2023, OpenAI’s API and ChatGPT services experienced an outage that impacted 2 million developers and 100 million users, respectively. This kind of concentration at the top of the AI supply chain makes a single provider a potential point of failure for thousands or even millions of other companies during an outage, or the inevitable security incident.


As product teams embed AI tools into their own solutions, they will need to think carefully about redundancy and about how to maintain service level agreements when an upstream AI provider suffers an outage.
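As a minimal sketch of one such redundancy pattern (not a prescription, and with hypothetical stand-in functions rather than real provider SDK calls), a simple wrapper can retry a primary model provider briefly and then fail over to a secondary one:

```python
# Minimal failover sketch: retry a primary AI provider, then fall back to a
# secondary. The call_* functions are hypothetical stand-ins for SDK calls.
import time

class ProviderError(Exception):
    """Raised when a provider call fails or times out."""

def call_primary(prompt: str) -> str:
    # Placeholder for e.g. a chat completion call to your primary provider.
    raise ProviderError("primary unavailable")

def call_secondary(prompt: str) -> str:
    # Placeholder for a call to a secondary provider or self-hosted model.
    return f"[secondary] response to: {prompt}"

def generate(prompt: str, retries: int = 2, backoff_s: float = 1.0) -> str:
    """Retry the primary provider briefly, then fail over to the secondary."""
    for attempt in range(retries):
        try:
            return call_primary(prompt)
        except ProviderError:
            time.sleep(backoff_s * (attempt + 1))  # simple linear backoff
    return call_secondary(prompt)

print(generate("Summarize this quarter's incident reports."))
```

Whether the fallback is a second vendor, a self-hosted model, or a graceful degradation of the feature is a business decision; the point is that the decision gets made before the outage, not during it.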


Moving from AI experimentation to AI governance

In retrospect, 2023 was the year of experimentation with AI tools—workers rapidly tested tools, honed their prompting skills, and considered how generative AI could allow them to be both more efficient and more effective at their jobs.


In contrast, 2024 has been the year of operationalizing and securing AI usage. Organizations have grappled with the Herculean task of balancing the risks and rewards of AI productivity tools, and doing so in a way that scales. Because while the security risks of AI tools are nothing to scoff at, the business risks of not adopting AI are existential.


How Nudge Security helps

Nudge Security’s AI security solution helps you discover and evaluate AI tools in a way that’s scalable and sustainable for your organization, so you can embrace the productivity benefits generative AI can offer without taking on excessive risk. 


First, we discover all GenAI accounts ever created by anyone in your org, even for apps you’ve never heard of.


Next, our AI usage dashboard helps you visualize and understand AI usage within your own organization, including where OAuth grants have linked GenAI apps to other tools in your environment. We’ve even surfaced supply chain data to help you understand which of your SaaS providers are leveraging AI in their software supply chains. 


Additionally, Nudge Security’s GenAI playbook helps you onboard AI tools safely by delivering your organization’s acceptable usage policy to employees at the moment they sign up for new AI tools. You can also nudge app users to adopt an already vetted tool when new tools show up in your environment.


Ready to learn more about how Nudge Security can help with GenAI discovery, security, and governance?

Read our article on AI usage in The Hacker News.
