This article was originally published on the Forbes Technology Council.
The breakneck speed at which generative AI has mainlined its way into seemingly every organization this year has caused major whiplash for enterprise IT, risk and security leaders, who face tough questions about how to govern the use of these new tools without hamstringing the ability to innovate. In practice, many are still struggling to get their arms around the shadow AI being adopted across their organizations.
This situation reflects a much larger problem that these leaders have been facing for years. A tectonic shift has taken place in technology adoption in the workplace: Over the last five years, top-down IT procurement has been usurped by business-led and employee-led IT adoption. This shift has made it difficult for technology governance leaders to keep tabs on what tools are being used, where sensitive data resides and who (and what) has access to it.
The difference now is that everyone is paying close attention to the data privacy and security concerns of generative AI in the workplace. Enterprise IT, risk and security leaders are at a pivotal moment: How they address AI governance will have a lasting impact on the perception and influence of their functions within their organizations. Bungle it, and they’ll be invited to go sit in a corner and patch vulnerabilities while the business moves forward without them. Master it, and they’ll have laid the groundwork for a modern, adaptable IT security and governance model that allows the business to move forward quickly and safely.
To balance the risks and potential rewards of these emerging technologies without repeating past mistakes, enterprise IT, risk and security leaders need a new approach to securing and governing access to new cloud-delivered technologies. Here are a few do’s and don’ts to consider.
You can’t manage what you can’t see, which is why visibility is a prerequisite for enacting any security policy. Yet the question “What SaaS tools are employees actually using?” often lands with a record scratch. The oversight of technology use that IT and security teams once had has steadily eroded under the rising tides of remote work, BYOD and the proliferation of cloud and SaaS applications.
Left to mine DNS records, expense reports and internal Slack channels to cobble together an incomplete picture of unsanctioned SaaS use, IT and security teams lack the context they need to understand what SaaS tools are being adopted and when, by whom, and for what purposes. This context, along with insights into what types of data are being shared, what business systems are being connected and what the vendors’ security profiles and terms of service look like, is essential to assessing organizational risk, making policy decisions and enforcing those policies.
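To see why these breadcrumbs fall short, consider a minimal sketch of the kind of ad hoc correlation teams resort to. It assumes a hypothetical CSV export of DNS query logs and a hand-maintained list of known SaaS domains; every file name, column and domain below is an illustrative assumption, not a real product or dataset.

```python
import csv
from collections import Counter

# Illustrative assumptions: a CSV export of DNS queries with "timestamp",
# "client" and "domain" columns, and a hand-maintained (inevitably stale)
# list of known SaaS domains. Neither reflects any real product or dataset.
KNOWN_SAAS_DOMAINS = {"openai.com", "notion.so", "airtable.com"}

def find_unsanctioned_saas(dns_log_path: str, sanctioned: set[str]) -> Counter:
    """Count DNS queries to known SaaS domains not on the sanctioned list."""
    hits: Counter = Counter()
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().rstrip(".")
            for saas in KNOWN_SAAS_DOMAINS - sanctioned:
                # Match the registrable domain, e.g. chat.openai.com -> openai.com.
                if domain == saas or domain.endswith("." + saas):
                    hits[saas] += 1
    return hits

if __name__ == "__main__":
    for domain, count in find_unsanctioned_saas("dns_export.csv", {"notion.so"}).items():
        print(f"{domain}: {count} queries from the corporate network")
```

Note everything this sketch cannot tell you: who adopted the tool, for what purpose, what data went into it or what the vendor’s terms permit. That is precisely the context described above as essential.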
It’s no longer enough to limit IT oversight to a subset of carefully procured enterprise SaaS applications. Rather, IT and security teams must work to gain visibility into the long tail of unsanctioned cloud and SaaS use, even when workers are off-network and on personal devices. The good news is that this is possible—just not with legacy approaches and perimeter-based monitoring technologies. Next-generation SaaS asset discovery solutions exist for today’s hybrid, cloud-first world and are worth considering.
Samsung, Apple, Verizon and other major companies made headlines recently for their sweeping restrictions on the use of generative AI tools like ChatGPT. But is that the right approach? Is it even enforceable?
IT and security leaders face a difficult balancing act in governing the adoption and use of emerging cloud and SaaS technologies. As one technology leader put it, there’s “a fear of missing out and a fear of messing up.” Lean too far into allowing experimental SaaS use without safeguards, and you could increase risk, sprawl and operational inefficiency. Lean too far into blocking unsanctioned SaaS, and you could stifle the organization’s ability to innovate; more likely, employees will simply work around these controls and processes altogether.
The question IT and security organizations should be asking is, “How can we enable safe SaaS use at the speed of business, and what do we need to get there?” For example, instead of delaying SaaS procurement by weeks to conduct vendor security reviews, organizations can work toward just-in-time assessments that run as SaaS applications are adopted, alerting IT and security teams to risky or non-compliant findings.
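As a rough illustration of what “just-in-time” could look like, here is a minimal sketch of a rules-based assessment triggered at adoption time. Every name, field and rule below is an illustrative assumption, not the API of any real discovery or governance product.

```python
from dataclasses import dataclass

# A minimal sketch of a rules-based, just-in-time vendor assessment. The
# fields, rules and alerting hook are illustrative assumptions only.

@dataclass
class SaaSApp:
    name: str
    vendor_domain: str
    soc2_attested: bool        # does the vendor hold a SOC 2 attestation?
    stores_customer_data: bool
    trains_on_inputs: bool     # do the terms allow training models on submitted data?

def assess(app: SaaSApp) -> list[str]:
    """Return policy findings for a newly adopted app; an empty list means no flags."""
    findings = []
    if not app.soc2_attested:
        findings.append("no SOC 2 attestation on file")
    if app.stores_customer_data and app.trains_on_inputs:
        findings.append("vendor may train models on submitted customer data")
    return findings

def on_app_discovered(app: SaaSApp) -> None:
    """Hypothetical hook, invoked the moment a discovery tool sees a new app."""
    findings = assess(app)
    if findings:
        # A real deployment would open a ticket or page the security team.
        print(f"[ALERT] {app.name} ({app.vendor_domain}): " + "; ".join(findings))

on_app_discovered(SaaSApp("ExampleGPT", "example.ai",
                          soc2_attested=False,
                          stores_customer_data=True,
                          trains_on_inputs=True))
```

The design point is the trigger, not the rules: the check runs the moment an app is discovered, so findings reach the security team while the adoption decision is still fresh rather than weeks into a procurement queue.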
Traditional media and social media have been plastered recently with messages about how learning generative AI is the ultimate productivity hack and a must-have skill set for any modern digital worker. Can you really blame employees for brazenly, if not recklessly, experimenting with these shiny new objects, especially when getting started is as simple as entering a corporate email address, throwing down a few bucks on the company credit card and feeding the tool work: code, customer data, next month’s earnings report, whatever?
Security awareness training won’t solve this because it isn’t a “stupid user” problem; it’s a conflict of interests. Employees want to skill up, work efficiently and help the business succeed. And, if an IT policy or security control stands in the way of that, most workers will opt to beg for forgiveness rather than ask for permission. According to Gartner, 74% of employees said they would be willing to bypass cybersecurity guidance if it helped them or their team achieve a business objective.
So much of the current discussion on AI security has focused on what security policies should look like, and too little on how those policies can and should be enacted and enforced. Without the right SaaS security and governance processes and technology in place, IT and security leaders risk losing the ability and the right to govern IT going forward. However, with the bright spotlight on generative AI, they can seize the opportunity to address this existential crisis and create a path forward to sustainable technology governance.