With all of the recent discussion around the “Software Bill of Materials,” heightened by the Biden administration's executive order on improving the nation's cybersecurity, I cannot help but feel like we are rubbernecking. Let me be clear: there is no downside to driving organizations to be more diligent and transparent about the supply chain they inherit from the open-source software they use. This is a great step forward, and for the most part it will do a lot of good. However, it feels like we are solving the problem of 2012, not 2022. The way we build software has undergone a massive transformation in the last handful of years. The applications we build today increasingly consist of services accessed by API rather than open-source packages.
Around 2010, we saw a huge explosion of productivity in our development organizations as many open-source packages reached mainstream maturity. Our development efforts were unburdened from the responsibility of re-developing massive amounts of pure “plumbing code.” Instead, we could quickly integrate flexible, powerful packages into our applications with a few lines of code, dramatically reducing the effort required to build the products we shipped. Whether it was powerful web frameworks (Flask, Django, Spring, Ruby on Rails), frontend libraries (jQuery, Angular), or database abstraction layers (Hibernate, SQLAlchemy), open-source software minimized the work required to get to a solid, functional product, fast. This was good, and it still is. Yet it no longer reflects the current reality of modern software development.
While open-source packages still undoubtedly play a major role in the software we develop, they are no longer the real driver of speed to market. Today, we leverage services through APIs to minimize our development effort, shorten time to production, and reduce the complexity of our environments. Whether it’s a serverless API from AWS, a platform as a service (PaaS) such as Snowflake or MongoDB Atlas, or an entire authentication stack powered by Auth0, we are now seeing productivity gains that far outpace those of the open-source revolution of the 2010s. Our modern applications have further reduced the need for in-house development by relying on these APIs for large parts of our business logic and production environments. Today, it’s feasible to see fully functional applications built on no-code platforms, hitting data stores, and authenticating users without a single line of traditional “code” being written. From a business perspective, this is a massive gain. But it also calls into question the relative utility of our SBOM efforts.
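To make that shift concrete, here is a minimal, hypothetical sketch of what delegating authentication and payments to external providers can look like. The tenant domain, keys, and amounts are placeholders, and the calls follow the standard OAuth2 client-credentials flow and Stripe's public REST API; this is an illustration of the pattern, not a recommended integration.

```python
# Hypothetical sketch: delegating auth and payments to SaaS APIs.
# The domain, credentials, and identifiers below are placeholders.
import requests

AUTH0_DOMAIN = "example-tenant.us.auth0.com"   # placeholder tenant
STRIPE_SECRET_KEY = "sk_test_placeholder"      # placeholder key


def get_machine_token(client_id: str, client_secret: str, audience: str) -> str:
    """Obtain an access token via the standard OAuth2 client-credentials flow."""
    resp = requests.post(
        f"https://{AUTH0_DOMAIN}/oauth/token",
        json={
            "client_id": client_id,
            "client_secret": client_secret,
            "audience": audience,
            "grant_type": "client_credentials",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def charge_customer(amount_cents: int, currency: str = "usd") -> dict:
    """Create a payment intent; the payment logic itself lives on the provider side."""
    resp = requests.post(
        "https://api.stripe.com/v1/payment_intents",
        auth=(STRIPE_SECRET_KEY, ""),
        data={"amount": amount_cents, "currency": currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

A handful of HTTP calls like these quietly move identity and payment logic, and everything upstream of those providers, into the application's effective supply chain.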
The real scope of our modern software stack is determined not by the packages we bundle into our products, but by the sum total of the services we use to make those products a reality. The SBOM is a great tool for on-premise enterprise software deployments, but the world is moving away from the responsibility of running software in its own environments. The cost optimization and productivity gains of software as a service (SaaS) have made it the modern standard, and for these services we need to do more than produce a simple SBOM. To take a practical example, knowing your favorite SaaS provider is using MongoDB, Express, Angular, and Node isn’t as useful as knowing that they are also leveraging Auth0, Cloudflare, Shopify, Stripe, and Upstash. Such providers deliver large portions of the application stack, drastically reducing the amount of business logic an organization needs to create itself. The real bill of materials we need is the cumulative set of integrated open-source software plus the SaaS, PaaS, and infrastructure as a service (IaaS) providers used to run the application.
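As a rough illustration (the structure and field names below are hypothetical, not a proposed standard), such an extended bill of materials might record both layers side by side:

```python
# Hypothetical structure for an "extended" bill of materials that records
# both open-source packages and the external services an application uses.
from dataclasses import dataclass, field


@dataclass
class ExtendedBOM:
    application: str
    open_source: list[str] = field(default_factory=list)   # packages in the build
    services: list[str] = field(default_factory=list)      # SaaS/PaaS/IaaS providers


bom = ExtendedBOM(
    application="storefront",
    open_source=["express", "angular", "mongoose"],
    services=["Auth0", "Cloudflare", "Shopify", "Stripe", "Upstash", "MongoDB Atlas"],
)
```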
The real challenge of assessing risk in this modern stack is that it is a transitive problem. For each service an application relies upon, there are many more services upstream. As an example, Auth0 relies upon MongoDB Atlas, Stripe, AWS, SendGrid, and many more. Understanding these transitive relationships and the upstream risk that flows from your direct providers is the coming challenge. We have already seen incidents this year where SaaS providers were targeted in an attempt to compromise the users of their services: the Twilio incident, which led to subsequent attacks on Signal users, and the SendGrid incident, which led to attacks on Coinbase users.
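The transitive nature of the problem is easy to see in a small sketch. Assuming we had a map of each provider's direct providers (the edges below mix the Auth0 example above with purely illustrative entries), a breadth-first walk enumerates everything an application ultimately trusts:

```python
# Illustrative only: the dependency edges below are examples, not a verified map.
from collections import deque

DIRECT_PROVIDERS = {
    "my-app": ["Auth0", "Stripe", "MongoDB Atlas"],
    "Auth0": ["MongoDB Atlas", "AWS", "SendGrid"],
    "Stripe": ["AWS"],
    "MongoDB Atlas": ["AWS"],
    "SendGrid": [],
    "AWS": [],
}


def upstream_services(root: str) -> set[str]:
    """Breadth-first walk over provider dependencies to find every service
    the root application transitively relies upon."""
    seen: set[str] = set()
    queue = deque(DIRECT_PROVIDERS.get(root, []))
    while queue:
        provider = queue.popleft()
        if provider in seen:
            continue
        seen.add(provider)
        queue.extend(DIRECT_PROVIDERS.get(provider, []))
    return seen


print(upstream_services("my-app"))
# e.g. {'Auth0', 'Stripe', 'MongoDB Atlas', 'AWS', 'SendGrid'}
```

Even in this toy example, three direct providers expand into five transitive ones, and each of those is a potential path into the application.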
The challenge of understanding our SaaS supply chain lies in the lack of visibility into its composition, as well as the simple (and rapid) expansion of our attack surface as services proliferate. Understandably, many API providers focus on simplicity of use and integration. The result is that a developer can change the attack surface of your application with a few clicks and a few lines of code.
The same is true upstream. We now live in a fully dynamic world where anything less than continuous monitoring and assessment means taking on unmanaged risk. Organizations need to establish clear guidelines for the integration of new technology, as well as programs for the ongoing assessment of these dependencies, to get ahead of the coming wave of SaaS supply chain attacks. Instead of relying on a point-in-time assessment of a provider’s SBOM or SOC 2 report, we need to establish a standard of data exchange that gives our customers dynamic transparency into the services we leverage and the nature of our trust relationships with them. Our environments evolve too quickly to rely on point-in-time, limited transparency.
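Operationally, even a crude version of this beats a yearly snapshot. A hypothetical monitoring job could diff the providers observed today against an approved baseline and flag any drift for review:

```python
# Hypothetical continuous-assessment check: compare today's observed providers
# against an approved baseline and surface anything new or removed.

APPROVED_BASELINE = {"Auth0", "Stripe", "AWS", "MongoDB Atlas", "SendGrid"}


def supply_chain_drift(observed: set[str]) -> dict[str, set[str]]:
    """Return providers that appeared or disappeared since the last review."""
    return {
        "added": observed - APPROVED_BASELINE,      # new, unreviewed dependencies
        "removed": APPROVED_BASELINE - observed,    # dependencies we no longer see
    }


# Example: a developer wired in a new SaaS API without a review.
drift = supply_chain_drift({"Auth0", "Stripe", "AWS", "MongoDB Atlas", "Twilio"})
if drift["added"]:
    print(f"Unreviewed providers detected: {drift['added']}")
```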
Nudge Security was built to help address this issue. Not only do we provide a view into the upstream dependencies of your SaaS providers, but we also provide immediate insight into the services your employees have created accounts with, so you can identify changes to your own supply chain as they happen. Sign up for a trial today to see your own SaaS supply chain in a few minutes.