Controlling FedRAMP Vulnerability Management Costs: An Auditor’s Analysis

FedRAMP — the Federal Risk and Authorization Management Program — is an essential compliance program for any cloud service provider (CSP) that wants to do business with U.S. federal government agencies or their key contractors. But compliance with it is not an easy or inexpensive proposition. Depending on the scope of your FedRAMP compliance needs and the desired level of authorization, initial compliance efforts can cost hundreds of thousands to millions of dollars to execute.

And of course, there are ongoing costs to maintain compliance as well. The costs associated with the mandatory annual assessment from a Third-Party Assessment Organization (3PAO), like Fortreum, have to be planned for, but the larger cost comes from the ongoing efforts to monitor your FedRAMP-authorized products and respond to the issues that monitoring uncovers. 

Continuous Monitoring for Application Vulnerabilities

One of the core requirements that demands ongoing investment is continuous monitoring (ConMon) of FedRAMP-authorized applications for vulnerabilities, and responding to those that are discovered. Tools used for this type of monitoring must meet specific requirements [PDF] to ensure that they’re accurate, complete, and produce results that are consumable by the various parties involved in FedRAMP oversight.

And when your tools — whether SCA (Software Composition Analysis), container scanning, or anything else — do find a vulnerability, you have to remediate it promptly, based on severity level:

  • High (CVSS 7.0 and higher) — must remediate within 30 days
  • Medium (CVSS 4.0 to 6.9) — must remediate within 90 days
  • Low (CVSS under 4.0) — must remediate within 180 days
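
As a simple illustration of how these timelines translate into tracking data, here is a minimal Python sketch that maps a finding’s CVSS base score to its FedRAMP severity bucket and a remediation due date. The thresholds mirror the list above; the function and field names are illustrative, not taken from any particular tool.

```python
from datetime import date, timedelta

# FedRAMP remediation windows in days, keyed by severity (see the list above).
REMEDIATION_WINDOWS = {"High": 30, "Medium": 90, "Low": 180}

def severity_from_cvss(base_score: float) -> str:
    """Map a CVSS base score to the FedRAMP severity buckets."""
    if base_score >= 7.0:
        return "High"
    if base_score >= 4.0:
        return "Medium"
    return "Low"

def remediation_due_date(base_score: float, detected_on: date) -> date:
    """Return the date by which the finding must be remediated."""
    window = REMEDIATION_WINDOWS[severity_from_cvss(base_score)]
    return detected_on + timedelta(days=window)

# Example: a CVSS 7.5 finding detected today must be remediated within 30 days.
print(remediation_due_date(7.5, date.today()))
```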


While remediation of vulnerabilities is often fairly simple on its face, there are plenty of cases where it isn’t. Sometimes fixes are not available in a timely manner. Often, updates do much more than repair the vulnerability: they fix other bugs, change how certain features work, add new functionality, or remove existing functions. Such changes frequently require engineering or operational resources to make system changes, test them thoroughly, and push them to production on very short timelines. And the remediation itself is on top of the cost of the people and tools needed to track and manage these items.

Accurate Inventories and Risk Assessments


It’s important that you’re able to identify all the components that make up a FedRAMP-authorized application, and that you have up-to-date vulnerability information for all of those components. If your tools and processes fail to identify and monitor every component, or fail to identify vulnerabilities in those components, you won’t meet the ConMon requirement — and your FedRAMP authorization could be at risk.

For applications and containers, this means ensuring you have a way to discover and assess vulnerabilities in:

  • container images — applications and libraries in use in a given container, even those that are not part of your application.
  • application dependencies — 3rd-party components packaged with your application, whether deployed in a container or not.
  • transitive dependencies — your application dependencies have their own dependencies, and these are also in scope (see the sketch below).
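
To make the transitive-dependency point concrete, here is a minimal Python sketch that walks the declared requirements of an installed package using the standard library’s importlib.metadata and collects the names it finds. Real SCA tools resolve versions, markers, and ecosystems far more carefully; this is only an illustration, and the choice of the requests package is arbitrary.

```python
import re
from importlib import metadata

def transitive_dependencies(package: str, seen=None) -> set:
    """Collect names of a package's direct and transitive dependencies."""
    seen = set() if seen is None else seen
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen  # dependency not installed in this environment
    for req in requires:
        match = re.match(r"[A-Za-z0-9._-]+", req)  # bare project name
        if match and match.group(0) not in seen:
            seen.add(match.group(0))
            transitive_dependencies(match.group(0), seen)
    return seen

# Example: everything 'requests' pulls in transitively is also in ConMon scope.
print(sorted(transitive_dependencies("requests")))
```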

     

Once identified, you also need to be able to track each unique vulnerability on your Plan of Action and Milestones (POA&M) document.

The more complete and accurate an automated tool can be for these items, the less you’ll spend on human-driven processes — but be careful that the cost to acquire and operate a tool isn’t higher than the savings it helps you achieve.

Correlating Findings from Different Tools

Since you must track each unique vulnerability on your POA&M, it’s important to be able to correlate findings from different scanner types that might find the same vulnerabilities. FedRAMP does not allow you to “group” vulnerabilities, but it does allow you to “de-duplicate” vulnerability findings — if two or more tools find the exact same instance of a vulnerability, then those separate findings are really one unique vulnerability on your POA&M.

For example, if an SCA-based ConMon scan — one of several scan types needed to meet ConMon requirements — reports an outdated, vulnerable version of a library like log4j in a specific application, that’s a vulnerability that must have an entry on your POA&M. If your container scanner examines the container image you used to deploy that same application, it will also detect that vulnerable version of log4j. If you can’t correlate those two findings to demonstrate that they’re actually the exact same issue, then you’ll have an extra line to manage on your POA&M.

While a single extra line item to manage doesn’t seem like much, these issues can add up quickly, resulting in significant management costs that could be reduced or avoided by designing correlation activities into your ConMon program. This capability can often be largely automated, either through selecting tool suites that natively correlate findings from different scanning modes, or through security data tools that correlate findings from multiple vendors and tools.
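
As a rough illustration of what that correlation looks like in practice, the sketch below de-duplicates findings from two hypothetical scanners by keying each finding on the CVE ID, the affected component and version, and the asset it was found on. The field names and input format are assumptions, not any particular tool’s output.

```python
from collections import defaultdict

def deduplicate(findings):
    """Group scanner findings that describe the same unique vulnerability.

    Two findings count as one POA&M entry when they share the same CVE,
    component, version, and affected asset, regardless of which scanner
    reported them.
    """
    poam_entries = defaultdict(list)
    for f in findings:
        key = (f["cve"], f["component"], f["version"], f["asset"])
        poam_entries[key].append(f["scanner"])
    return poam_entries

findings = [
    {"scanner": "sca", "cve": "CVE-2021-44228", "component": "log4j-core",
     "version": "2.14.1", "asset": "billing-service"},
    {"scanner": "container", "cve": "CVE-2021-44228", "component": "log4j-core",
     "version": "2.14.1", "asset": "billing-service"},
]

# One unique vulnerability reported by two scanners -> one POA&M line.
for key, scanners in deduplicate(findings).items():
    print(key, "reported by", scanners)
```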

Reducing the Cost of FedRAMP Compliance

When it comes to ConMon requirements for application and container vulnerabilities, there are three main strategies available to reduce the cost of your FedRAMP program:

  1. Automate discovery, assessment, and correlation to reduce the human cost of getting accurate and actionable vulnerability findings
  2. Improve your software development lifecycle (SDLC) so that more vulnerabilities are removed routinely
  3. Identify false positives and risk adjustments to reduce your workload (without adding more work than you’re saving)

Automate Discovery, Assessment, and Correlation

It’s not feasible to meet FedRAMP’s vulnerability scanning and ConMon requirements without automation. But not all automation is created equal. When selecting tools for this purpose, it’s important to understand what parts of your compliance requirement they can reliably automate, and what parts you’ll still need to have human processes to support.

Vulnerability scanning tools can — but don’t always — provide automation for things like:

  • Discovery — generating a component inventory of applications, identifying container layers and composition, and producing adequate documentation of this inventory (for example, an SBOM document)

  • Assessment — identifying vulnerabilities in components, providing analysis like CVSS and EPSS scores, identifying remediation actions, and other automation that reduces the amount of human research needed to successfully remediate

  • Correlation — connecting results from different types of scans (such as container and application scans) to ensure that a given unique vulnerability doesn’t produce multiple POA&M entries

It’s also important to make sure that your vulnerability scanning tools can provide machine-readable results. This is a FedRAMP requirement, and it’s a good idea to select tools that use industry-standard formats to meet it. When tools offer automated ways to obtain results in standard formats, it opens up options for using data management tools to meet automation needs that no one scanning tool provides. For example, if all your scanning tools provide results in the standard SARIF format, you can easily import those results into many off-the-shelf compliance management solutions.
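
As an example of why standard formats help, the following sketch reads SARIF files produced by two different (hypothetical) scanners and merges their results into one view keyed by rule ID and location. It assumes well-formed SARIF 2.1.0 input and only touches a few common fields; the file names are placeholders.

```python
import json

def load_sarif_results(path):
    """Yield (tool, ruleId, location) tuples from a SARIF 2.1.0 file."""
    with open(path) as f:
        sarif = json.load(f)
    for run in sarif.get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            uri = ""
            locations = result.get("locations", [])
            if locations:
                uri = locations[0]["physicalLocation"]["artifactLocation"]["uri"]
            yield tool, result.get("ruleId", ""), uri

# Merge results from two scanners keyed by (rule, location), so the same
# underlying issue shows up once with both tools listed.
merged = {}
for path in ["sca-results.sarif", "container-results.sarif"]:  # hypothetical files
    for tool, rule, uri in load_sarif_results(path):
        merged.setdefault((rule, uri), set()).add(tool)

for (rule, uri), tools in merged.items():
    print(rule, uri, "found by", sorted(tools))
```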

Improve Your SDLC

Because managing vulnerabilities means interrupting development and operations work to remediate them, which is costly for your organization, one of the most valuable cost-saving approaches is to reduce the chance that vulnerabilities appear in production in the first place. You can’t fully control this — a component that has no CVEs associated with it right now might have a new one discovered tomorrow, for example. But every time you identify potential vulnerabilities and eliminate them before they reach production, you reduce the disruption of having to patch a production service on a tight SLA.

Many SCA and container scanning tools support integration into CI/CD environments, allowing engineering teams to be alerted about potential vulnerabilities before they get put into production, rather than waiting for a later-stage scan to interrupt their work. Doing this well requires careful planning to avoid simply moving the cost elsewhere in the SDLC — just turning a scanner on for an earlier SDLC stage doesn’t automatically reduce your costs.

When done well, integration into the SDLC can provide developers with useful information about vulnerabilities that need repair prior to release. But if done poorly, it can result in developers being overwhelmed with alerts and needing to engage in costly triage activities.
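
One common pattern is a CI step that fails a build when a scan reports findings above a severity threshold, so issues are handled before release rather than in production. The sketch below is a minimal, tool-agnostic version of that gate: it assumes the scanner writes a JSON list of findings with id and cvss fields, which is an assumption rather than any specific product’s output format.

```python
import json
import sys

FAIL_THRESHOLD = 7.0  # fail the build on High findings (CVSS 7.0+)

def gate(findings_path: str) -> int:
    """Return a nonzero exit code if any finding meets the threshold."""
    with open(findings_path) as f:
        findings = json.load(f)  # assumed shape: [{"id": ..., "cvss": ...}, ...]
    blocking = [f for f in findings if f.get("cvss", 0.0) >= FAIL_THRESHOLD]
    for finding in blocking:
        print(f"blocking finding: {finding['id']} (CVSS {finding['cvss']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-results.json"))
```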

Sign and Verify Your Artifacts

Code signing, a capability under the “artifact signing” umbrella, is an important control when coupled with deployment controls that verify signatures before allowing an application or container to be deployed to a FedRAMP-authorized environment. Properly implemented, a code or container signature provides a high-confidence, auditable attestation that the thing you’re deploying has gone through the authorized release process — which in turn has the application security controls you’ve specified.

This means that your deployment workflow can effectively prevent code that didn’t go through the release process — whether that’s a result of accident or malice — from ever being deployed to your controlled production environment. A good verification system will also provide you with information about provenance (what version of the code is this, how was it built, etc.), which can make it much easier to connect any runtime security alerts (like those generated by a CNAPP) with the development and operations teams that own the affected items.
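
To make the deployment-gate idea concrete, here is a minimal Python sketch that refuses to deploy a container image unless signature verification succeeds. It assumes Sigstore’s cosign CLI is installed and that you verify against a public key you control; the image name and the deploy function are placeholders for your actual deployment step.

```python
import subprocess
import sys

def verify_signature(image: str, public_key: str) -> bool:
    """Return True only if cosign can verify the image's signature."""
    result = subprocess.run(
        ["cosign", "verify", "--key", public_key, image],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def deploy(image: str) -> None:
    print(f"deploying {image} ...")  # placeholder for the real deployment step

image = "registry.example.com/billing-service:1.4.2"  # hypothetical image
if verify_signature(image, "cosign.pub"):
    deploy(image)
else:
    sys.exit(f"refusing to deploy {image}: signature verification failed")
```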

Signing systems that support revocation also make your remediation efforts simpler. As just one example, once you have remediated a container image’s vulnerabilities in a newer build, you can revoke the signature on the old image and ensure it can never be accidentally deployed again.

Identify False Positives and Risk Adjustments

FedRAMP itself has two paths to reduce your remediation workload:

  1. Classifying findings as false positives — which means you don’t have to address them at all
  2. Adjusting the risk rating downward — moving from a High to a Medium or Low gives you more time to remediate, which can reduce the impact and cost on your teams

Both of these are ultimately negotiations between you and the FedRAMP assessors; as such, it’s important to consult with FedRAMP experts (such as your 3PAO or FedRAMP Advisor, like Fortreum) about your specific approaches and methods.

Classifying a finding as a false positive requires that you demonstrate to the satisfaction of the FedRAMP PMO, your 3PAO, and your initial sponsoring agency that there is no actual risk of adversarial action posed by the vulnerability. This is a high bar. “Minimal risk” or “very low risk” will not do. But if successful, it means that you will not be required to remediate that finding. This sort of analysis generally requires a tool; manual approaches don’t scale and, more importantly, are difficult for stakeholders and assessors to trust.

Classifying a vulnerability as a false positive still requires you to track the item on your FedRAMP Security Assessment Report (SAR) and record it as a false positive in your POA&M — this isn’t a “free pass”. There also needs to be assurance that false positives will be reassessed at least annually — and more frequently is better. Adopting tools that can identify false positives reliably can help keep this process manageable, and it’s important to ensure that the tool and the process supporting it can re-test false positives routinely and identify when an issue no longer meets those criteria.

Fortunately, security tools are beginning to provide capabilities that can dramatically reduce the research required to demonstrate something is a false positive. With FedRAMP, as with most regulations, technology moves faster than the standard. Industry invents new tools and methodologies that aren’t directly addressed in FedRAMP, and this innovation gives CSPs an opportunity to automate what was once manual. The process for getting a new methodology accepted is for the CSP to explain (in detail) how the technology works to their 3PAO. If the 3PAO agrees that the concept has merit, it is presented to FedRAMP for approval. We recently went through this process when a client asked us to review Endor Labs’ “reachability analysis” as a means of automatically identifying false positives.

Consider how reachability analysis is used in this scenario: imagine you adopt a tool that identifies vulnerabilities with no reachable path for exploitation, and you use an “unreachable” determination to classify a vulnerability as a false positive. For this technique to reduce ConMon burdens, you also need to make sure that if changes to the application create a reachable path, the tool and your process can recategorize the vulnerability as a true positive that now needs to be addressed within the required time frame. Without a solution to this problem, you may be unable to make effective use of the false-positive classification. And from a risk standpoint, you definitely want to know when a vulnerability that posed no risk before has started to.
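
The sketch below illustrates the kind of re-check process described above: it compares the latest reachability determination for each tracked false positive against the stored one and flags anything that has become reachable so it re-enters the normal remediation workflow. The record shapes and the get_latest_reachability function are hypothetical stand-ins for your POA&M tracking system and reachability tool.

```python
from datetime import date

# Hypothetical POA&M records previously accepted as false positives.
false_positives = [
    {"poam_id": "V-0042", "cve": "CVE-2023-1234", "status": "false_positive",
     "reachable": False},
]

def get_latest_reachability(cve: str) -> bool:
    """Placeholder: query your reachability-analysis tool for this finding."""
    return True  # pretend a recent code change made the vulnerable path reachable

def recheck(records):
    """Recategorize false positives whose vulnerable code became reachable."""
    for record in records:
        now_reachable = get_latest_reachability(record["cve"])
        if now_reachable and not record["reachable"]:
            record["reachable"] = True
            record["status"] = "open"  # back into the remediation workflow
            record["reopened_on"] = str(date.today())
            print(f"{record['poam_id']}: {record['cve']} is now reachable; "
                  "treat as a true positive and schedule remediation")

recheck(false_positives)
```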

When it comes to application vulnerability scanning, a high-quality reachability analysis — that is, an automated way to demonstrate that there is no path for an adversary to exploit a given vulnerability — can provide strong evidence that a vulnerability is a false positive. Given that less than 9.5% of vulnerabilities are exploitable at the function level, reachability analysis can greatly reduce ConMon burdens. 

It is important to be cautious when evaluating products that claim reachability analysis; it is, unfortunately, a fairly vague term in the industry. Key things to be cautious of:

  • Ensure that an “unreachable” determination is high-confidence. It’s important to be able to distinguish between “we can’t determine whether this vulnerability is reachable” and “we’re confident this vulnerability is not reachable”, because the former will not meet the high bar for a false-positive classification. Not all tools make this distinction clear, which could undermine your efforts to reliably identify false positives. 
  • Make sure that “unreachable” means “from any source”. For example, consider a tool that only tells you that something isn’t reachable from the public Internet. That could potentially support a risk adjustment, but not a false-positive determination on its own: an adversary with local-network access might still pose a risk. If your tool can tell you that something isn’t reachable without a code or configuration change, that provides much stronger support for a false-positive determination.
  • Your tool must be able to tell you when an unreachable vulnerability becomes reachable. Regular reassessment is required by FedRAMP in any case, but your argument for a false positive is stronger if it is clear to assessors that your tool can surface an alert when a code or configuration change makes a particular vulnerability no longer a false positive.

     

It’s also important to note that having a capable tool is necessary but not sufficient. You also need to have appropriate processes in place to ensure that the tool is used at the right times, that there’s appropriate change control (perhaps including code signatures that demonstrate a deployed application went through approved controls), and so on. Always proceed with the advice of your 3PAO or FedRAMP expert advisor.

Using Environmental Data to Adjust Risk Ratings Downward

Initial classification of a vulnerability is done with the CVSS (Common Vulnerability Scoring System) base score, which attempts to take both impact (how bad is this if exploited) and likelihood (what’s the chance it will be exploited) into account — though it heavily favors impact criteria. The base score, however, doesn’t reflect the risk in your specific application and environment. CVSS also defines environmental metrics that can be used to establish whether a particular vulnerability, as it exists in your environment, poses lower risk than the base score would imply.

Adjusting the risk rating downward requires that you demonstrate the actual risk the vulnerability poses is lower than what its base CVSS score suggests. You must still track these through SAR and POA&M, and must be able to make a strong argument with your assessors that the risk is genuinely lower than the CVSS score would suggest.

One way to make this argument is by completing the environmental portion of the CVSS calculation. Tools can provide data that supports this type of analysis, such as identifying compensating controls or providing exploitability evaluations, but ultimately risk reductions are a negotiation between you and the FedRAMP program.
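
As an illustration, the sketch below uses the open-source cvss Python package (an assumption; any CVSS 3.1 calculator will do) to show how environmental metrics can lower an effective score. The example vector is hypothetical: it lowers Modified Attack Vector to local and reduces the confidentiality requirement, and the resulting environmental score can support, but does not by itself win, a risk-adjustment argument.

```python
from cvss import CVSS3  # pip install cvss

# Base vector for a hypothetical finding: network-reachable, scores 9.8 (Critical).
base_vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

# Same finding with environmental metrics reflecting this deployment:
# the data at risk has a low confidentiality requirement (CR:L) and the
# component is only reachable locally (MAV:L).
env_vector = base_vector + "/CR:L/MAV:L"

base_score, _, env_score = CVSS3(env_vector).scores()
print(f"base score: {base_score}, environmental score: {env_score}")
```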

CVSS also provides for temporal metrics that attempt to express how time-related items (like exploit availability) impact risk at a given point in time. However, the FedRAMP PMO has not generally accepted temporal metrics as a strong argument for formal risk adjustments. So, if you choose to use this approach — or the Exploit Prediction Scoring System (EPSS) which is maintained by the same organization that developed CVSS, and serves a similar purpose to temporal metrics — the value is largely in helping you prioritize risks for purposes beyond FedRAMP compliance requirements.

Summary

Ensuring you identify and remediate vulnerabilities in applications and containers is a critical, but costly, part of attaining and maintaining FedRAMP authorization. With good SCA and container scanning tools, coupled with well-designed processes and systems, it is possible to lower those costs. 

The key things to consider are:

  • Automate as much as possible about discovery, assessment, and correlation. This reduces the effort required to generate a complete and accurate inventory of software components (applications, containers, and dependencies) and the associated vulnerabilities.
  • Vulnerable-function-level reachability analysis capabilities can automate much of the work of identifying FedRAMP false positives, reducing your remediation workload.
  • Context — like reachability, EPSS probability, etc. — supports efforts to identify risk adjustments that buy you more time to fix.
  • You must have tools and processes in place to support regular review of items tracked as false positives or reduced risk, if you choose to use the justifications we’ve discussed here. This is because you have to be able to find out if the “ground truth” changes and respond to it appropriately.
  • Get long-term savings by selecting tools that can be used in both audit capacities and earlier in the SDLC. Prioritize identifying vulnerabilities before they go to production (and thus before they go into FedRAMP scope), and attesting to your controls with code signing (aka artifact signing) to help ensure only properly-built code gets deployed.

Properly implementing these processes, and selecting and configuring the right tools, can be a complex undertaking. Trusted and experienced FedRAMP experts like Fortreum can help you ensure you’re doing the right things to attain and maintain your authorization in a way that makes sense for your business and products.

About the Author: As a Senior Director at Fortreum, Ben Scudera leads the Cyber Defense & Readiness Division where he provides oversight and guidance for cloud service providers (CSPs) pursuing FedRAMP authorization and other NIST-based frameworks. Over the course of 10 years in cybersecurity compliance, Ben has developed methodologies for testing and reporting in conjunction with internal and external teams in support of his mission to help CSPs achieve and maintain FedRAMP compliance, and to enhance their security posture and resilience in the cloud.


About Fortreum:

We started with a mission to simplify cloud and cybersecurity challenges for our customers. With an extensive track record spanning nearly a quarter of a century across the public and private sectors, we are dedicated to solving our customers’ complex cloud and cybersecurity challenges. Our industry commitment extends to supporting and fostering the development of future cybersecurity experts within our communities. We encourage you to explore our services further to learn how to leverage cybersecurity as a business enabler.

Should you have questions about your cloud and cybersecurity readiness, please reach out to us at Info@fortreum.com or Contact Us at https://fortreum.com/contact/
