After 11 years, FedRAMP is now the law

Capitol with FedRAMP logo

Among the many provisions contained within the thousands of pages of the National Defense Authorization Act (NDAA) signed into law by President Biden on December 28 is the codification of the Federal Risk and Authorization Management Program (FedRAMP), formalizing in legislation the primary mechanism upon which federal agencies rely to acquire cloud computing services. First established in 2011 in an official memorandum from the Office of Management and Budget (OMB), the program sought to ensure that all cloud computing services sold to the federal government satisfied at least a minimum set of security requirements, with accredited third-party assessment organizations (3PAOs) providing an independent examination of the extent to which each cloud service provider (CSP) has effectively implemented the required security controls. In much the same way that federal systems and data centers are required to obtain an authorization to operate (ATO) before agencies can use them, the FedRAMP program grants authorizations either from a sponsoring agency or from the Joint Authorization Board (JAB), with representation from the General Services Administration (GSA), Department of Homeland Security (DHS), and Department of Defense (DoD). What was unusual about FedRAMP when it was established is that it gave cloud service providers the opportunity to obtain an official government ATO even without a federal contract in place. With FedRAMP authorization now a go-to-market requirement for cloud vendors seeking to serve government customers, it’s not surprising that the program has produced a lot of authorized services — nearly 300 services were FedRAMP authorized at the end of 2022, with more than 100 other services actively engaged in the authorization process.

It’s not entirely clear if simply codifying FedRAMP will have a noticeable impact on the program or the way it operates, but the bill introduced in early 2021 (as the “FedRAMP Authorization Act”) assigns GSA the statutory authority to “establish a governmentwide program that provides the authoritative standardized approach to security assessment and authorization for cloud computing products and services that process unclassified information used by agencies.” It also formalizes the FedRAMP Program Management Office to implement and administer the program and the Joint Authorization Board to make authorization decisions about cloud service providers using the results of independent security control assessments. The text of the bill includes language intended to reduce the time and level of effort required to obtain and maintain FedRAMP authorization, including through increased use of automation for security assessments, authorization decisions, and continuous monitoring of FedRAMP authorized services. These provisions are presumably a result of the frustration among federal agencies and cloud service providers regarding the work and time (and associated cost) required to produce all the documentation required under FedRAMP and complete initial and annual assessments and compliance reviews. The focus on automation and efficiency may be a nod to ongoing efforts such as NIST’s Open Security Controls Assessment Language (OSCAL) as a technical standard to enable automated security assessments and other risk management activities. At least one FedRAMP CSP has successfully submitted a set of FedRAMP assessment documentation using the OSCAL format, although the vast majority of ATO requests and assessments are still completed using Word and Excel templates uploaded to the FedRAMP PMO’s online repository.
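
To make the contrast with document-based templates more concrete, here is a minimal sketch of what OSCAL-style system security plan content could look like as machine-readable data, expressed as a Python dictionary and serialized to JSON. The field names are abbreviated and the profile URL, control selection, and remarks are hypothetical placeholders; a real submission would need to conform to the published NIST OSCAL schemas.

```python
import json
import uuid

# A rough, simplified illustration of OSCAL-style system security plan (SSP)
# content expressed as machine-readable data. Field names are abbreviated and
# the baseline URL, control selection, and remarks are hypothetical; a real
# OSCAL SSP must conform to the published NIST OSCAL schemas.
ssp_fragment = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "Example Cloud Service SSP (illustrative)",
            "version": "1.0",
            "oscal-version": "1.0.4",
        },
        # Points at the control baseline the service is assessed against.
        "import-profile": {"href": "https://example.org/fedramp-moderate-profile.json"},
        "control-implementation": {
            "description": "How the service implements its assigned controls.",
            "implemented-requirements": [
                {
                    "uuid": str(uuid.uuid4()),
                    "control-id": "ac-2",
                    "remarks": "Account management handled via a centralized IAM service.",
                }
            ],
        },
    }
}

# Serializing to JSON is what makes the content consumable by assessment tools.
print(json.dumps(ssp_fragment, indent=2))
```

The particular fields matter less than the shift they represent: control implementation details that today live in narrative Word documents become structured data that assessment and continuous monitoring tools can parse and validate automatically.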

One noteworthy change to FedRAMP contained in the new legislation is an explicit “presumption of adequacy” of a FedRAMP authorization, a provision designed to encourage reciprocity among agency FedRAMP authorizations and to move closer to the original FedRAMP vision of authorizing a cloud service once and reusing that authorization many times across the government. The law here is essentially saying that a federal agency should view a successful FedRAMP authorization as adequate for the agency’s own authorization determination. It does not appear that federal agencies are obligated to ignore any agency-specific requirements or standards they may have that exceed the baselines on which FedRAMP authorizations are based, and many agencies currently consider FedRAMP authorization as necessary but not sufficient to warrant an authorization to operate. FedRAMP as first envisioned was supposed to provide a minimum security standard, but shifting to a position that presumes that minimum level of security should satisfy every agency’s risk tolerance suggests a fundamental misunderstanding of risk management principles and of differing agency opinions about data sensitivity and system criticality. Even within certain business domains many agencies do not agree on the level of security protection that should be applied. For example, the Department of Veterans Affairs (VA) and Centers for Medicare and Medicaid Services (CMS) both handle protected health information (PHI) on their beneficiaries, but VA assigns a “high” security categorization to data it holds such as veteran medical records while CMS assigns a “moderate” categorization to similar types of data it maintains on Medicare and Medicaid recipients. Cloud service providers looking to sell cloud infrastructure services or software-as-a-service (SaaS) applications for use by government agencies have little alternative but to seek FedRAMP High authorizations to appeal to the broadest set of government customers they can, but even with a FedRAMP High authorization many agencies still impose additional requirements on CSPs before they authorize cloud services for use. Any presumption of adequacy should also require cloud service providers to be more diligent about maintaining their ongoing FedRAMP authorization status in a way that mitigates concerns agencies may have about ensuring implemented controls remain at least as effective as they were when their ATOs were first granted. Cloud services are subject to threats, vulnerabilities, patches, updates, and enhancements just as any conventional systems or services are, but agencies using FedRAMP-authorized services rarely have access to operational performance and security metrics on a real-time basis. Instead of continuous monitoring, many agencies rely on regular reviews of CSP plans of action and milestones (POA&Ms) to learn what weaknesses or vulnerabilities their CSPs are working to address. It is not uncommon for CSPs to report on dozens or even hundreds of POA&M corrective actions, so each agency has to make its own determination about the risk of using a given cloud service based on how well the CSP is managing its ongoing authorization.

To security practitioners, the use of the word “adequate” in the law is likely to be both unsatisfying and completely expected. There is of course no such thing as “perfect” security, and any adjective connoting higher or stronger levels of protection suffers from problems of subjective interpretation. The standard of “adequate security” for federal government data and systems predates even the Federal Information Security Management Act of 2002 (FISMA) that prompted most of the official standards and guidance from NIST that agencies must rely on to manage their security activities. The need for adequate security (and, incidentally, the need to formally authorize federal systems every three years) stems from the 2000 revision of OMB Circular A-130, which gave explicit instructions to executive agencies on implementing provisions of the Information Technology Management Reform Act of 1996, more familiarly known as the Clinger-Cohen Act. Given the current threat environment most federal agencies face, it might be nice to see security-focused legislation aiming a little higher, but perhaps FedRAMP isn’t the appropriate mechanism for that. The FedRAMP Authorization Act does create a new Federal Secure Cloud Advisory Committee focused on determining ways to increase adoption and use of FedRAMP services, increase the number of services authorized, and reduce the cost of FedRAMP authorization to agencies and to CSPs. The committee is apparently not tasked with identifying or recommending ways to make cloud services used by the government more secure.

SolarWinds compromise focuses new attention on trust in vendor supply chain

SolarWinds

Recent media attention on the successful intrusion of multiple commercial and government organizations using a backdoor embedded in the popular SolarWinds Orion software platform has (justifiably) focused on learning the extent of the compromise and the potential damage or loss to SolarWinds’ customers. Both the trade press and companies suffering breaches or intrusions are typically quick to characterize these situations as the result of “sophisticated” attacks perpetrated by advanced or highly capable threat actors. Early reports of the ability the SolarWinds intruders had to evade detection for months after their backdoor was successfully installed and the way they disguised their malicious software suggest that the “sophisticated” moniker is actually warranted in this case, although the precise circumstances surrounding the initial penetration of SolarWinds’ software development operation are far from clear. The consensus seems to be that the attackers were able to gain access to SolarWinds’ toolsets and processes for building release packages (either the source code repository where the build components were managed or a release or package manager elsewhere in the process) and insert malicious code into the build cycle while avoiding detection. When the software update packages were built for release to customers, the malicious code was included with the legitimate software modules and the resulting packages were digitally signed with SolarWinds’ own code-signing certificate. The end result is that customers who downloaded and installed the SolarWinds software updates had every reason to believe the software was legitimate and fully authorized by the company. Add to this the fact that, at the time the malware-infested software was distributed, no commercially available anti-malware tools recognized the backdoor as malware, and you have an effective means of intrusion while evading detection. On a side note, it’s much less clear why the external connections to the intruders’ command-and-control servers weren’t detected as anomalous, either by the U.S. Government’s EINSTEIN intrusion detection system or by any of the intrusion detection technologies deployed by the commercial entities that were victimized by the SolarWinds exploit.

Given what has been publicly reported so far, including by the Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security, it is difficult to suggest that any of the affected organizations should have been able to identify the problem with the infected SolarWinds software itself. This incident has brought supply chain attacks into the mainstream (to be fair, NIST first published its Special Publication 800-161, Supply Chain Risk Management Practices for Federal Information Systems and Organizations, more than five years ago), but calls for greater due diligence by customers of enterprise-class software tools like SolarWinds are both overly simplistic and not particularly feasible. The reason software vendors release things like cryptographic hash values is to help customers verify that the software they download is authentic. Most buyers of such software lack the technical knowledge or capabilities to perform deep analysis of the software products they buy and, if such software has an embedded vulnerability somewhere in its source code, in most cases there is no way for a customer to examine the source code at all (not to mention that reverse engineering commercial software tools typically violates the license agreement governing the purchase of the software). Buyers of commercial software tools have little alternative but to rely on their chosen vendors to deliver software that is free of malware like the SolarWinds backdoor. When dealing with well-established software vendors, it is probably not overstating the situation to say that customers trust their vendors not to deliver products rife with hidden vulnerabilities. Software vendors, like other types of organizations, may in fact be worthy of customers’ trust, but it is at least a semantic mistake for any buyer to say they trust software. Organizations may demonstrate trustworthiness by (depending on your characterization) exhibiting competence, honesty, openness, credibility, or reliability. Among these attributes, however, software can only demonstrate reliability or unreliability; a product that performs consistently and predictably may be highly valued by its users, but that does not make it trustworthy. The future prospects for SolarWinds among public sector and private sector organizations remain to be seen, but the guidance issued to government agencies in the past week suggests that CISA is skeptical of many of SolarWinds’ assertions about which product releases are affected and about the company’s ability to prevent further exploitation of its platform.
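
As a small illustration of the verification step described above, the sketch below shows how a customer might check a downloaded installer against a vendor-published SHA-256 value; the file name and expected digest here are hypothetical placeholders rather than real SolarWinds artifacts.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: in practice the expected digest comes from the vendor's
# download page or release notes, retrieved over a trusted channel.
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
actual = sha256_of("orion-platform-update.msi")

if actual != expected:
    raise SystemExit("Digest mismatch: the downloaded file is not the published release.")
print("Digest matches the published value.")
```

As the SolarWinds incident demonstrates, though, this kind of check only confirms that the customer received exactly what the vendor built and published; it offers no protection when malicious code is inserted upstream, before the vendor signs and releases the package.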

Repeal of planned FCC privacy rules leaves ISPs largely unregulated

FCC-FTC Privacy

Last week Congressional Republicans successfully passed legislation to repeal privacy regulations that would have imposed several constraints on the ability of broadband Internet service providers (ISPs) to collect, analyze, sell, and otherwise manage personal information about their customers and their use of the Internet. The repealed rules, which were developed by the Obama administration and passed by the Federal Communications Commission (FCC) in October 2016, were set to go into effect this year. The new, now abandoned, FCC rules applied key privacy principles like transparency, choice, and consent to different categories of personally identifiable information, notably requiring customers to give affirmative consent (that is, to “opt in”) to use or sharing of sensitive personal information. The rules considered sensitive information to include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history, and the content of communications. ISPs would have had more freedom to use or share non-sensitive personal information, but customers would still have been able to opt out of any use of their information if they chose to do so.

Beyond consent and use of personal information, the FCC would have added requirements that ISPs provide customers with “clear, conspicuous, and persistent notice” regarding what information the ISPs collect, how that information may be used, and with whom and under what circumstances it will be shared. This element is consistent with notice of privacy practices requirements that the Federal Trade Commission (FTC) imposes on many types of companies, including e-commerce vendors, social media sites, and website operators. ISPs also would have been obligated to implement industry best practices for data security, authentication, monitoring, and oversight, again consistent with FTC best practices and the Consumer Privacy Bill of Rights, and to notify customers and law enforcement agencies of data breaches or other failures to protect customer information.

Instead, now that President Trump has signed the measure into law, ISPs like Comcast, Verizon, and AT&T have few practical restrictions on how they handle their customers’ information and are subject to substantially fewer regulations than web content providers, e-commerce companies, and technology firms like Google and Facebook that depend on the ISPs so that end users can reach their products, content, and services. Since the new FCC rules never went into effect, it might seem that privacy protections for customers of Internet service providers are no worse than they were before, but unfortunately that is not the case, due to a separate decision the FCC made in early 2015. That decision, when the FCC voted in its Open Internet Order to adopt “net neutrality” principles, reclassified Internet service providers as common carriers, placing them under the jurisdiction of the Communications Act of 1934 and, by treating them in a manner analogous to conventional telephone companies, shifted the regulatory authority for ISPs from the FTC to the FCC. The exemption from FTC oversight was made explicit in a landmark ruling last year by the 9th Circuit Court of Appeals, which found AT&T was not subject to action by the FTC, even for behavior that occurred prior to the Open Internet Order. One clear intent of the Obama-era FCC privacy rules was to bring regulations for ISPs in line with FTC rules and enforcement actions applicable to other technology companies. Now, unless further regulatory changes are introduced that somehow alter the common-carrier designation, Internet service providers are uniquely positioned to capitalize on the personal information and online behavior patterns of their customers.

Tax season means it’s time to watch out for W-2 scams

W-2 phishing

As American individuals and companies head into tax season, the Internal Revenue Service (IRS) is warning organizations of all types to be on the lookout for attempted W-2 phishing attacks as part of a broader pattern of business email compromise attempts. The urgent alert issued by the IRS on February 2 was the second such notice in a span of just eight days and emphasized that the phishing scam centered on employers’ Form W-2 information appears to be affecting many types of organizations beyond the commercial corporate entities typically targeted by this sort of attack. The IRS has for several years included phishing on its “dirty dozen” list of tax scams, although historically the most prevalent scams seem to have been attempts by attackers to send fake emails purportedly from the IRS. Beginning just last year, this class of attacks evolved to include phishing emails directed to company employees working in payroll or human resources that claim to be from the company CEO, asking the recipient to send copies of employee W-2 forms. According to data compiled by industry media sources such as CSO Online, data from more than 40 companies was compromised by these attacks in 2016. This “success” rate, coupled with what the IRS says are new notifications it has already received this year for the tax year 2016 filing season, prompted a renewed alert to corporate payroll and HR departments.

It should come as no surprise to anyone that paperwork or data related to tax returns are attractive targets for attackers, or that phishing scammers have gotten more creative about who the originating party is supposed to be in the emails they send. What is perhaps harder to understand is why so many of these emails make it through to their recipients, whether or not the recipients actually fall for the scam. A phishing email of this type is almost always sent from a source outside the targeted organization, so while it is a trivial matter for a scammer to change the “reply to” value in the email to be a corporate CEO or other official, it is technically much less trivial to hide the true origin (server, IP address, and domain) of the email. It should be simple to apply a rule to incoming email that essentially says, “reject any email received from an external domain that claims to originate from an address in the internal domain.” Essentially every network firewall implements an analogous rule by default (dropping packets from external sources that have an internal source IP address), but few managed email service providers allow such rules to be defined and enforced. This deficiency leads to a market opportunity for email security gateway vendors like Barracuda, Cisco, Proofpoint, Sophos, and Websense. While many organizations have treated phishing avoidance as a security awareness issue, the increasing frequency of specialized attacks like the W-2 scams might push more companies to augment their phishing prevention capabilities so they don’t have to rely so heavily on their employees.
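
For illustration only, a minimal sketch of the kind of rule described above might look like the following. It assumes a hypothetical gateway hook that exposes each message's From header and the domain of the connecting relay, and it glosses over the complications (forwarding services, third-party senders authorized to use the company's domain, SPF/DKIM/DMARC alignment) that production gateways have to handle.

```python
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"  # hypothetical internal domain

def should_reject(from_header: str, connecting_relay_domain: str) -> bool:
    """Reject mail that claims an internal sender but arrives from an external relay.

    Analogous to a firewall dropping external packets with internal source IPs.
    This is a simplified sketch, not a production-grade spoofing check.
    """
    _, from_addr = parseaddr(from_header)
    claimed_domain = from_addr.rpartition("@")[2].lower()
    relay_is_internal = connecting_relay_domain.lower().endswith(INTERNAL_DOMAIN)
    return claimed_domain == INTERNAL_DOMAIN and not relay_is_internal

# A spoofed "CEO" message arriving from an outside relay would be rejected:
print(should_reject("CEO <ceo@example.com>", "mail.attacker-hosting.net"))  # True
print(should_reject("CEO <ceo@example.com>", "smtp.example.com"))           # False
```

In practice this policy is more often expressed through SPF, DKIM, and DMARC enforcement at the email gateway than through custom code, but the underlying logic is the same as the firewall analogy above.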

European Court of Justice rules against UK on data retention

ECJ

The Court of Justice of the European Union (familiarly known as the European Court of Justice or ECJ) issued a judgment this week explicitly against laws in the United Kingdom and in Sweden that require telecommunications service providers to collect and retain data about telephone calls and other electronic communications (for 12 months in the UK law and for six months in the Swedish law). In its ruling, the ECJ found that the British and Swedish data retention legislation “prescribes general and indiscriminate retention of data” in a manner inconsistent with norms of democratic society and, in particular, with privacy protections for electronic communications included in European Council Directive 2002/58/EC. The Court’s ruling makes clear that it is possible for European Union member nations to establish targeted data retention rules for specific purposes, such as supporting criminal or anti-terrorism investigations, but the December 21 judgment further clarifies interpretations of EU policy since the Court ruled invalid the EC-wide Data Retention Directive in 2014.

EU law precludes a general and indiscriminate retention of traffic data and location data, but it is open to Members States to make provision, as a preventive measure, for targeted retention of that data solely for the purpose of fighting serious crime, provided that such retention is, with respect to the categories of data to be retained, the means of communication affected, the persons concerned and the chosen duration of retention, limited to what is strictly necessary.

To put this recent ruling in context, a little history may be in order. Even those with only a casual interest in personal privacy protections are often aware that, in general, regulations governing the collection, use, and disclosure of personal data are stronger in the European Union than in the United States. Despite those overarching privacy protections, enumerated in multiple EC Directives dating at least to 1995, the European Parliament and the European Council established Directive 2006/24/EC in March 2006 to harmonize member countries’ retention of data related to electronic communications services. The 2006 Directive concerned location and telecommunications metadata that could be used by law enforcement authorities or other authorized entities to identify the source and destination of electronic communications (including telephony services and Internet transmissions such as email) and the identity of the subscriber or registered user initiating such transmissions. Individual countries were free to establish their own specific retention periods, but Directive 2006/24/EC set the minimum at six months and the maximum at two years. Laws such as the Swedish regulation addressed in this week’s ECJ ruling were crafted specifically to conform to the guidelines in 2006/24/EC.

Directive 2006/24/EC was in effect for approximately eight years; in April 2014 the ECJ declared the data retention directive invalid, largely because it did not require any “differentiation, limitation, or exception” in the collection of electronic communications data nor did it ensure that government authorities could only use the collected data for preventing, detecting, or prosecuting serious crime. The Swedish case brought to the ECJ challenged a law that was enacted prior to the 2014 ruling invalidating 2006/24/EC, while the UK case concerned the Data Retention and Investigatory Powers Act of 2014, which was enacted specifically in response to the invalidation of the EC data retention directive. Dubbed the “snoopers’ charter” by opponents, the UK law requires telecommunications carriers and Internet service providers to hold data about all electronic communications by subscribers or users for a period of 12 months. While many national data retention laws (and Directive 2006/24/EC) exclude the content of electronic communications, news reports about the UK law suggest that service providers would be required to retain, and to make available to law enforcement, details such as the Internet websites individuals visit and the applications and messaging services individuals use. The UK efforts to increase this type of data retention stand in stark contrast to actions by other EU nations in the years while 2006/24/EC was still in effect, such as the rejection by the German Federal Constitutional Court of a data retention law that had been designed to comply with the EC Directive. As for the U.S., while there is no mandatory data retention law currently in place, Congress has tried several times to enact these rules, including failed efforts in 2009 and 2011, and U.S. law enforcement authorities have well-established legal procedures under the Stored Communications Act to access any data or records that electronic communications providers choose to maintain for their own business purposes.