NIST finalizing standard government-wide security controls

After more than two years of collaboration among civilian, defense, and intelligence agencies, the National Institute of Standards and Technology’s Information Technology Laboratory has released the final public draft of revision 3 of its Special Publication 800-53, “Recommended Security Controls for Federal Information Systems and Organizations.” The most notable aspect of this release is not the set of controls it contains, as the control structure has been largely the same since 800-53 was originally published in 2005, with minor modifications in 2006 and again in 2007. What is remarkable this time around is the consensus achieved by Ron Ross and his team at NIST in getting DOD, the intelligence community, and civilian agencies to agree, for the first time, to a single set of government-wide standards. Dr. Ross leads the Joint Task Force Transformation Initiative Interagency Working Group, whose primary focus is harmonizing security practices, standards, and guidelines across the government, even for national security systems that fall outside the scope of the Federal Information Security Management Act.

Similar consolidation is forthcoming in the area of Certification and Accreditation under Special Publication 800-37, “Guide for Security Authorization of Federal Information Systems: A Security Lifecycle Approach,” still in draft, which, when finalized and adopted, will unify the DIACAP approach (itself a revision of DITSCAP) long used in the Department of Defense with the civilian agency C&A process that became required for use in 2004. This trend of agreement on standards and practices throughout government seems to be a positive indicator for those (like President Obama) who advocate more central oversight and administration of federal information security.

Making sense of information privacy

With more and more initiatives focused on information sharing, data exchange, aggregation, and analysis, there is also increased attention to establishing and protecting privacy, particularly of personal information. As noted in yesterday’s post, a federal panel focusing on such issues has recommended substantial updates to the primary federal laws governing privacy protections, including the Privacy Act of 1974 and the E-Government Act of 2002. Federal legislation related to privacy is a large and complex area, with numerous narrowly defined requirements based on specific types of data (medical records, educational records, social security numbers, etc.) or on the nature of the individuals whose data is being stored (veterans, mental health patients, children, U.S. citizens vs. non-permanent residents, etc.). Efforts at establishing over-arching legal or policy frameworks on privacy have been frustrated by a number of factors, including technological advances beyond what was envisioned at the time existing laws were written. As noted yesterday by the New York Times, there are two perspectives on information privacy that are fundamentally at odds: one favors a “data minimization” approach centered on the idea that the best way to prevent the disclosure of sensitive data is not to store it in the first place; the other places great value on storing as much information as possible, as long as individuals are given control over who can see the information and under what circumstances. The latter perspective is characteristic of social networking, although the extent to which individual users of sites like Facebook and MySpace are really in control of their data is subject to some debate.

Coming to some resolution or balancing point between data interoperability and privacy (and security) provisions is a prerequisite to success for many ambitious initiatives in both the government and private sectors. For instance, wide-scale adoption of health information technology such as electronic medical records or personal health records will not become a reality (no matter the industry incentives offered by the government) unless and until individuals are satisfied that their personal information will be appropriately secured and their privacy maintained. Technical, legal, and philosophical arguments are being made by privacy advocates, information sharing proponents, and neutral observers, but the goal of reaching a common understanding of just what “privacy” means, in what contexts, and how that privacy should be protected remains elusive. Into this debate comes Daniel Solove, a law professor at George Washington University who has written quite a bit in recent years on the legal aspects of privacy and information technology. Solove’s most recent effort is Understanding Privacy, which provides a detailed analysis of why previous attempts at defining privacy and setting standards for its protection have failed. Solove also proposes his own method for resolving the issue, following a pragmatic approach that does not seek to establish a single, over-arching conception of privacy, but instead accepts that privacy problems and their solutions differ depending on their contexts. While his writing style is by turns academic and philosophical, his concise treatment of the topic (the text is just under 200 pages) does indeed contribute to developing an understanding of privacy and why it remains so problematic, whether or not you agree with his prescription for addressing it.

NIST recommends updates to Privacy Act

Last week the Information Security and Privacy Advisory Board (ISPAB) published a report, “Toward A 21st Century Framework for Federal Government Privacy Policy,” recommending a variety of both broad and targeted actions intended to update the privacy provisions in the Privacy Act of 1974 and the privacy portions (section 208) of the E-Government Act of 2002. The recommendations are largely driven by the perceived gap between legislative privacy requirements developed 35 years ago and the technological and operational environment of the modern information age. Key recommendations in the report include:

  • Amendments to the Privacy Act and E-Government Act in order to:
    • Improve government privacy notices.
    • Update the definition of System of Records to cover relational and distributed systems based on government use, not holding, of records.
    • Clearly cover commercial data sources under both the Privacy Act and the E-Government Act.
  • Improve government leadership and governance of privacy:
    • OMB should hire a full-time Chief Privacy Officer with resources.
    • Privacy Act guidance from OMB must be regularly updated.
    • Chief Privacy Officers should be hired at all “CFO agencies.”
    • A Chief Privacy Officers’ Council should be developed.
  • Changes and clarifications in privacy policy:
    • OMB should update the federal government’s cookie policy.
    • OMB should issue privacy guidance on agency use of location information.
    • OMB should work with US-CERT to create interagency information on data loss across the government.
    • There should be public reporting on use of Social Security Numbers.

For privacy practitioners, there is much to like in this report. Of particular interest (especially in light of new federal data breach disclosure notice requirements that apply to some commercial sector organizations as well as the government) is the re-definition of “system of records” to encompass databases and other systems storing personal information that are used by the government, and not limited to those the government actually holds. On a more technical level, the ISPAB recommends a long-overdue reevaluation of the federal policy on the use of cookies (in general, the use of persistent cookies is banned on federal websites), in part to help the government realize some of the benefits of Web 2.0 technologies.
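
For context, the technical distinction at issue is a single attribute on the cookie: a session cookie carries no expiration and disappears when the browser closes, while a persistent cookie sets an Expires or Max-Age value and survives across visits, which is what makes it useful for tracking and what the current federal policy restricts. The snippet below is a minimal illustration using Python’s standard library; the cookie name and value are made up.

    # Session vs. persistent cookies: persistence is just an Expires/Max-Age
    # attribute on the Set-Cookie header. Names and values here are made up.
    from http.cookies import SimpleCookie

    session_cookie = SimpleCookie()
    session_cookie["visit_id"] = "abc123"  # no expiration: discarded when the browser closes

    persistent_cookie = SimpleCookie()
    persistent_cookie["visit_id"] = "abc123"
    persistent_cookie["visit_id"]["max-age"] = 60 * 60 * 24 * 365  # survives for a year

    print(session_cookie.output())     # Set-Cookie: visit_id=abc123
    print(persistent_cookie.output())  # Set-Cookie: visit_id=abc123; Max-Age=31536000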

A couple of recommendations for the new cybersecurity czar

As an immediate result of the 60-day review of federal cybersecurity activities conducted at the behest of the Obama administration, the president announced that he will (as had been anticipated) appoint a federal cybersecurity czar in the executive office of the president to direct security policy for the government. In general this should be seen as a positive move, but assuming the new position will not come with significant control over resource allocations or agency-level implementation of security measures, creating it is of course insufficient by itself to ensure any real improvement in the government’s security posture. It remains to be seen how the position will be structured and what responsibilities and powers will accrue to the post, but there are a couple of things the administration might want to keep in mind to make this move a success.

The first consideration is how to set an appropriate scope for the sphere of influence the federal cybersecurity director will have. Opinions vary widely, both in the draft cybersecurity bills circulating in both houses of Congress and within DHS, OMB, DOD, and other key agencies with current responsibility for cyber defense, critical infrastructure protection, and other relevant mission areas. Historically, cross-government approaches to security have been quite limited in the set of services or standards they seek to specify for all agencies. The Information Systems Security Line of Business (ISSLOB) chartered by OMB, for instance, provides only two common security services – FISMA reporting and security awareness training – that are more amenable to a “same-for-everyone” approach than more sensitive services like vulnerability scanning or intrusion detection might be. Having said that, the Department of Homeland Security is moving ahead with the next generation of the Einstein network monitoring program, which would mandate real-time intrusion detection and prevention sensors on all federal network traffic. Government agencies are in the process of consolidating their Internet points of presence under the Trusted Internet Connectivity initiative, in part to facilitate government-wide monitoring with Einstein. There has also been progress in specifying minimum security configuration settings for desktop operating systems (the Federal Desktop Core Configuration) and providing a freely available tool to help agencies check whether their workstations comply with the standard. So while there are some good examples of true government-wide standards, applying consistent security measures across the entire government remains a difficult proposition.
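
As a rough illustration of what such an automated configuration check does, the sketch below compares a handful of workstation settings against a required baseline and reports deviations. The setting names and values are hypothetical; the actual FDCC checking tools work from standardized SCAP content rather than a hand-built table like this one.

    # Hypothetical baseline compliance check in the spirit of FDCC scanning:
    # compare observed workstation settings to required values and report
    # deviations. Setting names and values are illustrative, not the real baseline.
    BASELINE = {
        "password_min_length": 12,
        "screen_lock_timeout_minutes": 15,
        "guest_account_enabled": False,
    }

    def check_compliance(observed: dict) -> list:
        """Return findings where observed settings deviate from the baseline."""
        findings = []
        for setting, required in BASELINE.items():
            actual = observed.get(setting)
            if actual != required:
                findings.append(f"{setting}: expected {required!r}, found {actual!r}")
        return findings

    # Example scan result from one workstation (made up)
    workstation = {
        "password_min_length": 8,
        "screen_lock_timeout_minutes": 15,
        "guest_account_enabled": True,
    }
    for finding in check_compliance(workstation):
        print(finding)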

The early write-ups on the new position suggest that a key aspect of the role will be directing cybersecurity policy. In contrast to some of the technical layers of security architecture, policy is an area where comprehensive guidance or minimum standards would be a welcome addition to managing information security programs in government agencies. The current state of affairs leaves the determination of security drivers, requirements, and corresponding levels of risk tolerance to each agency or, in some cases, to each major organizational unit. The result is a system in which most or all agencies follow similar processes for evaluating risk, but vary significantly in whether and how they choose to mitigate it. Federal information security management is handled in a federated way and is quite subjective in its execution, which produces wide disparities in responses to threats and vulnerabilities: what one agency considers an acceptable risk may be a show-stopper for another.

So the new cybersecurity czar should develop and issue a set of security policies for all federal agencies, along with appropriate existing or updated standards and procedures on how to realize the security objectives articulated in those policies. It would also be nice, where appropriate, to see the administration break from the Congressional tradition of never specifying or mandating technical methods or tools. The HITECH Act’s language on protecting health information from breaches and on required breach disclosures didn’t even use the word “encryption,” instead specifying only a need to make data “unusable, unreadable, or otherwise indecipherable.” No one should suggest that the administration tell its cabinet agencies to all go out and buy the same firewall, but there are opportunities in areas such as identity verification, authentication, and authorization where the reluctance to suggest a common technology or approach creates its own set of obstacles.
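
As a minimal sketch of what “unusable, unreadable, or otherwise indecipherable” looks like in practice, the example below encrypts a record with a symmetric key using the Python cryptography library. The record contents are made up, and real deployments live or die on key management rather than on the encryption call itself; this is an illustration, not a prescription for HITECH compliance.

    # Rendering a record unreadable to anyone without the key, via symmetric
    # encryption. The record is illustrative; key storage, rotation, and access
    # control are the hard parts in practice.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, held in a key management system
    cipher = Fernet(key)

    record = b'{"patient_id": "12345", "diagnosis": "hypertension"}'
    token = cipher.encrypt(record)   # ciphertext is opaque without the key

    print(token)                     # unreadable bytes
    print(cipher.decrypt(token))     # original record, recoverable only with the key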

Renewed interest in detecting and preventing health care fraud

In an effort to identify any and all ways to shore up the financial stability of the Medicare and Medicaid programs, the Obama administration has announced the formation of a task force focused on the use of technology to detect and help prevent health care fraud, as noted in an article in today’s Washington Post, among other places. Experts have long disagreed on the level of fraud in the U.S. health care system, with government officials typically estimating the percentage of Medicare and Medicaid payments made due to fraud in the single digits, while other estimates put the proportion as high as 20 percent. Here, the way technology has been used to drive efficiency in claims processing is a big part of the problem, but there are many potential uses of technical monitoring and analysis tools that could help the government recapture some of the enormous losses due to fraud and use that money to offset other drains on the major health care entitlement programs.
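
As a rough sketch of the kind of analysis such monitoring tools perform, the example below flags providers whose total billed amounts are statistical outliers relative to their peers. The provider IDs, amounts, and threshold are all made up; real fraud detection systems combine many more signals (procedure mix, patient overlap, billing timing, and so on) and tune their rules against known fraud cases.

    # Outlier-based screening sketch: flag providers whose billing totals sit far
    # above the peer average. All data here is hypothetical.
    from statistics import mean, stdev

    claims = [  # (provider_id, billed_amount)
        ("P001", 120.0), ("P001", 95.0), ("P002", 110.0), ("P002", 130.0),
        ("P003", 990.0), ("P003", 1150.0), ("P004", 105.0), ("P004", 88.0),
        ("P005", 140.0), ("P005", 120.0), ("P006", 100.0), ("P006", 105.0),
    ]

    totals = {}
    for provider, amount in claims:
        totals[provider] = totals.get(provider, 0.0) + amount

    avg, sd = mean(totals.values()), stdev(totals.values())
    for provider, total in totals.items():
        z = (total - avg) / sd
        if z > 1.5:  # threshold would be tuned against confirmed fraud cases
            print(f"review {provider}: billed {total:.2f} (z-score {z:.1f})")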

To anyone interested in understanding the nature and extent of this problem, as well as gaining some insight into possible solutions, I strongly recommend Malcolm Sparrow’s License to Steal, which, despite being written over 12 years ago (and updated in 2000), remains highly relevant to the problem of health care fraud in the United States. Going back to the days when the Centers for Medicare and Medicaid Services (CMS) was known as the Health Care Financing Administration (HCFA), Sparrow highlights the inherent conflict between government program managers’ drive to automate claims processing as much as possible and the need (in some cases) to slow that same processing down in order to do the sort of analysis that would help reduce payment of fraudulent claims. While great strides have been made in technical means of fraud detection (for example, CMS can now verify patient social security numbers associated with submitted claims, something that was not possible ten years ago), there remains a great opportunity to drive costs out of the system simply by ensuring payments are made correctly for legitimate claims. (In the interest of full disclosure, I should point out that I worked as a course assistant for Professor Sparrow while in graduate school.) Another excellent and somewhat more recent treatment of this topic is Healthcare Fraud by Rebecca Busch.