A couple of recommendations for the new cybersecurity czar

As an immediate result of the 60-day review of federal cybersecurity activities conducted at the behest of the Obama administration, the president announced that he will, as anticipated, appoint a federal cybersecurity czar in the Executive Office of the President to direct security policy for the government. In general this should be seen as a positive move, but assuming the new position comes with little direct control over resource allocations or agency-level security measures, creating it is by itself insufficient to ensure any real improvement in the government’s security posture. It remains to be seen how the position will be structured and what responsibilities and powers will accrue to the post, but there are a couple of things the administration might want to keep in mind to make this move a success.

The first consideration is how to set an appropriate scope for the sphere of influence the federal cybersecurity director will have. A variety of opinions are circulating in draft cybersecurity bills in both houses of Congress and within DHS, OMB, DOD, and the other key agencies that currently hold responsibility for cyber defense, critical infrastructure protection, and related mission areas. Historically, cross-government approaches to security have been quite limited in the set of services or standards they seek to specify for all agencies. The Information Systems Security Line of Business (ISSLOB) chartered by OMB, for instance, provides only two common security services – FISMA reporting and security awareness training – which are more amenable to a “same-for-everyone” approach than more sensitive services like vulnerability scanning or intrusion detection might be. Having said that, the Department of Homeland Security is moving ahead with the next generation of the Einstein network monitoring program, which would mandate real-time intrusion detection and prevention sensors on all federal network traffic. Government agencies are also in the process of consolidating their Internet points of presence under the Trusted Internet Connections initiative, in part to facilitate government-wide monitoring with Einstein. And progress has been made in specifying minimum security configuration settings for desktop operating systems (the Federal Desktop Core Configuration) and in providing freely available tooling to help agencies check that their workstations comply with the standard. So while there are some good examples of true government-wide standards to point to, applying consistent security measures across the entire government will remain difficult.
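To make the FDCC example a bit more concrete, the core of any such compliance check is a comparison of a workstation’s observed settings against a published baseline. The sketch below illustrates that idea in Python with an entirely made-up baseline; the real FDCC checks are performed with SCAP-based scanning tools against far more detailed content, so treat this strictly as an illustration of the concept.

```python
# Illustrative baseline-compliance check in the spirit of the FDCC.
# The baseline below is invented for this example, not actual FDCC content.

REQUIRED_SETTINGS = {
    "password_min_length": 12,
    "screen_lock_timeout_minutes": 15,
    "firewall_enabled": True,
    "autorun_disabled": True,
}

def check_compliance(observed):
    """Return findings for every setting that deviates from the baseline."""
    findings = []
    for setting, required in REQUIRED_SETTINGS.items():
        actual = observed.get(setting)
        if actual != required:
            findings.append(f"{setting}: expected {required!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    workstation = {
        "password_min_length": 8,          # too short: will be flagged
        "screen_lock_timeout_minutes": 15,
        "firewall_enabled": True,
        "autorun_disabled": False,         # will be flagged
    }
    for finding in check_compliance(workstation):
        print("NON-COMPLIANT:", finding)
```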

The early write-ups on the new position suggest that a key aspect of the role will be directing cybersecurity policy. In contrast to some of the technical layers of security architecture, policy is an area where some comprehensive guidance or minimum standards would be a welcome addition to managing information security programs in government agencies. The current state of the government leaves the determination of security drivers, requirements, and corresponding levels of risk tolerance to each agency or, in some cases, to each major organizational unit. This results in a system where most or all agencies follow similar processes for evaluating risk, but vary significantly in whether they choose to mitigate that risk and how they choose to do so. Federal information security management is handled in a federated approach and is quite subjective in its execution. This subjectivity results in wide disparities in responses to threats and vulnerabilities, because what one agency considers an acceptable risk may be a show-stopper for another.
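To see how that subjectivity plays out, consider a toy example: two agencies score an identical finding with the same process but apply different acceptance thresholds, and so reach opposite decisions. The scales and thresholds below are invented for illustration and are not drawn from any agency’s actual methodology.

```python
# Two agencies, same finding, same scoring process, different risk tolerance.
# Scores use a simple 1-5 likelihood x 1-5 impact scale chosen for illustration.

def risk_score(likelihood, impact):
    return likelihood * impact

def decide(score, acceptance_threshold):
    return "accept risk" if score <= acceptance_threshold else "mitigate"

finding = {"likelihood": 3, "impact": 4}
score = risk_score(**finding)                                # 12 for both agencies

print("Agency A:", decide(score, acceptance_threshold=15))   # accept risk
print("Agency B:", decide(score, acceptance_threshold=9))    # mitigate
```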

So the new cybersecurity czar should develop and issue a set of security policies for all federal agencies, along with appropriate existing or updated standards and procedures on how to realize the security objectives articulated in those policies. It would also be nice, where appropriate, to see the administration break from the Congressional tradition of never specifying or mandating technical methods or tools. The HITECH Act’s language on protecting personal health information and on required disclosures of breaches doesn’t even use the word “encryption,” but instead specifies a need to make data “unusable, unreadable, or otherwise indecipherable.” No one should suggest that the administration tell its cabinet agencies to all go out and buy the same firewall, but there are opportunities in areas such as identity verification, authentication, and authorization where the reluctance to suggest a common technology or approach creates its own set of obstacles.
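For what it’s worth, the technical step the HITECH language tiptoes around is not exotic. The sketch below shows one common way to render stored data “unusable, unreadable, or otherwise indecipherable”: symmetric encryption at rest, here using the open-source cryptography package. Key management, which is where the genuinely hard problems live, is deliberately left out of scope.

```python
# Encrypting a record at rest so it is indecipherable without the key.
# Requires the third-party "cryptography" package (pip install cryptography).

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, stored in a key vault or HSM
cipher = Fernet(key)

record = b"patient_id=12345; diagnosis=..."
token = cipher.encrypt(record)     # ciphertext is useless to anyone without the key

print(token)                       # safe to store or transmit
print(cipher.decrypt(token))       # original record, recovered by an authorized holder
```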

Renewed interest in detecting and preventing health care fraud

In an effort to identify any and all ways to shore up the financial stability of the Medicare and Medicaid programs, the Obama administration has announced the formation of a task force focused on the use of technology to detect and help prevent health care fraud, as noted in an article in today’s Washington Post, among other places. Experts in this area have long disagreed on the level of fraud activity in the U.S. health care system, with government officials typically estimating the percentage of Medicare and Medicaid payments made due to fraud in the single digits, while other estimates put the proportion as high as 20 percent. On this issue, the way technology has been used to drive efficiency in claims processing is a big part of the problem, but there are many potential uses of technical monitoring and analysis tools that could help the government recapture some of the enormous losses due to fraud and use that money to offset some of the other drains on the major health care entitlement programs.
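As a small illustration of what “technical monitoring and analysis tools” can mean in practice, the sketch below flags providers whose average billed amount per claim stands well apart from their peers. The data, the single indicator, and the threshold are all invented for the example; real fraud-detection programs combine many such indicators and a great deal of human review before anyone acts on a flag.

```python
# Toy peer-comparison analytic: flag providers whose average billed amount
# per claim is unusually high relative to the peer group.

from statistics import mean, stdev

claims = {  # provider -> billed amounts (illustrative data)
    "provider_a": [120, 135, 110, 140],
    "provider_b": [115, 125, 130, 120],
    "provider_c": [480, 510, 495, 530],   # stands out from the peer group
}

averages = {p: mean(amounts) for p, amounts in claims.items()}
mu, sigma = mean(averages.values()), stdev(averages.values())

for provider, avg in averages.items():
    if sigma and (avg - mu) / sigma > 1.0:   # deliberately low threshold for a tiny example
        print(f"flag for review: {provider} (avg ${avg:.2f} vs peer mean ${mu:.2f})")
```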

To anyone interested in understanding the nature and extent of this problem, as well as gaining some insight into possible solutions, I strongly recommend Malcolm Sparrow’s License to Steal, which, despite being written more than 12 years ago (and updated in 2000), remains highly relevant to the problem of health care fraud in the United States. Going back to the days when the Centers for Medicare and Medicaid Services (CMS) was known as the Health Care Financing Administration (HCFA), Sparrow highlights the inherent conflict between government program managers’ drive to automate the claims processing system as much as possible and the need, in some cases, to slow that same processing down in order to do the sort of analysis that would help reduce payment of fraudulent claims. While some great strides have been made in technical means of fraud detection (for example, CMS can now verify patient Social Security numbers associated with submitted claims, something that was not possible ten years ago), there remains a great opportunity to drive costs out of the system simply by ensuring payments are made correctly for legitimate claims. (In the interest of full disclosure, I should point out that I worked as a course assistant for Professor Sparrow while in graduate school.) Another excellent and somewhat more recent treatment of this topic is Healthcare Fraud by Rebecca Busch.

Personal data breach notification law may be in the works for Europe

A May 6 article in the New York Times (“E.U. To Consider More Stringent Reporting of Data Breaches”) includes quotes and opinions from a number of people suggesting that the European Union may be heading toward a comprehensive breach notification law requiring public and private sector organizations to tell people when their personal information has been lost or disclosed. While the vast majority of states in the U.S. have some form of breach notification law, there is not yet a federal standard, with the possible exception of the disclosure requirements for breaches of unsecured personal health information contained in the American Recovery and Reinvestment Act. As noted in the Times article, “Most European countries, including Britain, do not require businesses or other entities to notify the public when they lose personal data, although some do so voluntarily.”

New security provisions in draft U.S. ICE legislation

The draft U.S. Information and Communications Enhancement (U.S. ICE) legislation expected to be introduced by Senator Tom Carper (D – Del.) addresses and tries to remedy many of the shortcomings in the Federal Information Security Management Act (FISMA). The feature drawing the most attention recently is the bill’s creation of an executive branch Director of the National Office for Cyberspace and the corresponding office. This new role would provide direct oversight of federal agency security programs (civilian and defense), including reviewing and approving the agency information security programs mandated under FISMA. For security architects, there are a number of very interesting provisions in the bill. These include:

  • Annual reporting on the overall security posture of the federal government, including a detailed assessment of the effectiveness of information security programs in each agency. The agency-level evaluations will look at the effectiveness of virtually all aspects of security programs, including monitoring, detection, analysis, protection, reporting, and response.
  • Development and implementation of government-wide policy, guidance, and regulations to standardize security requirements for commercial off-the-shelf (COTS) products and services purchased by the government. This provision would presumably build on the approach of the Federal Desktop Core Configuration, which mandates minimum security settings for government computers running Windows operating systems.
  • A shift in the emphasis of explicit agency information security responsibilities away from compliance with recommended controls towards a model of on-going assessment of the effectiveness of implemented security controls. Specifically, the bill would require “continuously testing and monitoring information security controls and techniques to ensure that they are effectively implemented.” (A minimal sketch of what this kind of automated control checking might look like appears after this list.)
  • A new requirement to establish, maintain, and update enterprise network, system, storage, and security architecture framework documentation explaining how security controls are implemented within the agency’s information infrastructure and how those controls provide the appropriate level of security (in terms of confidentiality, integrity, and availability). The emphasis on documenting how controls are implemented, instead of merely reporting what controls are in place, would be a significant departure from conventional security thinking and reporting in the federal government.
  • Agency information security program responsibilities would be augmented to include not just the periodic risk assessments already called for in FISMA but also penetration tests of agency information systems. Noted experts in security and especially incident response, such as Richard Bejtlich, have been consistent voices calling for less “paper compliance” and more testing, particularly testing that goes beyond automated vulnerability scanning or the system security tests and evaluations done within the federal certification and accreditation processes. Those with a fondness for semantics may be interested to note that although FISMA does call for annual independent evaluations of program effectiveness, the law is quite vague as to the methods to be used for such evaluations, and the word “penetration” does not appear anywhere in the text of FISMA.
  • Lastly, the bill would take the first step towards standardizing minimum security requirements across agencies, presumably with a corresponding influence on the levels of risk deemed acceptable by agencies when authorizing systems for operation in their environments. The draft U.S. ICE legislation directs the Commerce Secretary (through NIST) to set unified standards for national security systems and information systems, including minimum information security requirements (agencies can employ more stringent standards). In another departure from precedent, these standards will be compulsory and binding.
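To give a flavor of the continuous testing and monitoring the bill calls for, as distinct from an annual paper exercise, here is a minimal sketch of a scheduled job that re-checks a couple of implemented controls and records the results over time. The specific checks, hostname, and policy values are hypothetical placeholders, not anything the draft legislation or NIST prescribes.

```python
# Minimal continuous control-monitoring job: small automated checks run on a
# schedule, with timestamped pass/fail results. Checks and hosts are made up.

from datetime import datetime, timezone
import socket

def check_port_closed(host, port, timeout=2):
    """Pass if nothing is listening on the port (e.g., legacy telnet disabled)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False            # connection succeeded, so the port is open: fail
    except OSError:
        return True                 # refused, unreachable, or timed out: pass

def check_password_min_length(policy, minimum=12):
    return policy.get("min_length", 0) >= minimum

def run_checks():
    policy = {"min_length": 12}     # in a real job, read from the system's configuration
    results = {
        "telnet disabled on internal-host.example": check_port_closed("internal-host.example", 23),
        "password minimum length >= 12": check_password_min_length(policy),
    }
    stamp = datetime.now(timezone.utc).isoformat()
    for control, passed in results.items():
        print(f"{stamp}  {control}: {'PASS' if passed else 'FAIL'}")

if __name__ == "__main__":
    run_checks()   # in practice, scheduled (cron or similar) and fed to a dashboard
```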

Balancing prevention and response

The two primary information resources with which most people are familiar for security emergency response are the Computer Emergency Response Team Coordination Center (CERT/CC) at Carnegie Mellon University’s Software Engineering Institute and the U.S. Computer Emergency Readiness Team (US-CERT) run by the U.S. Department of Homeland Security. The CERT/CC has long been a source of valuable information for public and private sector organizations seeking information on current and past security threats and vulnerabilities. Communication with CERT is two-way: security researchers and others send CERT the latest observations from the field, and others look to CERT for up-to-date information about new threats against which protective action should be taken. US-CERT serves as the focal point of communication for U.S. federal agencies that experience security breaches or other incidents; federal guidelines mandate that such incidents (or suspected incidents) be reported to US-CERT within one hour of their discovery. Because of this reporting process, US-CERT has tended to emphasize response to known exploits and incidents, while directing agencies to NIST or other sources for guidance on preventive controls.

The director of US-CERT announced this week that the organization would shift its focus away from incident response towards prevention. There’s no question that over-emphasizing incident response in the absence of security planning, risk assessment, and preventive controls can lead to a situation in which an organization is always one or more steps behind the attackers, always playing “catch-up” or cleaning up after an intrusion. It seems very risky, however, to openly de-emphasize incident response in the way that US-CERT Director Mischel Kwon does when she says, “Incident response should be rare. Forensics should not be the norm.” She needs to look no further than her own organization, which reports more than a three-fold increase from 2006 to 2008 in the number of security incidents reported by federal agencies, to know that incident response and forensics remain important functions in any security program intended to improve the security posture of an organization. Better and more consistently applied preventive measures, including more robust penetration testing, will hopefully help stem the rapid growth of security incidents. However, until federal (and non-federal) organizations get a better handle on how the bad guys are getting to them, effective incident response will remain a critical component of information security programs.
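As one last, small illustration of why response processes still matter, consider the one-hour US-CERT reporting window mentioned above. The sketch below simply tracks an incident’s discovery time against that deadline; the field names and status strings are assumptions made for the example, not an actual US-CERT interface.

```python
# Track an incident's discovery time against the one-hour US-CERT reporting window.

from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=1)

def reporting_status(discovered_at, reported_at=None, now=None):
    now = now or datetime.now(timezone.utc)
    deadline = discovered_at + REPORTING_WINDOW
    if reported_at is not None:
        return "reported on time" if reported_at <= deadline else "reported late"
    remaining = deadline - now
    if remaining.total_seconds() <= 0:
        return "OVERDUE: report to US-CERT immediately"
    return f"report due in {int(remaining.total_seconds() // 60)} minutes"

discovered = datetime.now(timezone.utc) - timedelta(minutes=50)
print(reporting_status(discovered))   # e.g. "report due in 9 minutes"
```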