A report released this month by GAO on what it views as deficiencies in the information security program and security control effectiveness at the National Aeronautics and Space Administration (NASA) highlights once again how difficult it is for organizations to move beyond compliance and ensure that implemented security controls are actually doing what they are intended to do. Testing and demonstrating the effectiveness of security controls is a persistent challenge for all agencies, not just NASA, and the inconsistent and incomplete risk assessment procedures and security policies identified in the report are issues shared by many other agencies. What may be most notable about the findings is the relatively basic level of some of the control weaknesses found at some of NASA’s facilities, including poorly implemented password-based access controls, non-functional physical security mechanisms, and less than comprehensive vulnerability scanning and intrusion detection.
NASA has had an unusual level of variability in its overall security program, at least as measured through the FISMA reporting process. While the agency has been trending better since fiscal 2006, when it received a D- on the FISMA scorecard, its progress since then has not equaled the level (B-) it achieved in 2005. The details in the most recent (FY2008) report to Congress suggest a NASA infosec program that is a work in progress, with strengths in its C&A process, training of security personnel, and privacy compliance, and with gaps in testing of security controls and contingency plans and in general employee security awareness training. NASA’s written response to the GAO report (which, as is typical practice, was provided to the agency for comment prior to its public release) concurs with all eight of GAO’s findings and recommendations, but notes that a number of these recommendations are already being addressed by program improvements underway as the result of internal assessments.
Recent news that the Blue Cross Blue Shield Association (BCBSA) suffered the theft of an employee’s personal laptop containing personal information on hundreds of thousands of physicians illustrates once again that it is not enough to have the right security policies in place; you also have to be able to monitor compliance and enforce them. In this latest incident, the employee copied corporate data onto a personal laptop, in violation of existing security policy. What’s worse, the data as stored by BCBSA was encrypted, but the employee decrypted the data before copying it. The employee obviously put the BCBSA at risk in a way its policies and database encryption controls were intended to prevent, and with the laptop now stolen, the American Medical Association is taking action to notify member physicians who may be at risk of identity theft.
Data stewardship and data handling policies are the first step, and encrypting the data at rest is a good follow-up, but what else can organizations like BCBSA do to avoid this sort of incident? It’s not entirely clear how the data was transferred from the corporate computing environment to the personal laptop, but whether it was by thumb drive or even direct connection of the laptop to the BCBSA network, there are multiple technical options available to mitigate this type of risk. One answer might be data loss prevention controls that could keep corporate data from being copied or written locally at all, whether the client computer was Association-owned or not. Encryption mechanisms can be added to provide protection in transit and during use, rather than just at rest. USB device access controls can be used to administer, monitor, and enable or disable USB ports when devices are plugged into them, so, for instance, any attempt to use a non-approved thumb drive (perhaps one without device-level encryption) could be blocked. Network access control (NAC) can be used to gain awareness of (and, if desired, prevent) attempts to connect non-corporate computing devices to the network. Let’s also not forget the importance of security awareness training, which is just as relevant now as it was in the well-publicized case of the VA employee whose laptop containing veterans’ personal data was stolen from home after he took the data off-site in violation of VA policy.
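To make the USB device control option concrete: on Linux endpoints, administrators can enforce (rather than merely publish) a no-removable-media policy by preventing the USB mass-storage driver from loading at all. This is a minimal sketch of one common technique, not anything BCBSA is known to use, and the file name is an illustrative assumption:

```
# /etc/modprobe.d/block-usb-storage.conf (illustrative file name)
#
# Run /bin/true instead of loading the usb-storage module, so USB
# thumb drives and external disks are never recognized by the kernel.
# The "install" override also defeats on-demand module loading,
# which a "blacklist" line alone would not prevent.
install usb-storage /bin/true
blacklist usb-storage
```

Commercial device-control products are more granular, allowing approved devices (for example, hardware-encrypted drives identified by vendor and product ID) while blocking everything else, but even a coarse block like this one turns a written policy into an enforced technical control.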
One notable aspect of the widely reported launch of a Security Metrics Taskforce charged with coming up with new, outcome-based standards for measuring the effectiveness of federal agency information security efforts is a statement written by federal CIO Vivek Kundra, Navy CIO Robert Carey, and Justice CIO Vance Hitch that the group would follow a “trust but verify” approach while also fulfilling statutory requirements and driving towards real-time security awareness. The “trust” half of that formulation is consistent with the security posture of many federal agencies, particularly on the civilian side, which generally assume that users and organizations can be trusted to do the right thing in terms of following policies and taking on expected responsibilities and obligations. There are many current examples (HIPAA security and privacy rules, FISMA requirements, etc.) where a major set of requirements has been enacted but no formal monitoring or auditing is in place to make sure everyone is behaving as they should. Voluntary reporting of violations and requirements with no penalties for failing to comply can only be successful if the assumption holds that you can trust everyone to do the right thing. The new task force would go a long way towards achieving its stated goal of better protecting federal systems if the metrics it proposes include some set of requirements for auditing compliance with the appropriate security controls and management practices. If the recommended metrics do include those aspects, there may even be an opportunity for the government to define penetration testing standards and services that could be implemented across agencies to validate the effective implementation and use of the security mechanisms they select to secure their environments.
Focusing on outcome-based metrics that agencies are ultimately left on their own to measure and track, even with real-time situational awareness, will fall short of hardening the federal cybersecurity infrastructure to the point where it is well positioned to defend against the constantly evolving threats it faces.
In a development that should come as a welcome surprise to security watchers critical of U.S. federal information security efforts as too focused on compliance (at the expense of effectiveness), the Federal CIO Council announced last week that a new task force has been established (it held its first meeting on September 17) and has begun work on new metrics for information security that will focus on outcomes. This effort is the latest development in a groundswell of activity, both within Congress and parts of the executive branch, to revise the requirements under the Federal Information Security Management Act (FISMA) to put less emphasis on compliance with federal security guidance and more emphasis on results from implementing security controls. Legislation in various stages of development in both the House and the Senate would require a similar realignment of security measurement approaches, so the action by the CIO Council would seem to be partly in anticipation of such requirements being enacted in law. The collaborative group includes participants from several key agencies as well as the Information Security and Privacy Advisory Board (ISPAB). The schedule for the group appears quite ambitious: the task force is expected to have a draft set of metrics available for public comment by the end of November.
The security problem in this incident is that the media in question was not sanitized as it should have been according to federal and Defense Department policy. NARA had no intention of sending any data out of its custody; it merely wanted the hard drive repaired. NARA officials have defended their actions by saying that the return of hardware media such as disk drives is a routine process, and that the presence of unencrypted personal data on the drives doesn’t violate any rules. The situation was brought to light through the actions of an IT manager who reported it to NARA’s inspector general. NARA had not disclosed the loss of records to federal authorities (which it is required to do under federal regulations even if it believes no actual breach of personal information has occurred), and also chose not to notify veterans whose records might be affected. The manager who reported the breach and agency officials appear to differ markedly on whether the situation constitutes a breach: on the one hand, the manager characterized the loss as “the single largest release of personally identifiable information by the government ever,” while the official position stated by the agency is “NARA does not believe that a breach of PII occurred, and therefore does not believe that notification is necessary or appropriate at this time.”
The position articulated by NARA calls to mind the “harm” provision in the personal health data breach notification regulations issued by HHS and the FTC that went into effect last week. In a change from the language in the HITECH Act that mandated the regulations, the final version of the HHS rules includes an exception to the breach notification requirement if the organization suffering the loss of data believes that no harm will be caused by the loss. (The FTC rules have no such exception.) This self-determination of harm, and the incentive organizations have to minimize their estimate of harm in order to avoid disclosing breaches, has angered privacy advocates and seems likely to result in under-reporting of breaches. The gap between the common-sense perspective and the official NARA position on this latest data loss strongly supports the argument that leaving the determination of a breach’s significance up to the organization suffering the loss will result in individuals not being notified that their personal information may have been compromised.