Not everyone agrees on what is (and isn’t) personal information

Deliberations among European Union member countries made privacy headlines in early 2008 when Peter Schaar, data protection commissioner for Germany and leader of a group of EU data privacy regulators, concluded at a European Parliament hearing on online data protection that Internet Protocol (IP) addresses should be considered personal information, insofar as they can often be used to identify an individual based on that individual’s ownership or use of a computer associated with an IP address. This view, while not pervasive across the European Community, is at least indicative of a way of thinking that emphasizes personal privacy protections; it has long been opposed by major IT industry players like Google, and is typically supported by privacy advocates like the Electronic Privacy Information Center. Last month a federal judge concluded just the opposite, ruling that IP addresses are not personal information while dismissing a class action suit against Microsoft in which plaintiffs had argued that Microsoft’s practice of collecting IP addresses during automated updates violated its user agreement, which does not allow the company to collect information that personally identifies users.
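One common mitigation that splits the difference is to truncate the host bits of an address before storing it, keeping enough of the address for coarse analytics while making identification of an individual machine harder. Here is a minimal Python sketch of that idea (illustrative only, not any particular vendor’s implementation; the prefix lengths are arbitrary choices):

```python
# Minimal sketch: zero the host portion of an IP address before logging,
# so the stored value identifies a network rather than a machine.
import ipaddress

def anonymize_ip(addr: str) -> str:
    """Truncate an IP address to its network prefix (illustrative /24
    for IPv4, /48 for IPv6) before it is logged or stored."""
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

print(anonymize_ip("203.0.113.42"))  # -> 203.0.113.0
```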

Beyond the still-unresolved issue of whether, and under what specific circumstances, an IP address should be treated as personally identifiable information, the divergence between EU regulators and U.S. courts further highlights the challenges organizations face in complying with privacy laws and regulations across the global economy, and the difficulty regulators face in overseeing and enforcing those laws and regulations.

FISMA still being touted as best security for health information exchange

Coming out of the CONNECT User Training Seminar held this week in Washington, DC is a reiteration of the opinion previously expressed by federal stakeholders working on the Nationwide Health Information Network (NHIN): non-federal entities seeking to participate in the NHIN need to step up their security and privacy practices to at least the level of federal practices under FISMA. The suggestion, once again, is that the security practices of private sector healthcare organizations and other businesses are less rigorous and less effective than those of public sector organizations. The recommendation is that all would-be NHIN participants adopt a risk-based security management and security control standard such as the framework articulated in NIST Special Publication 800-53, which is used by all federal agencies.

There’s no question that a baseline set of security standards and practices would go a long way toward establishing the minimum level of trust needed for public and private sector entities to be comfortable sharing health data. What seems a bit disingenuous, however, is the suggestion, repeated on Tuesday by the CIO of the Centers for Medicare and Medicaid Services, that current government security and privacy practices are the model that should be broadened to apply to the private sector. Any organization currently following the ISO/IEC 27000 series standards for risk management and information security controls is already assuming a posture commensurate with a federal agency using 800-53: no less an authority than the FISMA team at NIST has acknowledged the substantial overlap between 800-53 and ISO 27002 controls, and NIST’s more recently released SP 800-39 risk management guidance was influenced by the corresponding risk management elements in ISO 27001, 27002, and 27005 as well.
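To make that overlap concrete, consider how a control crosswalk works in practice. The pairings below are hand-picked examples of the kind of mapping NIST publishes (the authoritative crosswalk lives in SP 800-53’s appendices; these entries are illustrative, not quoted from it), showing how an ISO-aligned organization could present its existing evidence in 800-53 terms:

```python
# Illustrative (not authoritative) crosswalk between a few NIST SP 800-53
# controls and ISO/IEC 27002:2005 clauses, of the kind an assessor could
# use to accept ISO evidence for an 800-53-framed requirement.
CONTROL_MAP = {
    # 800-53 control id : ISO/IEC 27002:2005 clause(s)
    "AC-2": ["11.2.1 User registration"],           # Account Management
    "CP-9": ["10.5.1 Information back-up"],         # Information System Backup
    "AT-2": ["8.2.2 Awareness, education and training"],  # Security Awareness
}

def iso_equivalents(nist_control: str) -> list[str]:
    """Return the ISO 27002 clauses mapped to a given 800-53 control."""
    return CONTROL_MAP.get(nist_control, [])

print(iso_equivalents("AC-2"))  # -> ['11.2.1 User registration']
```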

The hardest piece to reconcile may be the need for organizations to certify the security of their systems and supporting processes. Here again, it’s hard to argue against the idea that some sort of certification (or even objective validation) of security controls could help establish, monitor, and enforce necessary security measures in all participating organizations. The federal model for certification and accreditation is a self-accrediting form of security governance, so the logical extension of this model would be to have private enterprises similarly self-certify and assert that their security and privacy practices are sufficient. Aside from the trust issues inherent in any subjective system of self-reported compliance, it’s not at all clear what level of oversight would be put in place under the still-emerging NHIN governance framework, or what federal laws have to offer in terms of an approach. While there are explicit legal penalties for violating health privacy and security laws such as HIPAA, the only consequence for a federal agency failing to follow effective security practices under FISMA is a bad grade on an OMB report card. FISMA simply isn’t a best practice for verifying effective security.

GAO adds to the chorus calling for better security metrics

In a GAO report released last week reflecting testimony delivered to the House subcommittee on Technology and Innovation, GAO’s Greg Wilshusen echoed his own previous testimony and a growing number of congressional voices in pointing out that progress in FISMA scores does not translate into more effective security programs or improved security postures for federal agencies. Wilshusen’s recent testimony focused on the Department of Homeland Security and the National Institute of Standards and Technology (NIST), but his findings are broadly applicable across the government. Not only have many federal agencies failed to fully implement information security programs as required under FISMA, but the security measures reported annually to OMB continue to focus on the implementation of required security controls, rather than on their effectiveness in achieving enhanced security. GAO joins a group of senators (including Tom Carper of Delaware, Olympia Snowe of Maine, and John Rockefeller of West Virginia) who have introduced legislation intended to strengthen FISMA, both through the assignment of responsibilities to the new federal cybersecurity coordinator and through changes in the focus of requirements for security control measurement, testing, and oversight.
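The gap GAO is describing is easy to see with a toy example. Suppose an agency tracked, for each control, both whether it is documented as implemented and whether it passed an actual test; the data model, control IDs, and numbers below are invented purely for illustration:

```python
# Toy illustration (hypothetical data, not an OMB reporting format) of why
# "controls implemented" and "controls effective" are different numbers:
# a control can be attested as in place yet fail when actually exercised.
from dataclasses import dataclass

@dataclass
class ControlResult:
    control_id: str
    implemented: bool          # documented / attested as in place
    test_passed: bool | None   # outcome of an actual test; None = never tested

results = [
    ControlResult("AC-7", implemented=True,  test_passed=True),
    ControlResult("AU-2", implemented=True,  test_passed=False),  # logging on, never reviewed
    ControlResult("IA-5", implemented=True,  test_passed=None),   # attested, never tested
    ControlResult("CM-6", implemented=False, test_passed=None),
]

implemented = sum(r.implemented for r in results) / len(results)
effective   = sum(r.test_passed is True for r in results) / len(results)
print(f"implemented: {implemented:.0%}, demonstrably effective: {effective:.0%}")
# -> implemented: 75%, demonstrably effective: 25%
```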

Lots of recommendations for new cyber-security czar

Ever since President Obama announced his intention to appoint a federal cyber-security “czar” in the Executive Office of the President, there has been a steady stream of open letters and articles making recommendations for the as-yet-unfilled position, as well as expressing concerns about the obstacles such an individual will face in being effective in the role. Adding to the unsolicited opinions this week is a succinct and thoughtful piece in Federal Computer Week by SANS research director Alan Paller, “The limits of a cyber-czar.” Paller highlights three of the many issues with the current state of information security in the federal government: too many integrators and vendors delivering systems that include security flaws or vulnerabilities; a lack of technically qualified security personnel; and a general failure on the part of government auditors (and agencies themselves) to assess the effectiveness (rather than just the implementation) of existing security controls.

Paller’s point (and mine in highlighting his article) is not just that these three issues represent big security challenges for the government, but also that they are issues the new cyber-security czar may be able to take on and influence. Of course, not all aspects of federal information assurance are well suited to top-down management or to common solutions, but with the emphasis to date placed on the role of the cyber-security czar in formulating and implementing policy for the federal government, these do seem like fruitful areas for exerting some executive branch influence. For the software security and general quality issue, the czar need look no further than the Department of Defense for standards, contract language, and requirements that demand vendors and integrators demonstrate that their products and developed systems meet specific security configuration criteria, such as those contained in the DISA Security Technical Implementation Guides (STIGs). Paller points to sparse implementation and enforcement of the Federal Desktop Core Configuration (FDCC) requirements that officially went into effect in February 2008, and lays the blame at the feet of agencies that appear unwilling or uninterested in explicitly requiring their contractors to comply with the existing standards and rules, and in holding them to account when they don’t. The scope of the FDCC is much narrower than that covered by the STIGs, so any federal-level policy on security standards should look beyond merely requiring FDCC compliance.
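For a sense of what enforcing a configuration baseline like the FDCC actually involves, here is a deliberately simplified sketch. Real checking is done with SCAP-expressed content across hundreds of settings, but the core operation is a comparison of observed values against required ones; the setting names and values below are made up for illustration:

```python
# Simplified sketch of baseline compliance checking in the FDCC/STIG
# spirit: compare observed settings against required values and report
# deviations. Settings here are invented; real checks use SCAP content.
REQUIRED = {
    "password_min_length": 12,
    "screen_lock_timeout_minutes": 15,
    "firewall_enabled": True,
}

def audit(observed: dict) -> list[str]:
    """Return one finding per setting that deviates from the baseline."""
    findings = []
    for key, expected in REQUIRED.items():
        actual = observed.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

print(audit({"password_min_length": 8, "firewall_enabled": True}))
# -> ['password_min_length: expected 12, found 8',
#     'screen_lock_timeout_minutes: expected 15, found None']
```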

On certifying security workers, Paller calls out the DOD for having the right intentions (requiring personnel with security responsibilities to be certified) but short-sightedly lowering its technical certification standards. It’s a bit of a funny argument coming from Paller, given that the SANS GIAC cert is among those (along with the CISSP from (ISC)2 and the CISA and CISM from ISACA) fulfilling the DOD certification requirement. It is true that in rolling out the DOD 8570 certification rules, DOD has included several certifications traditionally aimed at infosec managers (rather than hands-on practitioners), and that many of these are broad-based in nature rather than deeply technical. It is also true that the GIAC certifications from SANS tend to require much deeper mastery and demonstration of technical skills than DOD-approved certs from other organizations. Again using DOD as a model (even if its experience hasn’t been perfect) for how minimum security qualifications might be both required and demonstrated for federal infosec workers, the cyber-security czar could put in place personnel policies that would, over time, raise the level of competency in the federal security workforce.

The toughest nut to crack for federal agencies may be first assessing and then improving on the effectiveness of security controls implemented within the government. Even with the very positive development seen in the third revision of NIST Special Publication 800-53, which will standardize the set of controls for all government agency systems (Defense and Intelligence included), 800-53 is still used primarily as a planning or implementation checklist, not as a basis for evaluating security controls once they are in production. It remains to be seen whether the cyber-security czar, perhaps under whatever framework emerges from the revised national cybersecurity initiative, can craft a policy requiring penetration testing and specific audit procedures that go beyond the documentation-validation exercises so prevalent today, and work with agency CISOs to see it implemented.
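To make the distinction concrete: a documentation review confirms that a policy says accounts lock after repeated failed logins (800-53 control AC-7), while an active test actually exercises the behavior. The sketch below is hypothetical throughout; the URL, form fields, status codes, and test account are placeholders, and the third-party `requests` library is assumed:

```python
# Hypothetical sketch of actively testing account lockout (800-53 AC-7)
# rather than reading the policy that claims it exists: drive failed
# logins against a designated test account and observe whether the
# service starts refusing them. All endpoint details are placeholders.
import requests  # third-party: pip install requests

def lockout_enforced(url: str, user: str, attempts: int = 6) -> bool:
    """Return True if the service refuses logins (e.g., HTTP 423 locked
    or 429 rate-limited) before `attempts` bad passwords are exhausted."""
    for i in range(attempts):
        resp = requests.post(url, data={"user": user, "password": f"wrong-{i}"})
        if resp.status_code in (423, 429):
            return True
    return False

# if lockout_enforced("https://test.example.gov/login", "audit-test-account"):
#     print("AC-7 verified in practice, not just on paper")
```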

Old security issues keep coming up

In an otherwise unremarkable Washington Post article about the Department of Defense’s plan to create a “cyber-command” run out of the Pentagon, a couple of points demonstrate the persistence of some long-standing information assurance themes, concerning both data integrity and the legal and ethical aspects of cyber warfare.

In the article, by Post staff writer Ellen Nakashima, General Kevin P. Chilton, commander of U.S. Strategic Command, is quoted on his concern about maintaining the integrity of mission-critical information: “So I put out an order on my computer that says I want all my forces to go left, and when they receive it, it says, ‘Go right.’ . . . I’d want to defend against that.” This is a simple example of the data integrity problem known as “Byzantine failure,” a topic of great interest to us and one that underlies some of our ongoing research into integrity assertions.
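The narrowest slice of Chilton’s scenario, detecting that an order was altered somewhere between sender and receiver, can be addressed with a message authentication code. The Python sketch below is minimal (a single pre-shared key and one sender-receiver pair); full Byzantine fault tolerance is a harder problem, since it must also handle a sender that deliberately tells different recipients different things:

```python
# Minimal sketch of tamper-evident orders: an HMAC-SHA256 tag computed
# over the message with a key shared between commander and recipient.
# An altered order fails verification; this detects tampering but does
# not by itself solve the broader Byzantine agreement problem.
import hashlib
import hmac

SHARED_KEY = b"pre-shared key for illustration only"

def sign(order: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag binding the order to the shared key."""
    return hmac.new(SHARED_KEY, order, hashlib.sha256).digest()

def verify(order: bytes, tag: bytes) -> bool:
    """Accept the order only if its tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(order), tag)

order = b"all forces go left"
tag = sign(order)
assert verify(order, tag)                       # intact order accepted
assert not verify(b"all forces go right", tag)  # altered order rejected
```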

The article also mentions a recent report from the National Research Council that called for a national policy on cyber attack to address, among other things, the legal and otherwise defensible bases upon which a military response to a cyber attack would be justified. As Nakashima puts it, “If a foreign country flew a reconnaissance plane over the United States and took pictures, for instance, the United States would reserve the right to shoot it down in U.S. airspace, experts said. But if that same country sent malicious code into a military network, what should the response be?” The general legal line of thinking, following the Computer Security Act and the PATRIOT Act, essentially gives the U.S. the right to defend itself from attack, even if that means responding in kind against an online adversary. The ethical implications of such a presumed stance are not at all clear, especially given the frequent use of secondary servers and compromised hosts to launch attacks. If an intrusion or attack is detected and traced to a source at a university, or a hospital, or a government data center, disabling the apparent attacker, even when technically feasible, may not always be the right thing to do.