Stiffer U.K. penalties coming for personal data misuse

The British Ministry of Justice recently published proposed new penalties for knowingly misusing personal data in violation of section 55 of the Data Protection Act. The proposals raise the maximum penalty to include jail time, in addition to the financial penalty the law already provides. The reasons the U.K. government cites for the stronger penalties include the need for a stronger deterrent against those who obtain personal data illegally and the need to increase public confidence in the legitimate collection, storage, and use of personal data. (Bear in mind that with the National Health Service and other major government programs, the U.K. government maintains centralized data on its citizens in a variety of contexts and for a variety of purposes, including health records.)

This overseas activity is paralleled to some extent by recent increases in domestic penalties for HIPAA violations (codified at 42 USC §1320d), along with a new requirement that knowing and willful violations of the law be formally investigated. HIPAA and other U.S. privacy laws are often criticized both for imposing insufficient penalties for violations and for lacking proactive enforcement measures, relying instead on voluntary reporting of violations. There is little movement in the United States toward adopting the sort of strong citizen-centered privacy laws in force in the European Community, but it is nonetheless heartening to see risks to personal data taken seriously among major economic powers.

Early potential for national data breach regulation bears watching

Coming on the heels of numerous draft bills in the U.S. Senate (including those from Sens. Carper, Snowe, and Rockefeller) is an announcement last week by New York Congresswoman Yvette Clarke that she hopes to begin congressional hearings within the next few months on creating a national law for the protection of private data. Clarke, who chairs the House Homeland Security Subcommittee on Emerging Threats, Cybersecurity and Science and Technology, cites the ever-increasing incidence of identity theft and public demand for action to make both public and private sector organizations more diligent in protecting personal information and in disclosing breaches of that data when they occur.

This idea bears watching, not least as a way past the industry segmentation in the private data protection and breach notification rules that exist today, under which the clearest regulations apply to health records and financial data, and even those contexts have gaps. However, if the final version of the HHS rules on disclosure of health data breaches is any guide, new legislation should not only extend to personal data used outside health and finance, but might also best be crafted to remove some of the subjectivity and compliance discretion organizations are allowed under existing federal rules, particularly the harm exception, which lets an organization that suffers a breach of health data judge for itself whether the breach poses enough risk of harm to require disclosure.

Security issues at NASA highlight challenges in control effectiveness

A report released this month by the Government Accountability Office (GAO) on deficiencies it sees in the information security program and security control effectiveness at the National Aeronautics and Space Administration (NASA) highlights once again the challenge organizations face in moving beyond compliance to ensure that implemented security controls actually do what they are intended to do. Testing and demonstrating the effectiveness of security controls is a persistent challenge for all agencies, not just NASA, and the inconsistent and incomplete risk assessment procedures and security policies the report identifies are likewise shared by many other agencies. What may be most notable about the report’s findings is the relatively basic level of some of the control weaknesses found at some of NASA’s facilities, including poorly implemented password-based access controls, non-functional physical security mechanisms, and less-than-comprehensive vulnerability scanning and intrusion detection.
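
To make the compliance-versus-effectiveness distinction concrete, here is a minimal hypothetical sketch (not drawn from the GAO report or any actual NASA practice) of the difference between attesting that a password policy exists and programmatically testing that it is enforced. The control IDs, policy pattern, and sample passwords are all invented for illustration.

```python
# A hypothetical effectiveness check: exercise the control instead of
# attesting to it. Control IDs, the policy pattern, and the sample
# passwords are all invented for illustration.
import re
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ControlTest:
    control_id: str           # e.g., a NIST SP 800-53 control like IA-5
    description: str
    test: Callable[[], bool]  # returns True if the control behaves as intended

def password_policy_enforced() -> bool:
    """Confirm the policy rejects weak passwords rather than merely documenting rules."""
    weak_samples = ["password", "12345678", "nasa2009"]
    # Hypothetical policy: 12+ characters with upper, lower, digit, and symbol.
    policy = re.compile(r"^(?=.*[A-Z])(?=.*[a-z])(?=.*\d)(?=.*\W).{12,}$")
    return not any(policy.match(p) for p in weak_samples)

def run_assessment(tests: List[ControlTest]) -> None:
    for t in tests:
        print(f"{t.control_id} ({t.description}): "
              f"{'EFFECTIVE' if t.test() else 'WEAK'}")

run_assessment([
    ControlTest("IA-5", "password complexity actually enforced",
                password_policy_enforced),
])
```

The point is simply that an effectiveness check exercises the control’s behavior, where a compliance check merely confirms its documentation.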

NASA has shown an unusual level of variability in its overall security program, at least as measured through the FISMA reporting process. While the agency has been trending better since fiscal 2006, when it received a D- on the FISMA scorecard, its progress since then has not matched the B- it achieved in 2005. The details in the most recent (FY2008) report to Congress portray the NASA infosec program as a work in progress, with strengths in its certification and accreditation (C&A) process, training of security personnel, and privacy compliance, and gaps in the testing of security controls and contingency plans and in general employee security awareness training. NASA’s written response to the GAO report (which, as is typical practice, was provided to the agency for comment prior to public release) concurs with all eight of GAO’s findings and recommendations, but notes that a number of them are already being addressed by program improvements underway as a result of internal assessments.

BCBSA data breach another lesson in policy enforcement

Recent news that the Blue Cross Blue Shield Association (BCBSA) suffered the theft of an employee’s personal laptop containing personal information on hundreds of thousands of physicians illustrates once again that it is not enough to have the right security policies in place; you must also be able to monitor compliance and enforce them. In this latest incident, the employee copied corporate data onto a personal laptop in violation of existing security policy. Worse, the data as stored by BCBSA was encrypted, but the employee decrypted it before copying. The employee clearly put BCBSA at risk in a way its policies and database encryption controls were intended to prevent, and with the laptop gone, the American Medical Association is taking action to notify member physicians who may now be at risk of identity theft.

Data stewardship and data handling policies are the first step, and encrypting data at rest is a good follow-up, but what else can organizations like BCBSA do to avoid this sort of incident? It’s not entirely clear how the data was transferred from the corporate computing environment to the personal laptop, but whether by thumb drive or direct connection of the laptop to the BCBSA network, there are multiple technical options available to mitigate this type of risk. One answer might be data loss prevention (DLP) controls to keep corporate data from being copied or written locally at all, whether or not the client computer is Association-owned. Encryption mechanisms can be added to protect data in transit and during use, rather than just at rest. USB device access controls can be used to administer, monitor, and enable or disable USB ports when devices are plugged into them, so that, for instance, any attempt to use a non-approved thumb drive (perhaps one without device-level encryption) could be blocked. Network access control (NAC) can provide awareness of (and, if desired, prevent) attempts to connect non-corporate computing devices to the network. Let’s also not forget the importance of security awareness training, which is just as relevant now as it was in the well-publicized case of the VA employee whose laptop with veterans’ personal data was stolen from home after he took the data off-site in violation of VA policy.
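
As a rough illustration of two of the controls above, here is a minimal sketch (not BCBSA’s actual architecture) of keeping records encrypted at rest and refusing to export decrypted data to removable media. The mount-point check is naive and platform-specific, the paths are invented, and the third-party Python `cryptography` package is assumed to be available.

```python
# A minimal sketch of two mitigations named above: records encrypted at
# rest, and a policy gate that refuses to export decrypted data to
# removable media. Paths and mount-point detection are invented and
# platform-specific; the third-party `cryptography` package is assumed.
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()  # in practice, held in a key vault or HSM
REMOVABLE_ROOTS = ("/media/", "/mnt/", "E:\\")  # hypothetical removable mounts

def store_record(path: str, plaintext: bytes) -> None:
    """Write the record encrypted; plaintext never touches disk."""
    with open(path, "wb") as f:
        f.write(Fernet(KEY).encrypt(plaintext))

def export_record(src: str, dest: str) -> None:
    """Decrypt only for approved destinations; block removable media."""
    if dest.startswith(REMOVABLE_ROOTS):
        raise PermissionError(f"policy violation: will not copy data to {dest}")
    with open(src, "rb") as f:
        decrypted = Fernet(KEY).decrypt(f.read())
    with open(dest, "wb") as f:
        f.write(decrypted)
```

A production DLP or device-control product would enforce this at the operating system or driver level rather than in application code, but the policy logic is the same: decryption is tied to an approved destination, not to possession of the file.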

Need a little more verify to go with that trust

One notable aspect of the widely reported launch of a Security Metrics Taskforce, charged with coming up with new, outcome-based standards for measuring the effectiveness of federal agency information security efforts, is a statement by federal CIO Vivek Kundra, Navy CIO Robert Carey, and Justice CIO Vance Hitch that the group would follow a “trust but verify” approach while also fulfilling statutory requirements and driving towards real-time security awareness. This is consistent with the security posture of many federal agencies, particularly on the civilian side, which generally assume that users and organizations can be trusted to do the right thing in following policies and taking on expected responsibilities and obligations. There are many current examples (HIPAA security and privacy rules, FISMA requirements, etc.) where a major set of requirements has been enacted but no formal monitoring or auditing is in place to make sure everyone is behaving as they should. Voluntary reporting of violations, and requirements with no penalties for failing to comply, can only succeed if the assumption holds that everyone can be trusted to do the right thing.

The new task force would go a long way towards achieving its stated goal of better protecting federal systems if the metrics it proposes include some set of requirements for auditing compliance with the appropriate security controls and management practices. If the recommended metrics do include those aspects, there may even be an opportunity for the government to define penetration testing standards and services that could be implemented across agencies to validate the effective implementation and use of the security mechanisms they select to secure their environments. Focusing on outcome-based metrics that agencies are ultimately left on their own to measure and track, even with real-time situational awareness, will fall short of hardening the federal cybersecurity infrastructure to the point where it is well positioned to defend against the constantly evolving threats it faces.
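
To sketch what “verify” might look like in an outcome-based metric, the hypothetical example below scores a control as effective only when an independent test confirms it, rather than counting agency self-attestation alone. All field names and control IDs are invented for illustration.

```python
# A hypothetical outcome-based metric with "verify" built in: a control
# counts toward the score only when independent testing confirms it, not
# when the agency merely attests to it. All names here are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class ControlResult:
    control_id: str
    self_attested: bool         # agency reports the control is in place
    independently_tested: bool  # an audit or pen test confirmed its behavior

def verified_effectiveness(results: List[ControlResult]) -> float:
    """Fraction of controls confirmed by independent testing, not attestation."""
    if not results:
        return 0.0
    confirmed = sum(1 for r in results if r.self_attested and r.independently_tested)
    return confirmed / len(results)

results = [
    ControlResult("AC-2", True, True),
    ControlResult("CP-4", True, False),  # attested, but never actually exercised
    ControlResult("RA-5", True, True),
]
print(f"verified effectiveness: {verified_effectiveness(results):.0%}")  # -> 67%
```

Under a metric like this, an attested-but-untested contingency plan drags the score down, which is exactly the incentive a “trust but verify” regime needs.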