Congress and HHS continue to disagree on health data breach disclosure rules

The new federal health information data breach disclosure rules went into effect in September, but as HHS works on finalizing another set of HIPAA rule changes (this time covering penalties for HIPAA violations), Mitch Wagner of InformationWeek notes that Congress and the administration are still arguing about the subjective “harm” threshold that HHS inserted into the breach disclosure rules, as seen in a letter from six Congressmen to HHS Secretary Kathleen Sebelius. This provision gives an entity that suffers a data loss or theft the option of not reporting the breach if it believes the breach will cause no harm to the individuals affected.

We’re with Congress on this one. Requirements like accounting of disclosures, which apply both to health information under HIPAA and to government information such as IRS tax records, don’t have these sorts of exceptions. (HIPAA accounting used to be waived for routine disclosures in the course of treatment or normal business operations, but the HITECH Act changed that, and now all disclosures must be recorded.) The biggest problem is the subjectivity of the standard, and the fact that the subjective decision rests with the entity that suffered the breach. Is “harm” intended to mean actual financial harm? Identity theft? Embarrassment? Nothing in the rules provides any guidance. Had these rules been in place earlier, the public might never have heard about the UCLA Medical Center staff members who viewed Britney Spears’s medical records: they appear to have been driven by celebrity curiosity rather than any intent to use the information they saw, so did the snooping cause “harm” to Spears or not, particularly if she never learned of it? HHS has acknowledged that it chose to deviate from the wording of the HITECH Act and added the no-harm exception in response to multiple comments it received on the draft version of the breach notification rules. It’s not hard to imagine which organizations submitted those comments, given that the final rule delegates to HIPAA-covered entities and business associates the responsibility for determining whether a loss of health information is significant.

Security quote of the week

Another article focusing on policies and controls to prevent the use of peer-to-peer file sharing technologies, in the wake of last week’s Congressional ethics committee report, contains the best concise statement we’ve seen in a long time of the problem facing information security programs today. Tom Kellermann of Core Security Technologies is quoted in the NextGov article as saying: “Policy compliance in the absence of a dynamic audit is impossible, [and any] assumption that only insiders can violate policies” is false.

A recurring theme in this space is that organizations too often write and communicate well-meaning, appropriate security policies, but then assume the policies will be followed without implementing any means of enforcement. This problem applies equally to government agencies and private sector organizations, and in some cases it even follows from the sort of risk-based security management approach organizations should be practicing: an organization may decide the risk of policy violations is acceptable and spend its security resources elsewhere. If, however, an organization chooses to leave the risk of policy violations unmitigated, it doesn’t have much credibility when it expresses shock that an incident occurred contrary to policy.
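To make Kellermann’s point concrete, here is a minimal sketch of what a “dynamic audit” might look like on a single host: a scheduled check that compares running processes against a list of banned P2P clients, rather than assuming the written policy is being followed. Everything here is an illustrative assumption, not any agency’s actual tooling; the sketch relies on the third-party psutil library and a hypothetical list of banned process names.

    # Minimal sketch of a "dynamic audit" for a no-P2P policy.
    # Assumptions (illustrative, not any agency's actual tooling):
    #   - the third-party psutil library is available
    #   - violations are flagged by matching running process names
    #     against a hypothetical list of banned P2P clients
    import psutil

    BANNED_P2P_CLIENTS = {"limewire", "bearshare", "utorrent", "emule", "frostwire"}

    def find_policy_violations():
        """Return (pid, name) pairs for processes matching banned P2P clients."""
        violations = []
        for proc in psutil.process_iter(attrs=["pid", "name"]):
            name = (proc.info["name"] or "").lower()
            if any(banned in name for banned in BANNED_P2P_CLIENTS):
                violations.append((proc.info["pid"], proc.info["name"]))
        return violations

    if __name__ == "__main__":
        hits = find_policy_violations()
        for pid, name in hits:
            print(f"POLICY VIOLATION: P2P client '{name}' running (pid {pid})")
        if not hits:
            print("No banned P2P clients detected on this host.")

A real enforcement program would run checks like this centrally and on a schedule, and would watch network traffic as well as processes, but even this much moves an organization from assuming compliance to verifying it.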

Congressional breach: balancing security with convenience

Whether or not you believe, as some pundits appear to, that the call for an inquiry into cybersecurity practices in the House of Representatives after the details of an ethics committee inquiry were disclosed is a smoke screen designed to divert attention away from the behavior under investigation, the situation provides a useful illustration of what can happen when users’ desire for convenience trumps security controls. According to numerous reports, the inquiry information was inadvertently disclosed by a staffer who put sensitive information on a personal computer and then exposed the contents of that computer by running peer-to-peer file sharing software. As you might expect, copying official files to personal computers violates existing security policy, and while there are presumably no policies governing whether employees install and use P2P software on their own computers, the federal government has long recognized the particular risk posed by P2P technology. Indeed, the FISMA report that agencies fill out and submit to OMB includes questions specifically about P2P, both about banning its use within agencies and about making sure that security awareness training addresses P2P file sharing.

The general scenario is reminiscent of the aftermath of the well-publicized laptop theft from the home of a Department of Veterans Affairs employee who, while not using a personal computer, had placed VA records containing personally identifiable information on his laptop to work on at home, in direct violation of VA security policy. In both cases it seems unlikely that the government employees meant any harm; they were simply seeking to extend their workdays by taking work home with them. This tension, between the constraints security imposes on business practices and the demands of information economy workers to have access to their work whenever and from wherever they want it, is something security managers deal with every day.

Every organization must find the right balance among appropriate security measures, security policies, and the mechanisms put in place to enforce those policies when voluntary compliance is ineffective. In a few key ways the legislative branch is especially susceptible to erring on the side of employee convenience at the expense of security. While the houses of Congress are sometimes considered cohesive organizational entities, the reality is that just about every member of Congress and every committee runs its own information technology operation, and members in particular need to conduct business not just in Washington, DC but also from offices in their home states and districts. The result is essentially a wide area network with at least 535 remote locations, from any of which elected officials and their staffs need to be able to conduct business just as if they were on Capitol Hill. This geographical distribution of computer system users, combined with office personnel who vary widely in security awareness and technical savvy, produces a bias in favor of facilitating work at remote locations (including local storage of sensitive information) rather than imposing security-driven constraints on business operations. The technical means to help prevent a recurrence of events like this latest disclosure are readily available; what must change first is the organizational bias toward letting workers, however well intentioned, put sensitive information assets at risk in the name of convenience or efficiency.

New CyberScope is another step in the right direction on federal security

This month the federal government launched a new online FISMA reporting application, CyberScope, based on the Justice Department’s Cyber Security Assessment and Management (CSAM) system, which was already offering FISMA reporting services to other agencies through the Information Systems Security Line of Business initiative. As noted in a recent interview with federal CIO Vivek Kundra, the initial intent of CyberScope is to replace the heavy reliance on electronic documents submitted as email attachments with a centrally managed, access-controlled repository. Kundra has also said that he (along with Sen. Tom Carper and many others in Congress) would like to move agency information security management away from emphasizing compliance and toward continuous monitoring and situational awareness. With any luck the use of online reporting will evolve to make the FISMA reporting process more automated and less onerous for agencies, while the content and emphasis of the FISMA reporting requirements continue to be revised and, hopefully, improved. As long as agencies are still reporting the same information under the same FISMA requirements, a better mechanism for submitting that information won’t do anything to address FISMA’s shortcomings, particularly its failure to require ongoing assessment of the effectiveness of the security controls federal agencies implement.
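To illustrate the difference between point-in-time compliance reporting and the continuous monitoring Kundra and Carper advocate, here is a minimal sketch. The control identifiers come from NIST SP 800-53, but the check logic, the mapping of checks to controls, and the output format are all our own assumptions, not CyberScope’s actual data feed.

    # Sketch contrasting annual, document-based FISMA reporting with
    # automated, continuously refreshed control-status data. The check
    # functions are placeholders; real checks would query endpoint
    # management, patch, and vulnerability scanning tools.
    import json
    from datetime import datetime, timezone

    def check_av_signatures_current():
        return True   # placeholder for a query to endpoint management tooling

    def check_patches_within_sla():
        return False  # placeholder for a query to a patch/vulnerability scanner

    # Control identifiers from NIST SP 800-53; the mapping is illustrative.
    CONTROL_CHECKS = {
        "SI-2 Flaw Remediation": check_patches_within_sla,
        "SI-3 Malicious Code Protection": check_av_signatures_current,
    }

    def collect_control_status():
        """Run each automated check and timestamp the results."""
        return {
            "collected_at": datetime.now(timezone.utc).isoformat(),
            "controls": {name: check() for name, check in CONTROL_CHECKS.items()},
        }

    if __name__ == "__main__":
        # Under continuous monitoring this runs hourly or daily, feeding
        # a dashboard, instead of being assembled by hand once a year.
        print(json.dumps(collect_control_status(), indent=2))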

Over the past couple of years, NIST has made a renewed push to get federal agencies to apply consistent risk management practices to their information security management decisions. This is a worthwhile goal, but as the authority designated to establish federal agency security standards, NIST itself frustrates efforts to manage security in a risk-based manner by requiring extensive security controls for information systems based not on the specific risk profile of each system, but on a broad low-moderate-high security categorization. Requiring the same set of controls for every system categorized as “moderate,” for instance, suggests that the risks associated with all “moderate” systems are the same. That assumption is false, and it violates one of the fundamental principles of information security management: assets such as systems and data should be protected to a level commensurate with their value, and only for as long as they retain that value. This principle of “adequate protection,” while less simple to implement in practice than it sounds in theory, is nonetheless a sensible approach for organizations trying to allocate their security resources in an effective manner. The goal of system-level risk management demands an approach that differs from the “one size fits most” control requirements in current federal guidance.
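For readers unfamiliar with the categorization step, it works roughly like the “high water mark” rule in FIPS 199: a system is rated low, moderate, or high for each of confidentiality, integrity, and availability; the highest rating becomes the system’s overall category; and a fixed control baseline attaches to that category. A small sketch with two hypothetical systems shows how different risk profiles collapse into the same “moderate” bucket:

    # Sketch of FIPS 199-style security categorization ("high water mark").
    # The two example systems are hypothetical; the point is that quite
    # different risk profiles land in the same category and so, under a
    # baseline-driven approach, receive the same control set.
    LEVELS = {"low": 1, "moderate": 2, "high": 3}

    def categorize(confidentiality, integrity, availability):
        """Overall category is the highest of the three impact ratings."""
        return max((confidentiality, integrity, availability),
                   key=lambda level: LEVELS[level])

    public_website = categorize("low", "moderate", "moderate")
    case_file_system = categorize("moderate", "moderate", "low")

    print(public_website)    # moderate
    print(case_file_system)  # moderate -- same baseline controls, despite
                             # very different data sensitivity and threats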

Is de-identification of personal records possible?

Last month Harvard Magazine ran a fantastic article on privacy in the current era, focusing in particular on the work of researcher Latanya Sweeney, who has demonstrated a somewhat alarming ability to take personal data that has been de-identified in accordance with current technical standards and “re-identify” it using publicly available data sources. Then last week the New York Times reported on two computer scientists at UT-Austin who had great success identifying individuals whose de-identified movie rental records Netflix had released as part of a competition to improve the rental-by-mail firm’s automated recommendation software. Netflix went so far as to deny that anyone in the data it provided could be positively identified, citing measures the company had taken to alter the data, and compared its de-identification measures to the standards for anonymizing personal health information.
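The mechanics behind Sweeney’s results are worth spelling out: records stripped of names can often be re-linked by joining them against a public, identified data set (such as a voter registration list) on shared “quasi-identifiers” like ZIP code, birth date, and sex; Sweeney has estimated that those three fields alone uniquely identify a large majority of Americans. Here is a toy sketch of the linkage attack, with fabricated data rather than her actual method or sources:

    # Toy sketch of a linkage (re-identification) attack. All data is
    # fabricated; the technique is to join a "de-identified" data set
    # against a public, identified one on shared quasi-identifiers.
    deidentified_records = [
        {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
        {"zip": "73301", "dob": "1980-02-14", "sex": "M", "diagnosis": "asthma"},
    ]

    public_records = [  # e.g., a voter registration list
        {"name": "Jane Roe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
        {"name": "John Doe", "zip": "73301", "dob": "1980-02-14", "sex": "M"},
    ]

    QUASI_IDENTIFIERS = ("zip", "dob", "sex")

    def reidentify(anonymous, identified):
        """Match anonymous records to named ones via shared quasi-identifiers."""
        index = {tuple(rec[k] for k in QUASI_IDENTIFIERS): rec["name"]
                 for rec in identified}
        matches = []
        for rec in anonymous:
            key = tuple(rec[k] for k in QUASI_IDENTIFIERS)
            if key in index:
                matches.append((index[key], rec["diagnosis"]))
        return matches

    for name, diagnosis in reidentify(deidentified_records, public_records):
        print(f"{name} -> {diagnosis}")

The defense-side implication is that de-identification has to consider not just what is removed from a record, but what other data sets an adversary could plausibly join against what remains.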

While it may be a bit of a leap to extrapolate the Texas researchers’ results to the health information domain, privacy advocates appear to have reason for concern. De-identified health record information is made available to industry, government, and research organizations with great frequency, and many governing authorities seem not to understand just how feasible it is to correlate these “anonymous” records with other available sets of personal information; together, these factors appear to be imparting a false sense of security around de-identification in general. As more attention focuses on this area of research, it may well turn out that current de-identification standards simply cannot provide the sort of privacy protection they are intended to deliver.