Whether or not you believe, as some pundits appear to, that the call for an inquiry into House of Representatives cybersecurity practices (made after details of an ethics committee inquiry were disclosed) is a smoke screen designed to divert attention from the behavior under investigation, the situation usefully illustrates what can happen when users' desire for convenience trumps security controls. According to numerous reports, the inquiry information was inadvertently disclosed by a staffer who put sensitive information on a personal computer and then exposed the contents of that computer by running peer-to-peer (P2P) file-sharing software. As you might expect, copying official files to personal computers violates existing security policy. And while there are presumably no policies governing whether employees install and use P2P software on their personal computers, the federal government has long recognized the particular risk posed by P2P technology: the FISMA report that agencies fill in and submit to OMB includes questions specifically about P2P, both about banning its use within agencies and about ensuring that security awareness training addresses P2P file sharing.
The general scenario is reminiscent of the aftermath of the well-publicized theft of a laptop from the home of a Department of Veterans Affairs employee who, while not using a personal computer, had placed VA records containing personally identifiable information on his laptop to work on at home, in direct violation of VA security policy. In neither case does it seem likely that the government employees meant any harm; they were seeking only to extend their government workdays by taking work home with them. This tension between the constraints security imposes on business practices and the demands of information-economy workers for access to their work whenever and from wherever they want it is something security managers have to deal with every day.
Every organization must find the right balance among appropriate security measures, security policies, and the mechanisms put in place to enforce those policies when voluntary compliance is ineffective. In a few key ways the legislative branch is especially susceptible to erring on the side of employee convenience at the expense of security. While the houses of Congress are sometimes considered cohesive organizational entities, the reality is that just about every member of Congress and every committee has its own information technology operation, and members in particular need to conduct business not just in Washington, DC but also from office locations in their home states and districts. The result is essentially a wide area network with at least 535 remote locations, from all of which elected officials and their staffs must be able to conduct business just as if they were on Capitol Hill. The geographical distribution of computer system users, combined with office personnel who vary widely in security awareness and technical savvy, produces a bias in favor of facilitating work at remote locations (including local storage of sensitive information) rather than imposing security-driven constraints on business operations. The technical means to help prevent a recurrence of events such as this latest disclosure are readily available, but what must change first is the organizational bias toward letting workers, however well intentioned, take actions in the name of convenience or efficiency that put sensitive information assets at risk.
This month the federal government launched CyberScope, a new online FISMA reporting application based on the Justice Department's Cyber Security Assessment and Management (CSAM) system, which was already offering FISMA reporting services to other agencies through the Information Systems Security Line of Business initiative. As federal CIO Vivek Kundra noted in a recent interview, the initial intent of CyberScope is to replace the heavy reliance on electronic documents submitted as email attachments with a centrally managed, access-controlled repository. Kundra has also said that he (along with Sen. Tom Carper and many others in Congress) would like to help move agency information security management away from emphasizing compliance and toward continuous monitoring and situational awareness. With any luck, online reporting will evolve to make the FISMA reporting process more automated and less onerous for agencies, while the content and emphasis of the reporting requirements continue to be revised and, one hopes, improved. But as long as agencies are still reporting the same information under FISMA requirements, a better reporting mechanism won't do anything to address FISMA's shortcomings, particularly its failure to require ongoing assessment of the effectiveness of the security controls federal agencies implement.
Over the past couple of years, NIST has made a renewed push to get federal agencies to apply consistent risk management practices to their information security management decisions. This is a worthwhile goal, but as the authority designated to establish federal agency security standards, NIST itself frustrates efforts to manage security in a risk-based manner by requiring extensive security controls for information systems based not on the specific risk profile of each system, but on a broad low-moderate-high security categorization. Requiring the same set of controls for all systems categorized as “moderate,” for instance, suggests that the risks associated with all “moderate” systems are the same. This false assumption violates one of the fundamental principles of information security management: assets such as systems and data should be protected to a level commensurate with their value, and only for as long as they retain that value. This principle of “adequate protection,” while harder to implement in practice than it sounds in theory, is nonetheless a sensible approach for organizations trying to allocate their security resources effectively. The goal of following system-level risk management practices demands an approach that differs from the “one size fits most” control requirements in current federal guidance.
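The difference between category-driven and risk-driven control selection can be sketched in a few lines of code. The following is a simplified illustration, not a faithful rendering of the NIST baselines: the control identifiers (AC-17 for remote access, SC-28 for protection of information at rest, and so on) come from SP 800-53, but the baseline contents and risk labels here are invented for the example.

```python
# Simplified sketch: category-driven vs. risk-driven control selection.
# Baseline contents and risk labels are illustrative, not the real SP 800-53 sets.

BASELINES = {
    "low":      {"AC-2", "AU-2", "IA-2"},
    "moderate": {"AC-2", "AU-2", "IA-2", "AC-17", "SC-8"},
    "high":     {"AC-2", "AU-2", "IA-2", "AC-17", "SC-8", "SC-28"},
}

def controls_by_category(category):
    """Current practice: the category alone determines the control set,
    so every 'moderate' system gets exactly the same controls."""
    return set(BASELINES[category])

def controls_by_risk(category, risks):
    """Risk-based alternative: start from the baseline, then tailor
    the control set to the system's actual risk profile."""
    selected = set(BASELINES[category])
    if "remote_access" not in risks:
        selected.discard("AC-17")  # no remote access: control adds no value here
    if "sensitive_data_at_rest" in risks:
        selected.add("SC-28")      # sensitive stored data: protect it at rest
    return selected

# Two "moderate" systems with different risk profiles end up with
# different control sets, unlike under the category-only approach:
print(controls_by_risk("moderate", {"sensitive_data_at_rest"}))
print(controls_by_risk("moderate", {"remote_access"}))
```

The point of the sketch is only that tailoring makes the output a function of the system's risks, not just its label, which is what "adequate protection" asks for.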
Last month Harvard Magazine ran a fantastic article on privacy in the current era, focusing in particular on the work of researcher Latanya Sweeney, who has demonstrated a somewhat alarming ability to take personal data that has been de-identified in accordance with current technical standards and “re-identify” it using publicly available data sources. Then last week the New York Times reported on two computer scientists at UT-Austin who had great success identifying individuals whose de-identified movie rental records had been provided by Netflix as part of a competition to improve the video rental-by-mail firm’s automated recommendation software. Netflix went so far as to deny that it was possible to positively identify anyone in the data it provided, due to measures the company had taken to alter the data, and compared the de-identification measures it used to standards for anonymizing personal health information.
While it may be a bit of a leap to extrapolate the Texas researchers’ results to the health information domain, privacy advocates appear to have reason for concern. The frequency with which de-identified health record information is made available to industry, government, and research organizations, coupled with what seems to be a failure among many governing authorities to understand just how feasible it is to correlate these anonymous records with other available personal data sets, seems to be imparting a false sense of security around de-identification in general. As more attention is focused on this area of research, it may well turn out that current de-identification standards simply cannot provide the sort of privacy protection they are intended to deliver.
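The kind of correlation described above is often called a linkage attack: records whose names have been removed are joined to a public, named dataset on the attributes both share (Sweeney's well-known demonstrations used combinations such as ZIP code, birth date, and sex). The toy example below shows the mechanics with entirely invented data; real attacks work the same way, just at scale.

```python
# Toy linkage attack: join "de-identified" records to a public, named
# dataset on shared quasi-identifiers. All records here are invented.

deidentified_records = [
    # names removed, but quasi-identifiers retained
    {"zip": "20007", "birth_year": 1954, "sex": "F", "diagnosis": "asthma"},
    {"zip": "20007", "birth_year": 1971, "sex": "M", "diagnosis": "diabetes"},
]

public_roster = [
    # e.g. a voter roll, which pairs names with the same attributes
    {"name": "A. Smith", "zip": "20007", "birth_year": 1954, "sex": "F"},
    {"name": "B. Jones", "zip": "20008", "birth_year": 1971, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, public_rows):
    """Return (name, diagnosis) pairs for every de-identified row that
    matches exactly one named individual on the quasi-identifiers."""
    matches = []
    for row in anon_rows:
        key = tuple(row[q] for q in QUASI_IDENTIFIERS)
        candidates = [p for p in public_rows
                      if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(candidates) == 1:  # a unique match re-identifies the record
            matches.append((candidates[0]["name"], row["diagnosis"]))
    return matches

print(reidentify(deidentified_records, public_roster))
# The first record matches "A. Smith" uniquely; the second has no match.
```

The record is "anonymous" only as long as its quasi-identifier combination is common in the population; once the combination is unique, an auxiliary dataset turns anonymity back into identity, which is precisely the weakness the Harvard and Texas work exposes.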
The British Ministry of Justice recently published proposed new penalties for knowingly misusing personal data in violation of section 55 of the Data Protection Act. The proposals raise the maximum penalty to include jail time, in addition to the financial penalty already applied under the law. The reasons the U.K. government cites for proposing the stronger penalties include the need for a bigger deterrent to those who obtain personal data illegally, and the desire to increase public confidence in the legitimate collection, storage, and use of personal data. (Bear in mind that with the National Health Service and other major government programs, the U.K. government maintains centralized data on its citizens in a variety of contexts and for a variety of purposes, including health records.)
This overseas activity is paralleled to some extent by recent increases in domestic penalties for HIPAA violations (codified at 42 USC §1320d), as well as the new requirement to formally investigate knowing and willful violations of the law. HIPAA and other U.S. privacy laws are often criticized for imposing insufficient penalties for violations and for lacking proactive enforcement measures (as opposed to the current voluntary reporting of violations). There is little movement in the United States to adopt the sort of strong citizen-centered privacy laws in force in the European Community, but it is nonetheless heartening to see risks to personal data taken seriously among major economic powers.
Coming on the heels of numerous draft pieces of legislation from the U.S. Senate (including bills from Sens. Carper, Snowe, and Rockefeller) was an announcement last week by New York Congresswoman Yvette Clarke that she hopes to begin congressional hearings within the next few months on creating a national law for the protection of private data. Clarke, who chairs the House Homeland Security Subcommittee on Emerging Threats, Cybersecurity and Science and Technology, cites the ever-increasing incidence of identity theft and public demand for action to make both public and private sector organizations more diligent in protecting personal information and in disclosing breaches of that data when they occur.
This idea bears watching, not least for its potential to get past the industry segmentation in private data protection and breach notification rules that currently exists, with the clearest regulations applying to health records and financial data, though not without gaps even in those contexts. If the final version of the HHS rules on disclosure of health data breaches is any guide, however, any new legislation shouldn’t just extend protection to personal data used beyond health and finance; it might also best be crafted to remove some of the subjectivity and compliance discretion that organizations are allowed under existing federal rules, particularly the harm exception to disclosure for organizations suffering breaches of health data.