Stronger provisions coming with the stimulus bill

The American Recovery and Reinvestment Act of 2009, which the president is expected to sign into law on Tuesday, contains the Health Information Technology for Economic and Clinical Health (HITECH) Act as Title XIII. Subtitle D of the HITECH Act effects a number of changes to current privacy and security law intended to strengthen the protection of individually identifiable health information, especially that contained in electronic medical records. This is the first of several posts highlighting notable features of the new legislation.

One big change is the expanded applicability of security and privacy requirements under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA has both a Privacy Rule and a Security Rule, the provisions of which have applied to what HIPAA terms “covered entities” — health plans, health care providers, and health care clearinghouses — and, to a lesser degree, to “business associates” with contractual relationships with covered entities. In essence, HIPAA made covered entities responsible for the compliance of their business associates, with the requirements applicable to business associates spelled out in the agreements (contracts) covered entities make with them. The HITECH Act removes this distinction, so that business associates are now held directly to the same requirements as covered entities. The law also treats as business associates a class of organizations previously considered non-covered entities under HIPAA: those that provide “data transmission of protected health information” to covered entities or their business associates and that require “access on a routine basis to such protected health information.” (Sect. 13408) This provision is meant to extend Privacy and Security Rule requirements to regional health information organizations (RHIOs), health information exchange gateways, and vendors providing personal health records to covered entities’ customers under contract to those covered entities.

Despite this expansion in HIPAA coverage, there are still significant potential players in health information exchange that remain non-covered entities, most notably vendors of personal health records such as Google Health and Microsoft HealthVault. These are data aggregation applications that depend on pulling personal health information from records maintained by insurance plans, health care providers, labs, and other covered entities, so resolving the disparity in required privacy and security protections is necessary to establish sufficient trust for personal health record systems to function as intended. Personal health records are often promoted as the best mechanism for letting individuals control their own health information, including providing or revoking consent to disclose that information for specific purposes. To make this vision feasible, personal health record systems must be able to retrieve individually identifiable health information from a broad range of covered and non-covered entities.

Privacy front and center for health IT

Since the 2004 call for widespread adoption of electronic health records (EHRs) by 2014, one of the primary barriers to implementing health information technology solutions and to achieving interoperability among existing health data sources has been concern over establishing and maintaining the privacy of the information contained in medical records (electronic or otherwise). While there is no shortage of opinions, recommended privacy practices, and regulatory requirements, to date no single set of privacy requirements has been established. In December, then-Secretary of Health and Human Services Michael Leavitt announced the Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information. The framework is structured around a set of eight core privacy principles, similar to and consistent with both the Fair Information Practices first promulgated by the U.S. Department of Health, Education, and Welfare in 1973 and the OECD “Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” one or both of which serve as the foundation for most major governmental privacy policies in the U.S. and the European Community. The release of the new privacy and security framework is intended in part to facilitate adoption of existing and emerging standards governing the exchange of health information among public- and private-sector entities.

A greater catalyst in this arena looks to be the pending economic stimulus plan proposed by the Obama administration. The version of the bill already passed by the House of Representatives includes an objective to computerize all health records within five years, along with billions of dollars in new funding in the form of increased spending on health IT infrastructure and direct incentives for health care providers to adopt new technologies and participate in electronic health information exchanges. This past week the Senate Judiciary Committee held a hearing on “Health IT: Protecting Americans’ Privacy in the Digital Age,” which once again brought privacy concerns to the fore. One likely result of this attention is modification of the privacy and security provisions in the Health Insurance Portability and Accountability Act (HIPAA). More significantly, it appears likely that the definition of “covered entities” under HIPAA will be expanded to include likely health IT intermediaries such as network infrastructure providers that have no direct role in the provision of health care but nonetheless have at least temporary custody of and access to data as it passes between health information exchange participants. It will be interesting to see how this plays out over time, but one notable aspect of the Judiciary Committee hearing was the similar concerns and priorities expressed by each of the individuals testifying, despite the variety of entities they represented (collectively, the software industry, consumer and privacy advocates, state-level information exchanges, and conservative think tanks).

The need for data integrity assertion

There’s a lot of energy these days focused on data interoperability, within and across industries. Generally speaking, interoperability is a laudable and worthwhile goal, but with greater access to data from broader and more diverse sources comes a need for greater attention to establishing and maintaining data integrity. While reading the (highly recommended) TaoSecurity blog, the January 16 post from Richard Bejtlich brought this issue into clear focus: when the recipient of a message relies on the contents of that message to make a decision or take specific action, the importance of the data’s accuracy cannot be overstated.

The problem of executing processes (or making decisions) based on information received over potentially unreliable communication channels has been addressed in the context of preventing “Byzantine” failures. The Byzantine Generals’ Problem, from which this class of fault tolerance gets its name, concerns protecting against bad actors interfering with the transmission of messages whose content is the basis for determining and taking a course of action (in the Byzantine Generals’ Problem, the action is whether to attack an enemy target). There are many technical treatments of this problem scenario, in which a system may crash or otherwise behave unpredictably when the information it receives as input is erroneous. This class of errors can be distinguished from typical syntactical or technical errors because in a Byzantine failure both the format and the method of delivery of the message are valid, but the content of the message is erroneous or invalid in some way. In fact, even in the mathematical solutions to the Byzantine Generals’ Problem, the best outcome that can be achieved is that all the loyal generals take the same action; there is no provision in the problem or its solutions for the case where the original order (i.e., the message contents) is invalid.

Most approaches to mitigating Byzantine failure provide fault tolerance through the use of multiple sources for the same information. In this model, even if some of the messages are unreliable, having sufficient redundant sources allows a system to distinguish valid from forged messages and make a correct decision. Whether or not multiple message sources are available, the addition of technical anti-forgery mechanisms such as digital signatures can also provide the means to validate messages, at least in terms of their origin. All of these approaches focus on ensuring that the content of a message when received is the same as its content when it was sent. However, even when the sender of messages is trusted by the recipient, to date there has not been much attention focused on the integrity of the data itself. This gives rise to any number of scenarios where data is received from a trusted sender with the intent of taking action based on the data, but the data sent is invalid.
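As a deliberately simplified sketch of these two mitigations (redundant sources plus an anti-forgery check), consider the following; the shared HMAC key, the helper names, and the message values are invented purely for illustration:

```python
import hashlib
import hmac
from collections import Counter

KEY = b"shared-demo-key"  # assumption: sender and recipient share this key

def sign(message: bytes) -> tuple[bytes, str]:
    """Attach an HMAC-SHA256 tag so the recipient can verify the origin."""
    return message, hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Reject messages whose tag does not match (forged or altered in transit)."""
    expected = hmac.new(KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def majority_decision(reports: list[tuple[bytes, str]]) -> bytes | None:
    """Vote among authenticated copies; return None if no strict majority exists."""
    authenticated = [msg for msg, tag in reports if verify(msg, tag)]
    if not authenticated:
        return None
    value, count = Counter(authenticated).most_common(1)[0]
    return value if count > len(authenticated) // 2 else None

# Two genuine copies of the order and one forgery: the forgery fails
# verification and the remaining loyal copies agree.
reports = [sign(b"attack"), sign(b"attack"), (b"retreat", "bogus-tag")]
print(majority_decision(reports))  # b'attack'
```

Consistent with the limitation noted above, every check here validates only origin and transmission: if the original order itself is wrong, the vote still succeeds.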

The example of incorrect medication dosages cited in Richard Bejtlich’s recent blog post is an eye-opening illustration of the risk involved here. A more familiar example to many would be the appearance of erroneous information on an individual’s credit report. The major credit reporting agencies receive input from many sources and use the data they receive to produce a composite credit rating. The calculation of a credit score assumes the input data to be accurate; if a creditor reports incorrect data to the credit reporting agency, the credit score calculation will also be incorrect. Unfortunately for consumers, when this sort of error occurs, the burden usually falls on the individual to follow up and try to correct it. The credit reporting agencies take no responsibility for the accuracy of the data used to produce their reports; that responsibility would more appropriately rest with the companies serving as the sources of the data. What would help from a technical standpoint is some way to assert the accuracy or validity of data when it is provided. This would give the receiving entity a greater degree of confidence that calculations based on multiple inputs were in fact accurate, and would also reduce the risk of choosing the wrong course of action when using such a calculation to support decision making.
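A toy calculation makes the garbage-in, garbage-out problem plain (the figures and source names below are invented, and real credit scoring is of course far more involved):

```python
# A composite score aggregated from several reporting sources. The
# aggregation logic is correct, but it has no way to tell a valid report
# from an erroneous one, so a single bad input silently corrupts the output.
reports = {"creditor_a": 720, "creditor_b": 710, "creditor_c": 715}
print(round(sum(reports.values()) / len(reports)))  # 715

reports["creditor_c"] = 350  # one source reports erroneous data
print(round(sum(reports.values()) / len(reports)))  # 593
```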

It would seem there is a small but growing awareness of this particular data integrity problem. It’s a significant risk even when considering mistakes or inadvertent corruption of data, but adding the dimension of intentional malicious modification of data – which may then be used as the basis for decisions – raises the threat to a critical level. Conventional approaches to data integrity protection – e.g., access controls, encryption, host-based IDS – could in theory be complemented by regularly executed processes to validate data held by a steward, and by applying a tagging or scoring scheme to data when it is transmitted to provide an integrity assertion to receiving entities. The concept of a consistent, usable, enforceable integrity assertion mechanism is one of several areas we think warrants further investigation and research.
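To make the idea of an integrity assertion slightly more concrete, here is a minimal sketch, assuming a data steward willing to sign a hash of the payload together with a self-assessed validity score; the field names and the HMAC-based signing are our own invention, not an established standard:

```python
import hashlib
import hmac
import json
import time

KEY = b"shared-demo-key"  # assumption: steward and recipient share this key

def make_assertion(payload: bytes, validity_score: float) -> dict:
    """Build a signed integrity assertion for outbound data (fields are illustrative)."""
    claims = {
        "sha256": hashlib.sha256(payload).hexdigest(),
        "validity_score": validity_score,  # e.g., result of the steward's own validation checks
        "asserted_at": int(time.time()),
    }
    body = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return claims

def check_assertion(payload: bytes, assertion: dict) -> bool:
    """Verify the signature, then confirm the payload matches the asserted hash."""
    claims = {k: v for k, v in assertion.items() if k != "signature"}
    body = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["signature"]):
        return False
    return hashlib.sha256(payload).hexdigest() == assertion["sha256"]

data = b'{"patient": "12345", "dosage_mg": 5}'
tag = make_assertion(data, validity_score=0.98)
print(check_assertion(data, tag))                                      # True
print(check_assertion(b'{"patient": "12345", "dosage_mg": 50}', tag))  # False: payload altered
```

A real mechanism would need an agreed vocabulary for the score and a key distribution model, but even this much would let a recipient weigh inputs by how strongly their stewards vouch for them.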

SecurityArchitecture.com Site Launch

A New Year’s Day announcement: the SecurityArchitecture.com site is now live. The website was created to provide some introductory education and helpful resources to those working in information security or simply interested in learning more about the field. The initial focus for the material on the site is education: in forthcoming posts we’ll consider some of the different approaches to security-related training in both continuing education and formal institutional settings. Key challenges include staying current on the different functional, technical, and managerial aspects of information assurance; establishing and maintaining security awareness at organizational and individual levels; and finding effective ways to provide access to and deliver what is often very technical subject matter.