Accounting of disclosures to become more comprehensive

One of the requirements under the HIPAA Privacy Rule is that covered entities maintain an “accounting of disclosures” of protected health information, in part so that an individual may request a record of who accessed their health information, at what time, and for what purpose. As codified at 45 CFR §164.528, the accounting of disclosures rule specifies a time period of six years, so covered entities are obligated to maintain records of disclosures for at least that long. The original requirement contains a significant set of exceptions: disclosures made for the purposes of treatment, payment, or health care operations do not have to be recorded or made available in the accounting. This greatly reduces the administrative burden on covered entities, as most “routine” uses of individually identifiable health information are not subject to the accounting rule. The language in the HITECH Act on accounting of protected health information disclosures removes these exceptions, essentially requiring that an accounting provided to an individual cover all disclosures (there is still an exception for the requests an individual makes to see his or her own information). The revised accounting of disclosures rule thus gives individuals the right to receive a three-year history of all disclosures of their information made through an electronic health record.

For EHR vendors, this simplification of the accounting of disclosures rule may actually make it easier to produce the disclosure history, because a comprehensive transaction log showing authorized (and unauthorized) access to records can produce the accounting required. A related provision in the HITECH Act directs the HHS Secretary to promulgate regulations within six months on what specific information should be collected about each disclosure. There is also a fairly protracted period before the rule takes effect, especially for entities that already had electronic health records as of January 1 of this year. The accounting of disclosures rule applies to those covered entities as of January 1, 2014, while for entities acquiring electronic health records after January 1, 2009, the rule goes into effect as of January 1, 2011, or whenever the entity acquires an EHR, whichever is later.
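
To make the idea concrete, here is a minimal sketch in Python of how a disclosure log might be queried to produce the three-year accounting. The record fields, class names, and function names are invented for illustration; an actual EHR would draw on its own audit trail and on whatever data elements the forthcoming HHS regulations specify.

# A minimal sketch of deriving an accounting of disclosures from a transaction log.
# The record structure and the three-year window follow the discussion above; the
# names here are hypothetical, not part of any standard or product.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DisclosureRecord:
    patient_id: str        # whose protected health information was disclosed
    disclosed_to: str      # person or organization receiving the data
    purpose: str           # e.g., treatment, payment, operations, law enforcement
    disclosed_at: datetime

def accounting_of_disclosures(log, patient_id, as_of=None, years=3):
    """Return all logged disclosures for a patient in the trailing window."""
    as_of = as_of or datetime.utcnow()
    cutoff = as_of - timedelta(days=365 * years)
    return [r for r in log
            if r.patient_id == patient_id and r.disclosed_at >= cutoff]

# Example: a routine payment-related disclosure that, under the revised rule,
# would now have to appear in the accounting provided to the individual.
log = [DisclosureRecord("patient-123", "Acme Health Plan", "payment",
                        datetime(2009, 2, 1))]
print(accounting_of_disclosures(log, "patient-123", as_of=datetime(2010, 1, 1)))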

New Federal notification requirement for breaches of protected health information

One of the more widely anticipated provisions of the HITECH Act is a new provision requiring many health information exchange participants (specifically, covered entities and business associates under HIPAA) to provide notification to individuals in the event of unauthorized disclosure of “unsecured” protected health information. Although it only applies to health data, this would seem to be the first nationwide regulation for breach notifications, so that alone is noteworthy.

The breach notification law comes into play for any “unauthorized acquisition, access, use, or disclosure of protected health information which compromises the security or privacy of such information, except where an unauthorized person to whom such information is disclosed would not reasonably have been able to retain such information.” The key element here is the breach of “unsecured protected health information” — for practical purposes, this means the notification rule only applies if the data is not encrypted. The law doesn’t use the term “encryption.” Instead, it says more generically that “unsecured” information is not secured through the use of a technology or method for rendering information “unusable, unreadable, or indecipherable.” The law gives the HHS Secretary 60 days to issue guidance specifying the technologies or methodologies that should be used to provide this protection.
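
Since the statute deliberately avoids naming a technology, the sketch below shows just one common way to render stored data “unusable, unreadable, or indecipherable”: authenticated encryption with AES-GCM, using the open-source cryptography package for Python. Whether this approach would satisfy the forthcoming HHS guidance is an open question, and key management, which is the hard part in practice, is left out entirely.

# A sketch of rendering stored PHI unreadable without the key, using AES-GCM
# from the third-party "cryptography" package (pip install cryptography).
# This is one plausible approach, not necessarily the method HHS guidance
# will endorse; key management is omitted entirely.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # must be stored separately from the data
aesgcm = AESGCM(key)

record = b'{"patient_id": "patient-123", "diagnosis": "..."}'
nonce = os.urandom(12)                      # unique per encryption operation
ciphertext = aesgcm.encrypt(nonce, record, None)

# Without the key, the ciphertext is indecipherable; with it, the record is recovered.
assert aesgcm.decrypt(nonce, ciphertext, None) == record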

Notification must take place “without unreasonable delay” and in any case within 60 days, by written notification, by public posting where contact information is unavailable, or by telephone where urgency exists. The 60-day timeline stands in stark contrast to existing computer intrusion notification rules for federal agencies, which require notice to the United States Computer Emergency Readiness Team (US-CERT) within one hour of the discovery of the intrusion. Notice must also be provided to major media outlets and to the HHS Secretary for breaches involving 500 or more individuals; these breaches are to be posted on the HHS website. The law gives the HHS Secretary 180 days to publish final regulations on data breach notifications.

Breach notification requirements are also specified for vendors of personal health records, even though these remain non-covered entities under HIPAA. When a breach occurs, notice must be provided to the affected individuals (limited to U.S. citizens and permanent residents) and to the Federal Trade Commission, which in turn notifies the HHS Secretary. Violation of this rule is considered an unfair and deceptive act or practice, and as such would be subject to action by the FTC.

What is interesting from a security practices standpoint is that this data breach notification requirement — by exempting secured data from the regulations — all but requires the use of encryption at rest for health records. A great deal of attention has been given to protecting health information in transit (secure communication channels, digital signatures, and the like) for health information exchange services, but health IT standards efforts have stopped short of imposing controls that would have to be implemented within the boundaries of a participating organization. It will be interesting to see if the health IT standards bodies re-authorized in the HITECH Act will expand their scope into the technical environments of the entities participating in health information exchange.

Stronger provisions coming with the stimulus bill

The American Recovery and Reinvestment Act of 2009, which the president is expected to sign into law on Tuesday, contains within it the Health Information Technology for Economic and Clinical Health (HITECH) Act, as Title XIII. Subtitle D of the HITECH Act effects a number of changes to current privacy and security law intended to strengthen the protection of individually identifiable health information, especially that contained in electronic medical records. This is the first of several posts highlighting notable features of the new legislation.

One big change is the expansion of applicability of security and privacy requirements under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). HIPAA has both a Privacy Rule and a Security Rule, the provisions of which have applied to what HIPAA terms “covered entities” — health plans, health care providers, and health care clearinghouses — and to a lesser degree to “business associates” with contractual relationships with covered entities. In essence, HIPAA said that covered entities are responsible for the compliance of their business associates, and the requirements with which business associates should comply need to be spelled out in the agreements (contracts) covered entities make with business associates. The HITECH Act removes this distinction, so that business associates are now held to the same requirements as covered entities. The law also now considers as business associates a class of organizations that were previously considered non-covered entities under HIPAA: those that provide “data transmission of protected health information” to covered entities or their business associates and that require “access on a routine basis to such protected health information.” (Sect. 13408) This provision is meant to extend Privacy and Security Rule requirements to regional health information organizations (RHIOs), health information exchange gateways, or vendors providing personal health records to covered entities’ customers under contract to the covered entities.

Despite this expansion in HIPAA coverage, there are still significant potential players in health information exchange that remain non-covered entities, most notably vendors of personal health records like Google Health and Microsoft HealthVault. These are data aggregation applications that depend on pulling personal health information from records maintained by insurance plans, health providers, labs, and other covered entities, so resolving the disparity in required privacy and security protections is necessary to establish sufficient trust for personal health record systems to function as intended. Personal health records are often promoted as the best mechanism for allowing individuals to control their own health information, including providing or revoking consent to disclose their information for specific purposes. To make this vision feasible, it is essential that personal health record systems be able to retrieve individually identifiable health information from a broad range of covered and non-covered entities.

Privacy front and center for health IT

Since the 2004 call for widespread adoption of electronic health records (EHRs) by 2014, one of the primary barriers to implementing health information technology solutions and to achieving interoperability of existing health data sources has been concern over establishing and maintaining the privacy of the information contained in medical records (electronic or otherwise). While there is no shortage of opinions, recommended privacy practices, and regulatory requirements, to date no single set of privacy requirements has been established. In December, then-Secretary of Health and Human Services Michael Leavitt announced the Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information. The framework is structured around a set of eight core privacy principles, similar to and consistent with both the Fair Information Practices first promulgated by the U.S. Department of Health, Education, and Welfare in 1973 and the OECD “Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” one or both of which serve as the foundation for most major governmental privacy policies in the U.S. and the European Community. The release of the new privacy and security framework is intended in part to facilitate adoption of existing and emerging standards governing the exchange of health information among public and private sector entities.

A greater catalyst in this arena looks to be the pending economic stimulus plan proposed by the Obama administration. The version of the bill already passed by the House of Representatives includes an objective to computerize all health records within five years, along with billions of dollars in new funding in the form of increased spending on health IT infrastructure and direct incentives for healthcare providers to adopt new technologies and participate in electronic health information exchanges. This past week the Senate Judiciary Committee held a hearing on “Health IT: Protecting Americans’ Privacy in the Digital Age,” which once again brought privacy concerns to the fore. One likely result of this attention is the modification of the privacy and security provisions in the Health Insurance Portability and Accountability Act (HIPAA). More significantly, it appears likely that the definition of “covered entities” under HIPAA will be expanded to include likely health IT intermediaries such as network infrastructure providers that have no direct role in the provision of health care but nonetheless have at least temporary custody of and access to data as it passes between health information exchange participants. It will be interesting to see how this plays out over time, but one notable aspect of the Judiciary Committee hearing was the similar concerns and priorities expressed by each of the individuals testifying, despite the different interests they represented (collectively, the software industry, consumer and privacy advocates, state-level information exchanges, and conservative think tanks).

The need for data integrity assertion

There’s a lot of energy these days focused on data interoperability, within and across industries. Generally speaking, interoperability is a laudable and worthwhile goal, but with greater access to data from broader and more diverse sources comes a need for greater attention to establishing and maintaining data integrity. The January 16 post on Richard Bejtlich’s (highly recommended) TaoSecurity blog brought this issue into clear focus: when the recipient of a message relies on the contents of that message to make a decision or take a specific action, the importance of the data’s accuracy cannot be overstated.

The problem of executing processes (or making decisions) based on information received over communication channels that may not be reliable has been addressed in the context of preventing “Byzantine” failures. The Byzantine Generals’ Problem, from which this class of fault tolerance gets its name, is concerned with protecting against bad actors interfering with the transmission of messages whose content is the basis for determining and taking a course of action (in the Byzantine Generals’ Problem, the action is whether to attack an enemy target). There are many technical treatments of this problem scenario, in which a system may crash or otherwise behave unpredictably when the information it receives as input is erroneous. This class of errors can be distinguished from typical syntactic or technical errors because in a Byzantine failure both the format and the method of delivery of the message are valid, but the content of the message is erroneous or invalid in some way. In fact, even in the mathematical solutions to the Byzantine Generals’ Problem, the best outcome that can be achieved is that all the loyal generals take the same action; there is no provision in the problem or its solutions for the case where the original order (i.e., the message contents) is invalid.
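
To make that distinction concrete, here is a contrived Python sketch of a message that passes every syntactic check yet carries content a recipient should never act on. The field names and the plausibility bound are invented purely for illustration.

# A contrived sketch of the distinction drawn above: a message can be perfectly
# well-formed and correctly delivered yet carry erroneous content. The field
# names and the plausibility bound are invented for illustration only.

def is_well_formed(msg):
    """Syntactic check: right fields, right types."""
    return (isinstance(msg.get("drug"), str)
            and isinstance(msg.get("dose_mg"), (int, float)))

def is_plausible(msg, max_dose_mg=1000):
    """Content check: the value itself has to make sense."""
    return 0 < msg["dose_mg"] <= max_dose_mg

order = {"drug": "drug-X", "dose_mg": 50000}   # valid format, dangerous content
print(is_well_formed(order))   # True  -- passes every syntactic/technical check
print(is_plausible(order))     # False -- the Byzantine-style failure lives here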

Most approaches to mitigating Byzantine failure provide fault tolerance through the use of multiple sources for the same information. In this model, even if some of the messages are unreliable, having sufficient redundant sources allows a system to distinguish valid from forged messages and make a correct decision. Whether or not multiple message sources are available, the addition of technical anti-forgery mechanisms such as digital signatures can also provide the means to validate messages, at least in terms of their origin. All of these approaches focus on ensuring that the content of a message when received is the same as the content when it was sent. However, even when the recipient trusts the sender, to date there has not been much attention paid to the integrity of the data itself. This gives rise to any number of scenarios in which data is received from a trusted sender with the intent of taking action based on it, but the data sent is invalid.
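
As a rough illustration of these two mitigations, the sketch below authenticates each message (an HMAC standing in for a digital signature) and then takes a simple majority vote across redundant sources. The source names and keys are invented.

# A minimal sketch of the two mitigations described above: authenticate each
# message (HMAC stands in here for a digital signature) and then take a
# majority vote across redundant sources. Keys and source names are invented.
import hmac, hashlib
from collections import Counter

KEYS = {"source-a": b"key-a", "source-b": b"key-b", "source-c": b"key-c"}

def sign(source, payload):
    return hmac.new(KEYS[source], payload, hashlib.sha256).hexdigest()

def accept(messages):
    """Keep only authentic messages, then return the majority payload (or None)."""
    authentic = [payload for source, payload, tag in messages
                 if hmac.compare_digest(tag, sign(source, payload))]
    if not authentic:
        return None
    payload, votes = Counter(authentic).most_common(1)[0]
    return payload if votes > len(authentic) // 2 else None

messages = [
    ("source-a", b"attack", sign("source-a", b"attack")),
    ("source-b", b"attack", sign("source-b", b"attack")),
    ("source-c", b"retreat", "forged-tag"),   # fails authentication, discarded
]
print(accept(messages))   # b"attack"

Note that this establishes origin and consensus only; it says nothing about whether the data was correct when the trusted sender created it, which is exactly the gap discussed next.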

The example of incorrect medication dosages cited in Richard Bejtlich’s recent blog post is an eye-opening illustration of the risk involved here. A more familiar example to many would be the appearance of erroneous information on an individual’s credit report. The major credit reporting agencies receive input from many sources and use the data they receive to produce a composite credit rating. The calculation of a credit score assumes the input data to be accurate; if a creditor reports incorrect data to the credit reporting agency, the credit score calculation will also be incorrect. Unfortunately for consumers, when this sort of error occurs, the burden usually falls on the individual to follow up and try to correct it. The credit reporting agencies take no responsibility for the accuracy of the data used to produce their reports, so it would help greatly if the companies serving as the sources of that data did. What would help from a technical standpoint is some way to assert the accuracy or validity of data when it is provided. This would give the receiving entity a greater degree of confidence that calculations based on multiple inputs were in fact accurate, and would also reduce the risk of choosing the wrong course of action when using such a calculation to support decision making.

It would seem there is a small but growing awareness of this particular data integrity problem. It’s a significant risk even when considering mistakes or inadvertent corruption of data, but adding the dimension of intentional malicious modification of data – which may then be used as the basis for decisions – raises the threat to a critical level. Conventional approaches to data integrity protection – e.g., access controls, encryption, host-based IDS – could in theory be complemented by regularly executed processes to validate data held by a steward, and by a tagging or scoring scheme applied to data when it is transmitted to provide an integrity assertion to receiving entities. The concept of a consistent, usable, enforceable integrity assertion mechanism is one of several areas we think warrant further investigation and research.
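
As a thought experiment, an integrity assertion traveling with the data might look something like the following sketch. The fields, the confidence score, and the keyed hash standing in for a signature are speculative rather than drawn from any existing standard.

# A speculative sketch of what an integrity assertion attached to outbound data
# might contain. The field names, the 0.0-1.0 score, and the HMAC "signature"
# are illustrative only; no such scheme is standardized.
import hmac, hashlib, json
from datetime import datetime

STEWARD_KEY = b"steward-signing-key"   # placeholder for real key material

def make_assertion(payload: bytes, steward: str, validation_method: str, score: float):
    body = {
        "steward": steward,                       # who vouches for the data
        "validated_with": validation_method,      # how the steward checked it
        "integrity_score": score,                 # steward's confidence, 0.0-1.0
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "asserted_at": datetime.utcnow().isoformat(),
    }
    serialized = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(STEWARD_KEY, serialized, hashlib.sha256).hexdigest()
    return body

record = b'{"account": "1234", "balance_reported": 5200}'
print(make_assertion(record, "creditor-42", "monthly-reconciliation", 0.97))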