Rewarding processing speed at the expense of accuracy is a failure of risk management

In the wake of the decision by many leading financial institutions to suspend mortgage foreclosure proceedings due to the discovery of pervasive deficiencies in the way those processes were being carried out, the practices of the banks, law firms, and contractors handling foreclosures have come under increased scrutiny. What has become apparent is that just about everyone involved in mortgage foreclosures — except of course for the homeowners faced with losing their homes — was given incentives to move foreclosures along as expeditiously as possible, without regard to the quality, accuracy, or integrity of the process. The result of this approach, which really should come as no surprise, was the completion of an enormous volume of mortgage foreclosures, with little or no attention paid to validating the information underlying the process or even to adhering to legal requirements. By rewarding the parties involved for processing speed without regard to accuracy, the mortgage lenders got exactly what they paid for — efficiency at the expense of effectiveness.

The virtually single-minded pursuit of transactional efficiency in mortgage foreclosure processing, and what appears to be its error-ridden result, is reminiscent of some of the fundamental deficiencies associated with other industries and business contexts. Among the more obvious examples is the rampant healthcare fraud related to insurance claims processing, especially insurance provided under the government’s Medicare and Medicaid programs. As a result of years of efforts to drive down the cost-per-transaction in claims processing, the Centers for Medicare and Medicaid Services (CMS) and its claims-processing contractors achieved continuous performance gains in efficiently processing large volumes of claims, but did so in a way that provided almost no protection against fraud, as described in detail in License to Steal, Malcolm Sparrow’s authoritative work on the subject. While many of the core problems that existed in the days (prior to 2001) when CMS was known as the Health Care Financing Administration (HCFA) have been mitigated to some degree, the government’s operation still enables fraud, including systematic fraud on a grand scale, as evidenced by the massive fraud scheme disclosed this week by federal authorities.

Any organization in any sector that establishes an operational approach that prioritizes transaction cost efficiency without taking into consideration the effect of such a strategy on quality (one of several possible ways to measure effectiveness) is likely underestimating the risk — and specifically the business impact — associated with committing to transactions that shouldn’t be processed in the first place. In such organizations, to the extent quality assurance is conducted at all, it is done after-the-fact (in claims processing this is called “post-utilization review”), and long experience across many industries has shown that it is far more costly to try to recover money fraudulently or erroneously paid out than it is to prevent the incorrect payment from going out in the first place. Despite this economic reality, many organizations continue to under-invest in quality controls embedded in transaction processing, often because one side-effect of implementing such controls is slowing down the process.
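Since the argument here is ultimately economic, a back-of-the-envelope calculation may help make it concrete. All of the figures below are hypothetical assumptions chosen for illustration, not data from any actual claims-processing operation, but they show how quickly after-the-fact recovery can become more expensive than inline prevention:

```python
# Back-of-the-envelope comparison of inline fraud prevention vs. after-the-fact
# recovery of improper payments. All figures are hypothetical illustrations,
# not data from the article.

claims = 1_000_000              # claims processed per year
fraud_rate = 0.03               # fraction of claims that are fraudulent
avg_payment = 500.0             # average payment per claim, in dollars

# Option A: inline controls screen every claim before payment.
inline_cost_per_claim = 0.25    # added processing cost per claim (slows throughput)
inline_catch_rate = 0.90        # fraction of fraudulent claims blocked pre-payment

# Option B: post-utilization review pursues recovery after payment.
recovery_rate = 0.30            # fraction of fraudulent payouts ever recovered
recovery_cost_per_case = 400.0  # investigation and legal cost per pursued recovery

fraud_claims = claims * fraud_rate

# Option A cost: screening every claim, plus the fraud that still slips through.
cost_inline = (claims * inline_cost_per_claim
               + fraud_claims * (1 - inline_catch_rate) * avg_payment)

# Option B cost: all fraud is paid out, and only a fraction is clawed back,
# each recovery carrying its own pursuit cost.
cost_recovery = (fraud_claims * avg_payment
                 - fraud_claims * recovery_rate * avg_payment
                 + fraud_claims * recovery_rate * recovery_cost_per_case)

print(f"Inline prevention:     ${cost_inline:,.0f}")
print(f"Post-payment recovery: ${cost_recovery:,.0f}")
```

Under these assumed numbers, inline screening costs well under two million dollars a year while the recovery-based approach loses over fourteen million, even before accounting for the fraud that recovery efforts never detect. The per-claim screening cost buys a much larger reduction in losses than post-payment pursuit ever recoups, which is precisely the trade-off that a pure cost-per-transaction metric hides.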

A key difference between the mortgage foreclosure situation and health insurance fraud is that the banks holding the mortgages being foreclosed have a financial interest in the foreclosures being completed as quickly as possible (whether or not the process is conducted with appropriate rigor), so that the properties may be resold and some proportion of the mortgage obligation can be recovered. In most cases of health insurance fraud, it is the insurer (CMS in the case of Medicare or Medicaid fraud) who bears the loss due to fraudulent payments, and of course as with any government entitlement program, that means the cost is borne by U.S. taxpayers. In the mortgage foreclosure context, it remains to be seen if individuals whose homes were foreclosed upon without the appropriate validation of documentation and other details will have any means of redress, but the mortgage lenders are not the ones suffering financial harm. So, while Congressional hearings and other investigations of the lenders are pending, the individual homeowners who suffered the greatest harm will have to wait and see what action is taken against lenders that allowed or even facilitated improper handling of foreclosures, and whether any compensation will be provided to those individuals.

Evaluating technical tools and services as an exercise in trust

People often seek tools and technology services to help protect the security and privacy of information, but when evaluating such tools it can be equally important to consider their source, to determine whether you can have sufficient confidence that a tool will do what it purports to do with respect to security and will not introduce vulnerabilities of its own. This sort of thinking is seen in the recommendation (reported last week in The Washington Post) that AT&T received from the National Security Agency (NSA) to avoid sourcing telecommunications equipment from Chinese manufacturer Huawei, due to concerns that the company might embed capabilities that would enable the equipment to be used for eavesdropping. Chinese companies in general and Huawei in particular have established a successful market presence internationally, including in the U.S., but at least with respect to the prospect of the company's equipment being deployed by AT&T in support of its government infrastructure operations, being an established provider apparently does not translate into being trusted.

On a somewhat smaller scale, some initial excitement in the blogosphere over email address shortener scr.im was quickly tempered by the realization that the online service had flaws in the way it implemented security features like captchas, flaws that left it quite vulnerable to attacks that would compromise the pseudonymity of its users. Reactions to the service, which offers users a way to “share your email in a safe way,” were cited as an example of the need to “trust, but verify” when it comes to technology, including security technology. The underlying message may be appropriate, but invoking the phrase popularized by Ronald Reagan in the context of computing systems results in an overly narrow connotation of the word trust, in this case meaning confidence that a system will perform as expected. As I have argued previously in this space, substituting the word trust where “reliability” or specific functionality is all that can be expected stops far short of the criteria that might actually need to be satisfied to establish the trustworthiness of a system, a service provider, or the parties behind them.

Lots of health data breaches reported to HHS, only trivial ones to FTC

With just over a year having passed since the health data breach notification rules mandated by the Health Information Technology for Economic and Clinical Health (HITECH) Act went into effect, an interesting contrast has emerged between the breaches disclosed to the Department of Health and Human Services (HHS) by HIPAA-covered entities and business associates and those disclosed to the Federal Trade Commission (FTC) by organizations that provide personal health records (PHRs) and associated services, but are not covered by HIPAA. As reported on Monday and evidenced by the complete listing of breaches posted by the FTC, as far as the FTC is aware there have been no major breaches (those involving 500 or more individuals) in the past year. All 13 of the breaches reported to the FTC involved lost or stolen credentials, which presumably could result in an unauthorized party gaining access to a user’s personal health information, but no actual loss of data seems to have been involved. It may or may not be interesting to note that all the breaches reported also came from one company: Microsoft. In contrast, the current count of breaches reported to HHS is 181, all of which involve 500 or more individuals, many of which apparently involve loss or theft of data (or laptops or other paper or electronic record storage devices).

It seems fair to ask whether any substantial conclusions can be drawn from the paucity of breaches reported to the FTC, or from their relative triviality. No one appears to be suggesting that the data protection practices of organizations subject to the FTC’s data breach rule are superior to those of organizations covered under the HHS rules, so why have so few breaches been reported to the FTC? Several possible explanations come to mind, only some of which have anything to do with security or privacy practices:

  • The population of organizations subject to the rule is small. The FTC’s Health Breach Notification Rule, following language in the HITECH Act (§13407), applies specifically to “vendors of personal health records” and third-party service providers that are not covered by HIPAA. The total number of these vendors is very small relative to the number of covered entities and business associates subject instead to HHS’ rules.
  • Breaches of encrypted data do not have to be reported. Following HITECH (§13402), both the HHS and FTC data breach notification rules apply to breaches of unsecured data, meaning data that has not been “rendered unusable, unreadable, or indecipherable” through the use of recommended technologies such as data encryption. It is possible that some PHR vendors who might have suffered relevant incidents had no cause for concern, and no reason to disclose them, because the data in question was encrypted.
  • Not many people use PHRs from non-HIPAA-covered vendors. This is not meant to imply that vendors like Dossia, Google, and Microsoft have so few users of their PHRs that a data loss wouldn’t potentially rise to the level of a major breach, but instead to suggest that there may be more attractive targets for malicious attackers to go after among health care organizations.
  • Technology company employees (may) have better security awareness. Surely a suggestion open to challenge, but given the frequency with which health data breaches occur due to intentional or inadvertent misuse by employees (that is, authorized users), PHR vendors whose business depends to a great extent on their ability to secure customers’ data might logically make security and privacy awareness a higher priority among the employees who have access to the data. Also, it shouldn’t be overlooked that, unlike employees of health care organizations, PHR vendor employees have little or no reason to access personal health information stored in their systems.
  • Organizations subject to the rules are not reporting their breaches. It is also possible, as with any other mandatory reporting requirement that lacks proactive enforcement, that some of the PHR providers or other entities subject to the FTC’s rules have experienced breaches but chose not to report them. The HHS breach disclosure rules include a “harm” exception, under which entities can avoid the requirement to provide notification about breaches if they determine that no harm will occur to the individuals whose data was disclosed. The FTC opted not to provide such an exception, due to the special sensitivity of health information, so PHR vendors cannot use this as an excuse not to report. They may, however, perform their own internal risk calculation and decide that they would rather not disclose, and risk sanctions if their failure to disclose is discovered.

Rules still pending on privacy and security requirements for PHRs

The Office of the National Coordinator for Health Information Technology (ONC) within the Department of Health and Human Services (HHS) has announced plans for a public roundtable discussion on personal health records (PHRs) to be held in early December. The event will address a variety of aspects related to PHRs and their future direction, and will focus in particular on the topic of privacy and security requirements for PHR vendors and third party providers who are not currently covered by HIPAA. While many HIPAA-covered entities such as health insurance plans and health care providers offer some form of personal health record to their customers, vendors like Google (Google Health), Dossia (Personal Health Platform), and Microsoft (HealthVault) do not necessarily fall under the scope of HIPAA, and where they do it is typically only in the role of a business associate. The Health Information Technology for Economic and Clinical Health (HITECH) Act referred specifically to PHR vendors as potentially non-covered entities, and directed HHS, in consultation with the Federal Trade Commission (FTC), “to conduct a study and submit a report on privacy and security requirements for entities that are not covered entities or business associates” (§13424(b)(1)) and to complete that report no later than one year after the law was enacted. The one-year deadline elapsed over six months ago in February, and while work on the report seems to be a higher priority now, ONC Chief Privacy Officer Joy Pritts said recently that she doesn’t anticipate its completion until early 2011.

The original version of the Health Insurance Portability and Accountability Act (HIPAA) passed in 1996 mandated compliance with security and privacy requirements only for HIPAA-covered entities, specifically including health care providers, health care plans, and health care clearinghouses. HITECH extended the coverage of the HIPAA Privacy Rule and Security Rule requirements to business associates (whose compliance was previously the responsibility of the covered entity with whom business associate agreements were entered into), and also addressed non-covered entities providing PHRs and associated technical services in some of the law’s provisions. Perhaps most notable among these are the health data breach disclosure and notification rules (§13407) that went into effect in September 2009 (although they remain in the form of “interim rules”). There were two sets of breach disclosure rules put in place — one for covered entities and business associates under the authority of HHS, and the other for data breaches from PHR vendors and other non-covered entities under the authority of the FTC. The section of the HITECH Act that applied the data breach disclosure rule to non-covered entities explicitly includes the word “temporary” in the title, although it is not clear if the expectation was that the rules for privacy and security requirements, when promulgated, would include additional rules on data breach disclosure and therefore supersede the provisions in §13407.

The text of §13424 includes in the scope of the now-overdue study and subsequent report “requirements relating to security, privacy, and notification in the case of a breach of security or privacy” and reflects the same sort of notification exemption language that applies to breaches involving encrypted data (“rendered unusable, unreadable, or indecipherable”). It would seem that the opportunity exists for ONC to revise or make new recommendations regarding data breach disclosures within the context of conducting the study and producing the report that HITECH requires. Of course, a report with recommendations has no force of law, and the scope of rules for non-covered entities that might potentially be promulgated under HITECH’s authority appears limited to handling of data breaches. Potential changes in HIPAA applicability — such as extending it to include currently non-covered entities that nonetheless process, store, or manage health data — would presumably require further legislative action in addition to any executive branch decisions about appropriate privacy and security requirements.

NCHICA offers recommendations to health care providers on security and meaningful use

The North Carolina Healthcare Information and Communications Alliance (NCHICA) just released a white paper entitled “Privacy and Security Implications of Meaningful Use for Health Care Providers” that reflects the results not only of an analysis of federal rules associated with the Electronic Health Record (EHR) Incentive Program administered by the Centers for Medicare and Medicaid Services (CMS), but also of a workshop held last June just before the Sixth Academic Medical Center Security & Privacy Conference. The paper provides a brief background on the “meaningful use” rules that will serve as the basis for health care providers and professionals to qualify for financial incentives to subsidize the cost of acquiring EHR technology, and offers a series of recommendations for health care providers in the areas of governance and compliance, the role of security officers, data exchange and coordinated care, health information exchanges, and patient engagement.

Overall, the NCHICA paper should provide useful high-level guidance to some of the many potentially eligible health care providers who are still trying to make sense of meaningful use. I’ve written fairly extensively on various aspects of this exact topic (including the need for authoritative guidance on conducting risk analyses and the meaningful use security requirements in general), and on balance it seems likely that however prepared or unprepared health care providers may be to comply with meaningful use measures, the requirements associated with security and privacy are not likely to be the most challenging ones to meet, at least for Stage 1 of the program that begins in early 2011. In the interest of full disclosure, please note that ISACA Journal Online just published an article I wrote on essentially the same subject, “Privacy and Security Considerations for EHR Incentives and Meaningful Use,” so the release of the NCHICA paper seems to confirm the timeliness and relevance of the topic. Given the protracted federal rulemaking process involved with the meaningful use measures and associated EHR standards and certification criteria, one of the practical difficulties is trying to stay abreast of the specific requirements to which providers will be held accountable while those specifics are undergoing revisions.

As noted above, the contents of the white paper reflect discussions at a pre-conference workshop on privacy and security implications of meaningful use, coordinated by NCHICA and held in early June. At the time this Sixth Academic Medical Center Security & Privacy Conference was held, the final versions of the meaningful use rules and EHR standards and certification criteria had not been published (they were released in mid-July, and published in the Federal Register on July 28), so the material available to workshop participants was from the draft versions released in January. While most of the core themes of the meaningful use program were consistent from the interim to the final versions, some of the items that changed are not reflected in the white paper. For instance, to highlight the central importance of HIEs in the national health IT strategy, the white paper references a passage from the “Meaningful Use Notice Final Rule,” but the quote and its reference are from the proposed rule published in January, not the final rule published in July. The passage quoted was not included in the final rule, and while the federal funding allocated towards and policy emphasis placed on HIEs certainly speaks to the importance of health information exchanges in general, the final meaningful use rule greatly reduced the focus on the purported benefits to be delivered to health care entities through HIE.

The authors of the white paper point out that the single meaningful use measure related to security is essentially a reference to an existing requirement (under the provisions of the HIPAA Security Rule) to conduct regular risk analyses. Specifically, they explain that “the intent may be broadly interpreted that eligible professionals and eligible hospitals should assess their privacy and security practices in general and make improvements where necessary and appropriate.” This is a much broader interpretation than the guidance offered with the publication of the final rule, which altered the language in the draft meaningful use measure requiring risk analyses so that the final measure explicitly limits the scope of the risk analysis to the certified EHR system or modules being used by the eligible entity.

To their credit, the workshop participants and the white paper’s authors do not limit their focus to Stage 1, but appear to try to consider likely future requirements for Stage 2 and Stage 3, work on which has only just begun. The underlying message is that health care providers need to be taking steps now, both to try to meet existing rules and also to plan for complying with new requirements, including those that will implement provisions in the Health Information Technology for Economic and Clinical Health (HITECH) Act. A further challenge for health care organizations seeking to establish or maintain compliance with all relevant rules is that the EHR incentives program using meaningful use measures and criteria will be in effect before some key HITECH-driven rules are finalized (breach notification) or even drafted (accounting of disclosures). In the case of accounting of disclosure rules, the law gave the HHS Secretary the discretion to delay the implementation of the rules beyond the 2015 deadline for meaningful use Stage 3, so it is far from clear to what extent, if any, providers and professionals will be expected to comply with such legal requirements if the corresponding rules have not yet been promulgated.

When addressing health information exchanges, the white paper suggests that “it will be necessary to re-engineer workflows across organizations, replacing point-to-point connections/interfaces with robust HIE processes.” While it is hard to argue with the position that many-to-many integration patterns are well suited to enabling widespread health information exchange, none of the major federal HIE initiatives provide for any data exchange more complicated than mutually authenticated point-to-point transmissions. Both NHIN Exchange and NHIN Direct rely on entity-to-entity messaging models, although to be fair NHIN Exchange is intended to offer a logically centralized directory (registry) of participating entities, which can be used to satisfy requests from multiple participants to find out which ones have information about specific subjects (such as patients or providers). The integration interfaces that health care entities will use to exchange data in these models may indeed require updating to support multiple data exchange partners, but the communication model is likely to remain point-to-point, and to the extent these HIE participants adopt prevailing HIE standards, little or no entity-specific variation in interfaces should be needed.

The NCHICA authors astutely point to the need for more work to specify patient expectations and requirements with respect to EHRs, HIEs, and health IT in general. They also correctly pinpoint the clinician-to-patient or caregiver-to-patient relationship as the central locus for developing and maintaining patient trust, with a corresponding need to educate and inform clinicians and caregivers of this role with respect to patients. Particularly with respect to trust and EHRs, the paper highlights a suggestion that providers may need to assign staff to the role of “patient advocate,” both to help patients understand relevant aspects of the health care system and to foster greater levels of patient engagement or active involvement in their own care.