Towards a more objective basis for establishing trust in information exchange

The emphasis on electronic information exchange among public sector agencies and private sector organizations has increased attention on both technical and non-technical barriers to sharing data across organizational boundaries. In many ways, efforts to overcome the non-technical barriers have fallen short of their objectives, with the result that would-be participants who are technically able to exchange data choose not to, out of concern for how security and privacy requirements will be honored when the parties to the exchange are not subject to the same or comparable constraints. Some initial attempts to rationalize these differences have focused on information security controls, especially those applied at the system level. This approach cannot arrive at a mutually acceptable level of trust among diverse entities, because information system security drivers and requirements are too subjective. A more effective approach would focus on the data being exchanged and the privacy and other content-based rules and regulations that apply to it, using those objective requirements to determine the procedural and technical safeguards needed to satisfy them and to provide the necessary basis of trust. Further development of this concept, and of a privacy-requirement-driven framework to support it, is a key focus area of our current research.
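
To illustrate the data-centric approach, the sketch below shows how safeguard obligations might be derived from the content of an exchange rather than from either party’s system security posture. The rule entries and safeguard names are hypothetical placeholders, not a normative catalog:

    # Illustrative sketch only: derive safeguard obligations from the data being
    # exchanged and the rules that govern it. Rule entries and safeguard names
    # below are hypothetical placeholders, not a normative catalog.
    CONTENT_RULES = {
        "identifiable_health_info": {
            "authorities": ["HIPAA Privacy Rule", "HIPAA Security Rule"],
            "safeguards": ["encrypt_in_transit", "minimum_necessary_access",
                           "disclosure_accounting"],
        },
        "substance_abuse_treatment_records": {
            "authorities": ["42 CFR Part 2"],
            "safeguards": ["consent_before_redisclosure", "encrypt_in_transit"],
        },
    }

    def required_safeguards(data_categories):
        """Union of the safeguards implied by every category in the exchange."""
        safeguards = set()
        for category in data_categories:
            safeguards.update(CONTENT_RULES[category]["safeguards"])
        return safeguards

    # The obligations attach to the payload, not to either party's system.
    print(sorted(required_safeguards(["identifiable_health_info"])))

Because the obligations attach to the data, any two participants evaluating the same payload derive the same requirements, which is precisely the objectivity that system-level security assessments lack.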

FISMA provides insufficient foundation for trust

There seems to be an inordinate amount of attention on FISMA in the ongoing debate about how to establish a sufficient trust framework among public and private sector participants in health information exchange. Federal government security executives seem especially focused on the idea, still under development, that there needs to be a way to apply the security and privacy requirements government agencies are held to under FISMA to non-government entities when those entities participate in an information exchange with the federal government. Leaving aside for the moment the suggestion that there may be a more suitable foundation (such as health information privacy regulations) on which to base minimally acceptable security and privacy requirements, there are at least three major problems with using FISMA as the basis of trust among information exchange participants.

The biggest issue is that while many of the security and privacy standards used and guidance followed by federal agencies under FISMA are common references, the provision of “adequate” security and privacy protections is entirely subjective, and as such differs from agency to agency. While all agencies use the security control framework contained in NIST Special Publication 800-53 to identify the sorts of measures they put in place, there are very few requirements about how these controls are actually implemented. Recent annual FISMA reports (including the most recently released report to Congress for fiscal year 2008) highlight the increase in the number and proportion of systems that receive authorization to operate based on formal certification and accreditation. The decision to accredit a system means that the accrediting authority (usually a senior security officer for the agency operating the system) agrees to accept the risk related to putting the system into production. Almost all federal agencies are self-accrediting, and each has its own risk tolerance in terms of what risks it finds acceptable and what it does not. Two agencies might not render the same accreditation decision on the same system implemented in their own environments, even using the same security controls. This lack of consistency regarding what is “secure” or “secure enough” presents an enormous barrier to agreeing on an appropriate minimum set of security provisions that could be used as the basis of trust among health information exchange participants, both within and outside the government.
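
The inconsistency is easy to see in miniature. In the sketch below, two agencies evaluate an identical system with an identical 800-53 control set and reach opposite accreditation decisions; the risk score and tolerance thresholds are invented for illustration, since FISMA and NIST guidance prescribe neither:

    # A minimal sketch of why self-accreditation yields inconsistent outcomes.
    def accredit(residual_risk, risk_tolerance):
        """Authorize the system to operate only if residual risk is tolerable."""
        return residual_risk <= risk_tolerance

    residual_risk = 42  # identical system, identical 800-53 control set

    # Each agency sets its own tolerance, so the decisions diverge.
    print(accredit(residual_risk, risk_tolerance=50))  # Agency A: True (authorized)
    print(accredit(residual_risk, risk_tolerance=30))  # Agency B: False (denied)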

Perhaps just as troubling, by focusing on FISMA requirements, the government implicitly de-emphasizes the protection of privacy. To be sure, FISMA addresses privacy, most obviously in the requirement that all accredited systems be analyzed to identify the extent to which they store and make available personally identifiable information. These privacy impact assessments typically result in a public notice detailing the data stored in and used by any system that handles personally identifiable information. But FISMA does not specify any actions for protecting privacy, nor does its accompanying NIST guidance include controls that address the privacy requirements stemming from the wide variety of privacy-related legislation and regulatory guidance.

It’s not entirely clear what it would mean for a non-government organization to try to comply with FISMA requirements. As noted above, most federal agencies are self-accrediting, so presumably the determination of whether a non-government system is adequately secured against risk would rest with the organization itself. The basis for this determination (including the private-sector organization’s risk tolerance) might be more or less robust than corresponding decisions made by federal agencies, so simply requiring non-government organizations to follow a formal certification and accreditation process cannot establish a minimum security baseline any more than it does within the government. Few organizations outside of government follow NIST 800-53, but many follow the similarly rigorous ISO/IEC 27000-series security framework, so these organizations arguably would not need to adopt 800-53 if they already comply with an acceptable security management standard. (NIST has been working on an alignment matrix between 800-53 and ISO 27002, partly as a reflection of the similarity between the two standards and also in an effort to better harmonize public and private sector approaches.)
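
An alignment matrix of this kind is essentially a lookup from one control catalog into another. The sketch below shows the shape of the idea; the specific mappings are illustrative only, and NIST’s published mapping is the authoritative source:

    # The shape of a control alignment matrix: a lookup from one catalog into
    # another. These entries are illustrative only; NIST's published mapping
    # is the authoritative source.
    ALIGNMENT = {
        "AC-2 Account Management": ["ISO 27002 11.2.1 User registration"],
        "AT-2 Security Awareness": ["ISO 27002 8.2.2 Awareness and training"],
        "CP-9 System Backup": ["ISO 27002 10.5.1 Information back-up"],
    }

    def iso_equivalents(control):
        """ISO 27002 clauses asserted to cover a given 800-53 control."""
        return ALIGNMENT.get(control, [])

    # An assessor could accept ISO-based evidence in lieu of a literal
    # 800-53 implementation wherever a mapping exists.
    print(iso_equivalents("CP-9 System Backup"))

In practice the mappings are rarely one-to-one, which is part of why harmonizing the two approaches remains hard.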

Even if some agreement can be reached under which non-governmental entities agree to comply with FISMA security requirements, the law as enacted contains no civil or criminal penalties for failure to comply. Federal agencies judged to be doing a poor job with their information security programs receive poor grades on their annual FISMA report cards (fully half the reporting agencies received a grade of C or below for fiscal year 2007), but there is no correlation between budget allocations and good or bad grades, and no negative consequence for poorly performing agencies other than bad publicity.

A better alternative (and one more consistent with master trust agreements like the NHIN Data Use and Reciprocal Support Agreement) would use privacy controls as the basis for establishing trust. One challenge here is the number of different privacy regulations that come into play, which makes the HIPAA Privacy Rule alone (or any other single piece of privacy legislation) insufficient. Building a comprehensive set of privacy requirements and corresponding controls to serve as the foundation for trust in health information exchange is a topic we’ll continue to address here.

A need for more meaningful security testing

The recently released fiscal year 2008 report to Congress on FISMA implementation once again highlights government-wide progress in meeting certain key objectives for agency information systems. Among these is the periodic testing of security controls, required for every system in an agency’s FISMA system inventory under the agency program requirements of the law (Pub. L. 107-347, codified at 44 U.S.C. §3544(b)(5)), and an annual independent evaluation of “the effectiveness of information security policies, procedures, and practices” for a representative subset of each agency’s information systems (44 U.S.C. §3545(a)(2)(A)). The FY2008 report indicates that testing of security controls was performed for 93 percent of the 10,679 FISMA systems across the federal government, a slight decrease from the 95 percent rate in fiscal 2007, but still a net increase of 142 systems tested. This sounds pretty good except for one small detail: there is no consistent definition of what it means to “test” security controls, and no prescribed standard under which independent assessments are carried out. Given the pervasive emphasis on control-compliance auditing in the government, such as the use of the Federal Information System Controls Audit Manual (FISCAM), there remains too much emphasis on verifying that security controls are in place rather than checking that they perform their intended functions.
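
The difference between the two kinds of testing is easiest to see side by side. The sketch below (a toy account lockout control, not any real system or audit procedure) contrasts a FISCAM-style compliance check, which confirms a setting matches policy, with a functional test that actually exercises the control:

    # Toy account lockout control; a stand-in, not any real system or audit
    # procedure.
    class AccountLockout:
        """Lock an account after max_attempts consecutive failed logins."""
        def __init__(self, max_attempts=3):
            self.max_attempts = max_attempts
            self.failures = 0
            self.locked = False

        def fail_login(self):
            self.failures += 1
            if self.failures >= self.max_attempts:
                self.locked = True

    def compliance_check(control):
        # Audit-style verification: the setting exists and matches policy.
        return control.max_attempts <= 3

    def functional_test(control):
        # Effectiveness verification: does the control actually fire?
        for _ in range(control.max_attempts):
            control.fail_login()
        return control.locked

    control = AccountLockout()
    print(compliance_check(control))  # True: the control is "in place"
    print(functional_test(control))   # True only if it performs as intended

Both checks pass here, but only the second would catch a lockout mechanism that is configured correctly yet silently broken.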

As the annual debate resurfaces over whether FISMA actually improves the security posture of the federal government, there will presumably be more calls to revise the law to decrease its emphasis on documentation and shift attention toward making the government more secure. The generally positive tone of the annual FISMA report is hard to reconcile with the 39 percent year-over-year growth in security incidents reported to US-CERT by federal agencies (18,050 in 2008 vs. 12,986 in 2007). There is certainly an opportunity for security-minded executives to shift some resources from security paperwork exercises to penetration testing or other meaningful IT audit activities. This would align well with efforts already underway at some agencies to move toward continuous monitoring and assessment of systems, and away from the current practice of comprehensive documentation and evaluation only once every three years under federal certification and accreditation guidelines. Lack of funding is often cited as the reason more formal internal and external security assessments like pen tests aren’t done. But the current FISMA report suggests that security resources may not be allocated according to business risk, system sensitivity or criticality, or similar factors: the rate of security control testing is the same (95 percent) for high- and moderate-impact systems, and only slightly lower (91 percent) for low-impact systems. With just under 11 percent of all federal information systems categorized as “high” impact, agencies might sensibly start with those systems as the focus for more rigorous security control testing, and move forward from there.
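
Prioritizing by impact level requires nothing sophisticated; a sketch, with hypothetical inventory records and budget figure:

    # Spend scarce assessment dollars on the highest-impact systems first.
    # Inventory records and the budget figure are hypothetical.
    inventory = [
        {"system": "benefits-portal", "impact": "high"},
        {"system": "intranet-wiki", "impact": "low"},
        {"system": "case-mgmt", "impact": "moderate"},
        {"system": "payment-engine", "impact": "high"},
    ]

    PRIORITY = {"high": 0, "moderate": 1, "low": 2}
    pen_test_budget = 2  # in-depth assessments we can fund this year

    queue = sorted(inventory, key=lambda s: PRIORITY[s["impact"]])
    for system in queue[:pen_test_budget]:
        print("schedule penetration test:", system["system"])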

Reactions to the proposed Internet SAFETY Act

There’s been a great deal of hand-wringing and outrage over new legislation proposed in both the House and the Senate that would impose a host of requirements on Internet and other electronic communication service providers, intended to curb trafficking in child pornography and generally protect children from exploitation over the Internet. The provision drawing the most attention is one in the Internet Stopping Adults Facilitating the Exploitation of Today’s Youth Act (“Internet SAFETY Act”) that requires any provider of an “electronic communication service” to log and maintain records about any users temporarily granted access to that service. According to many articles, the implication is that the law would impose record retention requirements not just on ISPs and wireless hotspot providers, but on individual home network users as well. It’s this last part that just doesn’t make sense.

The key passage in the text of the bills (both the House and Senate versions include the same wording in Section 5) is “Retention of Certain Records and Information – A provider of an electronic communication service or remote computing service shall retain for a period of at least two years all records or other information pertaining to the identity of a user of a temporarily assigned network address the service assigns to that user.” That’s the whole requirement. So to figure out exactly what that means, you have to parse the words and look at the official definitions of some key terms in Title 18 of the U.S. Code.

One important definition is that of “electronic communication service”: “any service which provides to users thereof the ability to send or receive wire or electronic communications” (18 USC §2510(15)). If you stop here, you might conclude that by standing up a wireless access point in your home, you become an electronic communication service provider. But the same list of definitions includes one for “electronic communication”: “any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photoelectronic or photooptical system that affects interstate or foreign commerce” (18 USC §2510(12)) (emphasis added).

No question this applies to an ISP, and most likely to public or paid network access providers, including your local Starbucks. It’s another matter entirely to say that a home user’s decision to connect a few computers to an Internet access account provided by an ISP affects interstate commerce. This interpretation would treat anyone who lets someone (family member, neighbor, intruder) connect to their home network the same as the ISP whose infrastructure the home network is attached to.

A separate point of contention arises from the term “temporarily assigned network address.” From a monitoring, investigation, and law enforcement perspective, you would expect this to mean an IP address that allows some association to be made between network activity and the computer performing that activity. For most home users, though, the network addresses assigned to client computers are private, non-routable addresses (the familiar 10.x.x.x and 192.168.x.x ranges). It’s unclear how tracking the assignment of these private addresses has any investigative value, particularly without an accurate association to the device receiving the assignment or, more importantly, the user controlling that device. The idea that a typical home user (most of whom can’t be bothered to turn on the security features included in their routers) would be expected to a) be aware of, and b) keep persistent records of, anyone intentionally or unintentionally connecting to the Internet through their home access point is a non-starter. Leaving all the valid privacy concerns completely out of the discussion, would anyone seriously suggest that a consumer be required to acquire the technical acumen to maintain network access logs for their home? If not, perhaps the suggestion is that consumer network equipment vendors would need to build in these access logging features, enable them by default, and prevent consumers from turning them off?
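
To make the point concrete, here is a sketch of what a home router’s DHCP lease log would actually capture; the record layout is hypothetical:

    # Hypothetical record layout for a home router's DHCP lease log. Every
    # household reuses the same private ranges, and all traffic exits through
    # one NAT'd public address, so a lease record alone identifies no one.
    leases = [
        {"mac": "00:1a:2b:3c:4d:5e", "private_ip": "192.168.1.100",
         "lease_start": "2009-02-20T18:04"},
        {"mac": "00:1a:2b:3c:4d:5f", "private_ip": "192.168.1.101",
         "lease_start": "2009-02-20T19:12"},
    ]

    # Millions of other homes have a device holding 192.168.1.100 right now;
    # without the ISP's subscriber records and knowledge of who was at the
    # keyboard, these entries prove nothing.
    for lease in leases:
        print(lease["private_ip"], "->", lease["mac"], "(user unknown)")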

On a separate but related note, several articles have suggested that this rule would apply to VOIP communications over home and business networks, but the federal definition of electronic communication explicitly excludes any “wire or oral communication,” so it seems pretty clear that VOIP phone calls are out of bounds. Regardless of legal interpretations on that issue, the reaction so far to this proposed legislation suggests that Congress would be wise to spell out more explicitly just who is and is not covered by the bill’s provisions, much as it has done in major privacy-relevant oversight laws like HIPAA, GLBA, Sarbanes-Oxley, FERPA, and COPPA.

A few new (and sharper) teeth in HIPAA enforcement

Several valid criticisms of HIPAA since the Privacy Rule went into effect in 2003 concern lackluster enforcement of the rule’s requirements and insufficient penalties for non-compliance. The basic civil penalty for an unintentional violation is just $100 per occurrence, with a maximum of $25,000 in a single calendar year. Statutory criminal penalties go as high as $250,000 and 10 years in prison for intentional disclosure in knowing violation of the law, yet the Justice Department has brought only four criminal cases in the five years since covered entities became bound by the law. What’s more, individuals harmed by HIPAA violations have no private right of action; they are limited to filing complaints with the HHS Office for Civil Rights (OCR), and it is up to the federal government to determine whether a violation has occurred and whether to bring suit against the accused party. OCR receives many complaints, but in part because of common misconceptions about what is and isn’t permitted under HIPAA, most complaints do not describe actual HIPAA violations, and formal investigations of alleged violations are quite rare.
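
The arithmetic makes the weakness plain; a trivial sketch using the figures cited above:

    # The pre-HITECH civil penalty math, using the figures cited above.
    PENALTY_PER_OCCURRENCE = 100  # dollars, unintentional violation
    ANNUAL_CAP = 25000            # dollars, per calendar year

    def civil_penalty(occurrences):
        return min(occurrences * PENALTY_PER_OCCURRENCE, ANNUAL_CAP)

    # The cap is reached at just 250 occurrences; a breach affecting 100,000
    # records carries the same maximum exposure as one affecting 250.
    print(civil_penalty(250))     # 25000
    print(civil_penalty(100000))  # 25000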

The HITECH Act strengthens the potential enforcement of privacy compliance in a couple of important ways. Criminal penalties are unchanged, but civil penalties now follow a tiered hierarchy based on the severity of the violation, and HHS is now required to conduct a formal investigation of any suspected violation involving “willful neglect” of the law. Individuals still have no private right of action, but state attorneys general are now empowered to bring suit on behalf of state residents harmed by HIPAA violations. Perhaps most interesting for a framework based on voluntary compliance, civil monetary penalties collected for HIPAA violations will now go to OCR to fund compliance and investigation activities, and HHS has been tasked with developing a plan under which civil penalties may in the future be shared with the harmed individuals who bring complaints. This last aspect will give individuals a financial incentive to report HIPAA violations; however, given the low likelihood of widespread public understanding of the law’s full requirements, it may also produce an increase in alleged violations for actions or practices that are not in fact contrary to the law.