A need for more meaningful security testing

The recently released fiscal year 2008 report to Congress on FISMA implementation once again highlights government-wide progress in meeting certain key objectives for agency information systems. Among these is the periodic testing of security controls, which is required for every system in an agency’s FISMA system inventory under one of the “Agency Program” requirements in the law (Pub. L. 107-347 §3544 (5)), and an annual independent evaluation of “the effectiveness of information security policies, procedures, and practices” (Pub. L. 107-347 §3544 (2)(A)) for a representative subset of each agency’s information systems. The FY2008 report indicates that security controls were tested for 93 percent of the 10,679 FISMA systems across the federal government, a slight decrease from the 95 percent rate in fiscal 2007, but still a net increase of 142 systems tested. This sounds pretty good except for one small detail: there is no consistent definition of what it means to “test” security controls, and no prescribed standard under which independent assessments are carried out. With the pervasive emphasis on control-compliance auditing in the government, such as the use of the Federal Information System Controls Audit Manual (FISCAM), too much attention still goes to verifying that security controls are in place rather than checking that they are performing their intended functions.

As the annual debate resurfaces over the effectiveness (or lack thereof) of FISMA in actually improving the security posture of the federal government, there will presumably be more calls to revise the law to reduce its emphasis on documentation and shift attention to making the government more secure. The generally positive tone of the annual FISMA report is hard to reconcile with the 39 percent year-over-year growth in security incidents reported to US-CERT by federal agencies (18,050 in 2008 vs. 12,986 in 2007). There is certainly an opportunity for security-minded executives to shift some resources from security paperwork exercises to penetration testing or other meaningful IT audit activities. This would align well with efforts already underway at some agencies to move toward continuous monitoring and assessment of systems and away from the current practice of comprehensive documentation and evaluation only once every three years under federal certification and accreditation guidelines. Insufficient funding is often cited as a reason for not doing more formal internal and external security assessments such as penetration tests. The current FISMA report suggests that security resources may not be applied appropriately (according to business risk, system sensitivity or criticality, or similar factors), as the rate of security control testing is the same for high and moderate impact level systems, and only slightly lower (91 percent vs. 95 percent) for low impact systems. With just under 11 percent of all federal information systems categorized as “high” for security, agencies might sensibly start with those systems as a focus for more rigorous security control testing, and move forward from there.
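As a rough back-of-envelope check on what those percentages mean in absolute terms, the short sketch below works only from the figures cited above; the FY2007 inventory size is inferred from the reported testing rate and the net increase of 142 tested systems, since it is not given here.

```python
# Back-of-envelope arithmetic from the FY2008 FISMA report figures cited above.
# Only the FY2008 inventory (10,679), the 93%/95% testing rates, the net increase
# of 142 tested systems, and the "just under 11 percent" high-impact share are
# taken from the report; the FY2007 inventory size is an inference, not a quote.

fy2008_inventory = 10_679
fy2008_tested = round(0.93 * fy2008_inventory)    # ~9,931 systems tested in FY2008
fy2007_tested = fy2008_tested - 142               # 142 fewer tested in FY2007
fy2007_inventory = round(fy2007_tested / 0.95)    # implies roughly 10,300 systems in FY2007

high_impact = round(0.11 * fy2008_inventory)      # roughly 1,175 high-impact systems

print(fy2008_tested, fy2007_tested, fy2007_inventory, high_impact)
```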

Reactions to the proposed Internet SAFETY Act

There’s been a great deal of hand-wringing and outrage over new legislation proposed in both the House and the Senate that would impose all sorts of requirements on Internet and other electronic communication service providers to do more to prevent trafficking in child pornography and generally protect children from exploitation over the Internet. The central point drawing a lot of attention is a provision in the Internet Stopping Adults Facilitating the Exploitation of Today’s Youth Act (“Internet SAFETY Act”) that requires any provider of an “electronic communication service” to log and maintain records about any users temporarily granted access to that service. According to many articles, the implication is that the law would impose record retention requirements not just on ISPs and wireless hotspot providers, but on individual home network users as well. It’s this last part that just doesn’t make sense.

The key passage in the text of the bills (both the House and Senate versions include the same wording in Section 5) is “Retention of Certain Records and Information – A provider of an electronic communication service or remote computing service shall retain for a period of at least two years all records or other information pertaining to the identity of a user of a temporarily assigned network address the service assigns to that user.” That’s the whole requirement. So to figure out exactly what that means, you have to parse the words and look at the official definitions of some key terms appearing in Title 18 of the US Code. One important definition is that of “electronic communication service”: “electronic communication service means any service which provides to users thereof the ability to send or receive wire or electronic communications” (18 USC §2510 (15)). If you stop here, you might conclude that by standing up a wireless access point in your home, you become an electronic communication service provider. But in the same list of definitions is one for “electronic communication”: “electronic communication means any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photoelectronic or photooptical system that affects interstate or foreign commerce” (18 USC §2510 (12)) (emphasis added). No question this applies to an ISP, and most likely to public or paid network access providers including your local Starbucks. It’s another matter entirely to say that a decision by a home user to connect a few computers to an Internet access account provided by an ISP affects interstate commerce. This interpretation would treat anyone who lets someone (family member, neighbor, intruder) connect to their home network the same as the ISP whose infrastructure that home network is attached to.

A separate point of contention arises from the term “temporarily assigned network address.” From a monitoring, investigation, and law enforcement perspective, you would expect this to mean an IP address that allows some association to be made between network activity and the computer performing that activity. For most home users, though, the network addresses assigned to client computers are non-routable private network addresses (such as the familiar 10.x.x.x and 192.168.x.x ranges). It’s unclear how tracking the assignment of these private addresses is of any investigative value, particularly without an accurate association to the device receiving the assignment or, more importantly, the user controlling that device. The idea that a typical home user (most of whom can’t be bothered to learn enough to turn on the security features included in their routers) would be expected to a) be aware of and b) keep persistent records accounting for anyone intentionally or unintentionally connecting to the Internet through their home access point is a non-starter. Leaving all the valid privacy concerns completely out of the discussion, would anyone suggest that a consumer should be required to acquire the technical acumen necessary to maintain network access logs for their home? If not, perhaps the suggestion is that consumer network equipment vendors would need to build in these access logging features, enable them by default, and prevent consumers from turning them off?
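To make the private-address point concrete, here is a minimal sketch using Python’s standard ipaddress module; the addresses are hypothetical examples chosen for illustration, not anything drawn from the bill.

```python
# Sketch: why the private (RFC 1918) addresses a home router hands out via DHCP
# have little investigative value on their own. Example addresses are hypothetical.
import ipaddress

addresses = [
    "10.0.0.12",        # RFC 1918 private range (10.0.0.0/8)
    "192.168.1.101",    # the range most consumer routers assign (192.168.0.0/16)
    "172.16.5.4",       # the third RFC 1918 private range (172.16.0.0/12)
    "8.8.8.8",          # a well-known public address, shown for contrast
]

for addr in addresses:
    ip = ipaddress.ip_address(addr)
    # is_private is True for the RFC 1918 ranges; behind NAT, traffic from these
    # hosts reaches the Internet only under the router's single public address,
    # so a log of private-address assignments alone cannot identify a user.
    print(f"{addr:15} private={ip.is_private}")
```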

On a separate but related point, several articles have also suggested that this rule would apply to VOIP communications over home and business networks, but the federal definition of electronic communication explicitly excludes any “wire or oral communication,” so it seems pretty clear that VOIP phone calls are out of bounds. Regardless of legal interpretations on that issue, the reaction so far to this proposed legislation suggests that Congress would be wise to spell out more explicitly just who is and is not covered by the provisions in the bill, much as it has done for major privacy-related oversight laws like HIPAA, GLBA, Sarbanes-Oxley, FERPA, and COPPA.

A few new (and sharper) teeth in HIPAA enforcement

Several valid criticisms of HIPAA since the Privacy Rule went into effect in 2003 concern lackluster enforcement of the rule’s requirements and insufficient penalties for non-compliance. The basic civil penalty for an unintentional violation is just $100 per occurrence, with a maximum of $25,000 in a single calendar year. Statutory criminal penalties available under the law go as high as $250,000 and 10 years in jail for intentional disclosure in knowing violation of the law; however, only four criminal cases have been brought by the Justice Department in the five years since covered entities have been bound by the law. What’s more, individuals harmed by HIPAA violations have no private right of action under HIPAA and are limited to filing complaints with the HHS Office for Civil Rights (OCR); it is up to the feds to determine whether a violation has occurred and whether a suit should be brought against the party accused of violating HIPAA. OCR receives many complaints, but in part due to common misconceptions about what is and isn’t permitted under HIPAA, most complaints do not actually describe HIPAA violations, and formal investigations of alleged violations are quite rare.

The HITECH Act strengthens the potential enforcement of privacy compliance in a couple of important ways. The minimum and maximum civil and criminal penalties remain the same, but there is now a tiered hierarchy of civil penalties based on the severity of the violation, and HHS is required to formally investigate any suspected violation involving “willful neglect” of the law. Individuals still have no private right of action, but state attorneys general are now empowered to bring suit on behalf of state residents who have been harmed by HIPAA violations. Perhaps most interestingly, in a framework based on voluntary compliance, civil monetary penalties collected for HIPAA violations will now go to OCR to fund compliance and investigation activities, and HHS has been tasked with developing a plan under which civil penalties may in the future be shared with the individuals who were harmed by the violations they report. This last aspect gives individuals a financial incentive to report HIPAA violations; however, given the low likelihood of widespread public understanding of the full requirements of the law, it may also produce an increase in complaints alleging violations for actions or practices that are not in fact contrary to the law.

Accounting of disclosures to become more comprehensive

One of the requirements under the HIPAA Privacy Rule is that covered entities maintain an “accounting of disclosures” of protected health information, in part so that an individual may request a record of who accessed their health information, at what time, and for what purpose. As codified in regulation (45 CFR §164.528), the accounting of disclosures rule specifies a time period of six years, so covered entities are obligated to maintain records of disclosures for at least that long. A significant set of exceptions in the original requirement provides that disclosures made for the purposes of treatment, payment, or health care operations do not have to be recorded or made available in the accounting of disclosures. This greatly reduces the administrative burden on covered entities, as most “routine” uses of individually identifiable health information are not subject to the accounting rule. The language in the HITECH Act on accounting of protected health information disclosures removes these exceptions, essentially requiring that an accounting provided to an individual cover all disclosures (there is still an exception for the requests an individual makes to see his or her own information). The revised rule gives individuals the right to receive a three-year history of all disclosures of their information made through an electronic health record.

For EHR vendors, this simplification of the accounting of disclosures rule may actually make it easier to produce the disclosure history, because a comprehensive transaction log showing authorized (and unauthorized) access to records will yield the required accounting. A related provision in the HITECH Act directs the HHS Secretary to promulgate regulations within six months specifying what information should be collected about each disclosure. There is also a fairly protracted period before the rule takes effect, especially for entities that already had electronic health records as of January 1 of this year. The accounting of disclosures rule applies to these covered entities as of January 1, 2014, while for entities acquiring electronic health records after January 1, 2009, the rule goes into effect as of January 1, 2011, or whenever the entity acquires an EHR, whichever is later.
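As a rough illustration of why an existing audit trail may be enough, here is a minimal sketch of deriving an accounting of disclosures from a comprehensive access log; the field names and filtering logic are assumptions for illustration only, since the specific data elements will be defined by the forthcoming HHS regulations.

```python
# Minimal sketch of producing an accounting of disclosures from an EHR access log.
# Field names and filtering logic are illustrative assumptions, not the elements
# HHS will ultimately require.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DisclosureRecord:
    patient_id: str
    disclosed_to: str       # person or organization receiving the information
    purpose: str            # e.g., "treatment", "payment", "operations", "legal"
    timestamp: datetime
    self_request: bool      # True when the patient viewed his or her own record

def accounting_of_disclosures(log, patient_id, as_of=None):
    """Return all disclosures of a patient's information over the prior three years,
    excluding the patient's own requests to see his or her information (the one
    exception retained under HITECH)."""
    as_of = as_of or datetime.now()
    window_start = as_of - timedelta(days=3 * 365)
    return [
        rec for rec in log
        if rec.patient_id == patient_id
        and rec.timestamp >= window_start
        and not rec.self_request
    ]
```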

New Federal notification requirement for breaches of protected health information

One of the more widely anticipated provisions of the HITECH Act requires many health information exchange participants (specifically, covered entities and business associates under HIPAA) to notify individuals in the event of an unauthorized disclosure of “unsecured” protected health information. Although it applies only to health data, this appears to be the first nationwide breach notification regulation, and that alone is noteworthy.

The breach notification law comes into play for any “unauthorized acquisition, access, use, or disclosure of protected health information which compromises the security or privacy of such information, except where an unauthorized person to whom such information is disclosed would not reasonably have been able to retain such information.” The key element here is the breach of “unsecured” protected health information; for practical purposes, this means the notification rule only applies if the data is not encrypted. The law doesn’t use the term “encryption.” Instead, it says more generically that “unsecured” information is information not secured through the use of a technology or methodology that renders it “unusable, unreadable, or indecipherable.” The law gives the HHS Secretary 60 days to issue guidance specifying the technologies and methodologies that should be used to provide this protection.
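As a concrete but purely illustrative example of rendering stored health data unusable and unreadable to anyone without the key, the sketch below uses symmetric encryption via the Python cryptography package’s Fernet recipe; this is an assumption for illustration, not whatever method the forthcoming HHS guidance will prescribe, and it glosses over key management, which is the hard part in practice.

```python
# Illustrative sketch only: one way to render stored health data "unusable,
# unreadable, or indecipherable" to anyone without the key. The HHS guidance,
# not this example, will define which technologies actually qualify.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# In practice the key would live in a key management system, never beside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"patient": "Jane Doe", "dx": "I10", "note": "BP follow-up in 3 months"}'

ciphertext = fernet.encrypt(record)   # what gets written to disk or backup media
print(ciphertext[:40], b"...")        # opaque token; a lost laptop exposes only this

assert fernet.decrypt(ciphertext) == record  # readable again only with the key
```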

Notification must take place “without unreasonable delay” and in any case within 60 days, by written notification, by public posting where contact information is unavailable, or by telephone where urgency exists. The 60-day timeline stands in stark contrast to existing computer intrusion notification rules for federal agencies, which require notice to the US Computer Emergency Readiness Team (US-CERT) within one hour of the discovery of the intrusion. Notice must also be provided to major media outlets and to the HHS Secretary for breaches involving 500 or more individuals; these breaches are to be posted on the HHS website. The law gives the HHS Secretary 180 days to publish final regulations on data breach notifications.

Breach notification requirements are also specified for vendors of personal health records, even though these remain non-covered entities under HIPAA. When a breach occurs, notice must be provided to the affected individuals (only US citizens and permanent residents) and to the Federal Trade Commission (the FTC then notifies the Secretary). Violation of this rule is considered an unfair and deceptive act or practice, and as such would be subject to action by the FTC.

What is interesting from a security practices standpoint is that this data breach notification requirement — by exempting secured data from the regulations — all but requires the use of encryption at rest for health records. A great deal of attention has been given to encryption in transit (secure communication channels, digital signatures, and the like) for health information exchange services, but health IT standards efforts have stopped short of imposing controls that would have to be implemented within the boundaries of a participating organization. It will be interesting to see if the health IT standards bodies re-authorized in the HITECH Act will expand their scope into the technical environments of the entities participating in health information exchange.