Policies without enforcement simply aren’t enough to guard against internal threats

Two recent studies of financial sector employees, sponsored by security vendors Cyber-Ark and Actimize, and reported last week by Tim Wilson of InformationWeek, indicate that employees are ready and willing to steal information from their employers, even though they know such actions violate laws as well as company policies. Taken together with some findings from the 2009 Computer Crime & Security Survey from the Computer Security Institute (results were presented yesterday in a CSI webcast, and will be released publicly on December 8 from www.gocsi.com), it’s clear that even when security awareness is made a priority, organizations need more than rules, policies, or even laws to protect themselves from insiders.

Interesting results from the survey include a rise in malware and disruptive intrusions, including denial of service attacks, at least in terms of the proportion of respondents experiencing such incidents. Judging from information about organizational responses to security incidents, the primary approach to security among surveyed organizations continues to be reactive, with security awareness a weak spot. As often highlighted in the context of laptop thefts and other high-profile data breaches, unauthorized disclosures frequently result from employees knowingly violating existing security policies, whether for convenience, through negligence, or for malicious purposes. Even the best-intentioned employees may need the reinforcement of technical measures to enforce what’s stated in policies or regulations. When companies have credible information indicating that employees will disregard the rules if and when it suits them, the need for data loss prevention and similar technical safeguards could not be clearer.
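
As a rough sketch of what such technical enforcement looks like, the Python example below (with hypothetical rule names and patterns, nothing drawn from any particular product) shows the basic inspect-then-block check a data loss prevention gateway might apply to outbound content.

```python
import re

# Hypothetical rules a DLP policy might enforce on outbound content.
# Real products add document fingerprinting, exact data matching, and
# contextual analysis, but the principle -- inspect, then block -- is the same.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\bACCT-\d{8}\b"),  # illustrative format
}

def check_outbound(message):
    """Return the names of any sensitive-data rules the message trips."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(message)]

if __name__ == "__main__":
    msg = "Forwarding customer record: SSN 123-45-6789, ACCT-00112233"
    violations = check_outbound(msg)
    if violations:
        # A real gateway would quarantine the message and alert, not just print.
        print("Blocked outbound message; matched rules:", violations)
```

The point is that a rule like this fires whether or not the employee intends harm; the policy is enforced by the system rather than left to individual judgment.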

Structural reorganization announced for government health IT oversight

In a Federal Register notice effective December 1, the Office of the National Coordinator for Health IT announced a reorganization of its office. Among the most notable changes is the decision to create the position of Chief Privacy Officer and a supporting office within ONC to address “privacy, security, and data stewardship of electronic health information” and to serve as a point of contact and coordination with privacy officials in domestic and international government agencies at all levels. Privacy, along with information security, has long been an emphasis for the National Coordinator, with ONC releasing its Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information a year ago. This latest development is an explicit acknowledgment of the central position occupied by privacy and the protection of individually identifiable health information in the pursuit of health IT adoption and interoperability.

Another structural shift is the creation of an Office of Economic Modeling and Analysis to apply formal economic perspectives to aspects of the health care system, both to help justify investment in health information technology and to assess different strategies and policies intended to promote health IT adoption and use. The idea appears to be to provide more quantitative information about ways to improve health care quality and efficiency; a side benefit might be identifying operational business models and value (revenue) propositions that encourage sustained use of health IT, not just incentives to start using the technology.

From a more practical standpoint, the reorganization should align ONC resources to help it better manage and oversee the significant funding flowing through ONC under the provisions of the American Recovery and Reinvestment Act. Specific offices within ONC will also take responsibility for scientific research, grant programs, and new health IT developments, and for program oversight, internal office management, and budgeting. Responsibility for the office’s operations and for ONC’s continuing work on health IT standards has been elevated to a Deputy National Coordinator.

More options, no resolution on bridging public and private sector security standards

As regularly noted in this space, one of the big points of disagreement in attempts to achieve greater levels of information integration, particularly health information exchanges, is how to reconcile the disparate security and privacy standards that apply to government agencies and to private sector entities (FISMA still being touted as best security for health information exchange; No point in asking private entities to comply with FISMA). The debate has most often been cast as one about where to draw the boundaries within which the detailed security control requirements and other obligations that bind federal agencies under FISMA apply. When information exchanges involve data transmission from the government to private entities, the law is only clear in cases where the private entity is storing or managing information on behalf of the government. When the intended use of the data is for the private entity’s own purposes (with the permission of the government agency providing the data), the text of the FISMA legislation is pretty clear that the private sector entity is not bound by its requirements, but the agency providing the information still has obligations with respect to the data it sends out, both at the time of transmission and after the fact.

At the most recent meeting of the ONC’s Health IT Standards Committee on November 19, federal executives including VA Deputy CIO Stephen Warren and CMS Deputy CISO Michael Mellor spoke of the need to beef up federal information systems security protections when those systems will be connected to non-government systems, and again endorsed the position that government security standards under FISMA are more strict than the equivalent standards that apply to private sector entities, including those prescribed by HIPAA. In the past year, despite the creation of a government task group formed specifically to address federal security strategies for health information exchange, there has been little progress toward a common set of standards that might apply to both public and private sector entities involved in data exchange.

An interesting entrant into this arena is the Health Information Trust Alliance (HITRUST), a consortium of healthcare industry and information technology companies that aims to define a common security framework (CSF) that might serve as the point of agreement for all health information exchange participants. Ambitious, to be sure, but the detail provided in the CSF itself, along with the assurance process that HITRUST has defined for assessing the security of health information exchange participants and reporting compliance with the framework, should serve at least as a structural model for the security standards and governance still under development for the Nationwide Health Information Network (NHIN). The HITRUST common security framework has yet to achieve significant market penetration, especially in the federal sector, perhaps in part due to the initial fee-based business model adopted by the Alliance for the CSF. In August of this year HITRUST announced that it would make the CSF available at no charge, and launched an online community called HITRUST Central to encourage collaboration on information security and privacy issues in the health IT arena. (In the interest of full disclosure, while SecurityArchitecture.com has no affiliation with HITRUST, some of our people are registered with HITRUST Central.) The point here is not to recommend or endorse the CSF, but simply to highlight that there is a relevant industry initiative focused on some of the very same security issues being considered by the Health IT Policy Committee and Health IT Standards Committee.

Revised SP800-37 not ideal, but an improvement

NIST has released for public comment a revision to its Special Publication 800-37, “Guide for Applying the Risk Management Framework to Federal Information Systems.” This document was formerly the “Guide for the Security Certification and Accreditation of Federal Information Systems,” so the first obvious change is in the title and corresponding focus of the publication. The most significant change is an explicit move away from the triennial certification and accreditation process under which federal information systems are authorized to operate, in favor of a continuous monitoring approach that seems to recognize the importance of achieving and maintaining awareness of current security status at any given point in time. While some of the more interesting revised elements may make their way into a future post, of equal interest at the moment is the question of how significant the altered approach in 800-37 may be for improving the security of federal information systems, and more generally of federal agency environments.
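
To make the shift concrete, here is a minimal sketch in Python of the continuous monitoring idea, with stub functions standing in for real assessment tools and an illustrative (unofficial) mapping to SP 800-53-style control names: controls are reassessed on a recurring schedule so that current status is always available, rather than certified once and revisited years later.

```python
import datetime

# Stub checks standing in for real assessment tools (vulnerability
# scanners, configuration audits, patch-management queries).
def patches_current():
    return True

def audit_logging_enabled():
    return False

# Control names follow the SP 800-53 style for flavor; the mapping
# here is illustrative, not an official one.
CONTROLS = {
    "SI-2 (flaw remediation)": patches_current,
    "AU-2 (auditable events)": audit_logging_enabled,
}

def monitoring_cycle():
    """One pass of a continuous-monitoring loop: reassess every control
    and timestamp the result, so security status is always current."""
    now = datetime.datetime.now().isoformat()
    return {name: (check(), now) for name, check in CONTROLS.items()}

status = monitoring_cycle()
for control, (ok, checked_at) in status.items():
    print(f"{control}: {'PASS' if ok else 'FAIL'} as of {checked_at}")
```

Run daily or hourly, a loop around a cycle like this gives an authorizing official current information; run once every three years, it amounts to the documentation exercise the revision is trying to retire.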

As noted by more than one expert (although few as forcefully, bluntly, or eloquently as Richard Bejtlich), continuous monitoring of security controls is a far cry from continuous threat monitoring, the latter of which demands more attention from the government in light of the dramatic rise in reported security incidents over the past three years. Among other things, FISMA has specific requirements that should result in agencies engaging in threat monitoring, such as “periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, of which such testing shall include testing of management, operational, and technical controls of every information system” identified in the agency system inventory required under OMB A-130 (§3544(b)(5)) and “procedures for detecting, reporting, and responding to security incidents” (§3544(b)(7)). Generally speaking, every agency has an incident response team or comparable capability, and threat monitoring using intrusion detection tools is one of several approaches many of these IR teams already implement. So more explicit guidance to agencies (from NIST or anyone else) on doing these things effectively on an enterprise-wide basis could shore up a lot of the deficiencies that come from a system-level emphasis on controls alone.

Regardless of how all the pending proposals for revising or strengthening FISMA turn out or which ones pass, it’s not feasible to suggest that the government should completely abandon its current security practices in favor of a new approach emphasizing field testing of its controls (field testing being one of the ways agencies could test and evaluate the effectiveness of their security controls). The revised 800-37 has at least to be considered a step in the right direction, because the current triennial documentation exercise does nothing to harden an agency’s security posture. A move to continuous monitoring narrows the gaping loophole that current system authorization policy leaves open, and is an explicit step toward achieving situational awareness. There’s a long history of ambitious and revolutionary initiatives failing in the federal government, and a corresponding (cynical yet accurate) view that “all successful change is incremental.” Let’s not take NIST’s decision not to recommend a wholesale replacement of current security program operations to mean that improvements couldn’t or shouldn’t be sought within the sub-optimal control-driven model.

That’s not the same thing as intrusion detection or prevention, and any effort to mandate those activities had better be well thought out. Putting intrusion detection tools in place will yield no tangible security benefit if agencies do not also have sufficiently expert security analysts to make sense of the alerts the tools produce, so simply requiring threat monitoring activities can quickly become another compliance checkbox or the source of a false sense of greater security. Where intrusion detection and prevention are concerned, it’s disingenuous to fault individual agencies for not moving to implement continuous threat monitoring when they have no current capability to make sense of the information: IDS or IPS is of no use (and may be counter-productive) without corresponding experts to analyze the data the tools produce, tailor detection rules, and tune operations to minimize false positives and separate noise from actual threats.
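
A toy illustration of the tuning in question: the Python sketch below (with made-up alert records and rule names, not any real IDS schema) suppresses rules an analyst has classified as environmental noise so that what remains is worth a human’s attention.

```python
from collections import Counter

# Made-up alert records; field names are illustrative, not a real IDS schema.
alerts = [
    {"rule": "ICMP ping sweep", "src": "10.0.0.5"},
    {"rule": "SQL injection attempt", "src": "203.0.113.7"},
    {"rule": "ICMP ping sweep", "src": "10.0.0.8"},
    {"rule": "SQL injection attempt", "src": "203.0.113.7"},
]

# Rules an analyst has judged to be noise in this environment -- the
# kind of tuning decision a tool cannot make on its own.
SUPPRESSED_RULES = {"ICMP ping sweep"}

actionable = [a for a in alerts if a["rule"] not in SUPPRESSED_RULES]

# Summarize what remains so an analyst can prioritize.
print(Counter(a["rule"] for a in actionable))
# Counter({'SQL injection attempt': 2})
```

Without someone qualified to decide what belongs in that suppression set, the tool either drowns analysts in noise or silently discards real attacks.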

On the intrusion detection front, the government is moving headlong in this direction, but has no intention of leaving the management of such capabilities up to individual agencies. Under the Einstein program sponsored by DHS and to be run by the National Security Agency, all federal network traffic will be monitored centrally, not only for intrusion detection but also for prevention, in the form of blocking traffic or disabling network segments when malicious activity is detected. Monitoring all federal networks is made technically feasible, at least for Internet connectivity, by the Trusted Internet Connections program — under which the entire federal government ostensibly will consolidate Internet points of presence down to fewer than 100 — and by plans under Einstein to place sensors within the physical environments of major providers of telecommunications infrastructure to the federal government.

Trust in cloud service providers no different than for other outsourced IT

With the private sector embracing outsourced IT services and the federal government apparently eager to follow suit, it should come as no surprise that both proponents and skeptics of IT service outsourcing (now under the new and more exciting moniker of “cloud computing” instead of the more pedestrian “software as a service” or “application service provider”) are highlighting positive and negative examples of the potential of this business model. Security remains a top consideration, particularly when discussing public cloud computing providers, but some of the security incidents brought to light recently actually do more to emphasize the similarity between cloud computing requirements and those associated with conventional IT outsourcing. For any organization to move applications, services, or infrastructure out of its own environment and direct control and give that responsibility to some other entity, the organization and the service provider have to establish a sufficient level of trust, which itself encompasses many factors falling under security and privacy. The basis of that trust will vary for different organizations seeking outsourcing services, but the key for service providers will be to ensure that once that level of trust is agreed upon, the provider can deliver on its promises.

To illustrate this simple concept, consider the case last month when T-Mobile users were notified by Microsoft that the Danger service providing data storage and backup to T-Mobile Sidekick users had failed, with all data lost without hope of recovery. Despite that dire message initially communicated by Microsoft to users, it turned out that the data was recoverable after all, but the incident itself suggests a breakdown in internal data backup procedures — just the sort of thing that would be addressed in the service level agreements negotiated between outsourcing customers and cloud computing providers. While any such SLA would likely carry financial or other penalties should the provider fail to deliver the contracted level of service, without confidence that providers can actually do what they say they will, even customers who are compensated for their losses are unlikely to stick with the providers over time. There was some debate as to whether this specific incident was really a failure of cloud computing or not, but the semantic distinction is not important. Organizations considering outsourcing their applications and services need to assess the likelihood that the outsourced service provider can implement and execute the processes and functions on which the applications depend, at least as reliably as the organizations themselves could if they kept operations in-house.
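
The failure mode at issue is straightforward to test for. The Python sketch below (using throwaway temporary files in place of real data) shows the kind of basic backup verification, confirming that the copy exists and matches the source, that an SLA can require but only the provider can actually perform.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path):
    """Checksum a file so a backup copy can be verified against its source."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_backup(source, backup):
    """A backup that is never verified or test-restored is just a hope;
    this check confirms the copy exists and is byte-identical."""
    return Path(backup).exists() and sha256(source) == sha256(backup)

# Illustrative usage with temp files standing in for real data:
workdir = Path(tempfile.mkdtemp())
src = workdir / "data.db"
src.write_text("customer records")
bak = workdir / "data.db.bak"
shutil.copy2(src, bak)
assert verify_backup(src, bak), "backup failed verification"
print("backup verified")
```

An SLA can attach penalties when a check like this fails; it cannot make the provider run it.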

Even where the risks in question are more specific to the cloud model (such as the cross-over platform attacks to which logically separate or virtual applications may be vulnerable), the key issues are the same as those seen in more conventional environments. The risk of application co-location with insufficient isolation exists just as surely in internally managed IT environments as it does in the cloud. A fairly well-publicized example occurred several years ago in the U.S. Senate, when Democratic party-specific documents stored on servers that were supposed to be tightly access controlled were instead available to GOP staffers; the problem was traced to poor configuration on servers used by the Senate Judiciary Committee that were shared by members of both parties. The Senate has since implemented physical separation of computing resources in addition to logical access controls based on committee membership and party affiliation.
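
A configuration audit for this kind of co-location risk can be as simple as confirming that each tenant’s directory is owned by the right group and closed to everyone else. The Python sketch below (for a Unix-like system, with hypothetical paths and group names) illustrates the check; a shared server configured the way the Senate’s reportedly was would fail it.

```python
import grp
import stat
from pathlib import Path

# Hypothetical layout for a shared server: each party's documents live
# under a directory owned by that party's group, closed to everyone else.
EXPECTED = {
    "/srv/judiciary/majority": "majority-staff",
    "/srv/judiciary/minority": "minority-staff",
}

def too_permissive(path, expected_group):
    """Flag directories owned by the wrong group or readable by 'other' --
    the sort of misconfiguration behind the Senate incident."""
    st = Path(path).stat()
    wrong_group = grp.getgrgid(st.st_gid).gr_name != expected_group
    open_to_others = bool(st.st_mode & stat.S_IRWXO)
    return wrong_group or open_to_others

for path, group in EXPECTED.items():
    if not Path(path).exists():
        print(f"{path}: not present (paths here are hypothetical)")
    elif too_permissive(path, group):
        print(f"{path}: access broader than group {group}; review configuration")
```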

These examples highlight the importance of maintaining focus on fundamental security considerations — like server and application configuration, access controls, and administrative services like patching and backup — whether you’re running your own applications on your own infrastructure or relying on the cloud or any other outsourcing model.