New research identifies additional risks for applications in the cloud

With great attention focused on the potential for cloud computing services to re-shape the way public and private sector organizations manage their IT infrastructure and computing environments, a paper published this month by researchers from MIT and UCSD may provide more good reasons for caution in moving to outsourced services from prominent third-party cloud computing vendors like Amazon, Microsoft, and Google. Based on an analysis of Amazon’s Elastic Compute Cloud (EC2), but one the authors suggest is generally applicable to other providers, the paper identifies a number of vulnerabilities that can be exploited against cloud-hosted applications running in virtual machines multiplexed on the same physical server. The authors studied how new virtual machines are provisioned and identified ways to map the cloud infrastructure, such that an attacker could reliably place an attacking virtual machine instance on the same server as the virtual machine hosting the application the attacker sought to compromise. Co-residence of this sort might understandably let a malicious user mount “side channel” attacks against whatever other applications happen to be running on the same server, but the research goes further: it indicates that an attacker targeting a specific service can achieve co-residence with that service, albeit at a cost of more time and money.

It’s important to note that the authors work under the assumption that the cloud computing service provider itself is trusted. There are known risks such as the compromise of provider staff or attacks directed at hypervisors or other virtual machine administration tools, but the attack vector on which the paper focuses is feasible even when the integrity of the provider’s security environment is maintained. The threat model used in the research also excludes direct attacks against applications; those threats exist for cloud-hosted and conventionally hosted applications alike, and there is no theoretical increase in risk to a network-accessible application simply because it happens to be running on outsourced infrastructure. Instead, as the authors themselves note, the research focuses “on where third-party cloud computing gives attackers novel abilities; implicitly expanding the attack surface of the victim” (Ristenpart, Tromer, Shacham, & Savage, 2009; emphasis in the original).

Cloud computing service providers might do well to take note both of the issues presented in the paper and of the recommendations the authors make to mitigate the risks they found. These recommendations include revisions to business and administrative practices as well as technical defensive measures.

Reference:

Ristenpart, T., Tromer, E., Shacham, H., & Savage, S. (2009, November). Hey, you, get off of my cloud: Exploring information leakage in third-party compute clouds. Paper presented at the 16th Association for Computing Machinery Conference on Computer and Communications Security, Chicago, IL.

Health Net breach highlights weaknesses in state-level breach laws

While affected Connecticut residents and authorities are understandably upset about the recently reported loss by regional health plan provider Health Net of personal information on all 446,000 Connecticut customers served by the plan, the company’s six-month delay in making the breach public is seen as especially egregious. Connecticut has had a breach disclosure law on the books since 2006, but the statute sets no explicit timeframe in which disclosure must occur, saying only that “disclosure shall be made without unreasonable delay” (699 Gen. Stat. Conn. §36a-701b). The law also includes a provision by which disclosure is not required if it can be determined that the breach is not likely to result in harm to the individuals whose information has been lost, but this exception still requires notification of, and consultation with, appropriate government authorities to arrive at the determination that no harm will be done. It appears that Health Net did not follow the spirit of the law in either respect, especially given the company’s own conclusion that the data — contained on a portable disk drive and stored in a format proprietary to an application Health Net used to access it — was not encrypted and therefore could probably be read by anyone who acquired it.

This incident occurred before the federal data breach disclosure provisions of the HITECH Act went into effect (Connecticut’s law is not limited to health information, but covers all personal information), but under those rules Health Net would be subject to federal penalties in addition to any punitive action taken at the state level. The health data breach disclosure rules use the same “without unreasonable delay” language found in the Connecticut statute, but add a maximum of 60 days from the date the breach is discovered (74 Fed. Reg. 42749 (2009)). Of course, the federal rules also include a harm exception like Connecticut’s, so there are limits to how much federal-level regulation removes the subjectivity surrounding data breach disclosures. Still, the Health Net example highlights the need for statutory specificity to eliminate some of the room to equivocate that data-losing organizations now have.

Proposed federal P2P ban might extend to personal computers

The latest development in the wake of the unauthorized release of information about a House ethics investigation is newly proposed legislation, the Secure Federal File Sharing Act (H.R. 4098), that would ban the use of peer-to-peer file sharing software in the federal government. As noted in many articles about the draft legislation, the bill would not only prohibit government employees and contractors from installing or using P2P technology on federally owned computers or those operated on the government’s behalf; it would also set policies constraining the use of file sharing software on non-government computers used for home-based remote access to, or telework involving, federal systems. This is of course not the first example of the government extending policy into employees’ homes, but it demonstrates quite clearly the importance government agencies are placing on preventing data loss or disclosure.

Despite the reactive nature of H.R. 4098, there are already federal guidelines in place on securing computers and other electronic devices used for telework or remote access. NIST has two Special Publications on the topic: SP800-114, published in 2007, specifically addresses the security of devices used for remote access, while SP800-46, revised and updated in June 2009, addresses telework security issues somewhat more broadly. Both documents mention peer-to-peer technology as a potential security risk. SP800-114 is probably the more relevant of the two to the new House bill, as it includes specific sections on securing home network environments. What the Special Publications don’t do that the legislation would is establish formal policies (as opposed to recommended practices) for the use of file sharing software. The challenge in establishing and enforcing security policies for non-government locations like employees’ homes is twofold: making employees aware of what they need to do (and not do) to avoid becoming a vulnerability, and giving them the tools and skills to implement appropriate procedures and controls in their own computing environments.

CDT offers a good explanation of user-centric identity issues

The Center for Democracy and Technology (CDT) has a good summary up on its site detailing a variety of policy issues related to user-centric identity management. There is a lot of attention in the market focused on federated identity management in general, and user-centric identity in particular, but as CDT and others point out there are still plenty of important security and privacy considerations to be addressed. This discussion is in the same general vein as the rise of claims-based identity management, which got a boost when Microsoft added support for that identity model in its Geneva platform and made it part of the .NET framework. The topic is timely and relevant once again in the health IT context, as the Center for Information Technology at the National Institutes of Health last fall engaged in a pilot project to assess the use of open identity in the federal government, one of several pilots launched in coordination with the government-wide Identity, Credential, and Access Management (ICAM) initiative.

Among the interesting reading available through the CDT site is a recently released white paper that offers a detailed analysis of salient issues with user-centric identity management, focusing on governance and policy. Also linked on the CDT page is an ICAM-produced document called the Trust Framework Provider Adoption Process (TFPAP), which details a process and set of assessment procedures that federal entities can follow to evaluate the trust frameworks of third parties seeking to serve as identity providers and credential issuers in support of federated identity management. The TFPAP is intended to help determine whether credentials issued by such third parties will satisfy the e-authentication requirements established by the government (and described in NIST Special Publication 800-63), at least at E-Authentication Levels 1 and 2 and non-PKI Level 3. The ICAM document provides a lot of useful technical detail on the relevant e-authentication requirements and, as a side benefit, offers an interesting example of using a technically focused approach to establish and consistently evaluate the trust models represented by different trust frameworks.

New OWASP Top 10 RC places injection at the top of the list

The Open Web Application Security Project (OWASP) has published the first release candidate for its “Top Ten Most Critical Application Security Risks,” which will supersede the previous version published in 2007. The OWASP project team made an explicit shift to focusing on risks rather than the vulnerabilities that were the focus of previous Top Ten lists, in order to call attention to the issues likely to have the greatest impact on organizations. As described in a summary presentation separate from the RC document itself, for 2010 “Injection” takes the top position on the list, while “Cross-Site Scripting” drops to second place from the first position it held in 2007 (as an interesting side note, the “Unvalidated Input” vulnerability that topped the first OWASP Top Ten list in 2004 is no longer among the issues addressed). Most of the 2007 vulnerabilities remain in some form on the 2010 risk list, with a new entry for “Unvalidated Redirects and Forwards” and the re-appearance of “Security Misconfiguration,” which was absent from the 2007 list but appeared on the 2004 list as “Insecure Configuration Management.”

The focus on injection (not just SQL injection, but injection against any interpreter that can be made to execute commands embedded in the data submitted to the application) reflects a combination of the large number of applications still vulnerable to this class of attack and the severe impact that exploiting an injection flaw can have. The primary mitigation boils down to keeping untrusted data separate from commands and queries, whether through parameterized interfaces and stored procedures or by encoding input before it is sent to the interpreter; these are not technically complicated measures, so the prevalence of injection vulnerabilities defies easy explanation.
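As a concrete illustration of the parameterized-query defense, here is a minimal sketch using Python’s built-in sqlite3 module; the table, data, and payload are invented for the example and are not drawn from the OWASP materials:

```python
import sqlite3

def find_user(conn, username):
    """Look up a user without building SQL through string concatenation.

    The ? placeholder makes the driver treat `username` strictly as data,
    so a payload like "alice' OR '1'='1" cannot alter the query structure.
    """
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Set up a throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# A classic injection payload: with naive string concatenation it would match
# every row; with a bound parameter it matches none.
payload = "alice' OR '1'='1"
print(find_user(conn, payload))   # []
print(find_user(conn, "alice"))   # [(1, 'alice')]
```

Had the query been assembled as `"... WHERE name = '" + username + "'"`, the same payload would rewrite the WHERE clause and return every user, which is exactly the command/data confusion the Top Ten entry describes.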

At first glance, the most surprising removal from the 2007 list may be “Information Leakage and Improper Error Handling,” given the current market emphasis on data loss prevention. That vulnerability, however, refers to situations where systems or applications divulge too much information about their configuration, operational characteristics, or other internal details that attackers would find useful in compromising the system. What has been carried forward from previous iterations of the Top Ten list is the detailed description of how each risk manifests and how the underlying vulnerabilities may be exploited, along with prescriptive guidance on ways to mitigate each risk, including design-level proactive measures where applicable.
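To make the error-handling half of that vulnerability concrete, a hypothetical request handler can log full diagnostic detail server-side while returning only a generic message to the client; the function names and error text below are invented for illustration:

```python
import logging
import traceback
import uuid

logger = logging.getLogger("app")

def handle_request(action):
    """Run a handler; log failure detail internally, return a generic message."""
    try:
        return {"status": 200, "body": action()}
    except Exception:
        ref = uuid.uuid4().hex  # opaque id so support staff can find the log entry
        logger.error("incident %s\n%s", ref, traceback.format_exc())
        # No exception type, stack trace, SQL text, or file paths reach the client.
        return {"status": 500, "body": "Internal error (ref: %s)" % ref}

def flaky_handler():
    # Simulates a failure whose message contains sensitive internals.
    raise RuntimeError("connect failed: host=db01.internal user=admin")

response = handle_request(flaky_handler)
print(response["body"])  # e.g. "Internal error (ref: 3f2a...)"
```

Echoing the raw exception (hostnames, account names, stack frames) to the user is precisely the kind of leakage the 2007 list item covered; the opaque reference id preserves supportability without exposing any of it.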