Cyber insurance transfers risk but doesn’t replace due care

The ongoing series of high-profile data breaches reported by companies across multiple industry sectors – including major retailers (Target and Home Depot), health insurers (Anthem and Premera), online service vendors (Uber), hotels (Mandarin Oriental and Hilton HHonors), and entertainment (Sony) – has raised awareness of the diverse and sophisticated nature of the threats that organizations face and increased interest among executive teams in ways to reduce risk exposure from data breaches. One increasingly popular option is cyber insurance, particularly to cover corporate liability from breaches and the ensuing harm to consumers, and to pay the costs of responding to breaches, such as notifying affected individuals and providing credit monitoring services. Firms that underwrite cyber insurance and the companies that seek such coverage are separating cyber liability coverage from conventional commercial general liability policies. This separation gives policyholders greater confidence that the potential damages from a cyber incident will be covered, but it also allows insurers to define exactly what types of incidents and damages are covered and to prescribe the conditions under which claims will be honored. Those shopping for cyber insurance should also be aware that while there are now dozens of insurers offering such policies, the terms of coverage vary widely.

For organizational executives and risk managers looking for a means to transfer (rather than mitigate or accept) risks related to IT security and privacy, cyber liability insurance may be a terrific option. These companies should be mindful, however, that securing cyber insurance coverage does not diminish their obligations to ensure adequate protective measures are in place for customer data and other IT assets. Adding insurance as a response to identified risks should therefore not be seen as a substitute for implementing available security and privacy controls, as these measures may be necessary to satisfy the standard of due care. The standard of due care in American tort law holds that organizations can be found liable if they fail to implement readily available technologies or practices that could mitigate or prevent loss or damage. The legal precedent traces back more than 80 years to a 1932 decision by the U.S. Second Circuit Court of Appeals, familiarly known as the T.J. Hooper case. The case involved two tugboats (the T.J. Hooper and the Montrose) whose coal barges sank off the New Jersey coast in a storm. The cargo owner sued the barge company and the tugboat operators to recover its loss. The court ruled that both the barges and the tugboats themselves were “unseaworthy,” the tugs because they were not equipped with radios that could have alerted their pilots to the impending storm. Although the court noted that the use of such radios was not yet widespread, it nevertheless found the tugboat operators liable because radios were available and, had they been in place, the bad weather and the subsequent loss of cargo could have been avoided.
The modern lesson is that where technology is available that can reasonably be expected to prevent or reduce the likelihood of loss or damage, under the standard of due care an organization may be held responsible for implementing that technology. This means, for instance, that organizations that have not established security monitoring or intrusion detection or prevention controls may find their cyber insurers unwilling to accept claims for breaches and resulting damages.

Installing Snort on Windows

On March 12, the Sourcefire team announced the release of Snort 2.9.7.2, the latest update to one of the most popular open source network IDS tools. Detailed instructions for installing Snort on either Ubuntu Linux or Windows 7 are available under the Learning tab of this website. All things being equal, installing Snort on Linux is preferable to Windows, especially for real-world use, but for learning about the tool or experimenting with rule writing and alert generation, either operating system is workable. The Windows approach is often preferred by less technical users looking to understand the basics of Snort because the Windows installation is more automated and takes much less time than it does on Linux. As the installation video shows, the Windows installation process can be completed from start to finish in as little as 20 minutes.
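Once installed, a quick way to confirm that Snort is actually generating alerts is to add a simple local rule and run Snort in IDS mode against ping traffic. The paths and interface number below assume a default Windows install under C:\Snort and are illustrative only:

```
# Append to C:\Snort\rules\local.rules (the default Windows install
# path -- an assumption) to verify that alerting works:
alert icmp any any -> $HOME_NET any (msg:"ICMP test rule"; sid:1000001; rev:1;)

# Then start Snort in IDS mode with console alerting, e.g.:
#   snort -W                                    (list interface numbers)
#   snort -i 1 -c C:\Snort\etc\snort.conf -A console
# and ping the host from another machine; each echo request
# should print an alert to the console.
```

Local rules conventionally use SIDs of 1,000,000 and above to avoid colliding with the officially distributed rule sets.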

Is Clinton’s use of a private email server a big deal or not?

A little more than a week after the New York Times first reported that she used a personal email address rather than a government account during her tenure as Secretary of State, Hillary Clinton addressed the situation in a press conference in New York on March 10. Amid the many swift, often partisan, reactions to the somewhat unsatisfying explanation she offered are two broader questions that may turn out to be more relevant than whether Clinton was, intentionally or inadvertently, keeping from public scrutiny any details about her work as Secretary of State. While it seems fairly obvious that her decision to handle her email on her own was a poor choice, the significance of that decision by itself can only be measured with the benefit of as-yet-undetermined details about whether she complied with federal record-keeping regulations and whether the private server and the communications it handled were secured sufficiently to provide adequate protection, particularly against unauthorized disclosure.

The first of these is whether Clinton’s use of a personal account and corresponding privately managed email server violated federal regulations (including but not limited to the Federal Records Act) or State Department or other executive branch requirements. The Times initially reported the situation using language that strongly implied Clinton might have violated federal law, but subsequent articles more accurately described the government regulations, State Department guidelines, and email preservation capabilities that were in place during Clinton’s time as Secretary. The revised consensus opinion seems to be that personal email use was discouraged but not forbidden, although individuals using personal instead of government email accounts were clearly obligated to ensure appropriate security measures were in place to protect email communications. Federal records management regulations do require agencies to create and preserve documentary materials (in paper or electronic form) that relate to the conduct of official duties by agency personnel or to the transaction of public business. By furnishing her government-related emails to the State Department, Clinton would be doing precisely what federal regulations require. There is a separate but related question as to whether Clinton should have turned over the entire contents of the email server to the government for review, instead of first removing what she considered to be personal communications. While Clinton opened herself to scrutiny by preemptively separating (and apparently deleting) her personal email, the relevant records management regulations clearly distinguish “federal records” from “personal files,” the latter being defined as “documentary materials belonging to an individual that are not used to conduct agency business” and explicitly “excluded from the definition of Federal records.” (36 CFR §1220.18)

The second question is whether Clinton’s private email system should be assumed to be insecure or at least less secure than the system operated by the State Department. There are at least two dimensions to consider on this point, because the answer depends both on how effectively the Clinton email server was initially configured and maintained over time and on the security of the government email that she should presumably have used instead. Most industry observers and government security types assume that the stringent security requirements derived from FISMA and other applicable regulations make it unlikely that a private email server – even one set up at the request of a former President of the United States – could match the security controls in place for an executive agency. The State Department, however, is not in the strongest position to make such a comparison, since suspected intrusions into its own email system prompted State to temporarily shut down the entire system last November and again as recently as yesterday. Potential breaches notwithstanding, maintaining the security of an Exchange server is not a one-time undertaking, but instead requires regular maintenance, monitoring, and updates. It remains unclear what level of day-to-day operational support Clinton’s email system has or who actually manages the server on the Clintons’ behalf.

Clinton invited some skepticism when she stated during the press conference that “there were no security breaches” of the email server, which was reportedly installed and maintained within the Clintons’ personal residence. It seems likely that, if the server had been implemented incorrectly or in a manner that exposed security vulnerabilities, someone would have drawn attention to any such weaknesses, particularly in the time since Clinton’s use of the clintonemail.com domain was publicized in 2013. The Clintons have not provided any rationale for choosing a Microsoft Exchange server (although it may have been something of a default, since Exchange is widely used across the government). The email server, which remains active and Internet-reachable via Outlook Web App, can easily be found, researched, and presumably subjected to scans or penetration attempts, yet to date there is only speculation as to how secure (or insecure) the server might be. It does appear that the Clinton email server permits the use of username and password credentials for access, in contrast to the two-factor authentication in place at the U.S. House of Representatives, for instance, which requires users to have an RSA SecurID token to authenticate. There are many federal civilian agencies that rely solely on usernames and passwords, so if the Clintons chose to do the same, that would not be outside the government norm. Security analysts might be more interested to know what sort of intrusion detection system or network monitoring, if any, is in place to watch the server for signs of unauthorized access attempts.

Feds seek centralized threat analysis with CTIIC

The Obama administration, seeking to increase the quantity and quality of its cyber intelligence and enhance its ability to respond quickly to cyber attacks, will create a new Cyber Threat Intelligence Integration Center (CTIIC). Lisa Monaco, the Assistant to the President for Homeland Security and Counterterrorism, formally announced the creation of the new agency on February 10 during a Director’s Forum at the Wilson Center entitled “Cyber Threats and Vulnerabilities: Securing America’s Most Important Assets.” The new Center will not perform data collection, but instead will aggregate and analyze data collected by numerous other government entities (and, potentially, private sector firms as well). With this specialized role, the administration is positioning CTIIC as complementary to, not duplicative of, existing functions across government that conduct various cybersecurity activities. The new Center will be under the direction of the Director of National Intelligence – an organizational positioning likely driven at least in part by the need to include cyber-attack response within its sphere of operations. No civilian agency (even DHS) holds the authority to launch proactive or reactive attacks against cyber adversaries, but these capabilities both exist and are authorized for the U.S. Cyber Command and other specialized branches of the military and intelligence community.

The potential for “mission confusion” certainly exists in the federal government. There is already a National Cybersecurity and Communications Integration Center (NCCIC) within the Department of Homeland Security and a National Cybersecurity Center of Excellence (NCCOE) at NIST. The former, like the U.S. Computer Emergency Readiness Team (US-CERT) it manages, focuses its attention largely on security threats and vulnerabilities applicable to the U.S. government, although private sector organizations are certainly able to communicate with NCCIC and benefit from its analysis. The NCCOE, in contrast, serves businesses with information about security solutions leveraging commercially available technology. There are of course numerous programs with a role in cybersecurity and defense — including the FBI, NSA, DHS, DoD, CIA, and other civilian, military, and intelligence agencies.

What seems to be different about the newly proposed center is the intention to address state actors (Monaco specifically mentioned China, Russia, Iran, and North Korea) and non-state hacking groups like Anonymous. Historically, private sector organizations have been reluctant either to share threat and attack information with the federal government or to subject themselves to government regulations and oversight. With the notable exception of companies with roles in critical infrastructure sectors like energy and transportation and those in closely regulated industries such as health care and financial services, private sector firms have few federal obligations to publicize anything that happens within their computing environments. Although almost all states have enacted some type of regulation requiring companies to notify individuals if their personal information is compromised in a security breach, these rules generally do not mandate full disclosure of the nature of any successful attacks or the vulnerabilities that were exploited. Monaco noted during her speech that during the Sony Pictures incident, the government quickly shared cyber threat information in the form of attack signatures with private sector firms so that they could update their defenses and, presumably, try to avoid falling victim to a similar attack. The administration clearly would like more communication from the private sector in these areas than it currently gets. A neutral observer might accurately suggest that private sector organizations are likely to reach out to the government and share information only when they have been compromised and need help, but not as a routine preventive defense practice.
Not everyone accepts the implied assertion that the government has better or more complete information than private security researchers, but the definitive attribution the administration made in naming North Korea as responsible for the Sony hack seemed to indicate that the government had more evidence to go on than any of the security analysts that came to different conclusions.

During the delivery of her prepared remarks, Monaco offered a simple rationale for the new center: “Currently, no single government entity is responsible for producing coordinated cyber threat assessments, ensuring that information is shared rapidly among existing cyber centers and other elements within our government, and supporting the work of operators and policy makers with timely intelligence about the latest cyber threats and threat actors.”

In a Q&A session following the speech, Monaco responded to a specific question from the event moderator regarding recent criticism that the CTIIC is nothing more than another layer of government bureaucracy and is, simply, unnecessary. She reaffirmed the administration’s position that there is a critical gap in current government analytical and information sharing capabilities. The goal for the administration is more complete and more rapidly produced actionable intelligence regarding threats. It remains to be seen whether the Center will be able to overcome the reluctance of individual agencies and programs to hand over their information to the Center, but the administration continually cites the positive example of the National Counterterrorism Center formed in response to the 9/11 attacks.

There is almost unquestionably a logical argument to be made that an existing agency working in the cybersecurity realm – perhaps DHS or NSA – could simply have its scope of responsibility expanded instead of creating a wholly new piece of the federal organizational structure. It is far from clear, however, that effecting a change in mission for an existing agency would be any easier to bring about than carving out a newly defined one. For instance, the updated Federal Information Security Modernization Act (FISMA), passed with bipartisan support at the end of 2014, divides security oversight among multiple agencies, giving most operational security responsibilities to DHS. But FISMA only applies to federal executive agencies (not to the legislative or judicial branches of government, let alone the private sector), and it also exempts many aspects of military and intelligence operations because it does not apply to “national security systems.” The administration’s take is that coordinated analysis of threat and attack information from all available sources is a crucial but missing piece in the government’s strategy to more effectively address cyber threats.

Anthem breach enabled by compromising administrator credentials

As an internal investigation continues into the massive data breach reported last week by Anthem, the company has confirmed reports that administrators who discovered the breach in late January noticed unusual activity on Anthem’s database systems – specifically that queries were being run against the database using the authenticated accounts of Anthem administrators. This information suggests that the attackers were able to access the database and retrieve data from it because they were in possession of valid administrator credentials. What’s less clear is how or when those credentials were compromised, or what level of authentication was required of administrators logging on to the database. If it turns out, as some observers have surmised, that one or more of Anthem’s administrators was victimized by a phishing attack, then this would also suggest that database administrators require only usernames and passwords to authenticate to the database. Presumably the successful attackers also needed to penetrate the insurer’s network perimeter in order to directly access the database, so perhaps a review of remote access logs associated with the compromised accounts will help confirm or refute the source of the attack.
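A review of the kind suggested above can be sketched in a few lines: given remote-access log entries and a set of known-compromised accounts, flag logins that originate outside the corporate address space. The log format, account names, and address ranges below are entirely hypothetical and stand in for whatever Anthem's actual logs contain:

```python
# Hypothetical, simplified remote-access log entries:
# (timestamp, account, source_ip) -- fields and values are illustrative.
LOG = [
    ("2015-01-20 02:14:07", "dbadmin1", "203.0.113.50"),
    ("2015-01-20 09:01:33", "dbadmin1", "10.0.8.21"),
    ("2015-01-21 02:45:12", "dbadmin2", "203.0.113.50"),
]

# Assumed internal (corporate) address prefixes for this sketch.
INTERNAL_PREFIXES = ("10.", "192.168.")

def flag_suspect_logins(entries, compromised_accounts):
    """Return logins by known-compromised accounts that came from
    outside the internal network."""
    suspects = []
    for ts, account, ip in entries:
        if account in compromised_accounts and not ip.startswith(INTERNAL_PREFIXES):
            suspects.append((ts, account, ip))
    return suspects

hits = flag_suspect_logins(LOG, {"dbadmin1", "dbadmin2"})
# The two external logins are flagged; the internal one is not.
assert hits == [
    ("2015-01-20 02:14:07", "dbadmin1", "203.0.113.50"),
    ("2015-01-21 02:45:12", "dbadmin2", "203.0.113.50"),
]
```

In practice the same idea would be applied against VPN or Active Directory logs with proper CIDR matching and timestamp correlation rather than simple string prefixes.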

Much has been made in the press of the fact that the data stolen from Anthem was not encrypted (which is recommended but not required under HIPAA). If the retrieval of the data occurred using administrator accounts, however, then any database-, drive-, or server-level encryption of data at rest would have been irrelevant because such data is typically decrypted on-the-fly when it is accessed by authorized users. The type of encryption advocated to protect health data is most useful to mitigate the physical theft of computers, hard drives, or removable media (such as backup tapes), or to safeguard sensitive data contained in database extracts or files to be electronically transferred from one location to another.
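The point about transparent ("on-the-fly") decryption can be illustrated with a toy sketch. The "cipher" here is a throwaway XOR keystream built from SHA-256, standing in for real encryption such as AES, and the class is an invented stand-in for a database with at-rest encryption; none of these names come from Anthem's actual systems:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream derived from SHA-256 -- a stand-in for real
    # encryption; do not use this construction in production.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class ToyEncryptedStore:
    """Mimics transparent at-rest encryption: rows are stored encrypted,
    but any authenticated caller gets plaintext back automatically."""
    def __init__(self, key: bytes):
        self._key = key
        self._rows = []  # ciphertext "on disk"

    def insert(self, record: bytes):
        self._rows.append(xor_crypt(record, self._key))

    def query(self, authenticated: bool):
        if not authenticated:
            raise PermissionError("login required")
        # Decryption happens on the fly for any valid session --
        # stolen admin credentials see exactly what a real admin sees.
        return [xor_crypt(c, self._key) for c in self._rows]

store = ToyEncryptedStore(key=b"server-side-key")
store.insert(b"SSN=123-45-6789")

# The raw storage is unreadable without the key...
assert store._rows[0] != b"SSN=123-45-6789"
# ...but an "authenticated" attacker reads plaintext regardless.
assert store.query(authenticated=True) == [b"SSN=123-45-6789"]
```

This is why at-rest encryption defeats a stolen hard drive but not a stolen credential: the protection sits below the layer at which the attacker is operating.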

From the beginning, Anthem has characterized the breach as the result of “a very sophisticated external cyber attack.” Nothing in the subsequent reporting or purported expert analysis has yielded evidence to the contrary – in fact, there are indications that the breach itself may have been the culmination of an effort that began many months earlier with a concerted and prolonged attack consistent with an “advanced persistent threat.” To help with its investigation of the breach, Anthem has engaged the security firm Mandiant, probably best known in security circles for bringing to light the allegedly Chinese government-sponsored cyber espionage group the company terms “APT1.” Although it is most likely a coincidence, according to initial reports from the Anthem investigation, Chinese hackers are the leading suspects behind the breach.