11th Circuit court says no warrant needed for cell site location data

In a 9-2 ruling issued on May 5, a full 11-judge panel of the U.S. Court of Appeals for the 11th Circuit overturned a ruling by one of its own three-judge panels and decided that law enforcement authorities do not need a warrant to obtain cell tower location records or other business records created and maintained by telecommunications companies about subscribers or users of their services. In the case under consideration, United States v. Quartavious Davis, federal prosecutors had secured a conviction of Davis on multiple counts of armed robbery based in part on the introduction into evidence of telephone call records from wireless carrier MetroPCS containing details of Davis’ calls over more than two months. The MetroPCS records included the numbers Davis called and the physical location of every cell tower that connected those calls. While the cell tower location data could not precisely place Davis at the robbery sites, prosecutors used the data as evidence that Davis was at least near each of the locations around the times the crimes occurred. After his initial conviction, Davis’ attorneys appealed the decision from the federal district court to the 11th Circuit, where a three-judge panel upheld the convictions but ruled that the government had nonetheless violated Davis’ Fourth Amendment rights by obtaining the telephone records without a search warrant (which, it is important to note, existing law does not require).

The legal treatment of data produced by wireless carriers about the users of their cellular networks is an area of significant debate, in Congress as well as the courts, especially with respect to GPS data, cell site location data, and other information that carriers gather or produce as a routine part of providing wireless service to consumers and businesses. The Stored Communications Act (SCA) was enacted as Title II of the Electronic Communications Privacy Act of 1986 (ECPA), the law governing most aspects of telecommunications transmissions. Because the law was written nearly 30 years ago, applying it to modern communication transmission methods – including text messaging, cell phones, and satellite navigation systems such as GPS – sometimes seems to take the law into uncertain territory. There have been multiple attempts in Congress to modernize the ECPA, including an ultimately unsuccessful effort by Sen. Patrick Leahy in 2011 to strengthen legal protection for geolocation information such as GPS coordinates and cell site location data.

The most recent ruling focuses on the way in which the authorities obtained the cell tower location data and other call data from MetroPCS. The government, following established procedures spelled out by statute under the Stored Communications Act (18 U.S.C. §2703), applied to a federal magistrate judge for a court order for MetroPCS’ records that were “relevant and material to an ongoing criminal investigation,” as the statute requires. The law does not require the government to show probable cause to request such business records. The 11th Circuit opinion explicitly notes that, in this case, the government did not ask for, nor did it obtain, the contents of any telephone call, cell phone, or text message; any cell location information regarding when Davis’ cell phone was powered on but not in use for a call; or any GPS location information associated with the cell phone. Although the court order the government obtained met the applicable statutory requirements, Davis’ attorneys filed a motion to suppress the records, claiming the government’s ability to obtain and review the records constituted a search under the Fourth Amendment and therefore should have required a showing of probable cause and a search warrant. This is essentially an argument that the SCA, as codified, is unconstitutional when applied to cell tower location information. The District Court denied the motion, and the 11th Circuit ruling this week affirmed that decision. Davis’ attorneys apparently plan to appeal this decision to the U.S. Supreme Court, but it is far from certain that the Supreme Court would agree to hear the case, since other rulings at the appellate level so far seem to agree with the 11th Circuit and the Court is often reluctant to take on an issue unless there is disagreement among lower courts.

In breaking down the rationale for its findings, the 11th Circuit decision seems to follow conventional Fourth Amendment analysis to determine both whether the request for cell tower location information is a search and whether the records are something for which an individual like Davis can assert a reasonable expectation of privacy. The decision cites multiple Supreme Court precedents holding that individuals cannot have a reasonable expectation of privacy for certain types of business records “owned and maintained by a third-party business.” This line of reasoning has been used to refute an expectation of privacy for the telephone numbers a caller dials, and first the Fifth Circuit and now the 11th Circuit have extended that thinking to cell tower location data and other telecommunications provider information that does not include the actual content of calls. Simply put, Davis cannot argue for the suppression of the data because, “This type of non-content evidence, lawfully created by a third-party telephone company for legitimate business purposes, does not belong to Davis, even if it concerns him.” The Court also found that Davis had “no subjective or objective reasonable expectation of privacy” regarding the cell tower location data.

Lawsuit for improper access to medical records faces many challenges

In a legal action noted by several privacy-minded observers, a woman in Cabell County, West Virginia, filed suit in March against health care provider Marshall Health (the collective name for a group of clinical centers affiliated with Marshall University School of Medicine) for failing to prevent unauthorized access to her daughter’s medical records by a Marshall Health employee. According to an online article published by the West Virginia Record, the plaintiff’s daughter sought medical treatment from Marshall Health, where a woman in a relationship with the girl’s father was an employee. The employee, either acting on her own or on behalf of the girl’s father, accessed the daughter’s electronic medical records on multiple occasions over a period of more than a year. The employee was not involved in the daughter’s care, so her access to the medical records was unauthorized – a fact that Marshall Health acknowledged – and therefore constituted a breach of privacy. According to the account in the WV Record, Marshall Health management became aware of its employee’s activity only after the plaintiff contacted Marshall’s CIO to express her concerns that her daughter’s records were being accessed (and potentially altered) improperly. Marshall Health apparently had no automated monitoring of employee access to records and never provided any notification about its employee’s activity during the time it allegedly occurred, although it did confirm the unauthorized access in a letter responding to the plaintiff’s concerns. She is suing for compensatory and punitive damages.

It’s not entirely clear what the legal or statutory basis for this lawsuit might be. Like most states, West Virginia has enacted laws covering the protection of consumer information, including requirements for entities holding computerized personal information when breaches of the security of that information occur. The applicable sections of the West Virginia code, however, define a security breach to mean unauthorized access to personal information that “has caused or will cause identity theft or other fraud” to a resident of West Virginia. The unauthorized access is not in dispute here, but the alleged harm doesn’t seem to be related to identity theft or fraud. Although the facts of the case raise issues that sound relevant under the Security Rule and the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA), there is no private right of action under HIPAA, so the plaintiff can’t bring suit under federal rules protecting the security and privacy of health-related personal information. It’s possible that the suit rests on a negligence claim, since the plaintiff claims that Marshall Health had a duty to protect the confidentiality of patient information and that it breached that duty when it failed to prevent unauthorized access by one of its employees to that information. The difficulty with that legal path is that, under U.S. tort law, to succeed with a claim of negligence the plaintiff must show actual damages as a direct result of the action (or inaction) that constitutes the breach of duty.

Under the HIPAA Privacy Rule covered entities like Marshall Health are required to maintain an accounting of disclosures of protected health information, but the regulations currently in force include an exception for disclosures related to treatment, payment, or health care operations. The employee implicated in this lawsuit may not have been engaged in any of those activities, but the exception for these “routine” types of disclosure often means that covered entities don’t produce detailed data access logs for their employees who have permission to access health record systems. The Health Information Technology for Economic and Clinical Health (HITECH) Act included a provision that would change accounting of disclosure rules to remove the exception for treatment, payment, and health care operations purposes, but that provision has never been implemented. As part of its consideration of that provision, the U.S. Department of Health and Human Services (HHS) actually proposed going further than the language in the law to add a requirement for covered entities to be able to provide an “access report” to individuals that would indicate who has accessed their electronic health information. The access report idea was contained in a Notice of Proposed Rulemaking published in 2011, but neither the access report nor changes to the accounting of disclosures regulation was included in the HITECH Omnibus Rule finalized in early 2013.

If the plaintiff’s allegations are true, then Marshall Health may in fact be in violation of HIPAA rules, some of which could serve to articulate the specific duty it owed to protect patient records from unauthorized access. The HIPAA Security Rule requires covered entities to “regularly review records of information system activity,” including audit logs and access reports. The simple fact that Marshall Health didn’t regularly monitor employee access to its systems may not, in and of itself, be sufficient justification for a breach of duty, since the regulations do not specify what “regularly” means. Because Marshall Health appears to admit its employee’s access was unauthorized, it presumably bears some fault that the unauthorized access occurred. Without a showing of the specific harm that resulted from the unauthorized access, however, the plaintiff cannot expect to prevail even if there is clear evidence that Marshall Health failed to act as it should have to prevent its employees from accessing patient data they do not explicitly need to perform their job duties.
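To make the “regular review of information system activity” idea concrete, the sketch below shows the kind of simple, periodic audit-log check that could surface this sort of unauthorized access. It is a minimal, hypothetical Python example: the log format, field names, and care-team data are invented for illustration and do not reflect any actual Marshall Health system.

```python
# Hypothetical sketch of a periodic access-log review of the sort the HIPAA
# Security Rule's "regular review of information system activity" contemplates.
# All field names and sample data are invented for illustration only.
import csv
import io

# Sample access log: which employee viewed which patient's record, and when.
ACCESS_LOG = io.StringIO("""\
employee_id,patient_id,timestamp
E100,P555,2015-03-01T09:14:00
E214,P555,2015-03-02T22:41:00
E100,P601,2015-03-03T10:05:00
E214,P555,2015-03-10T23:02:00
""")

# Care-team assignments: the only employees authorized to view each record.
CARE_TEAM = {
    "P555": {"E100"},
    "P601": {"E100", "E307"},
}

def flag_unauthorized(log_file, care_team):
    """Return log entries where the employee is not on the patient's care team."""
    flagged = []
    for row in csv.DictReader(log_file):
        authorized = care_team.get(row["patient_id"], set())
        if row["employee_id"] not in authorized:
            flagged.append(row)
    return flagged

if __name__ == "__main__":
    for entry in flag_unauthorized(ACCESS_LOG, CARE_TEAM):
        print("Review needed:", entry["employee_id"], "accessed record",
              entry["patient_id"], "at", entry["timestamp"])
```

Even a report this simple, run on a routine schedule, would presumably have flagged repeated access by an employee outside the patient’s care team long before the plaintiff contacted the CIO.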

Cyber insurance transfers risk but doesn’t replace due care

The ongoing series of high-profile data breaches reported by companies across multiple industry sectors – including major retailers (Target and Home Depot), health insurers (Anthem and Premera), online service vendors (Uber), hotels (Mandarin Oriental and Hilton HHonors), and entertainment (Sony) – has raised awareness of the diverse and sophisticated nature of threats that organizations face and increased interest among executive teams in ways to reduce risk exposure from data breaches. One increasingly popular option is cyber insurance, particularly to cover corporate liability from breaches and ensuing harm to consumers and to pay the costs of responding to breaches, such as notifying affected individuals and providing credit monitoring services. Firms that underwrite cyber insurance and the companies that seek such coverage are separating cyber liability coverage from conventional commercial general liability policies. This separation provides policy holders greater confidence that the potential damages from a cyber incident will be covered, but it also allows insurers to clearly define exactly what types of incidents and damages are covered and to prescribe conditions under which claims will be honored. Those shopping for cyber insurance should also be aware that while there are now dozens of insurers offering such policies, the terms of coverage vary widely.

For organizational executives and risk managers looking for a means to transfer (instead of mitigate or accept) risks related to IT security and privacy, cyber liability insurance may be a terrific option. These companies should be mindful, however, that securing cyber insurance coverage does not diminish their obligations to ensure adequate protective measures are in place for customer data and other IT assets. Adding insurance as a response to identified risks should therefore not be seen as a substitute for implementing many types of available security and privacy controls, as these measures may be necessary to satisfy the standard of due care. The standard of due care in American tort law says that organizations can be held liable if they fail to implement readily available technologies or practices that could mitigate or prevent loss or damage. The legal precedent for this traces back more than 80 years to a 1932 decision by the U.S. Second Circuit Court of Appeals, familiarly known as the T.J. Hooper case. This case involved two tugboats (the T. J. Hooper and the Montrose) that were towing coal barges that sank off the New Jersey coast in a storm. The cargo owner sued the barge company and the tugboat operators to cover its loss. The court ruled that both the barges and the tugboats towing them were “unseaworthy,” the tugs because they were not equipped with radios that could have been used to alert their pilots to the impending storm. Although the court noted that the use of such radios was not yet widespread, it nevertheless found the tugboat operators liable because radios were available and, had they been in place, the bad weather and the subsequent loss of cargo could have been avoided. The modern lesson is that where technology is available that can reasonably be expected to prevent or reduce the likelihood of loss or damage, under the standard of due care an organization may be held responsible for implementing that technology. This means, for instance, that organizations that have not established security monitoring or intrusion detection or prevention controls may find their cyber insurers unwilling to accept claims for breaches and resulting damages.

Installing Snort on Windows

On March 12, the Sourcefire team announced the release of Snort 2.9.7.2, the latest update to one of the most popular (and open source) network IDS tools. Detailed instructions for installing Snort on either Ubuntu Linux or Windows 7 are available under the Learning tab of this website. All things being equal, installing Snort on Linux is preferable to installing it on Windows, especially for real-world use, but for learning about the tool or experimenting with rule-writing and alert generation either operating system is workable. The Windows approach is often preferred by less technical users looking to understand the basics of Snort because the Windows installation is more automated and takes much less time than it does on Linux. As you can see from the video linked above, from start to finish the Windows installation process can be completed in as little as 20 minutes.
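For readers following the Windows instructions, the short Python sketch below shows one way to confirm that the installation succeeded: it invokes the Snort executable to print its version banner and then runs Snort’s built-in configuration self-test. The install path, configuration file location, and interface index are assumptions based on a typical default Windows install and may need adjusting for your system.

```python
# Hypothetical post-install check for Snort on Windows. The paths and the
# interface index below are assumptions for a typical default installation.
import subprocess

SNORT_EXE = r"C:\Snort\bin\snort.exe"    # default install location (assumed)
SNORT_CONF = r"C:\Snort\etc\snort.conf"  # main configuration file (assumed)
INTERFACE = "1"                          # interface index; list them with "snort -W"

def run_snort(args):
    """Run Snort with the given arguments and return (exit code, combined output)."""
    result = subprocess.run([SNORT_EXE] + args, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

if __name__ == "__main__":
    # 1. Print the version banner to confirm the binary runs at all.
    _, output = run_snort(["-V"])
    print(output)

    # 2. Ask Snort to parse and validate the configuration without capturing traffic.
    code, output = run_snort(["-T", "-c", SNORT_CONF, "-i", INTERFACE])
    print("Configuration test", "passed" if code == 0 else "failed")
```

If the configuration test fails, Snort’s output usually points to the offending line in snort.conf, which makes this a convenient first step before experimenting with custom rules.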

Is Clinton’s use of a private email server a big deal or not?

A little more than a week after the New York Times first reported that she used a personal email address rather than a government account during her tenure as Secretary of State, Hillary Clinton addressed the situation in a press conference in New York on March 10. Amid the many swift, often partisan, reactions to the somewhat unsatisfying explanation she offered are two broader questions that may turn out to be more relevant than whether Clinton was, intentionally or inadvertently, keeping from public scrutiny any details about her work as Secretary of State. While it seems fairly obvious that her decision to handle her email on her own was a poor choice, the significance of that decision by itself can only be measured with the benefit of as-yet-undetermined details about whether she complied with federal record-keeping regulations and whether the private server and the communications it handled were secured sufficiently to provide adequate protection, particularly against unauthorized disclosure.

The first of these is whether Clinton’s use of a personal account and corresponding privately managed email server violated federal regulations (including but not limited to the Federal Records Act) or State Department or other executive branch requirements. The Times initially reported the situation using language that strongly implied Clinton might have violated federal law, but subsequent articles more accurately described the government regulations, State Department guidelines, and email preservation capabilities that were in place during Clinton’s time as Secretary. The revised consensus seems to be that personal email use was discouraged but not forbidden, although individuals using personal instead of government email accounts were clearly obligated to ensure appropriate security measures were in place to protect email communications. Federal records management regulations do require agencies to create and preserve documentary materials (in paper or electronic form) that relate to the conduct of official duties by agency personnel or to the transaction of public business. By furnishing her government-related emails to the State Department, Clinton would be doing precisely what federal regulations require. There is a separate but related question as to whether Clinton should have turned over the entire contents of the email server to the government for review, instead of first removing what she considered to be personal communications. While Clinton opened herself to scrutiny by preemptively separating (and apparently deleting) her personal email, the relevant records management regulations clearly distinguish “federal records” from “personal files,” the latter being defined as “documentary materials belonging to an individual that are not used to conduct agency business” and explicitly “excluded from the definition of Federal records.” (36 CFR §1220.18)

The second question is whether Clinton’s private email system should be assumed to be insecure or at least less secure than the system operated by the State Department. There are at least two dimensions to consider on this point, because the answer depends both on how effectively the Clinton email server was initially configured and maintained over time and on the security of the government email that she should presumably have used instead. Most industry observers and government security types assume that the stringent security requirements derived from FISMA and other applicable regulations make it unlikely that a private email server – even one set up at the request of a former President of the United States – could match the security controls in place for an executive agency. The State Department, however, is not in the strongest position to make such a comparison, since suspected intrusions into its own email system prompted State to temporarily shut down the entire system last November and again as recently as yesterday. Potential breaches notwithstanding, maintaining the security of an Exchange server is not a one-time undertaking, but instead requires regular maintenance, monitoring, and updates. It remains unclear what level of day-to-day operational support Clinton’s email system has or who actually manages the server on the Clintons’ behalf.

Clinton invited some skepticism when she stated during the press conference that “there were no security breaches” of the email server, which was reportedly installed and maintained within the Clintons’ personal residence. It seems likely that, if the server had been implemented incorrectly or in a manner that exposed security vulnerabilities, someone would have drawn attention to any such weaknesses, particularly in the time since Clinton’s use of the clintonemail.com domain was publicized in 2013. The Clintons have not provided any rationale for choosing a Microsoft Exchange server (although it may have been something of a default choice, since Exchange is widely used across the government). The email server, which remains active and Internet-reachable via Outlook Web App, can easily be found, researched, and presumably subjected to scans or attempted penetrations, yet to date there is only speculation as to how secure (or insecure) the server might be. It does appear that the Clinton email server permits the use of username and password credentials for access, in contrast to the two-factor authentication in place at the U.S. House of Representatives, for instance, which requires users to have an RSA SecurID token to authenticate. There are many federal civilian agencies that rely solely on usernames and passwords, so if the Clintons chose to do the same that would not be outside the government norm. Security analysts might be more interested to know what sort of intrusion detection system or network monitoring, if any, is in place to watch the server for signs of unauthorized access attempts.
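As a simple illustration of how much of this kind of research relies on publicly observable information, the sketch below resolves a mail hostname, checks whether the HTTPS port typically used by Outlook Web App answers, and reads the validity dates on the server’s TLS certificate. The hostname is a placeholder, not a reference to any real server; the point is only that an outside observer can gather this much without any attempt at intrusion.

```python
# Illustrative sketch: gather publicly observable facts about a mail host
# (DNS resolution, HTTPS reachability, TLS certificate validity dates).
# The hostname below is a placeholder, not an assertion about any real server.
import socket
import ssl

HOSTNAME = "mail.example.com"  # placeholder hostname
HTTPS_PORT = 443               # Outlook Web App is typically served over HTTPS

def inspect(hostname, port=HTTPS_PORT, timeout=5):
    # DNS resolution: is the name publicly resolvable at all?
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")

    # TCP + TLS: does the HTTPS port answer, and what does its certificate say?
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Certificate valid from", cert.get("notBefore"),
                  "until", cert.get("notAfter"))

if __name__ == "__main__":
    try:
        inspect(HOSTNAME)
    except (socket.gaierror, TimeoutError, ssl.SSLError, OSError) as exc:
        print("Could not gather information:", exc)
```

None of this reveals how well a server is patched, monitored, or defended internally, which is why the more interesting questions about the Clinton system remain unanswerable from the outside.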