Retiring an email server with sensitive data on it? Learn some lessons from Clinton

In the latest chapter in the ongoing saga concerning Hillary Clinton’s use of a private email server for herself and some of her staff during her tenure as Secretary of State, the Washington Post reported that the IT services company Clinton hired to manage the server after she left the State Department has “no knowledge of the server being wiped” despite suggestions by Clinton and her attorney that the contents of the server had been permanently erased. Much has been made of the important technical distinction between deleting files or data on a computer and wiping the hard drive or other storage on a computer. As many people aside from Clinton seem to be aware, merely deleting files does not actually erase or remove them, but simply makes the storage space they take up available to be overwritten in the future. Depending on the use of a computer afterward, deleted data can remain in storage and may be retrievable through simple “undelete” commands or through forensic analysis. In contrast, wiping is meant to permanently remove data from storage by overwriting the space it occupied with random data; data erasure methods used by many government and private sector entities overwrite the data multiple times to better ensure that the original data cannot be retrieved or pieced back together.
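
To make the distinction concrete, here is a toy Python sketch of the overwrite-before-delete idea. This is purely illustrative and says nothing about what was or wasn’t done on Clinton’s server; real sanitization tools follow standards like NIST SP 800-88 and work at the whole-device level, and on modern SSDs wear leveling means file-level overwrites are not a guarantee.

```python
import os
import secrets

CHUNK = 1 << 20  # overwrite in 1 MiB chunks to avoid holding the file in memory

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Toy illustration of wiping: overwrite a file's bytes with random
    data several times before unlinking it. Merely calling os.remove()
    would only free the space, leaving the data recoverable."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(CHUNK, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push each pass out to the physical medium
    os.remove(path)
```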

Those who have been following the historical accounts of the Clinton email server may recall that there have actually been two servers in use – the first was set up and maintained at the Clintons’ home in Chappaqua, New York, while the second was put into service when Clinton moved her email system management to Platte River Networks. (Historical analysis of DNS records associated with clintonemail.com suggests the switch to a third-party host may have occurred in 2010 rather than 2013.) If tasks like server wiping were left to the Clinton team and not handled by Platte River, then it seems at least possible that the original server may not have been properly wiped when the data on it was transferred to the new server. According to Post reports, Platte River took possession of the original server and stored it at a data center facility in New Jersey until it handed the server over to the FBI at Clinton’s request. News accounts of the Platte River relationship explain that emails covering Clinton’s entire service as Secretary of State were on the original server and were migrated to a new server. The contents of the second server were subsequently copied to removable media in 2014 and either deleted or removed from that server. The latest details suggest that neither of the two servers may have been wiped, but since they ostensibly contain the same data (at least from the 2009-2013 period when Clinton was at State), if either server was not sanitized then many if not all of Clinton’s emails could be retrieved. Because the server and its data were migrated to a new server in 2013, there would have been little practical value in keeping the original server, especially if its contents had been securely erased. Clinton’s team should now feel some measure of relief that they did not dispose of the original server if it turns out that it wasn’t wiped.

From a security best practice standpoint, if in fact the Clinton email server was not wiped as Clinton and her team apparently intended, then this failure to permanently remove Clinton’s personal emails and any other data she didn’t wish to share with government investigators provides another good example of an operational security control that would presumably be in place on a government-managed email server but was lacking in Clinton’s private setup. The National Institute of Standards and Technology (NIST) refers to data wiping by the more formal term “media sanitization” and requires the practice for all information contained in federal information systems, regardless of the sensitivity level of the data. While it is certainly likely that at least some public and private sector organizations fail to perform data wiping on servers, computer workstations, and other hardware that includes writeable storage, it is a very common security practice among organizational and individual computer users.

The possibility that Clinton’s email hasn’t been, as her attorney and spokespeople have asserted, completely removed from the server may make it a bit harder for her critics to argue that Clinton’s deliberate action to wipe the server is a sign that she has something to hide, although it may be that she and her staff intended to permanently remove the emails and just didn’t have the technical knowledge to do it properly. This is troubling in part because of the implication that – notwithstanding the security skills of the State Department staffer the Clintons privately paid to manage the server they kept at their home – routine security practices may not have been put in place. When the use of the private server became widely known, several sources examined publicly available information about the clintonemail.com domain and the Microsoft Exchange server used to provide email services for Clinton and others. It’s hard to know whether even basic security recommendations from Microsoft were followed, but some have pointed to server and operating system fingerprinting results indicating the server was running Windows Server 2008 (and had not been upgraded to the more secure 2012 version). Aside from potential vulnerabilities associated with the OS and the Exchange 2010 software that may or may not have been patched, the server was also apparently configured to allow remote connections via both Outlook Web Access and an SSL VPN, each of which used self-signed digital certificates to establish secure connections. It makes sense that Clinton would want and need access to the server from anywhere, although a more secure approach would limit connections to Outlook email clients or ActiveSync-enabled devices. Regardless of how well (or poorly) the server was secured while it was operational, the steps taken to secure the data once the server was no longer in use provide a good example of what not to do.
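
As an aside, the self-signed-certificate observation is the kind of thing anyone can check from the outside. Below is a minimal Python sketch, using the standard library plus the third-party cryptography package, that fetches a server’s certificate and applies the classic heuristic that a matching issuer and subject suggests a self-signed certificate; mail.example.com is a placeholder host, not the actual server in question.

```python
import socket
import ssl

from cryptography import x509  # third-party 'cryptography' package

def fetch_cert(host: str, port: int = 443) -> x509.Certificate:
    """Retrieve a server's leaf certificate without validating it."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we want the cert even if it's untrusted
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return x509.load_der_x509_certificate(der)

cert = fetch_cert("mail.example.com")  # placeholder hostname
print("issuer :", cert.issuer.rfc4514_string())
print("subject:", cert.subject.rfc4514_string())
# issuer == subject is the classic hint of a self-signed certificate
print("self-signed?", cert.issuer == cert.subject)
```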

Want to reduce unauthorized login attempts? Use Google Authenticator

If you have a public website, you should know that your site is regularly scanned and otherwise accessed, both by web “crawlers” from Google, Bing, and similar search engines and by individuals or agents with less benign intentions than cataloging your site’s pages. Websites running on popular platforms like WordPress or Joomla that expose their administrator and user login pages at standard, predictable URLs are often targeted by intruders who attempt brute force login attacks to try to guess administrator passwords and gain access. There are many ways, on your own or through the use of available plugin applications, to keep these types of unauthorized attempts from being successful, but it is relatively difficult to prevent the attempts from occurring at all. The most effective methods for defending against brute force attacks and other types of unauthorized access attempts tend to focus on adding and configuring one or more .htaccess files on a website to control access to directories, files, and web server functionality. Many popular WordPress security plugins, for example, enable features that modify .htaccess files in combination with scripts that track the number of login attempts from an IP address or associated with a single username. This can provide login lockout functionality that both limits the number of failed attempts that can be made (thwarting brute force attacks) and prevents future access from IP addresses or agents that previously failed to log in, as sketched below.
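
The tracking-and-lockout logic itself is simple. Here is a minimal in-memory Python sketch of the idea; the thresholds are illustrative, and real plugins persist this state in the site’s database and pair it with .htaccess-level blocking rather than keeping it in process memory.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5           # failed logins allowed per window
WINDOW_SECONDS = 15 * 60   # rolling window for counting failures
LOCKOUT_SECONDS = 60 * 60  # how long a source stays locked out

failures: dict[str, list[float]] = defaultdict(list)
locked_until: dict[str, float] = {}

def record_failure(ip: str) -> None:
    """Log a failed login and lock the source out if it exceeds the limit."""
    now = time.time()
    # Keep only failures inside the rolling window.
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    failures[ip].append(now)
    if len(failures[ip]) >= MAX_ATTEMPTS:
        locked_until[ip] = now + LOCKOUT_SECONDS

def is_locked_out(ip: str) -> bool:
    """Check before even evaluating the submitted credentials."""
    return time.time() < locked_until.get(ip, 0.0)
```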

With restrictions in place like limiting the number of login attempts and blacklisting IP addresses or address ranges, a website administrator can be relatively confident that an unauthorized user will not be able to gain access (assuming of course that good passwords are also employed for authorized user accounts). It can nevertheless be very difficult to significantly reduce unauthorized login attempts, particularly when attackers use automated botnets or distributed attack tools to vary their source IP addresses. It may seem like an improvement if a site only allows one failed attempt per IP address, but if an attack uses hundreds or thousands of source computers, the volume of failed attempts can still cause performance problems for a targeted site (not to mention filling up the administrator’s email inbox with failed login notices). One good way to eliminate a greater proportion of these attempts than is possible through IP blacklisting is to implement some type of two-factor authentication. There are several commercial and open-source alternatives available, including Duo Security, OTP Auth, and Google Authenticator, all of which have PHP-based implementations available that make them suitable for use with WordPress and many other web server and content management platforms. All of these tools work in essentially the same way: the website or application and its users each derive the same sequence of one-time codes from a shared secret and the current time (so the codes they produce match). Adding this type of two-factor authentication (2FA) to a login page means users will need to enter their username, password, and a one-time code generated by software on a device under their control (typically a computer workstation or smartphone).
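
For the curious, the underlying scheme is the time-based one-time password (TOTP) algorithm standardized in RFC 6238, which can be sketched in a few lines of Python using only the standard library. The secret below is a made-up demo value, not a real key.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    Server and device produce the same code because they share the
    secret and (approximately) the same clock."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up demo secret; not a real Google Authenticator key.
print(totp("JBSWY3DPEHPK3PXP"))
```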

This post discusses Google Authenticator as a representative example (the tool has apps for both Android and iOS devices, is used on Google sites and many third-party services, and can be added to WordPress sites via a free plugin). Setting up 2FA on a Google Authenticator-enabled site entails generating a shared secret that is stored by both the site and the app on the end user’s device. The smartphone apps allow users to scan a QR code or manually enter the secret.
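
The QR code itself simply encodes an otpauth:// provisioning URI containing the secret and some labels. A minimal Python sketch of generating one is below; the account and domain names are placeholders for illustration.

```python
import base64
import os
from urllib.parse import quote, urlencode

def provisioning_uri(account: str, issuer: str, secret_b32: str) -> str:
    """Build the otpauth:// URI that authenticator apps encode in the QR code."""
    label = quote(f"{issuer}:{account}")
    params = urlencode({"secret": secret_b32, "issuer": issuer})
    return f"otpauth://totp/{label}?{params}"

# Generate a fresh random secret (80 bits, base32-encoded) for a new user.
secret = base64.b32encode(os.urandom(10)).decode()
print(provisioning_uri("admin@example.com", "example.com", secret))
# Feed this URI to any QR code generator; scanning it enrolls the device.
```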

On WordPress sites, installing the Google Authenticator plugin modifies the login page to add a third field for the one-time code (which changes every 30 seconds). While by no means a perfect solution, given the way many automated probes and scripts seem to work, encountering a login page with an additional 2FA field either prevents the submission of the login form or (depending on the specific tools) generates a form error distinct from the failed login or HTTP 404 errors commonly associated with unauthorized access attempts.
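
On the server side, verifying the submitted code is a matter of recomputing it. The hedged sketch below repeats the RFC 6238 computation from the earlier example and also accepts one 30-second step of clock skew in either direction, a common accommodation since the server’s and device’s clocks rarely agree exactly.

```python
import base64
import hmac
import struct
import time

def totp_at(secret_b32: str, timestamp: int, digits: int = 6) -> str:
    """Same RFC 6238 computation as the earlier sketch, for a given time."""
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", timestamp // 30)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_code(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept the current code plus `window` 30-second steps on either side."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp_at(secret_b32, now + step * 30), submitted)
        for step in range(-window, window + 1)
    )
```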

It’s (past) time for two-factor authentication

With the general unease about relying on usernames and passwords for authentication, conventional wisdom in information security seems to agree that adding a second (or third, or fourth …) means of authentication is an obvious step to enhance security for systems, networks, and (especially) web applications. In an approach commonly termed strong authentication, two-factor authentication (2FA), or multi-factor authentication (MFA), the idea is to add “something you have” or, less often, “something you are” to the password-based credentials that are already “something you know.” There are certainly dangers to depending too much on authentication, no matter how strong, as a control to protect information assets, but industry and government seem to agree that two-factor authentication helps to address the threat posed by the compromise of user credentials – a cause cited in numerous high-profile breaches, including the ones at Anthem, Target, Home Depot, and the Office of Personnel Management (OPM).

In commercial domains, two-factor authentication is familiar to organizations subject to the Payment Card Industry Data Security Standard (PCI DSS), which requires merchants to use 2FA for remote network access. Major social media and online service providers now offer optional two-factor authentication for user accounts; these include Amazon Web Services, Apple, Dropbox, Facebook, Google, Microsoft, and Twitter. In some cases, including Apple, Dropbox, and Twitter, making 2FA available to users was a direct result of user account compromises, data breaches, or exposure of related security vulnerabilities. While 2FA is by no means foolproof, for most users adding some form of two-step verification to the authentication process makes their accounts much less susceptible to compromise by unauthorized users, even if they are tricked by a phishing email or other social engineering tactic.

Two-factor authentication is hardly a new concept, as requirements to use it in some industries and public sector systems date to at least 2005, when the Federal Financial Institutions Examination Council (FFIEC) first issued guidance to banks recommending two-factor authentication for online banking services and when the National Institute of Standards and Technology (NIST) released the first version of Special Publication 800-53, “Recommended Security Controls for Federal Information Systems.” At that time, NIST required multi-factor authentication (specifically, the control and its enhancements are under IA-2 within the Identification and Authentication family) only for federal agency systems categorized as “high impact” – a designation most often associated with critical infrastructure, key national assets, protection of human life, or major financial systems. The following year, when NIST first revised 800-53, it added multi-factor authentication as a requirement for “moderate impact” systems, but only for remote access. By 2009, Revision 3 of 800-53 extended multi-factor authentication as a requirement for network access to privileged accounts on all federal systems, and required MFA for non-privileged access to moderate- or high-impact systems.

Despite these long-standing requirements and a formal mandate in early 2011 from the Office of Management and Budget (OMB) directing federal agencies to implement strong authentication using personal identity verification (PIV) ID cards to complement usernames and passwords, many agencies have been slow to enable multi-factor authentication. In its annual report to Congress for fiscal year 2014, required under the Federal Information Security Management Act (FISMA), OMB reported an overall government implementation rate of 72 percent (up from 67 percent in 2013) for strong authentication. Several agencies, however, apparently made no progress at all in 2013 or 2014, and 16 agencies were called out for allowing “the majority of unprivileged users to log on with user ID and password alone, which makes unauthorized network access more likely as passwords are much easier to steal through either malicious software or social engineering.” Perhaps unsurprisingly, OPM is among these 16 agencies; OPM’s own Inspector General noted in the agency’s 2014 FISMA audit that although 95 percent of OPM user workstations required PIV-based authentication, none of the 47 major applications in OPM’s FISMA inventory required this type of strong authentication. Not mentioned in this report is access to systems by contractors, many of whom are not issued PIV cards and who must therefore use alternate MFA methods, assuming OPM or other agencies make such methods available.

4th Circuit rules that obtaining cell site location data requires a warrant

The U.S. Court of Appeals for the Fourth Circuit ruled this week in United States v. Graham that requests by law enforcement authorities to obtain and examine historical cell site location data for individual cellular subscribers constitute a search under the Fourth Amendment and therefore require a search warrant. In its 2-1 ruling, the Circuit Court explicitly rejected the third-party doctrine approach that both the Fifth and Eleventh Circuits have relied on in contrary rulings finding that reviewing cell site location data – which, the doctrine holds, subscribers voluntarily give to cellular network providers and therefore in which they can have no reasonable expectation of privacy – doesn’t constitute a search. In the specific case the Fourth Circuit heard, the fact that the majority believed the government erred by obtaining a court order under the Stored Communications Act (SCA) instead of a search warrant as required under the Fourth Amendment did not materially impact the outcome for the appellants, as the Circuit Court upheld their convictions and essentially forgave the government because it acted in good faith when it relied on the SCA.

The split among appellate courts on the privacy of cell site location data (among other issues) raises the likelihood that this issue will need to be addressed by the U.S. Supreme Court, just as it has done with global positioning system (GPS) data. In United States v. Graham, the judges seemed to be willing to put cell site location data on the same logical footing as GPS data. Although cell site location data is far less precise in determining an individual’s location, the fact that cellular devices are small and usually carried by an individual as they go from public to private locations means, according to this court at least, that cellular device users do have some reasonable expectation of privacy when the cell site location data covers a significant period of time (no court has yet provided a clear standard as to how long is “long enough” to reach the level at which a search warrant should be required).

Threat of phishing attacks shows no signs of diminishing

A memo issued by the FBI on July 16 warning federal agencies that government employees are being targeted by a phishing campaign seeking to exploit known vulnerabilities in Adobe Flash is only the most recent indication that phishing has become a favored method of attack against government agencies. As part of their cybersecurity awareness efforts, multiple agencies – led by the Department of Homeland Security’s United States Computer Emergency Readiness Team (US-CERT) – encourage individuals and organizations to report phishing emails, which are a common approach used by hackers to infect government systems with malware or to try to obtain personal or technical information that could be useful in other types of attacks. Network monitoring and intrusion detection and prevention systems employed by the government are often helpful in identifying signs that malware has been introduced into agency environments (such as by noting network traffic flows from government agency sources to foreign or known-to-be-bad destinations), but they don’t appear to be very effective at flagging phishing emails that trick users into clicking on links or opening attachments that cause the infection.
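
The destination-matching part of that kind of monitoring can be illustrated in a few lines. Below is a minimal Python sketch assuming a hypothetical CIDR blocklist and a hypothetical flows.csv export with src and dst columns; real deployments rely on IDS platforms and curated threat intelligence feeds rather than hand-maintained lists.

```python
import csv
import ipaddress

# Hypothetical blocklist; these are reserved documentation address ranges.
blocklist = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.51.100.0/24")]

def is_flagged(dst: str) -> bool:
    """Return True if the destination address falls in any blocklisted range."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in blocklist)

with open("flows.csv", newline="") as fh:  # hypothetical netflow export
    for row in csv.DictReader(fh):
        if is_flagged(row["dst"]):
            print(f"suspicious flow: {row['src']} -> {row['dst']}")
```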

It should come as no surprise that government employees are targeted in phishing scams as often as or more often than commercial sector workers, given that many agencies publish employee directories online that include telephone numbers and email addresses. According to research reported by threat intelligence firm Recorded Future, user information including email addresses and login credentials from 47 different U.S. government agencies can be found online. The timeframe during which this information was available spans many months predating the disclosure of the large-scale compromise of government employee and contractor information from the Office of Personnel Management (OPM).

To address the phishing threat, many agencies are augmenting their security awareness training with exercises, often termed “phishing expeditions,” that entail sending fake phishing emails to employees and contractors and tracking how users respond. These fictional messages are specially designed to look like phishing emails and include tell-tale signs that users have ostensibly been trained to recognize as suspicious (one classic sign is sketched below). Based on outside observation, the results appear mixed. A small but troubling minority of users (as many as 10 to 15 percent in some agencies) click on links embedded in fake phishing emails, and an even smaller number take the preferred action of reporting the suspicious email to an agency’s IT group or incident response team. On a somewhat more positive note, in the wake of the OPM breach, many government employees showed a heightened level of sensitivity towards potential email-based scams when they responded with alarm to email messages they received from the contractor OPM hired to notify individuals affected by the breach, thinking that these legitimate emails were actually phishing attempts. In all likelihood many agencies, particularly in the defense and intelligence arenas, blocked these externally-sourced messages and prevented their employees from receiving them in the first place. It turns out government worker suspicions were well founded: on June 30 US-CERT issued an alert indicating the existence of phishing campaigns related to the OPM breach, presumably capitalizing on the potential confusion regarding notification of affected personnel and the identity protection services being made available to them. Because the OPM hack has generally been characterized as intended to harvest personal information for future use – in subsequent spear phishing attacks or to try to coerce individuals to divulge organizational information – the added awareness of phishing attacks in the wake of the OPM incidents may serve to reduce the likelihood that employees and contractors fall for phishing attacks in the future.
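
One of those classic tell-tale signs is a link whose visible text names one domain while the underlying href points somewhere else entirely. Here is a minimal Python sketch of that check using only the standard library; the sample anchor and domains are fabricated for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchFinder(HTMLParser):
    """Flag anchors whose visible text names a different domain than the
    href actually targets: a classic tell-tale sign of a phishing email."""

    def __init__(self) -> None:
        super().__init__()
        self._href: str | None = None
        self._text: list[str] = []
        self.suspicious: list[tuple[str, str]] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            text = "".join(self._text).strip()
            # Visible text like "www.opm.gov" has no scheme, so fall back
            # to treating dotted text as a bare hostname.
            text_host = urlparse(text).netloc or (text if "." in text else "")
            href_host = urlparse(self._href).netloc
            if text_host and href_host and not href_host.endswith(text_host):
                self.suspicious.append((text, self._href))
            self._href = None

finder = LinkMismatchFinder()
finder.feed('<a href="http://evil.example.net/login">www.opm.gov</a>')
print(finder.suspicious)  # [('www.opm.gov', 'http://evil.example.net/login')]
```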