If you use Facebook, don’t wait to change your privacy settings

In a privacy policy change announced recently and effective December 9, social networking supersite Facebook made significant changes to the default privacy settings for all Facebook users. Some of the new defaults disclose more information to more of the Facebook user population and expose that information to search engines like Google; others, at least according to Facebook’s statements about the changes, merely continue existing disclosure practices, albeit now with more fine-grained access control settings that users can apply to constrain the visibility of their own information. The changes that have garnered the most attention in the press relate to a core set of personal information that Facebook now makes available to everyone, regardless of any preferences users might previously have set to control its disclosure. As the revised policy puts it:

“Certain categories of information such as your name, profile photo, list of friends and pages you are a fan of, gender, geographic region, and networks you belong to are considered publicly available to everyone, including Facebook-enhanced applications, and therefore do not have privacy settings. You can, however, limit the ability of others to find this information through search using your search privacy settings.”

While the level and granularity of the privacy settings users can configure has increased, some global settings that were previously available have been removed, in addition to the basic set of information items now considered “publicly available” regardless of a user’s preferences. One example is the single setting that used to let users prevent any of their information from being made available to Facebook applications. Most troubling to privacy advocates seems to be Facebook’s explicit move toward openly sharing users’ information. Facebook has angered users in the past by declaring that users “own all of the content and information you post on Facebook” while at the same time claiming essentially unrestricted rights to do just about anything Facebook wants with that data. The company’s stance has softened somewhat in the past few months, and the language in the current privacy policy is not as strong as it was back in February, but it is also understandable that some users are considering canceling their accounts entirely in response to the latest changes and the re-categorization of key profile information as “public.”

Even among those aware that changes have occurred, many Facebook users may not realize that unless and until they take explicit action to modify their privacy settings, the new defaults have overwritten any disclosure preferences they previously expressed. The global default appears to make profile information and content users store on Facebook available to all friends and friends of friends (a setting Facebook calls “Friends and Network”), which for many users substantially increases the population that now has access to their information. Also, because the changes took effect for all users, the new settings remain in place until a user changes his or her own privacy settings, something users are prompted to do the first time they log in to Facebook after the change.

There is precedent for Facebook reconsidering moves broadly deemed too invasive of privacy, and there are explicit terms within Facebook’s Statement of Rights and Responsibilities (see part 13, “Amendments”) that allow unpopular changes to be put to a vote of the membership, although for the vote to be binding, 30% of active users must participate (approximately 105 million people, based on current estimates of roughly 350 million active users). A couple of years ago, Facebook ultimately chose to cancel its controversial Beacon program after widespread outcry over the advertising application’s reach into online behavioral tracking. It remains to be seen whether enough users are sufficiently upset by the latest changes to mount a coordinated effort to roll back to the previous privacy settings and approach.

House passes Data Accountability and Trust Act

Legislation passed by the House of Representatives this week (H.R. 2221, the Data Accountability and Trust Act) includes provisions both for national standards on data breach notification and for new responsibilities and consumer protections requiring data brokers and other holders of personal information to verify the accuracy of the information they hold on individuals.

With parallel action on data breach disclosure bills in the Senate, much of the current coverage of the House passage focuses on the breach notification provision in H.R. 2221, which says simply and clearly that anyone

“that owns or possesses data in electronic form containing personal information shall, following the discovery of a breach of security of the system maintained by such person that contains such data notify each individual who is a citizen or resident of the United States whose personal information was acquired or accessed as a result of such a breach of security.” (H.R. 2221 §3)

The proposed law extends breach notification requirements beyond the owners of the data to third-party agents who maintain or process the data and to service providers who transmit, route, or store it. In cases involving more than 5,000 individuals, notification must be made not only to the affected individuals and the Federal Trade Commission, but also to the major credit reporting agencies. Unless a delay is warranted by law enforcement or national security concerns, notifications are to be made within 60 days of the discovery of the breach.
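To see the shape of these obligations, consider a minimal sketch in Python. The 5,000-individual threshold and 60-day window come from the provisions described above; everything else (type names, fields, the string labels) is invented for illustration and is not language from the bill:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Thresholds as described above for H.R. 2221 (simplified reading).
CREDIT_AGENCY_THRESHOLD = 5000   # more than this many individuals -> notify credit agencies
NOTIFICATION_WINDOW_DAYS = 60    # days from discovery, absent a lawful delay

@dataclass
class Breach:
    discovered: date            # date the breach was discovered
    affected_individuals: int   # number of U.S. citizens/residents affected
    lawful_delay: bool = False  # law enforcement or national security hold

def notification_recipients(breach: Breach) -> list[str]:
    """Who must be notified under the bill's scheme, as summarized above."""
    recipients = ["affected individuals", "Federal Trade Commission"]
    if breach.affected_individuals > CREDIT_AGENCY_THRESHOLD:
        recipients.append("major credit reporting agencies")
    return recipients

def notification_deadline(breach: Breach) -> date | None:
    """Deadline for notice, or None while a lawful delay is in effect."""
    if breach.lawful_delay:
        return None
    return breach.discovered + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Example: a breach affecting 12,000 people discovered December 1
incident = Breach(discovered=date(2009, 12, 1), affected_individuals=12_000)
print(notification_recipients(incident))  # includes the credit reporting agencies
print(notification_deadline(incident))    # 2010-01-30
```

A real compliance workflow would of course need the bill’s full definitions of “breach of security” and “personal information,” which this sketch deliberately ignores.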

Separate language in Section 2 of the bill addresses requirements for ensuring the accuracy of personal information collected, assembled, or maintained by an information broker; for giving consumers access to review (at least annually) the personal information the broker holds about them; and for posting instructions explaining how consumers can request that access. There is also a provision, consistent with most major privacy principle frameworks, that requires information brokers to correct inaccuracies in personal information, and specifically obligates them to make changes communicated by the individuals whose data they hold, as long as the individual’s identity is verified and the request is not believed to be frivolous or irrelevant. Even where the broker believes disputed information to be correct, if that information is not part of the public record, the broker must at minimum note the dispute and make an effort to independently verify the information. Despite the potential for difficulty with subjective terms like “irrelevant,” this provision gives the presumption about what is accurate to the individual consumer rather than to the information broker. The only exception is when the disputed information is part of the public record (and has been reported in a way that correctly matches that record), in which case the broker is required to tell the individual where to direct a request to correct the public record itself.
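One way to read the dispute-handling rules just described is as a small decision procedure. The sketch below is that reading expressed in Python; the branch order and all names are mine, not the bill’s:

```python
from enum import Enum, auto

class Disposition(Enum):
    CORRECT_RECORD = auto()           # broker must change the data
    NOTE_DISPUTE_AND_VERIFY = auto()  # broker flags the dispute and independently verifies
    REFER_TO_PUBLIC_RECORD = auto()   # broker points the individual to the public-record source
    REJECT = auto()                   # identity unverified, or request frivolous/irrelevant

def handle_dispute(identity_verified: bool,
                   frivolous_or_irrelevant: bool,
                   matches_public_record: bool,
                   broker_believes_accurate: bool) -> Disposition:
    """Decision flow implied by Section 2, as described above (simplified)."""
    if not identity_verified or frivolous_or_irrelevant:
        return Disposition.REJECT
    if matches_public_record:
        # Broker's copy correctly mirrors the public record: direct the
        # individual to the public-record custodian instead.
        return Disposition.REFER_TO_PUBLIC_RECORD
    if broker_believes_accurate:
        # Non-public-record data the broker still believes is correct:
        # note the dispute and attempt independent verification.
        return Disposition.NOTE_DISPUTE_AND_VERIFY
    return Disposition.CORRECT_RECORD
```

Read this way, the broker redirects the individual only when its copy correctly mirrors the public record; in every other verified, non-frivolous case the dispute must at least be recorded and acted on.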

Holding data brokers accountable for making sure their data is accurate before it is sold or passed on to other entities who might assume its validity is a step in the right direction toward creating mechanisms for asserting data integrity. Such assertions would raise the confidence that receivers or secondary users of information can have when making decisions or otherwise using the information they receive. Without any statement (much less a guarantee) of accuracy, analyses built on data of unknown integrity are suspect and can lead to erroneous decisions. In the health information exchange context, for instance, these errors can and do cause real harm, such as when the wrong medication doses appear in health records. This problem certainly exists in paper-based record-keeping, but as more and more industries move toward electronic data exchange and data integration, any assumptions about the integrity of data received through electronic channels are just that — assumptions. Making data owners and aggregators responsible for determining the accuracy of the information they hold should, in theory, improve the integrity and therefore the reliability of the information they sell. In that sense the legal requirement, if enacted, could actually improve the saleability of the data offered by information brokers.
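Neither the bill nor current industry practice specifies what a machine-checkable accuracy assertion would look like, but a minimal sketch (purely hypothetical names and format) might pair a record with a digest, a verification date, and the method used, so a receiver can at least detect changes made after the broker’s attestation:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import date

@dataclass
class AccuracyAssertion:
    """A broker's claim about a record: a digest of the data as verified,
    when it was verified, and how."""
    record_digest: str
    verified_on: str
    method: str  # e.g. "consumer-confirmed" or "public-record match"

def assert_accuracy(record: dict, method: str) -> AccuracyAssertion:
    canonical = json.dumps(record, sort_keys=True).encode()
    return AccuracyAssertion(record_digest=hashlib.sha256(canonical).hexdigest(),
                             verified_on=date.today().isoformat(),
                             method=method)

def still_matches(record: dict, assertion: AccuracyAssertion) -> bool:
    """Does the record still match what the broker attested to?"""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == assertion.record_digest

record = {"name": "Jane Doe", "mobile": "+44 7700 900123"}
attestation = assert_accuracy(record, method="consumer-confirmed")
record["mobile"] = "+44 7700 900999"       # data changed after the attestation
print(still_matches(record, attestation))  # False: attestation no longer covers the data
```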

Data loss lessons from TSA disclosure

As reported on Wednesday in the Washington Post and elsewhere, the Transportation Security Administration (TSA) inadvertently disclosed sensitive information about its airline passenger screening practices by posting a document containing that information online. The mistakes occurred at several levels, including human error and poor technology choices, so even though the TSA appears to have been trying to do things the right way (recognizing the sensitivity of the information and redacting the secrets before publishing), the net result is the same. The TSA’s unfortunate experience illustrates several considerations of which any organization managing and using sensitive data ought to be aware.

  • Understand that data is an asset, and must be treated and protected as such. This is especially true of sensitive information like the TSA’s ostensibly secret procedures and guidelines, and of any intellectual property that comprises information about confidential business strategies, operational details, or competitive advantages.
  • Know what data you have, and attach data sensitivity categorizations to it. Pretty much everyone is familiar with the military classification system, but in any context it is important to be fully aware of what data you have, the nature of that data, where it is stored, and what its sensitivity level is, whether that is based on internal value or on the potential impact to the organization should the information be disclosed.
  • Where sensitive data must be shared, take appropriate measures to ensure that only properly authorized parties can access it, including the use of encryption and other approaches to protect data in transit, in use, and at rest.
  • Choose appropriate tools and technologies to protect sensitive data. Even without knowing the specific technology used to redact the sensitive material in the TSA document that was published, it is clear that the underlying data wasn’t changed; some sort of digital mask or overlay was put in place instead. Using a graphical blackout function may be fine for preventing “shoulder surfing,” in much the same way a password field in an online form shows “******” instead of the characters actually entered, but it is not the same thing as rendering the data unreadable. An “old school” approach, such as blacking out the sensitive information in a paper copy of the document and then scanning the redacted version to create a digital copy, seems unsophisticated but would not have allowed the disclosure that occurred with whatever digital redaction tool the TSA employed. (The sketch following this list makes the distinction concrete.)
  • Monitor the flow of information out of your organization. If simply copying the redacted text and pasting it into a different application was sufficient to expose the sensitive data, a content inspection tool through which the TSA document passed on its way out should easily have recognized that the full contents of the document were still readable. This is not a wholesale endorsement of content inspection or of data loss prevention (DLP) technology generally, but in cases like this, where personnel are trying to follow policy and just happen to be using an ineffective tool, a secondary line of defense provides an added measure of assurance.
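To make the redaction and monitoring points above concrete, here is a minimal, self-contained sketch (invented document text and pattern; no real PDF or DLP product’s API is used) contrasting an overlay “mask,” which leaves the underlying text extractable, with redaction that actually removes the characters, plus the kind of simple outbound content check described in the last bullet:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    overlays: list[tuple[int, int]] = field(default_factory=list)  # masked spans

def overlay_redact(doc: Document, target: str) -> None:
    """'Redact' by recording a mask over a span: the on-screen rendering
    changes, but the underlying characters remain in the file."""
    start = doc.text.index(target)
    doc.overlays.append((start, start + len(target)))

def true_redact(doc: Document, target: str) -> None:
    """Redact by actually replacing the characters before publication."""
    start = doc.text.index(target)
    doc.text = doc.text[:start] + "\u2588" * len(target) + doc.text[start + len(target):]

def extract_text(doc: Document) -> str:
    """What copy-and-paste or a text extractor sees: overlays are ignored."""
    return doc.text

def dlp_scan(outbound: str, patterns: list[str]) -> list[str]:
    """A secondary, outbound content check: flag any sensitive pattern found."""
    return [p for p in patterns if re.search(p, outbound)]

doc = Document("Screeners must inspect passengers holding XYZ-class credentials.")
overlay_redact(doc, "XYZ-class")                    # masked on screen only
print(dlp_scan(extract_text(doc), [r"XYZ-class"]))  # ['XYZ-class'] -- still leaks
true_redact(doc, "XYZ-class")                       # characters actually removed
print(dlp_scan(extract_text(doc), [r"XYZ-class"]))  # [] -- nothing left to find
```

The same distinction applies to document redaction tools generally: a drawing layer placed over text changes the rendering, not the content, which is why copy-and-paste can recover “redacted” passages.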

Perhaps more disappointing than the disclosure itself is the response to the incident by TSA and DHS officials, who suggested that because the document was already widely circulated among airline industry organizations, this new disclosure did not represent significant new risk to airline safety. This is essentially saying that since lots of (authorized) people have access to the information already, it probably isn’t that hard for an unauthorized person to get access to it. If that is really the case, then the TSA isn’t doing enough, on its own or with its industry partners, to secure its sensitive information.

Progress in securing health records, but still a long way to go

An excellent article this week in InformationWeek by Mitch Wagner provides a nice overview of the privacy and security issues surrounding widespread deployment of electronic medical records, noting both the recent progress made in these areas and the key challenges that remain. Some of the new privacy rules put into place with the HITECH Act portion of the American Recovery and Reinvestment Act — such as the application of HIPAA enforcement and penalties against individuals, rather than just organizations — are accurately characterized as incremental but still important steps toward the point where all personal health information is protected by appropriate policies and safeguards, including technical controls to make sure those policies are actually followed. Similar steps to strengthen rules such as accounting of disclosures (basically, keeping track of all the times and circumstances in which an individual’s health record is accessed) and to ramp up the enforcement mechanisms available to the government agencies responsible for investigating violations should, in the aggregate, help consumers feel at least a little more comfortable about having their personal medical data stored electronically. With the additional attention now being paid to collecting and honoring patient preferences for information disclosure — in the form of explicit consent — it appears that the people responsible for overcoming these challenges understand the nature and extent of the problem, and they continue to solicit input and collaboration from all sides of the issues. It remains to be seen whether the privacy and security concerns can be mitigated sufficiently to allow the rollout of electronic health records to proceed on the timetable set by the current administration.

A follow-up article by Wagner addresses many of the same issues but provides more perspective on privacy concerns, especially the opinions of privacy advocates who argue that the privacy measures to date (even the enhanced ones in the HITECH Act) just don’t go far enough. The health IT privacy debate provides an interesting contrast to the similar but differently focused conversations about societal expectations of privacy sparked by Facebook’s recent change in privacy policy.

Sometimes a breach is data theft, sometimes it’s business as usual

Among the latest unauthorized disclosures of personal information making headlines is last week’s admission by T-Mobile that thousands of its British customers had essentially become pawns in a “black market” for mobile service subscriber information sold to T-Mobile competitors. It seems that one or more T-Mobile employees sold lists of subscribers nearing the end of their contracts to other mobile service providers; the T-Mobile customers were then contacted by salespeople for the competing carriers, who tried to persuade them to switch providers. This case raises a couple of interesting points in the debate over the protection of personal information.

While it appears clear from statements by T-Mobile and U.K. authorities that the incident described constitutes data theft from T-Mobile and is therefore illegal, without the key element of rogue employees misusing corporate data assets for their own gain, the sale of such data would not necessarily violate current privacy laws, particularly those in the U.S., which are generally less stringent than data protection regulations in the European Community. The data disclosed — name, mobile number, and contract expiration dates — certainly comprises personally identifiable information (PII) under just about any current definition of the term. The specific data fields in question, however, are not ones usually characterized as “sensitive” in domain or regulatory contexts such as financial services, health care, education, or public records, although most people do treat mobile telephone numbers as more private than landline numbers, in part because mobile numbers are not generally listed in public directories.

If the sale of the customer data had taken place above board, conducted by authorized T-Mobile personnel (for instance, to an affiliated third party such as a mobile handset vendor), it is not at all clear that such a disclosure would violate any American privacy laws. (British privacy laws, like those generally applicable in the EU, tend to require customer consent, or “opt-in,” before any secondary use or additional processing of personal information, even by the company that collected it.) Take a look at the privacy policy of just about any large consumer bank or retailer and you will see language asserting a right to share personal customer information with third parties. For example, the Citibank privacy policy for Citi.com states, “Information collected by a Citigroup affiliate through Citi.com may be shared with several types of entities, including other affiliates among the family of companies controlled by Citigroup Inc, as well as non-affiliated third parties, such as financial services providers and non-financial organizations, such as companies engaged in direct marketing.”

So under such a privacy policy, and in full compliance with FTC rules, an American company could do what T-Mobile’s thieving employees did without violating any laws or regulations. If you’re thinking “that doesn’t seem right,” then you are seeing the implications of the sectoral approach to data privacy in the United States, in strong contrast to the approaches favored in other parts of the world, particularly the European Union’s Directive 95/46/EC.