WPF Report: Many Failures – A Brief History of Privacy Self-Regulation
The report Many Failures: A Brief History of Privacy Self-Regulation was published on October 14, 2011.
Report authors: Robert Gellman and Pam Dixon
Brief Summary of Report
Major efforts to create self-regulatory, or voluntary, guidelines in the area of privacy began in 1997. Industry promoted privacy self-regulation at the time as a solution to consumer privacy challenges. This report reviews the leading efforts of the first self-regulatory wave, from 1997 to 2007, and includes a review of the life span, policies, and activities of the Individual Reference Services Group, Privacy Leadership Initiative, Online Privacy Alliance, Network Advertising Initiative, BBBOnline Privacy Program, US-EU Safe Harbor Framework, Children’s Online Privacy Protection Act, and the Platform for Privacy Preferences. A key finding of this report is that the majority of the industry self-regulatory programs that were initiated failed in one or more substantive ways, and many disappeared entirely. The report concludes with a discussion of possible reforms for the process, including a defined and permanent role for consumers, independence, setting benchmarks, and other safeguards.
About the Authors
Robert Gellman is a privacy and information policy consultant in Washington, DC (www.bobgellman.com). Pam Dixon is the Executive Director of the World Privacy Forum. Gellman and Dixon are the authors of Online Privacy: A Reference Handbook (ABC-CLIO, 2011).
About the World Privacy Forum
The World Privacy Forum is a non-profit consumer education and public interest research group. It focuses on a range of privacy matters, including financial, medical, employment, and online privacy. The World Privacy Forum was founded in 2003. www.worldprivacyforum.org.
Table of Contents
Characteristics Common to Privacy Self-Regulation
Summary of Self-Regulatory Privacy History
- Industry-Supported Self-Regulatory Programs
- Government-Supported Self-Regulatory Efforts
- Combination Self-Regulatory Efforts
Individual Reference Services Group
The Privacy Leadership Initiative
The Online Privacy Alliance
The Network Advertising Initiative (1999-2007 version)
BBBOnline Privacy Program
Department of Commerce Safe Harbor Framework
Children’s Online Privacy Protection Act (COPPA)
Platform for Privacy Preferences Project (P3P)
What is at stake: Implications for current privacy self-regulatory efforts
What Could Improve the Process?
I. Introduction and Summary
Current online privacy debates focus on respecting the privacy interests of Internet users while accommodating business needs. Formal and informal proposals for improving consumer privacy offer different ideas for privacy regulation and privacy self-regulation, sometimes called codes of conduct. Some in the Internet industry continue to advance or support ideas for privacy self-regulation. Many of these same players proposed and implemented privacy self-regulatory schemes that started in the late 1990s.
Missing from current debates on self-regulation in the online privacy arena is a basic awareness of what happened with the first round of industry self-regulation for privacy. Also missing are the lessons that should have been learned from the failures of past privacy self-regulatory efforts.
This report reviews the history of the leading efforts that comprised that early wave of privacy self-regulation, which occurred from 1997 to about 2007. One purpose of this report is to document the facts about that first wave of self-regulation. The other purpose of this report is to inform current discussions about the recent past. A key finding of this report is that the majority of the industry self-regulatory organizations that were initiated have now disappeared. The disappearance of a self-regulatory organization constitutes a failure of the self-regulatory scheme.
This is not the first World Privacy Forum report on privacy self-regulation. In 2007, the World Privacy Forum (WPF) issued a report on the Network Advertising Initiative’s early efforts at business-operated self-regulation for privacy. The report was The NAI: Failing at Consumer Protection and at Self-Regulation. In 2010, the WPF issued a report on privacy activities of the Department of Commerce, The US Department of Commerce and International Privacy Activities: Indifference and Neglect. The Commerce report reviewed in some detail the government-supervised self-regulatory Safe Harbor Framework for personal data exported from Europe to the US. Unlike most other privacy self-regulatory efforts, the Safe Harbor Framework continues to exist, largely because of the government role. But the Safe Harbor Framework is deficient in enforcement and some other areas, and it cannot be counted as successful.
The privacy self-regulation programs reviewed in this report were effectively a Potemkin Village of privacy protection. Erected quickly, the schemes were designed to look good from a distance. Upon closer inspection, however, the protections offered were just a veneer. The privacy Potemkin Village fell down soon after the gaze of potential regulators drifted elsewhere. Efforts such as the Individual Reference Services Group (IRSG) and the Network Advertising Initiative (NAI) are examples of classic, failed privacy self-regulatory efforts. These and other poorly designed privacy self-regulation schemes had limited market penetration and insufficient enforcement. Still, that was enough to fend off regulators until political winds blew in other directions.
Many participants in the debate are new to the issue and are unaware of recent history. Even the Federal Trade Commission has a short memory. The FTC appeared to acknowledge the limits of self-regulation when it concluded in 2000 that self-regulatory programs fell “well short of the meaningful broad-based privacy protections the Commission was seeking and that consumers want.” But in 2010, a staff report from the FTC continued to show support for self-regulation as an alternative to legislation, seemingly ignoring the Commission’s own experience from ten years earlier. The pressure to believe that “this time, things will be different” remains significant. This belief is fueled by industry pressure, industry desire for no formal regulation, a continually shifting political environment, and the absence of meaningful rulemaking authority at the Federal Trade Commission.
This report offers a simple and clear history lesson. Industry self-regulation for privacy as it has been done in the past has failed. Past industry self-regulatory programs for privacy have lacked credibility, sincerity, and staying power. This report does not propose a new model for self-regulation, but it does conclude with some suggestions for a different approach that is based on a defined role for consumers, more transparency, better definitions, and firmer commitments by those subject to self-regulation.
It is beyond the scope of this report to consider whether the public’s demands for greater privacy protections should be met with legislation, self-help mechanisms, some yet untested form of activity (regulatory, co-regulatory, or otherwise), or nothing at all.  This report is offered as a resource to help those who are debating these questions today.
Characteristics Common to Privacy Self-Regulation
This report reviews early industry self-regulatory activities for privacy during the years just before and after 2000. This period was the high-water mark for privacy self-regulation. This report distinguishes between industry efforts at self-regulation and government efforts. For most industry-supported self-regulatory efforts for privacy, a clear pattern developed in the years covered by this review. Feeling pressure from Federal Trade Commission scrutiny and from legislative interest, industry self-regulatory efforts for privacy developed quickly in an attempt to avoid any formal regulation. These self-regulatory activities typically shared some or most of the following qualities:
- Self-regulatory organizations were most often based in Washington, D.C., where potential regulators are.
- Self-regulatory organizations formulated their rules in secret, typically with no input from non-industry stakeholders.
- The governing boards of privacy self-regulatory organizations typically had no non-industry members and few or no consumer representatives.
- Privacy self-regulatory rules covered only a fraction of an industry or covered an industry subgroup, leaving many relevant business practices and many players untouched.
- Privacy self-regulation organizations were short-lived, typically surviving for a few years, and then diminishing or disappearing entirely when pressure faded.
- Privacy self-regulation organizations were loudly promoted despite their limited scope and substance.
- Privacy self-regulation organizations were structurally weak, lacking meaningful ability to enforce their own rules or maintain memberships. Those who subscribed to self-regulation were usually free to drop out at any time.
- Privacy self-regulation organizations were typically underfunded, and industry financial support in some cases appeared to dry up quickly. There was no long-term plan for survival or transition.

Not all of these characteristics were present in government-supervised self-regulatory efforts, although those efforts were not necessarily any more successful.
Summary of Privacy Self-Regulatory History
Self-regulatory efforts do not fall neatly into narrow categories. As a general matter, however, the efforts fell into two broad categories: industry-supported and government-supported. One exception exists that is a mix of government, civil society, industry, and academia.
Industry-Supported Self-Regulatory Programs
The early industry-supported privacy self-regulatory efforts included:
• The Individual Reference Services Group was announced in 1997 as a self-regulatory organization for companies providing information that identifies or locates individuals. The group terminated in 2001, deceptively citing a recently passed regulatory law as making the group’s self-regulation unnecessary. However, that law did not cover IRSG companies.
• The Privacy Leadership Initiative began in 2000 to promote self-regulation and to support privacy educational activities for business and for consumers. The organization lasted about two years.
• The Online Privacy Alliance began in 1998 with an interest in promoting industry self-regulation for privacy. OPA’s last reported substantive activity appears to have taken place in 2001, although its website continues to exist and shows signs of an update in 2011, when FTC and congressional interest recurred. The group does not accept new members. 
• The Network Advertising Initiative had its origins in 1999, when the Federal Trade Commission showed interest in the privacy effects of online behavioral targeting. By 2002, when FTC interest in privacy regulation had diminished, the NAI had only two members. Enforcement and audit activity lapsed as well. NAI did not fulfill its promises or keep its standards up to date with current technology until 2008, when FTC interest increased.
• The BBBOnline Privacy Program began in 1998, with a substantive operation that included verification, monitoring and review, consumer dispute resolution, a compliance seal, enforcement mechanisms and an educational component. Several hundred companies participated in the early years, but interest did not continue and BBBOnline stopped accepting applications in 2007. The program has now disappeared.
Government-Supported Self-Regulatory Efforts
Not all privacy self-regulatory efforts were solely industry supported. Some were government sponsored in some manner, and one effort involved consumers, academics, and public interest groups as well as industry. These efforts included:
• The US-EU Safe Harbor Framework began in 2000 to ease the export of data from Europe to US companies that self-certified compliance with specified Safe Harbor standards. Three studies have documented that compliance was spotty, with many, and perhaps most, of the companies claiming to be in the Safe Harbor failing to meet its requirements. The Department of Commerce continues to run the program but has undertaken negligible oversight or enforcement. Thus, the Safe Harbor Framework is a form of government-supervised self-regulation, but with little evidence of active supervision. Some EU data protection authorities recently rejected reliance on the Safe Harbor Framework because of its lack of reliability.
• The Children’s Online Privacy Protection Act (COPPA), which passed in 1998, involves both legislation and self-regulation. It is technically a form of government-supervised self-regulation. The COPPA law includes a safe harbor provision that is sometimes cited as a self-regulatory program. Industry participation in the COPPA safe harbor program is not widespread. Under COPPA, the same statutory standards apply whether a business is in the COPPA safe harbor program or not.
Combination Self-Regulatory Efforts
This report does not aim to be comprehensive. We have limited the scope to the early, leading efforts. Some privacy self-regulatory efforts developed or revived more recently. The Network Advertising Initiative began in 1999 and nearly disappeared a few years later. NAI revived around 2008, when FTC interest in online privacy reawakened and industry again felt threatened by regulation and legislation. This report discusses the early iteration of the NAI. The NAI issued a new set of self-regulatory principles in 2008, and membership increased. The revival of NAI follows the earlier pattern so far. Because the new NAI effort is still underway, this report does not attempt to evaluate the NAI’s post-2008 efforts. The new NAI looks a lot like the old NAI, however. Also not reviewed in this report is TRUSTe.
II. Discussion: Industry-Supported Self-Regulatory Programs for Privacy
This section offers a historical review of privacy self-regulation that occurred in the years just before and just after 2000. For a variety of reasons, it is not necessarily fully comprehensive. Some self-regulatory efforts may have disappeared without a trace. Activities within existing trade associations are difficult or impossible to assess from evidence available to those outside the associations. However, this discussion captures the leading organizations of the time. 
This review does not generally attempt a comprehensive analysis of the quality of each self-regulatory effort. The standards promulgated by the self-regulatory programs were often general and quickly became outdated because of technology and other changes. It appears that audits or reviews of compliance with self-regulatory standards were often not attempted, not completed, not credible, or not transparent. Finding original documents is often difficult or impossible now. However, there is enough available information to describe the programs, their rise, their activities, and in some cases, their demise.
Individual Reference Services Group
The creation of the Individual Reference Services Group (IRSG) was announced in June 1997 at a workshop held by the Federal Trade Commission.  According to a document filed with the FTC, the group consisted of companies that offered individual reference services that provided information that identifies or locates individuals.  The IRSG reported fourteen “leading information industry companies” as members, including US Search.com, Acxiom, Equifax, Experian, Trans Union, and Lexis-Nexis. 
The IRSG described its self-regulatory activities in this manner:
The core of the IRSG’s self-regulatory effort is the self-imposed restriction on use and dissemination of non-public information about individuals in their personal (not business) capacity. In addition, IRSG members who supply non-public information to other individual reference services will provide such information only to companies that adopt or comply with the principles. The principles define the measures that IRSG members will take to protect against the misuse of this type of information. The restrictions on the use of non-public information are based on three possible types of distribution that the services provide. 
A principal purpose of the IRSG plan appeared to be to avoid any real regulation. It was successful in achieving that goal. In its 1999 report to Congress, the FTC recommended that the industry be left to regulate itself despite some significant shortcomings:
A. Recommendations Regarding the IRSG Principles
The Commission recommends that the IRSG Group be given the opportunity to demonstrate the viability of the IRSG Principles.
The present challenge is to protect consumers from threats to their psychological, financial, and physical well-being while preserving the free flow of truthful information and other important benefits of individual reference services. The Commission commends the initiative and concern on the part of the industry members who drafted and agreed to the IRSG Principles, an innovative and far-reaching self-regulatory program. The Principles address most concerns associated with the increased availability of non-public information through individual reference services. With the promising compliance assurance program, the Principles should substantially lessen the risk that information made available through the services is misused, and should address consumers’ concerns about the privacy of non-public information in the services’ databases. Therefore, the Commission recommends that the IRSG Group be given the opportunity to demonstrate the viability of the IRSG Principles. ***
The Commission looks to industry members to determine whether errors in the transmission, transcription, or compilation of public records and other publicly available information are sufficiently infrequent as to warrant no further controls. While the Commission believes the IRSG Principles address most areas of concern, certain issues remain unresolved. Most notably, the Principles fail to provide individuals with a means to access the public records and other publicly available information that individual reference services maintain about them. Thus, individuals cannot determine whether their records reflect inaccuracies caused during the transmission, transcription, or compilation of such information. The Commission believes that this shortcoming may be significant, yet recognizes that the precise extent of these types of inaccuracies and associated harm has not been established. An objective analysis could help resolve this issue. The IRSG Group has acknowledged the Commission’s position, and has demonstrated its awareness of this problem by (1) stating that it will seriously consider conducting a study of this issue and (2) agreeing to revisit the issue in eighteen months. The Commission looks to industry members to undertake the necessary measures to establish whether inaccuracies and associated harm resulting from errors in the transmission, transcription, or compilation of public records and other publicly available information are sufficiently infrequent as to warrant no further controls. 
One of the IRSG principles called for an annual “assurance review” for compliance with IRSG standards.  The IRSG also required that a summary of the report and any subsequent actions taken be publicly available. While the IRSG website contains some evidence that at least some IRSG members conducted reviews, the IRSG did not make the reports public on its website so it is not possible to determine whether the reviews were properly conducted, comprehensive, or otherwise meaningful. 
Once the threat of regulation evaporated or diminished, the IRSG continued in existence for a few years. In September 2001, approximately four years after it was established, the IRSG announced its termination.  The stated reason was that legislation made the self-regulatory principles no longer necessary.
“We are operating in a much different regulatory environment than we were when the IRSG was created in 1997,” said Ron Plesser with Piper Marbury Rudnick & Wolfe LLP, whose firm represents the IRSG. “It doesn’t make sense to maintain a self-regulatory program when this information is now regulated under the Gramm-Leach-Bliley Act.” 
However, the legislation cited as the reason for termination (The Gramm-Leach-Bliley Act) did not in fact regulate IRSG members. The Gramm-Leach-Bliley (GLB) Act provided that each financial institution has an “affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal information.”  A financial institution is a company that offers financial products or services to individuals, like loans, financial or investment advice, or insurance.  The IRSG companies – companies that provide information that identifies or locates individuals – are not financial institutions under GLB. It is also noteworthy that GLB became law almost two years before it was cited as the reason for the end of the IRSG. GLB was a fig leaf that covered the lack of continuing industry support for the IRSG.
Why did the IRSG issue a deceptive statement about the reason for its termination? According to contemporary reports, the members of the IRSG lost interest in supporting an expensive self-regulatory organization because they no longer felt threatened by legislation or regulatory activities.
The IRSG.org website is now owned by a link farm. 
The Privacy Leadership Initiative
A group of industry executives with members including IBM, Procter & Gamble, Ford, Compaq, and AT&T established the Privacy Leadership Initiative (PLI) in June 2000.  PLI promptly began an ad campaign in national publications to promote industry self-regulation of online consumer privacy. According to a contemporary news account, the PLI initiative “follows a recent Federal Trade Commission recommendation that Congress establish legislation to protect online consumer privacy.” 
A description of the PLI from its website in 2001 stated:
The Privacy Leadership Initiative was formed by leaders of a number of different companies and associations who believe that individuals should have a say in how and when their personal information can be used to their benefit.
The purpose of the PLI is to create a climate of trust which will accelerate acceptance of the Internet and the emerging Information Economy, both online and off-line, as a safe and secure marketplace. There, individuals can see the value they receive in return for sharing personally identifiable information and will understand the steps they can take to protect themselves. As a result of sharing, individuals will have the power to enhance the quality of their lives through personalized information, products and services. 
Another statement from the PLI website provides a more expansive statement of the origin and purpose of the organization:
Why We Formed
The PLI was formed to provide consumers with increased knowledge and resources to help them make informed choices about sharing their personal information. We also help businesses, both large and small — in all industries — develop and maintain good privacy practices. Trust and choice are the foundation of good privacy practices, yet research shows that there is currently a lack of trust between consumers and businesses. Individuals must trust responsible businesses to use personal information in ways that benefit them — such as better, less expensive and personalized products and services — while also providing them with choices about how much personal information is gathered and by whom. Through the establishment of a common understanding about the benefits of exchanging personal information and how it can be safeguarded, the PLI will begin to restore consumer confidence.
What We’re Doing
Given that privacy is a question of trust and behavior, the PLI is developing an “etiquette”–model practices for the exchange of personal information between businesses and consumers. We will help create this code of conduct by engaging in a multi-year, multi-level effort to educate consumers and businesses. Specifically, the PLI will:
1. Conduct original research to measure and track attitudes and behavior changes among consumers and to better understand how the flow of information affects the economy and people’s lives on a day-to-day basis;
2. Compile and refine existing privacy guidelines and create The Privacy Manager’s Resource Center, a new service that assists businesses in developing their privacy programs;
3. Design an interactive Web site — understandingprivacy.org — to make privacy simpler for consumers, businesses, trade groups, journalists, academics, policymakers and all other interested parties; and
4. Educate consumers about technology and tools that protect their interests without diminishing the benefits of exchanging personal preferences with responsible companies.
Whether online or off, the flow of information is critical to the growth and success of our economy. Members of the PLI recognize that businesses must take an active role in ensuring that privacy practices evolve to meet consumer needs. While there is no simple answer for an issue this complex, for PLI members that means understanding what individuals want, tackling those challenges and initiating change, while being accountable and building confidence. These are the keys to creating a climate of trust between responsible businesses and consumers. 
Other accounts from the time support the notion that PLI was intended to promote self-regulation. A 2001 story on Internet privacy from a publication of the Wharton School at the University of Pennsylvania focused on the self-regulation goal:
While Congress debates legislation on Capitol Hill, the business community is actively promoting other options. Chief among these is self-regulation.
Earlier this month, for example, the Privacy Leadership Initiative (PLI) – a group of executives from such companies as AT&T, Dell Computer, Ford, IBM and Procter & Gamble – announced a $30-$40 million campaign aimed at showing consumers how they can use technology to better protect their privacy online. 
By the middle of 2002, the threat of regulation had diminished enough that PLI “transitioned” its activities to others. BBBOnLine, a program of the Better Business Bureau system, took over the PLI website (understandingprivacy.org). The BBBOnline privacy program, which lasted longer than the PLI, is no longer operational, and its details are discussed elsewhere in this report.
By the middle of September 2002, the transition of the website to BBBOnLine appeared to be complete. However, by January 2008, the understandingprivacy.org website had changed entirely, offering visitors an answer to the question “Can microwave popcorn cause lung disease?” By the beginning of 2011, the understandingprivacy.org website was controlled by Media Insights, a creator of “content-rich Internet publications.” Other Media Insights websites include BunnyRabbits.org, Feathers.org and PetBirdReport.com. It is an ignominious end point.
The Online Privacy Alliance
The Online Privacy Alliance was created in 1998 by former Federal Trade Commissioner Christine Varney. OPA’s earliest available webpage described the organization as a cross-industry coalition of more than 60 global corporations and associations.
The first paragraph of the background page on its website stated clearly its interest in promoting self-regulation:
Businesses, consumers, reporters and policy makers at home and abroad are watching closely to see how well the private sector fulfills its commitment to create a credible system of self-regulation that protects privacy online. One of the most important signs that self-regulation works is the growing number of web sites posting privacy policies. 
In July 1998, OPA released a paper describing Effective Enforcement of Self-regulation.  In November 1999, a representative of the OPA appeared at an FTC workshop on online profiling and participated in a session on the role of self-regulation.  OPA self-regulatory principles were cited by industry representatives before the FTC and elsewhere. 
It is difficult to chart with precision the deterioration of the OPA. By all appearances, the OPA is defunct. It no longer accepts members, and the primary evidence of its activity is continuing small changes to its website. A review of webpages available at the Internet Archive shows a decline of original OPA activities starting in the early 2000s. For example, the first webpage available for 2004 prominently lists OPA news, but the first item shown is dated March 2002 and the next most recent item is dated November 2001. The OPA news on the first webpage available for 2005 shows four press stories from 2004, but the most recent OPA item was still from November 2001. The OPA news on the first webpage available for 2008 shows two news stories from 2006, and no reported OPA activity more recent than 2001. There is little or no evidence after 2001 of OPA activities or participation at the Federal Trade Commission.
The threat that fostered the creation of the OPA apparently had disappeared. Wikipedia categorizes OPA under defunct privacy organizations. 
The OPA website continues to exist and appears to have been reformatted and updated at some time after 2008. The website has some links to recent news items, but a “More OPA News” link at the bottom connects to a webpage that shows no item more recent than 2001. The main OPA webpage also includes links to old OPA documents such as Guidelines for Online Privacy Policies (approximately 533 words) and Guidelines for Effective Enforcement of Self-Regulation (approximately 1269 words). The website continues to offer old items, such as an OPA Commentary to the Mission Statement and Guidelines dated November 19, 1998.
The list of members on its website as recently as May 2011 included at least one company (Cendant) that no longer existed at that time. The membership page was not dated, and members numbered approximately 30, fewer than half the number reported in 1998. The website now reports that membership is “closed.”
The Network Advertising Initiative (1999-2007 version)
The network advertising industry announced the formation of the Network Advertising Initiative at an FTC workshop in 1999. NAI issued its standards, a 21-page document, the next year.  The core concept – the opt-out cookie – has been criticized as a technical and policy failure, and it remains highly controversial.  The NAI is of particular note because the Federal Trade Commission voted on its creation.
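The fragility of the opt-out cookie is easier to see with a concrete illustration. The sketch below is a minimal, hypothetical model of the mechanism, not NAI's or any member's actual implementation: the ad network sets a cookie whose value signals "do not target me" and checks for it on later requests. The cookie name and sentinel value here are invented for illustration. Because the opt-out lives in the browser's cookie jar, clearing cookies silently erases the opt-out, one of the failure modes the report describes.

```python
# Minimal sketch of an opt-out cookie (hypothetical names and values).
from http.cookies import SimpleCookie

OPT_OUT_NAME = "id"        # hypothetical cookie name
OPT_OUT_VALUE = "OPT_OUT"  # hypothetical sentinel meaning "do not target"

def set_opt_out() -> str:
    """Build the Set-Cookie header an ad server might send on opt-out."""
    cookie = SimpleCookie()
    cookie[OPT_OUT_NAME] = OPT_OUT_VALUE
    cookie[OPT_OUT_NAME]["max-age"] = 5 * 365 * 24 * 3600  # long-lived
    return cookie.output(header="Set-Cookie:")

def is_opted_out(cookie_header: str) -> bool:
    """Check an incoming Cookie header for the opt-out sentinel."""
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    return (OPT_OUT_NAME in cookie
            and cookie[OPT_OUT_NAME].value == OPT_OUT_VALUE)

print(set_opt_out())
print(is_opted_out("id=OPT_OUT"))  # opt-out cookie present
print(is_opted_out(""))            # cookies cleared: opt-out silently lost
```

The design also inverts the usual default: the absence of the cookie is treated as consent to targeting, so a privacy-protective action by the consumer (deleting cookies) restores tracking.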
When it began, NAI membership consisted of 12 companies, which was a fraction of the industry engaging in behavioral ad targeting. By 2002, membership hit a low of two companies.  This was a significant lack of participation by the industry. When the NAI created a category of associate members who were not required to be in full compliance with the NAI standards, membership increased, with associate members outnumbering regular members by 2006. Eventually, NAI eliminated the associate membership category. 
The NAI delegated enforcement of its standards to TRUSTe, an unusual arrangement given that TRUSTe was itself a member of NAI for one year. Over several years, the scope of TRUSTe's public reporting on NAI complaints steadily narrowed until 2006, when separate reporting about NAI by TRUSTe stopped altogether. There is no evidence that the audits of NAI members required by the NAI principles were ever conducted, and no information about audits of members was ever made public.
Much of the pressure that produced the NAI came from the Federal Trade Commission. Industry reacted in 1999 to an FTC behavioral advertising workshop, and the NAI self-regulatory principles were drafted with the support of the FTC.  Pressure from the FTC diminished or disappeared quickly, and by 2002, only two NAI members remained. When the FTC again showed interest in online behavioral advertising in 2008, the NAI began to take steps to fix the problems that had developed with its 2000 principles.  One of those steps was “promoting more robust self-regulation by today opening a 45-day public comment period concurrent with the release of a new draft 2008 NAI Principles.”  NAI never sought public comment on the original principles.
Because we remain in a period of renewed Federal Trade Commission and congressional interest in privacy, it is too soon to evaluate the new NAI efforts. Only when the pressure for better privacy rules has faded will it be possible to evaluate the new NAI activities fairly.
There were substantive problems with the original NAI principles as well. The conclusion of the World Privacy Forum Report summarizes the NAI failures:
The NAI has failed. The agreement is foundationally flawed in its approach to what online means and in its choice of the opt-out cookie as a core feature. The NAI opt-out does not work consistently and fails to work at all far too often. Further, the opt-out is counter-intuitive, difficult to accomplish, easily deleted by consumers, and easily circumvented. The NAI opt-out was never a great idea, and time has shown both that consumers have not embraced it and that companies can easily evade its purpose. The original NAI agreement has increasingly limited applicability to today’s tracking and identification techniques. Secret cache cookies, Flash cookies, cookie re-setting techniques, hidden UserData files, Silverlight cookies and other technologies and techniques can be used to circumvent the narrow confines of the NAI agreement. Some of these techniques, Flash cookies in particular, are in widespread use already. These persistent identifiers are not transparent to consumers. The very point of the NAI self-regulation was to make the invisible visible to consumers so there would be a fair balance between consumer interests and industry interests. NAI has not maintained transparency as promised.
The behavioral targeting industry did not embrace its own self-regulation. At no time does it appear that a majority of behavioral targeters belong to NAI. For two years, the NAI had only two members. In 2007 with the scheduling of the FTC’s new Town Hall meeting on the subject, several companies joined NAI or announced an intention to join. Basically, the industry appears interested in supporting or giving the appearance of supporting self-regulation only when alternatives are under consideration. Enforcement of the NAI has been similarly troubled. The organization tasked with enforcing the NAI was allowed to become a member of the NAI for one year. This decision reveals poor judgment on the part of the NAI and on the part of TRUSTe, the NAI enforcement organization. Further, the reporting of enforcement has been increasingly opaque as TRUSTe takes systematic steps away from transparent reporting on the NAI. If the enforcement of the NAI is neither independent nor transparent, then how can anyone determine if the NAI is an effective self-regulatory scheme? The result of all of these and other deficiencies is that the protections promised to consumers have not been realized. The NAI self-regulatory agreement has failed to meet the goals it has stated, and it has failed to meet the expectations and goals the FTC laid out for it. The NAI has failed to deliver on its promises to consumers. 
The NAI self-regulatory effort that began in 1999 was a demonstrable failure within a few years.
BBBOnline Privacy Program
The BBBOnline Privacy Program began in 1998, in response to “the need identified by the Clinton Administration and businesses for a major self-regulation initiative to protect consumer privacy on the Net and to respond to the European privacy initiatives.”  Founding sponsors included leading businesses, such as AT&T, GTE, Hewlett-Packard, IBM, Procter & Gamble, Sony Electronics, Visa, and Xerox.  The program was operated by the Council of Better Business Bureaus through its subsidiary, BBBOnLine. There may have been some consumer group participation in the development of the BBBOnLine privacy program.
It is difficult to determine how many companies participated in the BBBOnline privacy program. A 2000 Federal Trade Commission report on online privacy said that “[o]ver 450 sites representing 244 companies have been licensed to post the BBBOnLine Privacy Seal since the program was launched” in March 1999. Whether the numbers increased in subsequent years is unknown, but the number reported in 2000 clearly represents a tiny fraction of websites and companies. It may be that the more rigorous requirements that BBBOnline asked its members to meet were a factor in dissuading many companies from participating.
BBBOnline stopped accepting applications for its privacy program sometime in 2007.  The specific reasons the program terminated are not clear, but it seems likely that it was the result of lack of support, participation, and interest. Self-regulation for the purpose of avoiding real regulation is one thing, but the active and substantial self-regulation offered by BBBOnline may have been too much for many potential participants. BBBOnline continues to operate other programs, including an EU Safe Harbor dispute resolution service,  but there is no evidence on its website of the original BBBOnline privacy program. Interestingly, some companies continue to cite the now-defunct BBBOnline privacy program in their privacy policies. 
III. Discussion: Government Privacy Self-Regulatory Activities
This section reviews several other privacy self-regulatory activities that share some characteristics with the industry self-regulatory programs discussed above but differ in various ways. The most noticeable difference is the role of the government in the programs. The Department of Commerce is involved in the Safe Harbor Framework, and the Federal Trade Commission is involved in the Children’s Online Privacy Protection Act.
Department of Commerce Safe Harbor Framework 
The Safe Harbor Framework operated by the Department of Commerce started in 2000 with an agreement between the Department and the European Commission.  The Safe Harbor Framework differs somewhat from the other self-regulatory activities discussed in this report because of the role played by the Department. However, the Department’s role in the Safe Harbor Framework did not prevent the deterioration of the Safe Harbor over time or stop the lack of compliance by companies that participated in the Safe Harbor.
With the adoption of the European Union’s Data Protection Directive  in 1995 and its implementation in 1998, much of the concern about transborder data flows of personal information centered on the export restriction policies of the Directive. Article 25 of the Directive generally provides that exports of personal data from EU Member States to third countries are allowed if the third country ensures an adequate level of protection.  While the EU determined that some countries (e.g., Argentina, Canada, and Switzerland) provide an adequate level of privacy protection according to EU standards, the United States has never been evaluated for adequacy or determined to be adequate.
Restrictions on exports of personal data from Europe created some significant problems and uncertainties for both US and EU businesses, including online businesses. Pressured by the American business community, the Commerce Department intervened to resolve the threats to US business presented by the Data Protection Directive.
The International Trade Administration of the Department of Commerce now operates the Safe Harbor framework. The Commerce Department website maintains a list of organizations that filed self-certification letters. Only organizations that are subject to the jurisdiction of the Federal Trade Commission or the Department of Transportation are eligible to participate. This limitation means that many companies and organizations that transfer personal information internationally cannot qualify for participation either in whole or in part.
Three studies of the Safe Harbor Framework have been conducted since its start. The first study was conducted in 2001 at the request of the European Commission Internal Market DG. The second study, completed in 2004, was also conducted at the request of the European Commission Internal Market DG; an international group of academics conducted it. The third study was prepared by Chris Connolly, director of an Australian management consulting company with expertise in privacy, authentication, electronic commerce, and new technology.
Overall, the three studies found the same problems with Safe Harbor. Companies that claim to meet the Safe Harbor requirements are not actually in compliance with those requirements. Evidence from the three reports suggests that the number of companies not in compliance has increased over time.
There is no evidence of improvement in the administration of the Department’s Safe Harbor activities. Perhaps the most prominent response to the reports of noncompliance was the addition of a disclaimer on the Department’s Safe Harbor website indicating that the Department cannot guarantee the accuracy of the information it maintains. It appears that the Department has made some changes to its website over the years, but there remains a lack of evidence of any substantive efforts by the Department to monitor or enforce compliance.
While the Safe Harbor Framework is not a pure industry-run self-regulatory activity because of the role of the Department of Commerce, it shares characteristics with industry self-regulatory activities: interest in the Safe Harbor Framework diminished over time, and business support and participation deteriorated. Enforcement has been rare, and the Department never conducted or required audits of participants.
The shortcomings of the Safe Harbor Framework have come to the attention of some data protection authorities in Europe. In April 2010, the Düsseldorfer Kreis, a working group composed of the 16 German federal state data protection authorities with authority over the private sector, adopted a resolution applicable to those who export data from Germany to US organizations that self-certified compliance with the Safe Harbor Framework. The resolution tells German data exporters that they must verify whether a self-certified data importer in the US actually complies with the Safe Harbor requirements.
Essentially, the action by the German state data protection authorities rejects in significant part the Safe Harbor Framework, particularly the self-certification as it appears on the Department of Commerce website. The Düsseldorfer Kreis makes this clear when it states that the reason for its action is that “comprehensive control of US-American companies’ self-certifications by supervisory authorities in Europe and in the US is not guaranteed…” 
The Department has ignored repeated evidence that many or most Safe Harbor participants are not in compliance with the requirements. Instead, in a recent green paper, the Department claimed that the Safe Harbor Framework was “successful.”  It is not clear what standard the Department used to measure the success of the Safe Harbor Framework. All available evidence strongly suggests a substantial lack of compliance with the Safe Harbor Framework.
Children’s Online Privacy Protection Act (COPPA)
The safe harbor provision in the Children’s Online Privacy Protection Act (COPPA) is sometimes cited as a self-regulatory program. For that reason, COPPA is discussed here. However, it is crucial to note that COPPA self-regulation is significantly different from the others discussed in this report. The companies in a COPPA safe harbor must follow all the substantive standards established in the COPPA statute and FTC regulations, meaning that a participant in a safe harbor program must do everything that a non-participant must do plus bear the cost of the safe harbor. The standards cannot be changed by the participants in the self-regulatory program. The FTC formally oversees and approves COPPA safe harbor programs, a characteristic that other self-regulatory programs reviewed here lacked.
In effect, the COPPA safe harbor programs mostly engage in limited enforcement of the statute and relieve the Commission of some of the burden. This may have some benefits overall. It should not be surprising that industry participation in the safe harbor aspect of COPPA is limited. Whether COPPA self-regulation is a success or failure is a subject for reasonable debate, but COPPA has fewer characteristics of failure than the industry self-regulation discussed earlier. For example, there is a formal input procedure for consumers, the safe harbor program has not disappeared, and there has been COPPA enforcement by the FTC. The COPPA safe harbor model does not appear to be in use outside this instance. The reason may be that self-regulatory activities under a legislative scheme have little attraction when the principal purpose of industry self-regulation for privacy has been avoidance of regulation in the first place.
IV. Discussion: Combination Self-Regulatory Efforts
The self-regulatory efforts in this category include projects that have many components, including input from government, industry, academia, and civil society.
Platform for Privacy Preferences Project (P3P)
The Center for Democracy and Technology (CDT) supported the early work that eventually resulted in P3P. CDT convened an Internet Privacy Working Group that drafted a mission statement, with companies, trade associations, and consumer groups participating. A prototype was presented at an FTC workshop in 1997.
Later in the same year, P3P became a project of the World Wide Web Consortium (W3C), the main international standards organization for the World Wide Web. The working group included representatives of companies, academia, and government.  The work of drafting the formal specification took some time, and version 1.0 was finally published at the end of 2000.  A later specification was published in 2006. 
Microsoft included some support for P3P in its browser, Internet Explorer. The Firefox browser from Mozilla also provides some support. The E-Government Act of 2002 included a requirement that federal agency websites translate privacy policies into a standardized machine-readable format, and P3P is the only specification that meets the requirements. It was a promising start.
However, the extent to which commercial websites and even government websites attempted to implement P3P, or succeeded in doing so over the long term, is highly uncertain. A review of P3P published in 2008 by Professor Lorrie Faith Cranor found P3P adoption increasing overall, but adoption rates varied greatly across industries. Other findings were that P3P had been deployed on 10% of the sites returned in the top-20 results of typical searches, and on 21% of the sites in the top-20 results of e-commerce searches. A review of over 5,000 websites in both 2003 and 2006 found that P3P deployment increased over that period, although there were decreases in some sectors. The review also found high rates of syntax errors among P3P policies, but much lower rates of critical errors that prevent a P3P user agent from interpreting them. Privacy policies of P3P-enabled popular websites were found to be similar to the privacy policies of popular websites that do not use P3P.
An analysis published two years later by the CyLab at Carnegie Mellon University looked at over 33,000 websites using P3P compact policies and “detected errors on 11,176 of them, including 134 TRUSTe-certified websites and 21 of the top 100 most-visited sites.”  The study also found thousands of sites using identical invalid compact policies (CP) that had been recommended as workarounds for Internet Explorer cookie blocking. Other sites had CPs with typos in their tokens, or other errors. Fully 98% of invalid CPs resulted in cookies remaining unblocked by Internet Explorer under its default cookie settings. The analysis concluded that it “appears that large numbers of websites that use [compact policies] are misrepresenting their privacy practices, thus misleading users and rendering privacy protection tools ineffective.”  The study concluded that companies do not have sufficient incentives to provide accurate machine-readable privacy policies. 
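The compact-policy mechanism the CyLab study examined can be illustrated with a short sketch. A P3P compact policy is a quoted list of short tokens sent in an HTTP response header; the token set and validation logic below are a simplified, incomplete subset of the P3P 1.0 vocabulary, included only to show the kind of syntax checking involved, not the study's actual methodology.

```python
# Illustrative sketch only: checking P3P compact-policy (CP) tokens.
# VALID_TOKENS is a small, incomplete subset of the P3P 1.0 vocabulary.

VALID_TOKENS = {
    "NOI", "DSP", "COR", "NID", "CURa", "ADMa", "DEVa",
    "OUR", "IND", "UNI", "COM", "NAV", "STA",
}

def parse_cp_header(header_value: str) -> list[str]:
    """Extract the quoted token list from a P3P header value,
    e.g. 'CP="NOI DSP COR NID"' -> ['NOI', 'DSP', 'COR', 'NID']."""
    start = header_value.find('"')
    end = header_value.rfind('"')
    if start == -1 or end <= start:
        return []
    return header_value[start + 1:end].split()

def invalid_tokens(tokens: list[str]) -> list[str]:
    """Return tokens outside the (illustrative) valid set -- the kind
    of error counted as invalid in studies of CP deployment."""
    return [t for t in tokens if t not in VALID_TOKENS]

# A well-formed policy passes the check:
assert invalid_tokens(parse_cp_header('CP="NOI DSP COR NID"')) == []
# A nonsense policy (hypothetical token) fails it; as the study found,
# Internet Explorer's default settings still left such sites' cookies unblocked:
assert invalid_tokens(parse_cp_header('CP="POTATO"')) == ["POTATO"]
```

The asymmetry in the last two lines is the study's central finding in miniature: an invalid policy is detectable mechanically, yet the browser behavior rewarded rather than penalized it.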
In other words, the self-regulatory aspects of P3P do not appear to be working, with the CyLab study suggesting that lack of enforcement by regulators is a problem.  Neither P3P nor any industry trade association offers a P3P enforcement method.
P3P has some of the indicia of industry self-regulation in that it was inspired in part by FTC interest and motivated in part by an industry interest in avoiding legislation or regulation.  The involvement in P3P’s development and promotion by consumer groups and the White House together with industry representatives differentiates P3P from the other industry efforts discussed earlier in this report. Another differentiator is the legislative requirement that federal agencies use P3P or similar technology. P3P shares sufficient characteristics with the self-regulatory programs discussed in this report to warrant its inclusion here.
Some privacy groups opposed P3P from the beginning, largely because of concerns that it would prevent privacy legislation from passing. Company views of the project also varied.  It is not clear how much attention P3P has received in recent years from companies or privacy groups.
Unlike some of the self-regulatory activities discussed in Part II of this analysis, P3P remains in use. However, given the findings of the 2010 study of widespread misrepresentation of privacy policies by those using P3P, it is hard to call P3P any kind of success. Further, the study provides strong evidence of deliberate deception in implementation of P3P at some websites. Internet users appear to have little knowledge of P3P, although public awareness may not be essential since the controls are built into browsers and users appear to be concerned about the privacy policies that P3P is designed to convey.  Like the Commerce Department’s Safe Harbor Framework, P3P continues to exist, but both programs are so lacking in rigor and compliance that neither is fulfilling its original purpose.
V. Conclusion

Is there any reason to think that privacy self-regulation will work today when it did not work in the past? Privacy self-regulation done in the same way it has been done in the past, without sufficient consumer participation and with the same goal of simply evading real regulation and effective privacy controls, will continue to fail.
What should be done if privacy self-regulation cannot succeed is beyond the scope of this report. This report does not advocate for regulation or against improved self-regulation. The point is that there is no reason to believe that this time will be different when it comes to privacy self-regulation done in ways that have been proved to lead to failure. New approaches are needed if the goal is to offer consumers valuable, effective, and balanced privacy protections that last.
What is at stake: Implications for current privacy self-regulatory efforts
If privacy self-regulation today is constructed in the same way as in the past, will it fail in the same way as before? Questions abound. Should self-regulation cover website advertisers? Internet service providers? Data brokers? Social networking sites? Companies using location information? Apps providers? All websites? Defining the Internet universe is daunting, and even within slices of that universe, definitions and boundaries will be difficult to establish. The history of even the best-intentioned self-regulatory efforts shows how quickly policy can be rendered outdated by industry and Internet developments.
The web is changing too rapidly to expect that any given form of traditional industry-supported privacy self-regulation will make sense in a year or two. Companies track the activities of individuals today in ways that were not contemplated even a year or two ago. Companies often have no reason to expose to public view their data processing functions for definition or measurement lest they reveal a marketplace advantage.
In most areas of online activity that involve personal information, the number of companies is unknown and highly variable. To determine the penetration of self-regulation coverage, there has to be both a known, demonstrable denominator of companies that fall within the self-regulatory scheme and a numerator of those companies that are participating in the scheme. Without this basic information, there is no real way to measure the penetration of privacy self-regulation. For example, if a list of Internet advertising companies exists at all, that list will go out of date almost immediately. Thus, it is difficult to determine what percentage of the defined universe has agreed to any specific self-regulatory scheme. Even if it were possible to calculate these numbers for past privacy self-regulatory activities, the penetration would likely be low and highly variable over time.
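The arithmetic behind the penetration point is trivial; the difficulty lies entirely in the inputs. A minimal sketch with wholly hypothetical figures (no reliable count of, say, Internet advertising companies exists, which is the report's point):

```python
# Hypothetical figures only: in practice neither number is reliably
# knowable, which is precisely the measurement problem described above.
participants = 12     # companies that joined the scheme (invented figure)
industry_size = 300   # companies in the defined universe (invented figure)

# Penetration = numerator (participants) / denominator (industry universe).
penetration = participants / industry_size
print(f"Penetration: {penetration:.0%}")  # prints "Penetration: 4%"
```

The calculation is only as good as the denominator, and for a fast-changing industry the denominator is stale the moment it is compiled.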
Measuring activity through some measure other than the number of companies would probably require access to information that industry would argue is proprietary. Thus, it is harder than ever to make even basic judgments about the scope and effect of any industry-supported privacy self-regulation.
There is more at stake financially today. Revenues from personal data activities are huge. If a self-regulatory scheme had any real effect on revenues or profits, those who stayed out of the scheme could profit at the expense of those who participated. It is hard to see how a race to the bottom effect would be avoided. Still, because there are so many companies and so much money involved in the Internet space, only a small percentage of companies need to participate in a privacy self-regulatory scheme to provide an impressive amount of resources that will make the self-regulation look better than it is. Millions for show, but pennies for substance.
A poorly designed privacy self-regulation scheme that has limited market penetration and insufficient enforcement may be good enough to fool potential regulators once again. Industry is well aware that a little will go a long way for public relations purposes. Industry knows that it only needs to keep a self-regulatory program alive for a limited period. Current debates about privacy self-regulation do not place the burden on industry to prove, at least in public view, how proposed self-regulatory privacy programs are going to be substantively different from past efforts.
The Federal Trade Commission has no effective means of issuing privacy regulations because of current limits on its statutory authority. This is a structural problem that essentially compels the agency to look favorably at self-regulation because it has no alternative to offer. The FTC can always recommend legislation, but it is not clear that an FTC recommendation will be influential, that privacy legislation can pass the Congress, or that the FTC can manage to support any legislative recommendation.
Privacy self-regulation as supported by industry today suffers from the same lack of tension as in the past. Without meaningful, independent participation (e.g., by privacy and consumer advocates) in the development and oversight of privacy self-regulation, the self-regulatory standards and enforcement will be just as insufficient as they were in the past. Industry-financed oversight will not succeed because industry does not want it to be effective. For-profit privacy standards will not succeed because the pressure for profits overwhelms the efforts of would-be enforcers.
Privacy self-regulation cannot be meaningful if companies are free to drop out of any self-regulatory scheme at will or to join a different self-regulatory scheme that has weaker standards. Would-be self-regulators are not likely to sue former members. Privacy commitments typically come with a caveat that they can be changed at will at any time without notice. For-profit companies overseeing privacy standards will not be likely to discipline paying members effectively lest they lose revenues or deter participation from new players.
The threat of Federal Trade Commission action is loudly touted by self-regulators as an effective enforcement method. Reliance on Commission enforcement of self-regulation is a challenge, as industry knows that the Commission does not have the resources to enforce a self-regulation scheme covering hundreds or thousands of companies.
Self-regulators make this claim notwithstanding the absence of meaningful Commission activity against those who ignored or discontinued privacy self-regulation. How can the Commission take action against an industry-supported self-regulatory program that has lost all industry support?
The history lesson here poses challenges to present efforts for codes of conduct or self-regulation. Self-regulation, done in the same ways as in the past, is not a hopeful way forward. However, the history lesson is not without hope. This report notes key factors that have been salient in the self-regulatory failures; these factors need to be studied and avoided. It also notes factors, gleaned from observation of what has not worked, that might lay the groundwork for success. No matter what, one thing is quite certain: there is no need to repeat the past.
What Could Improve the Process?
It is not the primary purpose of this report to put forward a set of criteria for a meaningful and effective privacy self-regulatory regime. However, past experience makes clear that some approaches are more likely to produce positive results, while others are unlikely to produce any change from the past. Looking at past obstacles to success (lack of membership, short duration, no consumer representation, etc.), we can set out some basic qualities needed for improvement.
Tension in the Process
Successful privacy self-regulation requires standards responsive to the actual problems, robust policies, meaningful enforcement, and effective remedies. Privacy self-regulation of industry, by industry, and for industry will not succeed. Tension in self-regulation can be provided by a defined and permanent role for consumers who are the intended beneficiaries of privacy protection. Government may also be able to play a role, but government cannot be relied upon as the sole overseer of the process. The past has shown that the interest of the FTC waxed and waned with the political cycle, and the Department of Commerce did not provide sufficient oversight.
The scope of a self-regulatory regime must be clearly defined at the start. It must apply to a reasonable segment of industry, and it must attract a reasonable percentage of the industry as participants. There must be a method to assess the penetration of the self-regulatory regime in the defined industry.
Fair Information Practices
Any self-regulatory regime should be based on Fair Information Practices (FIPs). Implementation of FIPs will vary with the industry and circumstances, but all elements of FIPs should be addressed in some reasonable fashion.
Open Public Process
The development of basic policies and enforcement methods should take place, to a reasonable degree, in a public process open to every relevant perspective, with a full opportunity for public comment before any material decisions become permanent. Consumers must be able to select their own representatives. Neither government nor those who are to be regulated should select consumer participants; the selection should be up to the consumers themselves.
The organization that operates a privacy self-regulatory system needs to have some independence from those who are subject to the self-regulation. Those who commit to comply with privacy self-regulation must make a public commitment to comply for a term of years and a financial commitment for that entire period.
Past self-regulatory efforts and codes of conduct lack benchmarks for success. What constitutes success? Is it membership? Market share? Is it actual enforcement of the program? Without specific benchmarks for a privacy program, it is much more difficult to gauge success in real time. Without the ability to accurately assess activities within a current program, both success and failure are more difficult to ascertain and may only be gleaned in hindsight.
A Note on Methods
This historical review of privacy self-regulation is based on an extensive literature review, both online and offline, and includes information that was publicly available. This report covers the leading self-regulatory efforts. Some self-regulatory efforts may have disappeared without leaving a public record. Also, privacy seal programs arose during the period of this review, but some disappeared entirely and none developed sufficient credibility or public recognition to warrant investigation in this report beyond those noted in the report. Some activities within existing trade associations are difficult or impossible to assess from evidence available to those outside the associations.
This report was published October 14, 2011. The full report is available at www.worldprivacyforum.org/pdf/WPFselfregulationhistory.pdf. Any updates to the report will be posted to this URL.
 Also, privacy seal programs arose during the period of this review, but some disappeared entirely. None beyond BBBOnline and TRUSTe developed sufficient credibility, reliability, or public recognition to warrant investigation in this report.
 Federal Trade Commission, Individual Reference Services, A Report to Congress (1997), http://www.ftc.gov/bcp/privacy/wkshp97/irsdoc1.htm (last visited 9/20/11).
 Individual Reference Services Group, Industry Principles — Commentary (Dec. 15, 1997), http://www.ftc.gov/os/1997/12/irsappe.pdf (last visited 9/20/11).
 http://web.archive.org/web/19990125100333/http://www.irsg.org (last visited 9/20/11).
 Federal Trade Commission, Individual Reference Services, A Report to Congress (1997) (Commission Recommendations), http://www.ftc.gov/bcp/privacy/wkshp97/irsdoc1.htm (last visited 9/20/11).
 http://web.archive.org/web/20020210151622/www.irsg.org/html/3rd_party_assessments.htm (last visited 9/20/11).
 See http://web.archive.org/web/20020215163015/www.irsg.org/html/irsg_assessment_letters–2000.htm (last visited 9/20/11). Whether the reports were made public in other ways has not been explored.
 http://web.archive.org/web/20020202103820/www.irsg.org/html/termination.htm (last visited 9/20/11).
 15 U.S.C. § 6801(a).
 15 U.S.C. § 6809(3). See also Federal Trade Commission, In Brief: The Financial Privacy Requirements of the Gramm-Leach-Bliley Act (2002), http://business.ftc.gov/documents/bus53-brief-financial-privacy-requirements-gramm-leach-bliley-act (last visited 9/20/11).
 See www.irsg.org (last visited 9/20/11).
 See Marcia Savage, New Industry Alliance Addresses Online Privacy, Computer Reseller News (06/19/00), http://technews.acm.org/articles/2000-2/0621w.html#item13 (last visited 9/20/11).
 http://web.archive.org/web/20010411210453/www.understandingprivacy.org/content/about/index.cfm (last visited 9/20/11).
 http://web.archive.org/web/20010419185921/www.understandingprivacy.org/content/about/fact.cfm (last visited 9/20/11).
 Up for Sale: How Best to Protect Privacy on the Internet, Knowledge@Wharton (March 19, 2001), http://knowledge.wharton.upenn.edu/article.cfm?articleid=325 (last visited 9/20/11).
 Press Release, Privacy Leadership Initiative Transfers Initiatives to Established Business Groups (July 1, 2002), http://goliath.ecnext.com/coms2/gi_0199-1872940/Privacy-Leadership-Initiative-Transfers-Initiatives.html (last visited 9/20/11).
 http://web.archive.org/web/20020914095335/www.bbbonline.org/understandingprivacy (last visited 9/20/11).
 http://web.archive.org/web/20080118171946/http://www.understandingprivacy.org (last visited 9/20/11).
 http://www.mediainsights.com (last visited 9/20/11).
 The main webpages for the organization are at www.privacyalliance.org. However, for a brief period starting in 2005, the Internet Archive shows that the organization also maintained webpages at www.privacyalliance.com. The first pages reported by the Internet Archive for www.privacyalliance.org are dated December 2, 1998.
 http://web.archive.org/web/19990209062744/www.privacyalliance.org/join/background.shtml (last visited 9/20/11).
 http://web.archive.org/web/19990209062744/www.privacyalliance.org/join/background.shtml (last visited 2/8/11).
 http://web.archive.org/web/19981202200600/http://www.privacyalliance.org (last visited 9/20/11).
 http://www.ftc.gov/bcp/workshops/profiling/991108agenda.htm (last visited 9/20/11).
 See, e.g., Statement of Mark Uncapher, Vice President and Counsel, Information Technology Association of America, before the Federal Trade Commission Public Workshop on Online Profiling (October 18, 1999), http://www.ftc.gov/bcp/workshops/profiling/comments/uncapher.htm (last visited 9/20/11).
 http://web.archive.org/web/20040122052508/http://www.privacyalliance.org (last visited 9/20/11).
 http://web.archive.org/web/20050104085718/http://www.privacyalliance.org (last visited 9/20/11).
 http://web.archive.org/web/20080201111641/http://www.privacyalliance.org (last visited 9/20/11).
 www.ftc.gov (last visited 9/20/11).
 http://en.wikipedia.org/wiki/Online_Privacy_Alliance (last visited 9/20/11).
 http://www.privacyalliance.org/news (last visited 9/20/11).
 http://www.privacyalliance.org/news/12031998-4.shtml (last visited 9/20/11).
 http://web.archive.org/web/20110512024943/http://www.privacyalliance.org/members (last visited 9/20/11).
 This summary is adapted from a comprehensive review of the Network Advertising Initiative (NAI) published by the World Privacy Forum in 2007. The WPF report is The Network Advertising Initiative: Failing at Consumer Protection and at Self-Regulation. The WPF report contains citations and support for the conclusions presented here. http://www.worldprivacyforum.org/pdf/WPF_NAI_report_Nov2_2007fs.pdf (last visited 9/20/11).
 Id. at 7-8.
 Id. at 14-16.
 Id. at 28-29.
 Id. at 29-30.
 Id. at 25.
 Id. at 33-36.
 Id. at 37.
 Id. at 9.
 See, e.g., Network Advertising Initiative, Written Comments in Response to the Federal Trade Commission Staff’s Proposed Behavioral Advertising Principles (April 2008), http://www.ftc.gov/os/comments/behavioraladprinciples/080410nai.pdf (last visited 9/20/11).
 World Privacy Forum NAI Report at 39.
 News Release, Better Business Bureau, BBBOnLine Privacy Program Created to Enhance User Trust on the Internet (June 22, 1998), http://www.bbb.org/us/article/bbbonline-privacy-program-created-to-enhance-user-trust-on-the-internet-163 (last visited 2/10/11).
 The earliest web presence for the BBBOnline Privacy Program appeared at the end of 2000. http://web.archive.org/web/20010119180300/www.bbbonline.org/privacy (last visited 9/20/11).
 http://web.archive.org/web/20010201170700/http://www.bbbonline.org/privacy/how.asp (last visited 9/20/11).
 http://web.archive.org/web/20030407011013/www.bbbonline.org/privacy/dr.pdf (last visited 9/20/11).
 See, e.g., http://web.archive.org/web/20070124235138/www.bbbonline.org/privacy/dr/2005q3.asp (last visited 9/20/11). While the BBBOnline Privacy Program's dispute procedures were better and more transparent than other comparable procedures, the BBBOnline dispute resolution service was controversial in various ways. In 2000, for example, questions were raised when the BBBOnline Privacy Program, under pressure from the subject of a complaint, vacated an earlier decision and substituted a decision more favorable to that subject.
 Federal Trade Commission, Privacy Online: Fair Information Practices in the Electronic Marketplace, A Report To Congress 6 (2000), http://www.ftc.gov/reports/privacy2000/privacy2000.pdf (last visited 9/20/11).
 http://web.archive.org/web/20070830164536rn_1/www.bbbonline.org/privacy (last visited 2/10/11).
 http://www.bbb.org/us/european-union-dispute-resolution (last visited 9/20/11). It is not clear whether BBBOnline has actually handled any US-EU Safe Harbor complaints.
 This summary is adapted from an analysis of the Department of Commerce’s international privacy activities published by the World Privacy Forum in 2010. The WPF report is The US Department of Commerce and International Privacy Activities: Indifference and Neglect. The WPF report contains additional citations and support for the conclusions presented here. See http://www.worldprivacyforum.org/pdf/USDepartmentofCommerceReportfs.pdf (last visited 9/20/11).
 All Safe Harbor documents can be found at http://www.export.gov/safeharbor/eg_main_018237.asp (last visited 9/20/11).
 Council Directive 95/46, art. 28, on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of such Data, 1995 O.J. (L 281/47), http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:EN:HTML (last visited 9/20/11).
 Other grounds for data exports are not relevant here.
 http://www.export.gov/safeharbor/eu/eg_main_018476.asp (last visited 9/20/11).
 http://www.export.gov/safeharbor/eu/eg_main_018493.asp (last visited 9/20/11).
 The Functioning of the US-EU Safe Harbor Privacy Principles (September 21, 2001). This study was reportedly published by the European Commission, but a copy has not been located on the EU’s data protection webpage or elsewhere on the Internet. The study author is not identified in the document, but a Commission official publicly identified Professor Joel R. Reidenberg, Fordham University Law School, as the author, and the 2004 Study also identified Professor Reidenberg as the author. See 2004 Study at note 2.
 Safe Harbour Decision Implementation Study (2004), http://ec.europa.eu/justice/policies/privacy/docs/studies/safe-harbour-2004_en.pdf (last visited 9/20/11). As identified in the paper, the authors are Jan Dhont, María Verónica Pérez Asinari, and Prof. Dr. Yves Poullet (Centre de Recherche Informatique et Droit, University of Namur, Belgium) with the assistance of Prof. Dr. Joel R. Reidenberg (Fordham University School of Law, New York, USA) and Dr. Lee A. Bygrave (Norwegian Research Centre for Computers and Law, University of Oslo, Norway).
 The US Safe Harbor – Fact or Fiction? (2008), http://www.galexia.com/public/research/assets/safe_harbor_fact_or_fiction_2008/safe_harbor_fact_or_fiction.pdf (last visited 9/20/11).
 Supreme Supervisory Authorities for Data Protection in the Nonpublic Sector (Germany), Examination of the Data Importer’s Self-Certification According to the Safe-Harbor-Agreement by the Company Exporting Data (revised version of Aug. 23, 2010), http://www.datenschutz-berlin.de/attachments/710/Resolution_DuesseldorfCircle_28_04_2010EN.pdf?1285316129 (last visited 9/20/11).
 Department of Commerce Internet Policy Task Force, Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework at 44 (undated; released in December 2010), http://www.commerce.gov/sites/default/files/documents/2010/december/iptf-privacy-green-paper.pdf (last visited 9/20/11).
 15 U.S.C. §§ 6501-6506.
 15 U.S.C. § 6503.
 For a fuller history of P3P and details on the actual technical standard, see Lorrie Faith Cranor, Web Privacy with P3P (2002).
 Id. at 45.
 Id. at 46.
 Id. at 53.
 http://www.w3.org/TR/P3P11 (last visited 9/20/11).
 See http://msdn.microsoft.com/en-us/library/ms537343%28VS.85%29.aspx (last visited 9/20/11).
 See http://www-archive.mozilla.org/projects/p3p (last visited 9/20/11).
 Public Law 107-347.
 See Office of Management and Budget, Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002 (2003) (M-03-22), http://www.whitehouse.gov/omb/memoranda_m03-22 (last visited 9/20/11).
 See, e.g., Department of Health and Human Services, HHS-OCIO Policy for Machine-Readable Privacy Policies at 4.2 (Policy 2010-0001, 2010), http://www.hhs.gov/ocio/policy/hhs-ocio-2010_0001_policy_for_machine-readable_privacy_policies.html (last visited 9/20/11).
 Lorrie Faith Cranor et al., P3P Deployment on Websites, 7 Electronic Commerce Research and Applications 274-293 (2008).
 Pedro Giovanni Leon et al., Token Attempt: The Misrepresentation of Website Privacy Policies through the Misuse of P3P Compact Policy Tokens (CMU-CyLab-10-014, 2010), http://www.cylab.cmu.edu/files/pdfs/tech_reports/CMUCyLab10014.pdf (last visited 9/20/11).
 Id. at 9.
 See, e.g., Simson Garfinkel, Can a labeling system protect your privacy?, Salon (July 11, 2000), http://www.salon.com/technology/col/garf/2000/07/11/p3p (last visited 9/20/11) (“But P3P isn’t technology, it’s politics. The Clinton administration and companies such as Microsoft are all set to use P3P as the latest excuse to promote their campaign of “industry self-regulation” and delay meaningful legislation on Internet privacy.”).
 Lorrie Faith Cranor, Web Privacy with P3P 56 (2002).
 See Serge Egelman et al., Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators (2009), http://www.guanotronic.com/~serge/papers/chi09a.pdf (last visited 9/20/11).