Privacy Enhancing Technologies: Their Necessity and Future

Austin Heller




            In an online environment, many different communities interact and share ideas on a global scale. With the rise in the sharing of personal information, privacy has taken a step back to make room for innovation and prosperity. This does not need to be the cost people pay to improve society. Too often, privacy rights are violated needlessly, in ways reinforced by an unknowledgeable public. Violations thus continue to occur unnoticed and give the impression that the public simply does not care. Some of these invasions of privacy can be stopped, and many are being opposed even today.

With the use of privacy enhancing technologies (PETs), any user who interacts with the global network has the ability to maintain their privacy where privacy-protecting measures are not inherently in place. The creation and widespread use of PETs has changed the way communication occurs online and how governments react to the Internet. There is a balance to be had between privacy enhancing tools like Tor, an anonymous Internet browser, or Pretty Good Privacy (PGP), a popular and powerful encryption program, and the national security concerns that result from their use (Levy, 1995; Schultz, 2012). Two such concerns, at least for a regime that cannot handle criticism, are whistle-blowing and anonymous communication.

Whistle-blowers have a place in our world, but they cannot exist without the ability to spread information anonymously. When governments fear that the community they govern provides too safe an environment for whistle-blowers, they undoubtedly restrict communication until they’re satisfied. When this happens, it becomes untenable to maintain a free marketplace of ideas online. Free expression can be quickly stifled by a government’s regulation and intervention in ways that may permanently taint the ability of Internet users to share ideas with other users, some of whom may reside under a very different governing body and hold valuable points of view. There is more to be gained from anonymous communication than there is to lose in the way of reputation and public government image. PETs are the current answer to the concerns of the knowledgeable few, but privacy-protecting legislation that encompasses everyone fairly should be the goal. When the government does not protect privacy, it sets a precedent under which individuals must be either paranoid and defensive or docile and submissive in their demeanor online.


            The concept of privacy can be expressed in many different ways, some of which subjectively cover aspects of privacy better than others. One way to consider privacy is as a right to control one’s personal data (Birnhack & Elkin-Koren, 2011, p. 8). Under this principle, what matters primarily is the person’s ability to have complete control over all of the information that describes them. Coupled with the right to control their personal information is the desire to ensure that they have the ability to maintain their own identity. Identity management, in terms of societal impact, dictates a person’s relationship with all others and gives them the power to withhold or make known any information about themselves. With this control, a person can determine how they present themselves and in what way they interact with their environment in order to shape their social role (Phillips, 2004, p. 5). This aspect of privacy (as control) provides one way of understanding how a specific PET impacts and contributes to solving a particular problem.

Another aspect of privacy is having the right to prevent access to a person’s personal information. Privacy as access is different from privacy as control. As control, the individual can choose how the information is presented, to whom, and for how long. As access, the individual is defining themselves as a separate entity from all others, only permitting access to their information willfully and specifically (Birnhack & Elkin-Koren, 2011, p. 8). When information is considered as something that can be accessed, it places the individual in the position of isolating their information with the desire to be free of intrusion. Concerns that people face today, such as “peeping Toms, warrantless searches and laws restricting intimate behavior” (Phillips, 2004, p. 5), are most easily understood when privacy is framed in terms of access. These concerns are just another reason why it is vital to support the development and spread of PETs; more often than not, once a person’s privacy has been violated online, it becomes a terrible uphill battle to control the spread of the exposed information.

Invasions of Privacy

            Invasions of privacy as control tend to relate more to companies and content providers with regard to patents and intellectual property rights. This is clear from how privacy as control figures into an intellectual property right holder’s desire to control their information. Although that side of the argument is interesting in its own right, with regard to PETs, invasions of privacy as access play a much stronger role in the reason for their development and use. Due to the very nature of the Internet, creating a bubble of protection around an individual’s information is far easier than maintaining control over that same information across the unfathomable expanse of cyberspace. As users venture farther into this expanse, the need for new and innovative PETs has become more and more obvious. Google and Facebook are even beginning to travel about the public domain, collecting information that might not otherwise alarm the average person if it weren’t so pervasive (Hill, 2012). This should have knowledgeable users wondering how they can protect their information while engaged in public services on the Internet or even outside the home; it should have them wondering if it’s even possible.

            It is well known that Facebook has taken off as a major social networking site. Due to this, Google finally introduced a competing social networking website cleverly named Google+. One problem that this new site suffers from is a result of how it facilitates communication between users. With Google’s idea of social circles, a user types out something to the screen and then selects which circle of pre-selected people will receive the user’s message (Morda, 2011). The main problem with this approach is the level of demand that is placed on the user to ensure accurate communication. They will either have many special circles for specific announcements or will inevitably announce something to someone unintentionally. Some have noted, however, that because it’s possible to create a unique circle for each unique circumstance, the design is actually not bad at all. In fact, they would argue that the only real issue with Google+ is that others can add the user in question to their circles without their consent (Mediati, 2012). There are clearly issues with Google+, but to say that there isn’t a problem so long as users aren’t lazy isn’t realistic. Relying on users to stay consistently vigilant will only ever end in their submitting to mistaken and unnecessary invasions of privacy.

            With the habit that people have begun to form of freely divulging information online, they may, ironically, want to utilize Google’s “Me on the Web” service (Thomas, 2011). This service acts very similarly to Google Alerts, but it is geared toward the specific goal of discovering what information about the user is available online, if any. It’s been argued, though, that the service was only developed as a means to get people to create Google Profiles, and that if a person’s goal is to actually find information about themselves online, they should probably just stick with Google Alerts (Sullivan, 2011). If Google were actually serious about providing users with the highest level of knowledge about what its search engine knows about them, then Google Takeout, a service that claims to provide just that, would actually reveal that data in full (Null, 2012). It doesn’t, though, and therefore forces the user to expend an extraordinary amount of effort to discover such information when Google could be providing it outright.

            It’s obvious that Google is trying to perform two contradictory activities at once. On one hand it wants all of its users to be able to feel secure online, while on the other it wants to collect the IP addresses, queries, and search patterns of those users (Leenes & Koops, 2005, p. 5). The feeling of security that Google provides, when both of these positions are considered together, can be compared to always having a group of people sinisterly hovering overhead. It’s possible to ask the group questions, but they don’t have to give real answers. In this way, Google should either be more publicly transparent about its actions or own up to its blatant inconsistencies; its users should be able to depend on it to protect their privacy.

Luckily, there are alternatives to using search engines that are centralized and therefore prone to this type of monitoring. The iTrust system is one such solution that has been noted to be “particularly valuable for individuals who fear that the conventional centralized Internet search mechanisms might be subverted or censored” (Chuang, Lombera, Moser, & Melliar-Smith, 2011, p. 7). This particular distributed search and retrieval system is not currently available to the public, but other solutions exist in the meantime, such as Faroo (a peer-to-peer web search engine), (a self-proclaimed private search engine), Majestic-12 (a distributed web search based on community support for its results), and YaCy (a distributed web search built on a peer-to-peer web crawler) (Shark, 2007). Each of these services can be utilized by researchers, journalists, and all other curious parties that want to ask questions but don’t want those of ill intent to know that the questions were asked.

Being able to search through and browse the Internet anonymously only goes so far as a means of protecting a person’s privacy online. With the use of embedded widgets from sites like Facebook, Twitter, and Google+, the usefulness of an anonymous browsing tool becomes nonexistent. Using social networking widgets defeats the purpose of browsing anonymously in the most foolish way imaginable; logging into Facebook in order to leave a comment on a site that the user navigated to anonymously will obviously violate their anonymity. That aside, if the user isn’t browsing anonymously, simply loading the page can send Facebook and others the user’s browsing habits (Eckersley, 2011). In this way, Facebook can know everything about its users regardless of whether its services are even being used.

Compared to the world that existed prior to Facebook, it’s a bit unusual that today there’s a social information-gathering giant just a few clicks away. The world has clearly changed, as can be seen by the fact that Facebook has gathered over one billion members. With a user base that massive, Facebook would obviously need to be extra cautious about its users’ privacy settings and any changes it makes to them programmatically (Fowler, 2012). Regrettably, it isn’t as cautious as its users would like. In fact, on more than one occasion Facebook has reset the privacy preferences of its users from whatever they had personally selected back to the default, more public settings (Doctorow, 2012, p. 1; Mediati, 2012). Facebook’s privacy settings are adjusted in such a way that once a setting’s purpose has been modified, new settings are created or adapted to fill in the gaps. This continual slicing and reshuffling of users’ privacy settings is a constant source of confusion that quite naturally leads some users to become too irritated to fix them (Mediati, 2012). It should make users curious about whether such mistakes are even legal. What was once private is now just another element of the public domain online. Sadly, privacy setting alterations aren’t the only problem that Facebook users have become irritated about.

Germany, long at odds with large Internet companies over privacy, became furious about the new facial recognition feature within Facebook back in August of 2012 (Hill, 2012). It should have been obvious to Facebook that this would be Germany’s reaction after the battle the country had with Google over its Street View feature (Murphy, 2011). Starting with one small city, many Germans came to feel that they had good reason not to want their faces, locations, and homes recorded. It should give Americans pause that they are willing to accept what other countries find infuriating and imposing. The Germans’ use of politics as a privacy enhancing tool definitely needs to spread from Europe to North America. Although the motives of the German people are sincere, if every country reacted to the extent that Germany does, companies would never be allowed to share supplementary information in a beneficial way, even when maintaining privacy is their top priority.

When companies share client information, it shouldn’t be immediately interpreted as a terrible offense so long as the privacy of the individuals whose data is being shared is secured properly; hospitals need to share patient information with other hospitals, and law enforcement bureaus need to collect information relevant to criminal cases. Because of this, an entire subfield of cryptography, known as secure multi-party computation (SMC), has emerged. The purpose of an SMC is to allow different parties to calculate and compute their own results from the private data that the other party members possess while maintaining the privacy of the particular inputs, the users’ personal information (Ishai, Kushilevitz, Ostrovsky, & Sahai, 2009, p. 1). When designing a method to calculate the result, privacy as control plays a very large role in defining the methodology for protecting information. Being able to control exactly how the information is directed to other parties relates directly to the “control” concept of privacy. From the perspective of privacy as access, the rule is uniform: access must be strictly forbidden in all ways except to provide the appropriate output for the query or computation.

A common example of an SMC dilemma is Yao’s Millionaires Problem, in which a group of millionaires want to know who has the most money, but none of them want to say exactly how much they have (Sheikh, Mishra, & Kumar, 2011, p. 4). This problem translates over to the problem hospitals have when one hospital needs to know if their patient is allergic to a specific chemical, and the information is at another hospital, bundled with all other bits of information on the patient. The need for a secure means of providing that information is apparent, but the problem is not as settled as some would like. There are novel ways of maintaining privacy among the millionaires and the hospitals, but nothing that transfers over to many different environments easily. This difficulty has led to the development of ingenious, scalable business-to-business (B2B) PETs (Phillips, 2004, p. 10). With B2B PETs being developed for an ever-growing corporate world, it should be hoped that large-scale privacy infringement would become a thing of the past; it frankly doesn’t pan out that way.

It’s undeniable that B2B PETs are nice to have available, but like all PETs, implementing them in a way that allows the businesses to operate in a trusting and straightforward manner becomes a deterrent to their use (Phillips, 2004, p. 10). Not only that, but because there are only two models for implementing an SMC (the Ideal Model Paradigm and the Real Model Paradigm), and both have flaws in the area of trust, designing the right algorithm for each situation can be very difficult or impossible, depending upon the privacy regulations of either party or the competence of the business itself (Phillips, 2004, p. 10; Sheikh, Mishra, & Kumar, 2011, p. 2). The Ideal Model Paradigm is designed around an intermediary trusted third party (TTP) that executes the request of either party. In this way, neither party has direct access to the other’s information. The Real Model Paradigm is designed around a protocol that either party can use to achieve the desired computation. Trusting the TTP in the Ideal Model, or designing a protocol that maintains the privacy of either party’s data in the Real Model, presents the largest challenge to developing a quality SMC. Although the challenge is complicated when businesses are sharing their clients’ personal information, it only makes it more obvious that everyone should learn how they can protect themselves and their information when they venture online.
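The Ideal Model Paradigm can be made concrete with a toy sketch of Yao’s Millionaires Problem. This is only an illustration under simplifying assumptions, not a real SMC protocol: the class and method names are invented, and a genuine Real Model protocol would avoid the trusted intermediary entirely.

```python
# Toy sketch of the Ideal Model Paradigm: a trusted third party (TTP)
# receives every private input but reveals only the final answer.
# All names here are hypothetical, for illustration only.

class TrustedThirdParty:
    def __init__(self):
        self._inputs = {}  # private wealth values; never exposed to the parties

    def submit(self, name, wealth):
        """Each millionaire privately submits their wealth to the TTP."""
        self._inputs[name] = wealth

    def richest(self):
        # Only the *identity* of the richest party leaves the TTP;
        # the amounts themselves are never disclosed to anyone.
        return max(self._inputs, key=self._inputs.get)

ttp = TrustedThirdParty()
ttp.submit("Alice", 4_000_000)
ttp.submit("Bob", 7_500_000)
ttp.submit("Carol", 6_200_000)
print(ttp.richest())  # -> Bob
```

The trust flaw noted above is visible in the sketch: the `_inputs` dictionary holds everything, so the entire guarantee rests on the TTP's honesty.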

Protecting Our Privacy

            As the news outlets and blogs continue to report each new privacy infringement from large Internet powerhouses such as Facebook and Google, there are motivated people in the world who find the time and resources to develop PETs (Hill, 2012; Sullivan, 2011). While the developers of these PETs have gained support from informed users online, one particular so-called PET, known as the Platform for Privacy Preferences (P3P), has some users questioning its usefulness, or whether it’s actually a PET at all.

            Ruchika Agrawal, an IPIOP Science Policy Analyst at the Electronic Privacy Information Center (EPIC), happens to be one of these people (Zoom, 2012). She has concluded that “P3P fails as a privacy-enhancing mechanism because P3P does not aim at protecting personal identity, does not aim at minimizing the collection of personally identifiable information, and is on a completely different trajectory than the one prescribed by the definition of PETs” (Agrawal, 2002, p. 4). P3P is a protocol that gives websites the ability to declare how they intend to interact with a user’s information. In this way, the browser can restrict access to sites that don’t meet the user’s predefined preferences. This would be fine enough if it worked properly and were supported by more than just Microsoft’s Internet Explorer (Richmond, 2010). Other browsers, like Firefox and Chrome, instead allow the user to specify which general level of access any-and-all websites should have to their computer.
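In practice, a P3P-aware browser reads the site's declaration from a compact policy, a string of tokens sent in an HTTP header, and compares it against the user's preferences. The following sketch shows roughly how that comparison could work; the tokens follow the P3P 1.0 vocabulary, but the acceptance rule here is a deliberate simplification, not how any particular browser actually decides.

```python
# Minimal sketch of evaluating a site's P3P compact policy (the
# CP="..." tokens from an HTTP header) against user preferences.
# The decision rule is a simplification for illustration.
import re

def parse_compact_policy(header_value):
    """Extract the space-separated tokens from a CP="..." declaration."""
    match = re.search(r'CP="([^"]+)"', header_value)
    return set(match.group(1).split()) if match else set()

def acceptable(policy_tokens, disallowed):
    """Reject the site's cookies if it declares any disallowed practice."""
    return policy_tokens.isdisjoint(disallowed)

header = 'CP="IDC DSP COR ADM TAIi PSA OUR IND"'
tokens = parse_compact_policy(header)

# A user who refuses sites that share data with unrelated parties ("UNR"
# in the P3P vocabulary) would still accept this particular policy:
print(acceptable(tokens, {"UNR"}))  # -> True
```

Agrawal's critique is visible even in this sketch: nothing in the mechanism verifies that the site actually behaves as its tokens declare, so P3P describes data collection rather than preventing it.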

            When it comes down to it, P3P is all about cookies. A cookie is a file that a website can store on a user’s computer with the intention of accessing it in the future. The main function of a cookie is to tell the website who the user is and what their history with the site is (Phillips, 2004, p. 7). Cookies remove the burden of saving user-specific information on the web server, so long as the information can be deleted without severely impacting the user experience. This is great for keeping track of information as the user goes from page to page, so long as the cookie isn’t designed to expire or is outright blocked from being created (Leenes & Koops, 2005, p. 5). Firefox and Chrome label cookies as being either third-party (used to record long-term histories of the user’s browsing) or first-party (used to individualize the user online) (Opentracker, 2012). There are many benefits to this aspect of website design, but too often their use for tracking becomes a public nuisance.
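The mechanics described above can be shown with Python's standard `http.cookies` module: the server issues a named value with an expiry, and the browser echoes it back on later requests so the site can recognize the user. The session identifier below is an invented example value.

```python
# Illustration of cookie mechanics using Python's standard library.
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header for a (hypothetical) session id.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600   # cookie expires after one hour
cookie["session_id"]["httponly"] = True  # not readable by page scripts
print(cookie.output())  # prints the Set-Cookie header line

# Browser side: on the next request the stored value comes back in a
# Cookie header, which the server parses to recognize the returning user.
returned = SimpleCookie("session_id=abc123")
print(returned["session_id"].value)  # -> abc123
```

The `max-age` attribute is what the text means by a cookie "designed to expire": once the hour passes, the browser discards the value and the user becomes anonymous to the site again.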

            There’s been a lot of negative press about cookies ever since 2000, when the White House disclosed that it was tracking which ads had been most effective at steering users from the site they were viewing to the drug office’s website (Lacey, 2000). This was done by having DoubleClick, an advertising company now owned by Google and based out of New York, have its ads record where users were going within each user’s individual cookie (Google, 2007; Lacey, 2000). This likely scared the public into thinking that the government was always watching their steps and noting which sites they navigated to on a regular basis. The public should be concerned, because a group of advertising companies, including DoubleClick and 24/7 Media, haven’t stopped collecting data on the way users move through the Internet to this day. They’ve been nice enough, though, to establish a site where this profiling is explained and can be opted out of (DeLoughry, 1999; Google, 2012). But this only pushes the burden of privacy protection back onto users; such a policy gives only those who happen to find out about this spy campaign the ability to react to it. Even then, to opt out, users must install browser-specific software, with the dubious comfort of knowing that it was developed by the very same backseat investigators. Everyone can put their trust in that, right?

            Thankfully, there is another option that accomplishes the same task from a more respectable source: Adblock Plus (Palant, 2012). With Adblock Plus, a plug-in that can be installed through the user’s browser, ads that would otherwise create and update their cookies are never rendered in the page, so long as the ad is on a list the user subscribes to. This is a welcome extension to any popular browser, but it has an obvious drawback for websites that depend upon ad revenue (Evans, 2007). Webmasters and small websites are therefore taking an economic hit due, in part, to the fear of the tracking performed by DoubleClick and other online advertising companies.

            To be fair to those websites that use the advertisement business model, there are other options. An anonymizer or proxy server can be utilized, and in doing so both parties are satisfied. The webmaster gets paid because the ads are being rendered, and if the anonymizer doesn’t block cookies, the advertising company gets to track the movement of users online. The only catch is that the advertising company won’t know the IP address of the user it’s monitoring, making it impossible to truly know the user’s identity. That’s the entire purpose of an anonymizer: to make the webpage request come from another location, giving the impression that the user’s computer is actually located at the proxy server (Agrawal, 2002, p. 2; Phillips, 2004, p. 7; “Take Control”, 2000). A website like Anonymizer.com is a perfect example, although rather expensive for the average user (Anonymizer, 2012). There are free anonymizers, but the quality of the service is exactly what the user pays for.
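The proxy arrangement described above can be sketched with the standard library: every request is routed through the proxy, so the destination site (and its advertisers) see the proxy's IP address rather than the user's. The proxy address below is a placeholder from the reserved TEST-NET range, not a real service.

```python
# Sketch of routing web requests through an anonymizing proxy using
# Python's urllib. The proxy address is a hypothetical placeholder.
import urllib.request

PROXY = "203.0.113.10:8080"  # placeholder (TEST-NET-3 address), not a real proxy

proxy_handler = urllib.request.ProxyHandler({
    "http":  f"http://{PROXY}",
    "https": f"http://{PROXY}",
})
opener = urllib.request.build_opener(proxy_handler)

# Installing the opener makes every subsequent urlopen() call go via
# the proxy, so the destination server logs the proxy's IP instead:
urllib.request.install_opener(opener)
# urllib.request.urlopen("http://example.com")  # would appear to
#                                               # originate at the proxy
```

Note that this only relocates the request's apparent origin; as the text says, unless the proxy also strips cookies, the advertising networks can still link the session together.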

            Using a proxy has its downsides, with cost and consistent uptime tending to trade off against each other. Beyond those two, a proxy is by design inherently slower than accessing the hosting web server directly. Another downside is the reliability of the proxy service’s host. If, for instance, a user wanted to use a proxy to anonymously inject malicious SQL commands into the Sony Pictures website with the goal of publishing all of its user data, the proxy service can and does submit to court orders to identify the user (Martin, 2011). Obviously, breaking the law in that manner should result in legal action, but what if it were against the law to criticize the government?

            There are still options for those who cannot exercise free speech and do not want to use a centralized anonymizing service for fear of being identified and persecuted. Tor, a decentralized network of volunteer-run routers that lets clients browse the Internet anonymously using onion routing, grew steadily in use up until about 2008 (AlSabah, Bauer, Goldberg, Grunwald, McCoy, Savage, & Voelker, 2011, p. 1). Tor is definitely a great way to browse the Internet anonymously, but it has unfortunately become tainted by child pornography, the illegal sale of guns, and drug transactions (Schultz, 2012). This raises the dilemma of wanting to communicate anonymously while not being associated with unethical activities. As if this weren’t enough, because of its design, Tor is quite slow at sending and receiving information. Studies have been done to find ways of changing Tor’s design to fix this problem (AlSabah et al., 2011, p. 2). Alas, there has been no clear solution to date that doesn’t also undermine its anonymity guarantees.
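The onion routing idea behind Tor can be modeled in a few lines: the client wraps the message in one layer of encryption per relay, and each relay can peel only its own layer, so no single relay sees both who is talking and what is being said. In this toy, XOR with a shared per-hop key stands in for Tor's real cryptography, and the relay keys are invented; it illustrates the layering alone, none of Tor's actual protocol.

```python
# Toy model of onion routing: one encryption layer per relay,
# peeled off hop by hop. XOR stands in for real cryptography.

def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR the data with a repeating key; applying it twice undoes it."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]  # hypothetical per-hop keys
message = b"GET /index.html"

# Client: apply layers in reverse order so the entry relay's layer is outermost.
onion = message
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# Each relay peels exactly one layer as the cell passes along the circuit.
for key in relay_keys:
    onion = xor_layer(onion, key)

print(onion)  # -> b'GET /index.html'
```

The layering is also why Tor is slow, as the text notes: every cell takes three encrypted hops through volunteer relays instead of one direct connection.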

            Since proxies submit to government orders and Tor carries so much criminal traffic, the only other quick solution might just be encryption. Encrypting the user’s connection and request doesn’t hide where the user is or what server they connected to, but it does provide a degree of deniability about the content of the request. Deciding which encryption method to use can be daunting for the uninitiated, so perhaps a good choice is the one that almost had its developer thrown in prison for writing and distributing it (Radcliff, 2002). Yes, the Pretty Good Privacy (PGP) encryption program was so profound that while its creator, Phil Zimmermann, was accepting the prestigious Pioneer Award from the Electronic Frontier Foundation, he stated, “I think it’s ironic ... that the thing I’m being honored for is the same thing that I might be indicted for” (Levy, 1995). Nothing excites a government’s politicians more than a new, easy means for the public to protect themselves.
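PGP's core design is hybrid encryption: a random session key encrypts the message with a fast symmetric cipher, and the recipient's public key encrypts only that small session key. The toy below illustrates that structure with deliberately insecure stand-ins, textbook RSA over tiny primes and an XOR stream, so it shows the shape of the scheme, not PGP's actual ciphers.

```python
# Toy of PGP's hybrid design: symmetric cipher for the body, public-key
# wrap for the session key. Insecure stand-ins, for illustration only.
import secrets

# Recipient's toy RSA key pair (p=61, q=53 -> n=3233, e=17, d=2753).
n, e, d = 3233, 17, 2753

def xor_stream(data: bytes, key: int) -> bytes:
    """Symmetric stand-in cipher: XOR each byte with the key's low bytes."""
    return bytes(b ^ ((key >> (8 * (i % 2))) & 0xFF) for i, b in enumerate(data))

# Sender: pick a fresh session key, encrypt the body with it,
# then wrap the session key with the recipient's public key.
session_key = secrets.randbelow(n - 2) + 2
ciphertext = xor_stream(b"meet at noon", session_key)
wrapped_key = pow(session_key, e, n)

# Recipient: unwrap the session key with the private exponent, then decrypt.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_stream(ciphertext, recovered_key)
print(plaintext)  # -> b'meet at noon'
```

This split is why PGP scales: the slow public-key step handles only a few bytes of key material, while arbitrarily long messages go through the fast symmetric cipher.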

Government Reaction and Interaction

            It was mainly the 1991 Senate bill that would have effectively banned cryptography that motivated Phil Zimmermann to release PGP for free (Levy, 1995). The United States government reacted the way it did out of fear that communication it had previously been able to monitor would become encrypted beyond its ability to intercept and react to in a timely fashion. This has been the concern of both governments and law enforcement. But while governments are concerned with national security, law enforcement’s reactions tend to take the form of demanding self-incrimination.

            This occurs far too often at the borders of the United States. One of the first public cases of this kind, which arguably violated the Fifth Amendment of the Constitution, occurred on May 6, 2011, when law enforcement demanded the encryption key of a laptop in order to incriminate the defendant in US v. Fricosu (Hofmann & Fakhoury, 2011). Law enforcement now operates under the impression that this form of compelled self-incrimination is within its rights. The case has set a startling precedent that could make a person’s inability to remember their password a criminal offense.

This type of intrusion doesn’t happen only in courtrooms; it happens to every person crossing the U.S. border (Schoen, Hofmann, & Reynolds, 2011, p. 4). Even though the Fourth Amendment plainly protects citizens from unreasonable searches and seizures, law enforcement has faced no hindrance in pressuring traveling citizens into exposing their personal data. “Congress has also weighed several bills to protect travelers from suspicionless searches at the border, but none has yet passed” (Schoen et al., 2011, p. 5). The U.S. government must find it easy to impose itself in this fashion, given the example set by governments in Europe (Green, 2010). There is a phrase for this kind of general citizen surveillance: “Big Brother.”

Every year Freedom Not Fear, an anti-surveillance organization, organizes a broad international protest against Big Brother (Rodriguez, 2012). Its goal is to raise awareness of, and demand change in, the way civil liberties are infringed by mass surveillance. This wouldn’t be necessary if the privacy of individuals weren’t being discarded as nationwide camera networks record and centralize public footage, like the CCTV surveillance cameras on many popular street corners; some officers even call this outrageous arrangement of street cameras the “ring of steel” (Bowe, 2012). At this stage, where Internet-related PETs are not applicable, demanding legislation that protects privacy might well be the new anti-totalitarian PET. If only there were a means to fight back directly and anonymously online.

This is, of course, exactly the purpose of Wikileaks: to provide a website where journalists and whistleblowers can anonymously submit their knowledge, however classified it may be, in order to create public awareness. Wikileaks provides an environment to exercise free speech while at the same time maintaining the user’s anonymity. The cost of this protection has given some state officials pause due to the very nature of the knowledge being spread. The Secretary of State, Hillary Clinton, and others have stated that the cost could very well be paid in innocent lives (Connolly, 2010). They would demand that the identities of those who have submitted confidential diplomatic cables be revealed and that the submitters be prosecuted. The American image is so easily tarnished that, in its embarrassment, the government continues to attack Wikileaks’ founder Julian Assange even today (Beckford, 2012). With something as profound as the Constitution, it’s a shame that America isn’t the shining example to be looked up to.

Europe has the upper hand, it seems, when it comes to legislation and the privacy of shared personal information. The European Commission’s justice department plainly states that “everyone has the right to the protection of personal data” (European, 2012), and to enforce this ideal, it developed the Data Protection Directive (DPD). It matches nothing in American law and is a grand step forward for protecting privacy in Europe, making the collection and movement of personal data follow strict guidelines (European, 2012). A new draft from the European Commission, unveiled on 25 January 2012, is intended to supersede the 1995 Data Protection Directive (M Law Group, 2012). The new draft’s core principles are to give individuals better control over their data and to make the transfer of that information safer for the individuals the data refers to. Schartum (2001, p. 10) believes that the European Commission should not only protect personal data by way of information and communication technology, but should also take care to avoid incidentally creating Big Brother. He was writing about the 1995 DPD, but hopefully the 2012 draft was also written with this concern in mind. The existence and purpose of PETs has evidently reached European politicians and lawmakers as they attempt to make the public’s need for them obsolete. These measures are far from complete and may never solve every problem, but they are truly a welcome step forward.


            Every person who interacts with the Internet runs the risk of exposing facts about themselves that they otherwise would not have exposed in a physically public domain. To safeguard against this injustice, PETs have been created, shared, and utilized throughout much of the world. Issues with user comprehension, suspicious centralized systems, and unconstitutional intervention from law enforcement steadily plague individuals, and they flourish best when people have no knowledge that their privacy is being violated or have come to accept these things as commonplace (Chappell, 2012; Green, 2010; Schoen et al., 2011).

            This tug-of-war between a person’s privacy and online policy can be shifted toward the individual’s side only ever so slightly with the application of PETs. In this way, users attempt to isolate themselves from the outside, wearing a digital mask inside a digital bomb shelter, as it were. But they have no control over the information that is purposely stripped from them, unbeknownst to them, and shared among information-gathering behemoths. Government legislation needs to be the complementary PET in the online privacy battle, so that people retain control of their information even when, and especially if, it has been stolen from them in secret. European governments are clearly moving in that direction as they formulate comprehensive measures to protect publicly tradable client information, but there is still the issue of personal information. Governments have to be proactive when developing ethical privacy legislation in an exponentially advancing computer age, rather than retroactive in response to criticism. This needs to happen for everyone; this needs to happen now.



Agrawal, R. (2002). Why is P3P not a PET? W3C Workshop on the Future of P3P. Retrieved from


AlSabah, M., Bauer, K., Goldberg, I., Grunwald, D., McCoy, D., Savage, S., & Voelker, G. (2011). DefenestraTor: Throwing out Windows in Tor. Privacy Enhancing Technologies, 6794, 134-154.


Anonymizer, Inc. (2012, October 20). Hide IP and anonymous web browsing software – Anonymizer. Retrieved from


Beckford, M. (2012, October 8). Julian Assange’s backers told to pay 93,500 pounds over bail breach. The Telegraph. Retrieved from


Birnhack, M. & Elkin-Koren, N. (2011). Does law matter online? Empirical evidence on privacy law compliance. Michigan Telecommunications and Technology Law Review, 17, 337-384. Retrieved from


Bowe, R. (2012, September 11). Freedom Not Fear: CCTV surveillance cameras in focus. Electronic Frontier Foundation. Retrieved from


Chappell, K. (2012). I always feel like somebody's watching me. Ebony, 67(10), 25-26.


Chuang, Y. T., Lombera, I. M., Moser, L. E., & Melliar-Smith, P. M. (2011). Trustworthy distributed search and retrieval over the Internet. Retrieved from '11/2011%20CD%20papers/ICM3731.pdf


Connolly, K. (2010, December 1). Has release of Wikileaks documents cost lives? BBC News, Washington. Retrieved from


DeLoughry. (1999). Privacy problems hurt consumers' trust in Net. Internet World, 5(34), 20.


Doctorow, C. (2012). The curious case of Internet privacy. Technology Review, 115(4), 65-66.


Eckersley, P. (2011, March 16). Tracking Protection Lists: A privacy enhancing technology that complements Do Not Track. Electronic Frontier Foundation. Retrieved from


European Commission’s Directorate General for Justice. (2012, April 4). Protection of personal data – Justice. Retrieved from


Evans, M. (2007, September 11). Adblock Plus is still evil. Mark Evans Tech. Retrieved from


Fowler, G. (2012, October 4). Facebook: One billion and counting. The Wall Street Journal. Retrieved from


Google. (2012, October 21). Google advertising cookie opt-out plugin. Retrieved from


Google. (2007, April 13). Google to acquire DoubleClick. Retrieved from


Green, D. (2010, October 13). Passwords and prosecutions. New Statesman. Retrieved from


Hill, K. (2012, August 16). Germany is freaking out about Facebook’s facial recognition feature (again). Forbes. Retrieved from


Hofmann, M. & Fakhoury, H. (2011, July 8). EFF’s Amicus Brief in support of Fricosu. Electronic Frontier Foundation: Defending your rights in the digital world. Retrieved from


Ishai, Y., Kushilevitz, E., Ostrovsky, R., & Sahai, A. (2009). Zero-knowledge proofs from secure multiparty computation. Society for Industrial and Applied Mathematics, 39(3), 1121-1152.


Lacey, M. (2000, June 22). Drug office ends tracking of web users. New York Times. Retrieved from


Leenes, R., & Koops, B. (2005). ‘Code’: Privacy's death or saviour? International Review Of Law, Computers & Technology, 19(3), 329-340.


Levy, S. (1995, April 24). The encryption wars: Is privacy good or bad? Newsweek, 125(17), 55.


M Law Group. (2012, February 2). New draft European data protection regime. Retrieved from


Martin, A. (2011, September 23). LulzSec hacker exposed by the service he thought would hide him. The Atlantic Wire. Retrieved from


Mediati, N. (2012). Social network privacy settings compared. PC World, 30(9), 37-38.


Morda, D. (2011). Five steps to configuring privacy on Google Plus (+). Branded Clever. Retrieved from


Murphy, D. (2011, April 10). Google abandons Street View in Germany. PC Mag. Retrieved from,2817,2383363,00.asp


Null, C. (2012). 'Liberate' your archived data from Google?. PC World, 30(9), 25-26.


Opentracker. (2012, October 21). Third-party cookies vs first-party cookies. Retrieved from


Palant, W. (2012). Adblock Plus for Chrome – for annoyance-free web surfing. Retrieved from


Phillips, D. J. (2004). Privacy policy and PETs. New Media & Society, 6(6), 691-706.


Radcliff, D. (2002, July 22). PGP on shaky ground. Computerworld, 36(30), 33.


Richmond, R. (2010, September 17). A loophole big enough for a cookie to fit through. The New York Times. Retrieved from


Rodriguez, K. (2012, September 14). Freedom Not Fear: Creating a surveillance-free Internet. Electronic Frontier Foundation. Retrieved from


Schartum, D. (2001). Privacy enhancing employment of ICT: Empowering and assisting data subjects. International Review Of Law, Computers & Technology, 15(2), 157-169.


Schoen, S., Hofmann, M., & Reynolds, R. (2011). Defending privacy at the U.S. border: A guide for travelers carrying digital devices. Electronic Frontier Foundation. Retrieved from


Schultz, D. (2012, August 17). A Tor of the Dark Web. Sorry for the Spam: The Adventures of Dan Schultz. Retrieved from


Shark. (2007, December 15). Anonymous web searching (& decentralized search engines). FileShareFreak. Retrieved from


Sheikh, R., Mishra, D., & Kumar, B. (2011). Secure multiparty computation: From millionaires problem to anonymizer. Information Security Journal: A Global Perspective, 20(1), 25-33.


Sullivan, D. (2011, June 15). Google’s “Me on the Web” pushes Google Profiles – take that Facebook?. Search Engine Land. Retrieved from


Take control of your own privacy online. (2000). Consumer Comments, 24(5), 2.


Thomas, K. (2011, June 16). Google’s ‘Me on the Web’ tool alerts you to personal data leaks. PC World. Retrieved from


Zoom Information, Inc. (2012, October 20). Ruchika Agrawal, IPIOP Science Policy Analyst, Electronic Privacy Information Center. Retrieved from !search/profile/person?personId=200629371&targetid=profile