Censorship

Mirror Party

Mirror Party is a scalable, censorship-resistant mirror network for the web. It aims to make mirroring more accessible for both clients and hosts, allowing several orders of magnitude greater participation. Mirror Party is different from previous mirroring efforts because it works entirely with software users already have – no additional packages need to be installed for either clients or hosts.

Mirror Party consists of software tools for building mirror networks and a website for matching content with volunteer hosts. It uses JavaScript in the browser to reconstruct the user experience of browsing a single website from content hosted on multiple servers.
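The client-side idea can be sketched in a few lines. The sketch below is illustrative only, not Mirror Party's actual code (which runs as JavaScript in the browser); the host names and the injected fetch function are hypothetical.

```python
# Illustrative sketch only – not Mirror Party's actual implementation.
# The idea: try a list of mirror hosts in order and serve content from
# the first one that responds. Host names here are invented.

def fetch_from_mirrors(path, mirrors, fetch):
    """Return (host, content) from the first mirror that serves `path`.

    `fetch(host, path)` is any callable returning the content string or
    raising an exception on failure (e.g. a wrapper around an HTTP GET).
    """
    errors = {}
    for host in mirrors:
        try:
            return host, fetch(host, path)
        except Exception as exc:
            errors[host] = exc  # remember the failure, try the next mirror
    raise RuntimeError(f"all mirrors failed: {errors}")


# Simulated fetcher: every mirror except the last is "blocked".
def fake_fetch(host, path):
    if host != "mirror-c.example":
        raise ConnectionError("blocked")
    return f"content of {path} from {host}"

host, body = fetch_from_mirrors(
    "/index.html",
    ["mirror-a.example", "mirror-b.example", "mirror-c.example"],
    fake_fetch,
)
print(host)  # mirror-c.example
```

Injecting the fetch function keeps the failover logic testable without network access; a browser version would use the same loop around an HTTP request.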

A tongue-in-cheek proof of concept is available at oppressedistan.com.

Table of Contents

Overview

Technical Details

  • Threat Model – description of possible attacks
  • Browsing – how mirroring works in a web browser
  • Making Mirrors – how mirrored content is downloaded & replicated
  • Social Mirroring – service to automatically match threatened content to volunteer hosts

Addendum

Source: https://github.com/wearpants/wherestheparty/wiki

More at http://mirrorparty.org/


Anonymity will be the next victim of internet censorship

By Eerke Boiten and Julio Hernandez-Castro

The worrying developments in UK internet freedom over the last year make predictions for 2014 gloomy to say the least. But censorship now affects us all so we should be thinking about it. And it’s not politically driven censorship we should be most afraid of.

This year has been characterised by tension between the UK government’s use of terrorism laws and free speech and, more recently, by concern over the unavoidable over-blocking of content in the name of protection. Yet there are greater threats to our internet freedom than the heavy hand of the government.

Oversight versus interference

Both the government and internet service providers have abdicated responsibility for the quality control of the security filters being put in place in a bid to prevent children from accessing pornographic content at home.

ISPs such as BT and Sky have delegated the task of deciding what to block to third party companies. For accountability and oversight that is bad news, but in terms of possible political interference it is actually good.

Why censorship?

There have been three main drivers for internet censorship. One is child abuse imagery, the banning of which is in line with the general population’s views. Websites containing child porn can be taken down, for example through the Internet Watch Foundation, and, since November, search engines have returned warnings and reduced results when certain terms have been searched for. Although porn in general is not illegal, the ISPs’ filters will have an impact on the blocking of child abuse by negatively affecting the distribution of borderline illegal material.

The second driver is combating extremism. It is still unclear how censorship will be applied here, but classification is highly problematic. No clear public mandate exists for this censorship, nor are links with legislation on issues such as hate speech or proscription of organisations, made explicit. In its filters, BT does not have an “extremism” category, although some content may fall within its “weapons and violence” or “hate” labels.

The final category is media organisations aiming to protect their copyright. The 2010 Digital Economy Act allows for ISPs to apply sanctions (such as bandwidth restriction and disconnection) to users who have downloaded copyrighted material. ISPs have also been forced to block file sharing websites, such as The Pirate Bay, and BT includes the practice in its filtering. But file sharing isn’t always illegal and even when it is, public opinion is divided about whether or not it is acceptable. The heavy-handed measures that can be taken show the impact of the commercial interests in this domain.

Mission creep

It’s important to note that BT is filtering in 14 categories, even though David Cameron promised nothing broader than “porn” filters. The generous explanation for this is that the third party providers being used by ISPs already had a range of filtering options in place for parental controls or use in schools, for example filtering against high-bandwidth activities like file sharing and media streaming.

More worryingly though, it has been reported that the BT filters also restrict access to sites promoting the use of proxies. This is where the next battle over internet censorship will be fought. Restricting the technological means through which internet users can obscure their IP addresses and hide the content they are accessing from others is the next big target.

Again, the excuse may be that the third party providers already have this built into their products for good reasons. In the context of school web filters, for example, circumvention of filters needs to be prevented.

But it looks like these measures could well be broadened. The IWF and the Child Exploitation and Online Protection Centre have been asked to investigate child abuse imagery in the “Dark Web”. The only predictable, and sensible, recommendation for reducing child porn to come out of this will be to restrict access to the Dark Web. And that has to be done by restricting a user’s ability to disguise their activities.

Media companies and the TTIP

This by itself will not cause the UK government to restrict access to Tor, VPNs, or proxies in general. However, the media copyright lobby will want to make it happen because peer-to-peer networks, content indexed through torrent sites, possibly using some form of anonymous routing along the way, carry the majority of the “illegal” file sharing load.

Media companies stand to gain significant powers, possibly trumping national legislation, through trade agreements such as TTIP. Using these, they will want to close off all avenues of illegal file sharing, and they are unlikely to care about collateral damage to internet privacy. Thus, we have to worry about restrictions on the use of Tor anonymous routing, VPNs, proxies, and any other ways that allow us to be more anonymous and protected on the internet.

This prediction then brings together the two big internet freedom storylines of the last six months. The government’s desire for quick internet censorship solutions will end up impeding our capacity to defend ourselves against overzealous surveillance from intelligence services and tech companies.

The Tor fightback

The good news is that Tor traffic has proved hard to detect and shut down. Many countries have tried and failed. Security companies claiming to have the required technology typically are only able to block older versions.

These days, Tor connections look like normal secure web traffic. Currently only China systematically and openly blocks Tor (with its Great Firewall) for long periods of time. They do this by blocking the eight “directory authorities” that form the entry point to Tor, in combination with Deep Packet Inspection. In response, the Tor project continually develops new camouflage methods, and also very promising tools for detecting internet interference. Russia and Japan have been reported to be considering blocking Tor. All is not lost, but we should be on our guard.

Eerke Boiten is a senior lecturer in the School of Computing at the University of Kent, and Director of the University’s interdisciplinary Centre for Cyber Security Research. He receives funding from EPSRC for the CryptoForma Network of Excellence on Cryptography and Formal Methods.

Julio Hernandez-Castro does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

Source: The Conversation


New EU telecoms rules – the shape of things to come

By Monica Horten

As ITRE, the lead committee on the Telecoms Regulation, sits down to discuss it this afternoon, this posting considers the appropriate balance between providers and citizens, inspired by a couple of recent Canadian studies.

How far will Commissioner Kroes’ new telecoms proposals rig the market in favour of large providers? That is a key policy issue at stake in the Telecoms Regulation (also known as the Connected Continent proposals). With the Regulation now in the European Parliament, MEPs have a chance to debate and amend it. A related question, therefore, is how they will tackle the demands of the big providers and what kind of balance they will provide against citizens’ rights.

Commissioner Kroes and her team have made strong PR claims that the new rules as they propose them will protect net neutrality. If she is to be believed – and there is a large ‘if’ in this context – she has protected the open Internet. However, as the ‘if’ suggests, there are many criticisms of her proposals, notably that whilst they do take a stand against restriction of services, they simultaneously permit prioritisation and would kill, rather than protect, net neutrality (see Permission to stream – how new EU telecoms rules violate net neutrality).

What has been less widely understood is the matter of bandwidth caps. The Commission’s proposed Telecoms Regulation suggests, among other things, that network providers will be permitted to impose bandwidth caps and, in that context, to be allowed to offer prioritised services.

This takes us into a new dimension of the whole policy argument. It is known that some network providers – Deutsche Telekom, for example – want to use data caps as an artificial way to restrict usage of competing services. The data cap will have certain services ‘included’ or ‘free’, and all the rest will be excluded. So users will be able to access the ‘included’ services without using up their data allowance; if they use any other services, that usage will eat into the allowance. Video services will, of course, eat up the allowance very quickly.

Do you see where this is going? The data caps will be set very cleverly so that users wanting to watch services that compete with the provider’s own services will find it uneconomic. Users will end up paying extra to access services that are not within the provider’s portfolio. Ultimately, they will stick with the provider’s own services, because they are ‘free’. Non-‘free’ services would ultimately be killed off for lack of traffic or become a niche service for those who can afford to pay for an un-capped service (bearing in mind the providers could set these prices very high).
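The arithmetic behind this squeeze is simple to sketch. All the numbers below (the cap, viewing hours, and per-hour data rate) are invented for illustration:

```python
# Back-of-the-envelope sketch of the zero-rating effect described above.
# All figures are hypothetical.

def data_used_gb(hours_video, gb_per_hour, zero_rated):
    """Data counted against the monthly cap for a month of video watching."""
    return 0.0 if zero_rated else hours_video * gb_per_hour

cap_gb = 20.0       # hypothetical monthly data cap
hours = 30          # hours of video watched per month
rate = 1.5          # GB per hour of streaming (hypothetical)

included = data_used_gb(hours, rate, zero_rated=True)     # provider's own service
competitor = data_used_gb(hours, rate, zero_rated=False)  # rival service

print(included)             # 0.0 GB – never touches the cap
print(competitor)           # 45.0 GB – more than double the 20 GB cap
print(competitor > cap_gb)  # True: the rival is effectively unusable
```

With these invented numbers, a user cannot watch a rival service for a month without overshooting the cap, while the zero-rated service costs them nothing in allowance, which is precisely the lock-in the article describes.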

Policy-makers who are educated in the old world of broadcast economics do not see what is wrong with this. Because that is exactly how it worked. And it is how, for example, BT wants it to work with its strategy of buying football rights.

But for those brought up on Internet economics, it would deal a devastating blow to the new ecology of communications. Any videos containing non-commercial, political speech would doubtless fall outside the ‘included’ allowances and so count against the data caps, chilling democratic speech with the EU’s blessing. There is also a concern that Wikipedia and other freely available information could fall outside the data caps.

[…]

More on IPTegrity: New EU telecoms rules – the shape of things to come


Internet censorship in the People’s Republic of China

Internet censorship in the People’s Republic of China is conducted under a wide variety of laws and administrative regulations. In accordance with these laws, more than sixty Internet regulations have been made by the government of the People’s Republic of China, which have been implemented by provincial branches of state-owned ISPs, companies, and organizations. The apparatus of the PRC’s Internet control is considered more extensive and more advanced than in any other country in the world. The governmental authorities not only block website content but also monitor the Internet access of individuals.

Amnesty International notes that China “has the largest recorded number of imprisoned journalists and cyber-dissidents in the world.” The offences of which they are accused include communicating with groups abroad, signing online petitions, and calling for reform and an end to corruption. The escalation of the government’s effort to neutralize critical online opinion comes after a series of large anti-Japanese, anti-pollution, anti-corruption protests, and ethnic riots, many of which were organized or publicized using instant messaging services, chat rooms, and text messages. The size of the Internet police was reported to be 2 million in 2013.

Background

The political and ideological background of the Internet censorship is considered to be one of Deng Xiaoping’s favorite sayings in the early 1980s: “If you open the window for fresh air, you have to expect some flies to blow in.” The saying is related to a period of the economic reform of China that became known as the “socialist market economy”. Superseding the political ideologies of the Cultural Revolution, the reform led China towards a market economy and opened up the market for foreign investors. Nonetheless the Communist Party of China has wished to protect its values and political ideas from “swatting flies” of other ideologies.

The Internet arrived in China in the year 1994 as an inevitable consequence of, and supporting tool for, the “socialist market economy.” Since then, and with gradually increasing penetration, the Internet has become a common communication platform and an important tool for sharing information. In 1998 the Communist Party of China feared the China Democracy Party (CDP) would breed a powerful new network that the party elites might not be able to control. The CDP was immediately banned, followed by arrests and imprisonment. That same year the “Golden Shield project” was started. The first part of the project lasted eight years and was completed in 2006. The second part began in 2006 and ended in 2008. On 6 December 2002, 300 people in charge of the Golden Shield project from 31 provinces and cities throughout China participated in a four-day inaugural “Comprehensive Exhibition on Chinese Information System”. At the exhibition, many western high-tech products including Internet security, video monitoring and human face recognition were purchased. It is estimated that around 30,000–50,000 police are employed in this gigantic project.

Legislative basis

The government of the PRC defends its right to censor the internet by claiming that the country has the right to govern the internet according to its own rules inside its borders. The white paper, released in June 2010, called the internet “a crystallization of human wisdom”. But in the document the government lays out some of the reasons why its citizens cannot get access to all of that wisdom. It says it wants to curb the harmful effects of illegal information on state security, public interests and children. “Laws and regulations clearly prohibit the spread of information that contains content subverting state power, undermining national unity, infringing upon national honor and interests,” it says. Another section of the same white paper reaffirms the government’s determination to govern the internet within its borders according to its own rules. “Within Chinese territory the internet is under the jurisdiction of Chinese sovereignty. The internet sovereignty of China should be respected and protected,” it says. It adds that foreign individuals and firms can use the internet in China, but they must abide by the country’s laws.

Attribution

Article text available under CC-BY-SA
Public domain image source in video


Rangzen: Circumventing Government-Imposed Communication Blackouts

Giulia Fanti, Yahel Ben David, Sebastian Benthall, Eric Brewer and Scott Shenker

A challenging problem in dissent networking is that of circumventing large-scale communication blackouts imposed by oppressive governments. Although prior work has not focused on the need for user anonymity, we contend that it is essential. Without anonymity, governments can use communication networks to track and persecute users. A key challenge for decentralized networks is that of resource allocation and control. Network resources must be shared in a manner that deprioritizes unwanted traffic and abusive users. This task is typically addressed through reputation systems that conflict with anonymity. Our work addresses this paradox: We prioritize resources in a privacy-preserving manner to create an attack-resilient, anonymity-preserving, mobile ad-hoc network. Our prioritization mechanism exploits the properties of a social trust graph to promote messages relayed via trusted nodes. We present Rangzen, a microblogging solution that uses smartphones to opportunistically relay messages among citizens in a delay-tolerant network (DTN) that is independent of government or corporate-controlled infrastructure.
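As a toy illustration of the prioritisation idea in this abstract, the sketch below scores a relayed message by the mutual-contact overlap between sender and receiver. This is a simplification for intuition only; Rangzen's actual mechanism computes such overlap in a privacy-preserving way, without revealing either party's contact list, and the names below are invented.

```python
# Toy sketch of trust-graph message prioritisation (not Rangzen's real,
# privacy-preserving protocol): messages relayed via senders who share
# many contacts with the receiver score higher than messages from
# strangers, so an attacker flooding the network gets deprioritised.

def priority(sender_contacts, receiver_contacts):
    """Score a relayed message by mutual-contact overlap (Jaccard index)."""
    shared = sender_contacts & receiver_contacts
    union = sender_contacts | receiver_contacts
    return len(shared) / len(union) if union else 0.0

alice = {"bob", "carol", "dave"}      # the receiver's contacts
carol = {"bob", "dave", "frank"}      # a trusted mutual friend
mallory = {"eve"}                     # an attacker with no shared contacts

print(priority(carol, alice))    # 0.5 – two shared contacts out of four total
print(priority(mallory, alice))  # 0.0 – no overlap, lowest priority
```

Scoring by overlap rather than by identity is what lets a scheme like this deprioritise abusive traffic without maintaining a reputation record tied to any particular user.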

Download

http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-128.pdf

Application

We plan to release a beta version of the Android app by the end of summer 2014. This version will be rigorously tested, peer reviewed and experimented with by students and the general public, leading to bug fixes and feature enhancements in successive releases.

http://rangzen.denovogroup.org/wp/

Note

Privacy and Trust in the Social Web


How to open blocked sites: Bypass Proxy sites, VPNs and other tricks to get around Censorship

“The Internet interprets censorship as damage and routes around it.” — John Gilmore

By Robin Welles, source

A good many internet users search for this phrase, “how to open blocked sites”, each month. Search engines clearly aren’t just for finding information – they are for venting too. But one can understand the frustration felt by anyone whose internet access is restricted by a censoring authority. What astonishingly few people know, however, is just how many ways there are to view blocked sites in this situation.

Regardless of whether your internet is censored by the government, by content filters on your local area network or by your internet service provider (ISP), there are several effective ways to view blocked sites on your internet connection, most of which are completely free, easy and safe. It’s not always easy to understand which circumvention method is required, but with the knowledge imparted below you will be better equipped to understand your filter, and how best to circumvent it.

Let’s start with how to access blocked sites behind heavy-duty censorship systems and work our way down to dealing with the more flimsy censoring methods (censorship can have another entirely different meaning, as you’ll see). It will be no surprise that the circumvention methods used to view blocked sites behind high-grade censorship generally require more investment (be it time or money) than those used to beat simpler censorship technologies. Surprisingly, though, certain free and easy internet bypassing tricks can get around internet censorship of even very high calibres.

Part II: How to Open Blocked Sites: Stateful Packet Inspection Filters

Part III: How to Open Blocked Sites: Keyword Blocking Filters

Part IV: How to Open Blocked Sites: IP Block Filters

Part V: How To Open Blocked Sites: DNS Block Filters
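The filter types covered in Parts II–V can often be told apart with simple probes. Below is a minimal, simulated sketch of one such probe for the DNS block filters of Part V: comparing the local resolver's answer for a domain against a trusted resolver's answer. The addresses are invented, and a real check would query both resolvers over the network.

```python
# Minimal sketch of detecting DNS tampering: if the local resolver's
# A-records for a domain differ entirely from a trusted resolver's,
# the domain may be DNS-blocked. Resolver answers here are simulated;
# a real probe would perform actual DNS lookups.

def dns_looks_tampered(local_answer, trusted_answer):
    """Compare two resolvers' A-record sets for the same domain."""
    if not local_answer and trusted_answer:
        return True  # local resolver refuses to answer at all
    # Completely disjoint answers suggest a redirect to a block page.
    return bool(local_answer) and local_answer.isdisjoint(trusted_answer)

# Simulated answers for a hypothetical blocked domain:
local = {"10.10.34.36"}        # hypothetical block-page address
trusted = {"93.184.216.34"}    # hypothetical genuine address
print(dns_looks_tampered(local, trusted))    # True  – answers disjoint
print(dns_looks_tampered(trusted, trusted))  # False – answers agree
```

Note that large sites legitimately return different addresses from different resolvers (CDNs, geo-routing), so a real tool would treat a disjoint answer as a signal to investigate, not as proof of blocking.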


How to effectively argue against Internet Censorship ideas

Updated August 11, 2013 with an overview table on blocking methods and a contents section up top

By Michał “rysiek” Woźniak

During the last few years I have been involved in arguing against several attempts at introducing Internet censorship in Poland. Some of these were very local and went almost unnoticed outside Poland (like Rejestr Stron i Usług Niedozwolonych — the Register of Unlawful Websites and Services, in 2010); some were part of a larger discussion (like the implementation debate around EU directives that allowed, but did not mandate, introducing child porn filters in EU member states); one made a huge splash around the world (I write about anti-ACTA campaign efforts here).

At this point I have gathered quite some experience in this, and with censorship ideas gaining support even in apparently democratic countries, I have decided it’s time to get it all in one place for others to enjoy.

The ground rules

There are some very important yet simple things one has to keep in mind when discussing censorship ideas. They can be best summarized by an extended version of Hanlon’s Razor:

“Never attribute to malice that which is adequately explained by incompetence, laziness or stupidity.”

More often than not the fact that somebody proposes or supports Internet censorship is not a result of malicious intent — however tempting such an assumption might be. Usually such support stems from the fact that people (including policymakers):

  • do not understand how the Internet works
  • do not see the connection between their idea and censorship
  • do not grasp the technical problems and the cost of implementing such ideas
  • do not see nor understand the danger of implementing them

There are two areas one has to win in, in order to have a chance of striking down such ideas:

  • logical argumentation based on technical issues;
  • purely emotional public debate.

The former is the easier one, and can give a good basis for the latter one — which is the endgame, the crucial part of winning such arguments.

The adversaries

There are usually five main groups of people that one has to discuss with in such a debate:

  • politicians;
  • civil servants;
  • law enforcement, uniformed and secret services;
  • genuinely involved (if sometimes misguided) activists;
  • business lobbyists.

There is also a sixth, crucial group that has to be swayed to win: the general public. To communicate with that group, you also need the media.

Politicians are very often the first to call for Internet censorship, and as a rule are in it for short-term political gain, not for long-term social change. The social change bit is just an excuse; the real reason why they float such ideas is more often than not politics: gaining popular support or getting their names out in the mainstream media.

Sometimes it’s enough to convince them personally; sometimes what is needed is the one argument a politician always understands — an appeal to the authority of the general public, which needs to be vocal against censorship. It is usually not wise to assume they have malicious intent (i.e. stifling opposition), as this only complicates discussing with them.

Civil servants usually do not have strong feelings one way or the other, or at least they are not allowed to show them; they do what their superiors (the politicians) tell them to do. There is no gain in alienating them — if you get militant or hostile towards them, they might then start actively supporting the other side. They are very often not technical, they might not understand the intricacies of the technology involved; they also might not grasp or see the civil rights implications.

Law enforcement, uniformed and special services treat such ideas as a power grab or at least a chance to get a new tool for doing their jobs. They usually understand the technical issues, and usually don’t care about the civil rights issues involved. They see themselves as the defenders of law and order, and implicitly assume that the end justifies the means — at least in the context of Internet censorship and surveillance. They will not get swayed by any arguments, but do not usually use emotional rhetoric.

Pro-censorship activists feel very strongly about some particular social issue (child porn; gambling; porn in general; etc.) and believe very deeply that Internet censorship is a good solution. They have a very concrete agenda and it is very hard to sway them, but it is possible and worth a try. One should not assume malicious intent on their part, they genuinely feel that Internet censorship would bring capital-G Good to the world.

They usually do not understand the technical issues nor costs involved in implementing such ideas, although they might understand the civil rights issues. If they do not grasp them, explaining these to them might be a very good tactic. If they do, they might make a conscious choice of prioritising values (i.e. “one child that does not see porn on the Internet justifies a small infringement of freedom of speech”).

When made aware of the costs of implementation, they will claim that “no price is too big to pay”.

Business lobbyists tend to be present on both sides. The lobbyists for the ISPs will fight Internet censorship, as it means higher costs of doing business for them — however, as soon as there are cash incentives on the table (i.e. public money for implementing the solutions), many will withdraw their opposition.

There are usually not many pro-censorship lobbyists, at least not at public meetings. They are not possible to sway, and will support their position with a lot of “facts”, “fact sheets”, “reports”, etc., that on closer consideration turn out to be manipulative, to say the least. Taking a close look at their arguments and being prepared to strike them down one by one tends to be an effective tactic, if a resource-intensive one. It might be possible, however, to dispel the first few “facts” supplied by them and use that as a reason to dismiss the rest of their position.

The general public is easily swayed by emotional arguments — like “think of the children”. However, due to the nature of these and the fact that the general public does not, en masse, understand the technical issues involved, it is not easy to make a case against Internet censorship, especially if the public is not at least a bit opposed to censorship and surveillance in general.

It is, nevertheless, crucial to have the public on your side, and for that one needs strong emotional arguments, and very strong factual, technical arguments to weaken the emotional pro-censorship arguments.

In order to be able to communicate with the general public you need the media, so it is crucial to have high-quality press releases, with all the information needed provided within (so that it is as easy as possible for the media to run with the material). It is also very important to remember that the media will distort, cut and twist information and quotes, and take them out of context. Hence, the language has to be thought-through and as clear (and as easy and accessible for the casual reader) as possible. Or more.

Media communiques should be short, succinct and to-the-point. This at the same time helps them being understood by the general public, makes it easier for the media to run the material and makes it harder to distort it.

When communicating with the media it is also helpful to try and keep political neutrality, by focusing on the issues and not on party membership nor programmes; and to provide actionable items from time to time, for example open letters with specific and unambiguous questions to the pro-censorship actors regarding legality, costs, technical issues, civil rights doubts, etc., to which (if run by the media) the actors will be compelled to answer.

Each of these groups, and often each of the actors involved, needs to be considered separately.

Each may be possible to sway with different arguments and in different contexts — public meetings, with press and media, will put pro-censorship politicians in hot water if there is a visible public opposition; more private meetings are a better choice when the public is generally pro-censorship but there are politicians or civil servants that oppose it, or consider opposing it: sometimes all they need is a good argument they could use publicly to support their position.

The excuses

The reasons — or excuses — for a pro-censorship stance are usually twofold:

  • social;
  • political.

Sometimes the social reasons given (i.e. child pornography or pornography in general, gambling, religion-related, public order, etc.) can be taken at face-value as the real, factual reasons behind an Internet censorship idea. This was the case several times in Poland, and probably is the case in most European censorship debates.

Sometimes, however, they are just an excuse to cover the more insidious, real political agenda (like censoring dissenting speech and opposition, as in China, Iran, Korea).

The crucial issue here is that it is not easy to tell whether or not there is a political agenda underneath the social argumentation. And while it is counter-productive to assume malice and such a political agenda in every case, it is also expedient to be aware of the real possibility that it is there, especially when the number of different actors involved in such a debate is taken into account.

Social excuses

There are a number of (often important and pressing) social issues that are brought up as reasons for Internet censorship, including:

  • child pornography (this is by far the most potent argument used by censorship supporters, and it is bound to show up in a discussion sooner or later, even if it starts with a different topic — it is wise to be prepared for its appearance beforehand);
  • pornography in general;
  • gambling;
  • addictions (alcohol, drugs available on the internet, allegedly also to minors);
  • public order (this one is being used in China, among others);
  • religion-related;
  • libel laws;
  • intellectual monopolies;
  • local laws (like Nazi-related speech laws in Germany).

The crucial thing to remember when discussing them is that no technical solution ever directly solved a social problem, and there is no reason to believe that the technical solution of Internet censorship would solve any of the social issues above.

Censorship opponents also have to be prepared for the inevitable addition of new social excuses in the course of the debate. For example, in Poland the Register of Unlawful Websites and Services was floated due to anti-gambling laws and foreign gambling sites. During the course of the discussion other excuses were used to justify it, namely child pornography and drug-related sites.

That’s why it is important not only to debate the merits of the excuse, but to show that Internet censorship and surveillance is never justified, regardless of the issue it is supposedly meant to tackle.

It is worth noting, however, that such piling-on of additional excuses for censorship can backfire for its proponents. If the anti-censorship activists make the pro-censorship actors (i.e. by using the “slippery slope” argument) state clearly at the beginning of the discussion that such censorship shall be used for the stated purpose only, then adding excuses later can be countered simply by pointing that out and noting that they are already slipping down this metaphorical slope even before the measures are introduced.

Political reasons

These are fairly straightforward. Being able to surveil and censor all Internet communications (and with each passing day the importance of the Internet as a communication medium rises) is a powerful tool in the hands of politicians. It enables them to make dissent and opposition disappear, to make it hard or impossible for opponents to communicate, and to easily establish the identities of oppositionists.

As Internet censorship requires deep packet inspection, once such a system is deployed there are no technical issues stopping those in control from modifying the communications in transit. That opens the door to an even broader set of possibilities for a willing politician, including false flag operations, sowing dissent among the ranks of the opposition, and similar actions.

The counter-arguments

There are three main groups of arguments that can be used to fight Internet censorship and surveillance ideas:

  • technical and technology-based;
  • economy- and cost-related;
  • philosophical (including those based in human rights, freedom of speech, etc.).

At the end of this section some useful analogies are also provided.

The good news is that, all things considered, there are very strong anti-censorship arguments to be made in all three areas. The bad news is that, at some point, all three kinds need to be translated into (or at least supported by) emotional arguments in order to sway the general public.

Again, as a rule, neither the general public nor the politicians and civil servants furthering the pro-censorship agenda have a decent understanding of the issues involved. Putting the issues into easily grasped and emotionally loaded examples or metaphors is an extremely potent tactic.

It is also worth making sure (if at all possible in the given local political situation) that the anti-censorship action cannot be manoeuvred into any particular political corner (e.g. dismissed as a “leftist issue”). Censorship and freedom of speech are of interest to people from every side of the political spectrum, and being able to reach out even to groups that would not agree with you on other issues is crucial.

Technical arguments

Due to the technical make-up of the Internet there are several strong technical arguments to be made against Internet censorship. The main categories these fall into are:

  • it requires far-reaching infrastructural and topological changes to the network;
  • it requires high-end filtering equipment that will likely not be able to handle the load anyway;
  • it does not work: it is easy to circumvent, it does not block everything it is supposed to, and it blocks things that are not supposed to be blocked.

There are several ways content can be blocked or filtered on the Internet, and several levels at which censorship can operate. Each method has its strong and weak points; none can guarantee 100% effectiveness; all have problems with over-blocking and under-blocking; all are costly; and all require Internet surveillance.

The effectiveness of Internet censorship measures is never complete, as there are multiple ways of circumventing them (depending on the given measure).

Over-blocking occurs when legal content that should not be blocked is accidentally blocked by a given censorship measure. Depending on the particular scheme chosen, this problem may be more or less pronounced, but it is always present and inevitable. It does not cover situations where the block list intentionally contains content that should not officially be blocked.

Similarly, under-blocking occurs when content that officially should be blocked accidentally isn’t. This is not content accessed via circumvention, but simply content that “slipped through the fingers” of the particular censorship scheme and remains accessible without any special techniques.

Both the resources required (equipment, processing power, bandwidth) and the cost of handling the list of blocked content also vary between censorship schemes and depend on the method used.

Whether or not a method employs deep packet inspection (DPI) is indicative of both how intrusive and how resource-intensive it is.

Below a short summary of possible blocking methods is provided, with information on the above factors. Possible circumvention methods are summarized at the end of this sub-section.

DNS-based blocking

over-blocking probability: high
under-blocking probability: medium
required resources: small
list handling cost: medium
circumvention: very easy
employs DPI: no

DNS-based blocking requires ISPs (who usually run their own DNS servers, which are the default for their clients) to de-list certain domains, so that they are not resolvable via these DNS servers. This means that the costs of implementing it are small.

However, users can easily switch to other DNS servers simply by reconfiguring their network connection (not a difficult task), so this method is extremely easy to circumvent.

This method has a huge potential for over-blocking, as whole domains get blocked because of individual pieces of content. A single entry published on a website or forum can thus bring down the entire site.

Because websites that purposefully publish to-be-blocked content change their domain names often (sometimes within hours!), list handling costs and the risk of under-blocking are medium.
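The mechanics (and the weakness) of DNS-based blocking can be sketched in a few lines of Python. Everything here is hypothetical — toy domain names, a toy blocklist — the point is only that the ISP resolver refuses whole domains, while any unfiltered resolver answers as usual:

```python
# Toy model of DNS-based blocking. All domains, addresses and the
# blocklist are made up for illustration.

DNS_TABLE = {
    "forum.example": "203.0.113.7",
    "news.example": "203.0.113.8",
}

BLOCKLIST = {"forum.example"}  # whole domains are de-listed, not single pages

def isp_resolve(domain):
    """The ISP's resolver: returns None (NXDOMAIN) for blocklisted domains."""
    if domain in BLOCKLIST:
        return None
    return DNS_TABLE.get(domain)

def public_resolve(domain):
    """An unfiltered third-party resolver: no blocklist at all."""
    return DNS_TABLE.get(domain)

# One offending page is enough to make the whole domain vanish...
assert isp_resolve("forum.example") is None
# ...and pointing the client at a different DNS server undoes the block.
assert public_resolve("forum.example") == "203.0.113.7"
```

Circumvention really is this cheap: the user changes a single setting (the DNS server address) and the block disappears, which is why the scheme only works against users who never try.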

IP address-based blocking

over-blocking probability: high
under-blocking probability: medium
required resources: small
list handling cost: medium
circumvention: medium
employs DPI: no

IP-based blocking requires ISPs either to block certain IP addresses internally or to route all outgoing connections via a central, government-mandated censoring entity. It is only superficially harder to circumvent, while retaining most if not all of the problems of DNS-based blocking.

Both IP address-based blocking and DNS-based blocking do not employ deep packet inspection.

Websites that purposefully publish to-be-blocked content can circumvent IP-based blocks by changing their IP address (just a bit more hassle than changing the domain name); users wanting to access blocked websites can use several methods, admittedly somewhat more complex than those for DNS-based blocking.

It is possible to improve the effectiveness of an IP-based block (and to make it harder to circumvent) by blocking whole IP ranges; this, however, dramatically raises the probability of over-blocking.
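The over-blocking inherent in IP-level blocks follows directly from shared hosting: many unrelated sites live behind one address, and the block cannot tell them apart. A toy sketch with invented names and addresses:

```python
# Toy shared-hosting map: three unrelated sites share one server IP.
HOSTING = {
    "target.example": "198.51.100.10",  # the site the censor wants gone
    "blog.example":   "198.51.100.10",  # same shared server
    "shop.example":   "198.51.100.10",  # same shared server
    "other.example":  "198.51.100.99",
}

BLOCKED_IPS = {"198.51.100.10"}

def reachable(domain):
    """An IP-level block sees only addresses, never domains."""
    return HOSTING[domain] not in BLOCKED_IPS

# Blocking one site's address silences every site co-hosted with it:
collateral = sorted(d for d in HOSTING
                    if not reachable(d) and d != "target.example")
assert collateral == ["blog.example", "shop.example"]
assert reachable("other.example")
```

Blocking a whole IP range only widens the set of innocent bystanders caught in `collateral`, which is exactly the over-blocking trade-off described above.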

URL-based blocking

over-blocking probability: low
under-blocking probability: high
required resources: medium
list handling cost: high
circumvention: medium
employs DPI: yes

This method employs deep packet inspection.

Because this method blocks only specific, URL-identified content rather than whole websites or servers (as the DNS-based and IP-based methods do), it has a much lower potential for accidental over-blocking. By the same token, it has a higher potential for under-blocking: the same content can be available on the same server under many different URLs, and changing even a small part of the URL defeats the filter.

Users wanting to access blocked content also have a wealth of methods at their disposal (including proxies, VPNs, Tor and darknets, all discussed below).
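The under-blocking problem is easy to see in a sketch: the filter matches exact URLs, so any trivial renaming on the server side defeats it (the URLs below are invented):

```python
# Toy exact-match URL filter, as used in URL-based blocking schemes.
BLOCKED_URLS = {"http://host.example/files/page1.html"}

def allowed(url):
    return url not in BLOCKED_URLS

assert not allowed("http://host.example/files/page1.html")  # blocked as intended
# The same file under a minimally different URL slips straight through:
assert allowed("http://host.example/files/page1b.html")
assert allowed("http://host.example/files/PAGE1.HTML")
```

Keeping the list current against such renames is what drives the high list-handling cost noted in the stats above.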

Dynamic blocking (keywords, image recognition, etc.)

over-blocking probability: high
under-blocking probability: high
required resources: very high
list handling cost: low
circumvention: medium
employs DPI: yes

This method uses deep packet inspection to read the contents of data being transmitted, and compares it with a list of keywords, or with image samples or video (depending on the content type).

It has a very serious potential for over-blocking (consider blocking all references to “Essex” based on the keyword “sex”, or blocking Wikipedia articles and biology texts related to human reproduction) and for under-blocking (website operators can simply avoid known keywords or use creative spelling, for instance “s3x”).

Combating under-blocking by extending the keyword lists only exacerbates the over-blocking problem. Combating over-blocking with complicated keyword rule-sets (e.g. “sex”, but only when surrounded by white-space characters) only makes the filter easier for website operators to circumvent (e.g. by embedding the keyword in a longer word that the rule no longer matches).

List handling costs are low, but this method requires huge computing and bandwidth resources, as each and every data-stream on the network needs to be inspected, scanned and compared to keywords and samples. It is especially costly for images, videos and other non-text media.

Users still can circumvent the block in several ways.
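The “Essex”/“s3x” examples above can be made concrete. In this sketch (a one-word keyword list, purely illustrative), a naive substring filter over-blocks; a whitespace-bounded rule fixes the over-blocking but is evaded by embedding the keyword in a longer word:

```python
import re

KEYWORDS = {"sex"}

def naive_block(text):
    """Substring match: the crudest form of dynamic keyword blocking."""
    t = text.lower()
    return any(k in t for k in KEYWORDS)

def word_rule_block(text):
    """Stricter rule: keyword matches only when it stands alone as a word."""
    return re.search(r"\bsex\b", text, re.IGNORECASE) is not None

assert naive_block("Welcome to Essex")       # over-blocking
assert not naive_block("totally s3x free")   # under-blocking via odd spelling

assert not word_rule_block("Welcome to Essex")  # over-blocking fixed...
assert not word_rule_block("sexuality")         # ...but embedding evades the rule
```

Every refinement of the rule-set trades one failure mode for the other, which is why dynamic blocking scores high on both over-blocking and under-blocking.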

Hash-based blocking

over-blocking probability: low
under-blocking probability: high
required resources: very high
list handling cost: high
circumvention: medium
employs DPI: yes

Hash-based blocking uses deep packet inspection to inspect the contents of data-streams, hashes them with cryptographic hash functions and compares the result to a known database of hashes to be blocked. It has a low potential for over-blocking (depending on the quality of the hash functions used), but a very high potential for under-blocking: a single small change to the content changes its hash, and the content is no longer blocked.

Resource needs here are very high: not only do all data-streams need to be inspected in real time, they also need to be hashed (computationally costly at line speed) and the hashes compared against a database. The costs of handling the hash-lists are also considerable.

Users can circumvent the block in several ways.
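The fragility of hash-based blocking comes down to one property: a cryptographic hash changes completely on any modification of its input. A sketch using Python’s standard hashlib (the “offending” bytes are a placeholder, not real content):

```python
import hashlib

def sha256_hex(data):
    """Hex digest of SHA-256, as a filtering box might compute per stream."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical database of hashes of content to be blocked.
BLOCKED_HASHES = {sha256_hex(b"offending file contents")}

def blocked(data):
    return sha256_hex(data) in BLOCKED_HASHES

assert blocked(b"offending file contents")
# Appending a single byte yields an entirely different hash, so the
# (effectively identical) content sails past the filter:
assert not blocked(b"offending file contents ")
```

This is the under-blocking described above: re-encoding, recompressing or trivially editing a file defeats the hash database until the new hash is added, so the list is perpetually behind.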

Hybrid solutions (e.g. IP-based + hash-based)

over-blocking probability: low
under-blocking probability: high
required resources: medium
list handling cost: high
circumvention: medium
employs DPI: yes

To compromise between high-resource, low-over-blocking hash-based blocking and low-resource, high-over-blocking IP- or DNS-based solutions, a hybrid solution might be proposed. Usually this means maintaining a list of IP addresses or domain names for which hash-based blocking is enabled, so that deep packet inspection operates on only a small part of the traffic. This method does employ deep packet inspection.

Required resources and list handling costs are still considerable, and under-blocking probability is high, while circumvention by users is not any harder than for hash-based block.

Overview of blocking types

blocking type              over-blocking  under-blocking  resources  list cost  circumvention  DPI
DNS-based blocking         high           medium          small      medium     very easy      no
IP address-based blocking  high           medium          small      medium     medium         no
URL-based blocking         low            high            medium     high       medium         yes
Dynamic blocking           high           high            very high  low        medium         yes
Hash-based blocking        low            high            very high  high       medium         yes
Hybrid solutions           low            high            medium     high       medium         yes

Circumvention Methods

Users willing to access blocked content can employ several circumvention methods.

Custom DNS

Custom DNS server settings can easily circumvent DNS-based blocking. This requires almost no technical prowess and can be used by anybody; a number of publicly available DNS servers can be used for the purpose. There is no easy way to block this method without deploying censorship measures beyond pure DNS-blocking.

Proxies

Proxy servers, especially anonymous ones located outside the area where the censorship solution is deployed, can quite easily circumvent any blocking method; users can modify their operating system or browser settings, or install browser add-ons that make this circumvention method trivial. It is possible to block the proxy servers themselves (via IP-blocking, keyword blocking, etc.), but it is infeasible to block them all, as they are easy to set up.

VPN

Virtual Private Networks (including “poor man’s VPNs” like SSH tunnels) require more technical prowess and usually a (typically commercial) VPN service or SSH server outside the area where blocking is deployed. Blocking all VPN/SSH traffic is possible, but requires deep packet inspection and is a serious problem for the many legitimate businesses that use VPNs (and SSH) as daily tools of the trade, allowing their employees to access corporate networks from outside the physical premises via a secured link over the Internet.

Tor

Tor, The Onion Router, is a very effective (if somewhat slow) circumvention method. It is quite easy to set up — users can simply download the Tor Browser Bundle and use it to access the Internet. Due to the way it works, Tor traffic is nigh-impossible to block (it looks much like vanilla HTTPS traffic), to the point that it is known to give access to the uncensored Internet to people living in areas with the most aggressive Internet censorship policies — namely China, North Korea and Iran.

Darknets

None of the censorship solutions can block content on darknets — virtual networks accessible anonymously only via specialised software (for instance Tor, I2P or Freenet) and highly resilient to censorship through the very technical composition of the networks themselves. Because darknets are practically impossible to block entirely and do not allow for any content blocking within them, they are effectively the ultimate circumvention method.

The only downside to using darknets is their lower bandwidth.

Indeed, deploying Internet censorship pushes the to-be-blocked content into darknets, making it ever harder for law enforcement to gather evidence and for researchers to gather data on the popularity of a given type of censored content. This is further discussed in the philosophical arguments sub-section.

TLS/SSL

While not necessarily a circumvention tool, TLS/SSL defeats any censorship method that relies on deep packet inspection, as the contents of data-streams are encrypted and readable only to the client machine and the host it is communicating with — and hence unavailable to the filtering equipment.

TLS/SSL provides end-to-end encrypted, secure communication; initially used mainly by banking and e-commerce sites, it is now employed by an ever-rising number of websites, including social networks. Accessing websites via “https://” instead of “http://” makes use of TLS/SSL; it is, however, also used to provide a secure communication layer for many other tools and protocols (for instance e-mail clients and some VoIP solutions).

Once a DPI-based censorship solution is deployed, affected users and services will gradually and naturally gravitate to this simple yet very effective solution. This means that any DPI-based censorship scheme must handle TLS/SSL communication. This can only be done in two ways:

  • block it altogether;
  • perform a man-in-the-middle (or MITM) attack on encrypted data-streams.

Blocking it is not hard (TLS/SSL communication streams are quite easy to filter out). However, as TLS/SSL is a valid, legal and oft-used way of providing security to users of legitimate businesses, especially banks, this is not a viable solution: it would cause outrage among users, security researchers and financial companies (indeed, among all companies relying on TLS/SSL for their security needs).

Performing a man-in-the-middle attack means intercepting an encrypted data-stream, decrypting it, inspecting the contents, re-encrypting them and sending them on to their destination, preferably in such a way that neither the client nor the server notices the intrusion. With properly created and signed certificates this is only viable if the censorship equipment holds a special digital certificate that allows it.

There have been instances where such certificates leaked from compromised Certificate Authorities (CAs) and were used by oppressive regimes for MITM attacks on TLS/SSL; some filtering equipment also takes advantage of such certificates — albeit ones provided wilfully and legally by a CA co-operating with the given filtering equipment vendor — to perform clandestine MITM attacks on the surveilled network.

Performing MITM on TLS/SSL is a very resource-intensive operation and only adds costs to the already high-cost DPI-based censorship schemes — filtering devices equipped with digital certificates allowing for performing clandestine MITM are considerably more costly.

A different argument carries more weight here, however. Performing a man-in-the-middle attack is even more intrusive and violating than deep packet inspection. It is a conscious act of breaking encrypted communication in order to get to its contents and then covering one’s tracks in order to make the communicating parties feel safe and unsurveiled. There are not many more hostile digital acts a government can perform on its citizenry.

Moreover, using MITM on all connections in a given network lowers trust levels dramatically. Citizens stop trusting their banking, financial and e-commerce websites — and every other website that employs TLS/SSL — so such a scheme has huge potential to hurt the whole e-economy.

It also defeats the purpose of using TLS/SSL-encrypted communication to provide security. By doing so, and by lowering users’ trust towards TLS/SSL in general, it makes them more vulnerable and insecure on the Internet.

Finally, clandestine MITM can be discovered by the users themselves: removing the Certificate Authority that issued the filtering equipment’s certificate from the client software’s certificate store will cause every MITM-ed connection — along with all connections to websites that legitimately use certificates from that CA — to be flagged with an “invalid certificate” error by client software (e.g. browsers).
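This discovery mechanism can be modelled with a toy trust store. The CA names below are invented; the point is that the filtering box can only forge acceptable certificates from a CA the client trusts, so removing that CA from the store turns every MITM-ed connection into a visible certificate error:

```python
# Toy certificate validation: a certificate is accepted if its issuing
# CA is in the client's trust store. All CA names are hypothetical.

def cert_ok(issuer, trust_store):
    return issuer in trust_store

trust_store = {"LegitRootCA", "FilterBoxCA"}

# The filtering equipment re-signs intercepted traffic with FilterBoxCA:
assert cert_ok("FilterBoxCA", trust_store)      # MITM passes unnoticed

# The user removes the suspect CA from their certificate store...
trust_store.discard("FilterBoxCA")
assert not cert_ok("FilterBoxCA", trust_store)  # ...and every MITM-ed
# connection now triggers an "invalid certificate" warning, while sites
# signed by other trusted CAs are unaffected:
assert cert_ok("LegitRootCA", trust_store)
```

The side effect described in the text also falls out of the model: any site legitimately signed by the removed CA now fails validation too, which is the price the user pays for unmasking the interception.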

Economic arguments

The economic arguments stem to a large extent from the technical issues involved. The infrastructural changes needed would be costly, the required amounts of high-end filtering equipment would be astronomically expensive, and there are labour costs too (hiring people to select the content to be blocked and to oversee the equipment). The costs of course differ from scheme to scheme and from country to country, but are always considerable.

It is also very important to underline the hidden costs that ISPs (and hence their clients) will have to cover in many such schemes. If ISPs are required to implement content filtering in their networks, they will have to foot the bill. Once this is made abundantly clear, ISPs might become strong supporters of the anti-censorship cause.

If the scheme instead entails the government paying the ISPs to implement the measures, it will be harder to get them on board; but then simply estimating the real costs of such measures and getting the word out that the taxpayer will pay is a very strong instrument in and of itself.

Either way, demanding transparency, asking the right questions about the costs and who pays them, making cost estimates, and publishing both the estimates and the answers is very worthwhile.

It is easy to overlook the broad chilling effect that rolling out Internet censorship schemes has on the whole Internet-related economy, and the general economic costs involved. Uncertainty about the law, about the blocking rules (which cannot be clear and unambiguous, for reasons discussed below), about whether a website — in many cases an investment — will be available to its intended public at all, and about the ways of appealing an unjust block will discourage businesses from investing in a web presence.

Hence, a whole industry will take a blow, and with it the whole economy.

Philosophical arguments

This topic is rife with philosophical issues. These for the most part boil down to the question of whether the end (blocking child pornography, or whatever other excuse is used) justifies the means (an infrastructure overhaul, huge costs, and infringement of the freedom to communicate and the right to privacy).

Of course the main axis of anti-censorship philosophical argument is civil rights. The right to privacy, freedom of speech and secrecy of correspondence are mostly codified in international treaties and provide a very strong basis here.

However, to make censorship proponents (and the general public!) understand the civil rights implications of their ideas, it is crucial to fight the distinction between “real world” and “virtual world”.

For every technically literate person this distinction does not exist; it is clearly just a figure of speech. For most Internet censorship proponents, however, the distinction feels real — and that impression is the enabler. It implies that current laws, regulations and civil rights statutes do not apply in the “virtual world”. It is perceived as a tabula rasa, a completely new domain whose rules are yet to be created, where it is therefore acceptable to introduce solutions that would be considered unacceptable in the “real world”.

Physical world examples are very helpful here — the classic one being the postal service opening, reading and censoring our paper-mail as a metaphor of Internet censorship and surveillance.

There is also the question of the “real-ness” of the “virtual world” for Internet censorship proponents. Because the Internet is a “virtual” space to them, censorship and surveillance there do not “really” harm anybody, do not “really” infringe upon “real” people’s civil rights. Curiously, pro-censorship actors are incoherent here: when they speak about the harm done by the issue they propose censorship as a solution to (e.g. pornography), they see it as “real” harm done to “real” people.

It is well worth pointing out in such a debate that either the harm done in the “virtual world” is “real”, and hence Internet censorship is unacceptable; or it is not “real”, in which case censorship is unneeded.

The question of the legality of the acts that the to-be-blocked content relates to is also a valid one. There are two possibilities:

  • the acts are legal themselves, while the content is not;
  • the acts and the content are both illegal.

The former case is hard to argue even for proponents of an Internet censorship scheme. Why should certain content be blocked if the acts it depicts or relates to are not illegal? The arguments used here will revolve around censoring the content as a first step toward making the acts illegal, and they should be vehemently opposed.

In the latter case (for example, that of child pornography) one can argue that while it is of crucial importance to stop the acts from happening (in this case, the sexual abuse of children), blocking the content is in no way conducive to that aim.

Blocking does not directly help stop the acts; it does not help find the culprits; it even makes it harder to catch them — information contained in the content (GPS coordinates encoded in pictures, ambient sound in videos) or related to the means of distribution (the owner of the server’s domain name, IP logs on the hosting server) is often crucial to establishing the culprit’s identity, and blocking the content removes the possibility of using such data.

Blocking such content is sweeping the problem under the rug — also in the sense that the problem becomes less visible but in no way less real. Policy makers and the general public can become convinced that the problem is “solved” even though it still exists under the radar (i.e. children are still sexually abused even though, due to blocking, related content is harder to find on the Internet). This results in less drive to find solutions to the real problem, and less data for researchers and law enforcement.

The blocking lists themselves raise a number of questions:

  • how secure are the lists?
  • what are the rules for blocking content?
  • who creates, revises and controls them?

If the lists contain addresses, URLs or other information identifying the “evil” content, and since no blocking method is thoroughly effective (there are ways around each), the lists themselves will obviously be in high demand among those interested in such content. Simply put, they will be complete wish-lists for them — and as such they are bound to leak.

There is a good argument to be made that the very creation of such lists (which are necessary for censorship schemes) is in and of itself a reason not to introduce such measures.

Because the lists themselves cannot be made public (due to all reasons mentioned above), there is no public oversight of lists’ contents — and hence there is serious concern of over-blocking or blocking content that in no way fits the intended description of content to be blocked. This is a slippery slope: once such a system is introduced, more and more types of content will get blocked.

As far as the rules are concerned, it is often hard to precisely define the content that is supposed to be blocked. In the context of child pornography, for example, establishing the age of the person in a picture is often a challenge even for experts; should pictures of young-looking adults also be blocked? Is it pornography if it is not sexually explicit — should any picture of a young naked person be blocked? What about sexually explicit graphics or drawings depicting apparently under-age persons — should they be blocked too? If so, what about stories? We then land in a situation where genuine works of art (for example, Vladimir Nabokov’s Lolita) should apparently be blocked.

And if even viewing the blocked content is illegal, under what legal framework are the list creators allowed to review it? They would have to view it in order to review it, and would thus be breaking the law. If the law permits them to do so, why, and on what grounds? If it is bad for everybody, it is certainly also bad for them.

The final list-related issue can be summed up by the well-known quip “who watches the watchers?”. The people who control the blocking lists have immense power and immense responsibility. As there is no oversight, there is ample opportunity for mischief and manipulation — especially when the most vocal proponents of some Internet censorship schemes are not exactly the most consistent themselves.

The lists’ secrecy gives rise to yet another issue: lack of due process. If the blocking rules are not clear and unambiguous (they can’t be), and there is therefore serious concern that content will be blocked that should not have been (there is), how exactly can the operator of incorrectly blocked content appeal the block when the lists are secret? How do they even learn about the blocking, so as to distinguish it from a technical error on the network?

This can cause serious financial losses, so there should be a way for content operators to be informed that their content is being blocked, why it is blocked and what their options are for challenging the block. However, due to the secrecy of the process and the lists, this information cannot be provided — not to mention the additional costs of informing every single entity whose content is blocked.

Also, a surprisingly large number of pro-censorship actors who do not have ulterior motives take any civil-rights-based criticism of their position personally, as if their opponents were suggesting that they do indeed have ulterior motives and will use the censorship and surveillance infrastructure for their own political gain.

This is something that eluded me for a long time. Only after a meeting at which I used “the next guy” argument did a certain pro-censorship actor (a high-level representative of the Ministry of Justice) understand that we were not attacking him personally, and that there are indeed valid civil rights issues at hand.

“The next guy” argument is a very nifty way of disarming an emotionally loaded situation like that, and basically states that nobody assumes that the person (politician, civil servant, etc.) we are currently discussing Internet censorship with has ulterior motives and will abuse the system when introduced — however, nobody knows who “the next guy”, the next person to hold that office or position, will be. And it is against their potential abuse we are protesting today.

A special case of government-mandated opt-out Internet censorship is also worth considering. Such schemes have been proposed around the world (most notably in the UK), and are constructed to address some of the civil rights issues involved in blocking content that is legal but unsavoury (porn, for instance).

While the proponents of such measures claim that they completely solve these issues, this is not the case: opting out means that individuals wishing to access the unsavoury content have to divulge their data to their ISPs or to the content-blocking operators, and hence be formally associated with that content. This is not something many would be willing to do, even if they did want to access the content.

A successful line of argument against opt-out is to propose a similar, but opt-in, solution. This would give a block on unsavoury content to those who want it without creating the situation described above. However, instituting even such a block at a central level could be a stepping stone toward a mandatory central censorship solution (as the costs and technical difficulties would be similar if not the same), so opt-out blocking should be opposed entirely, with opt-in offered only as a last-resort proposition.

Emotional arguments

The basic strategy is to call things by their real names: removal or blocking of content without a court order is censorship, and due to the technical make-up of the Internet it is only possible with complete surveillance. There is no way around these terms, and censorship opponents can and should use them widely when speaking about such ideas. Again, paper-mail metaphors (private mail being opened, read and censored at post offices) are very important for conveying the seriousness of the issue.

Based on the cost of such solutions, an emotional argument can be made that the money could be much better spent — for example on hospitals, road safety programmes or orphanages. There is no shortage of problems that need solving, and the money should go there instead of financing morally and legally questionable, technologically unfeasible censorship ideas.

It can also be argued that Internet censorship introduces collective punishment — all Internet users and Internet businesses are being punished for actions of a small group of criminals. The money and resources used for Internet censorship should be instead used to punish the guilty, not the general public.

Finding organisations that work on the problem the Internet censorship scheme officially targets (e.g. the sexual abuse of children and the creation of child pornography) but oppose censorship as a method is also viable and advisable. Such organisations quite possibly exist (for instance, in Poland the KidProtect.pl foundation, which fights the sexual abuse of children, was very vocally opposed to Internet censorship, for many of the reasons stated in this text), and having them as allies is extremely effective.

If everything else fails, and as a last resort, an ad personam argument can be made that a given proponent of Internet censorship measures has a hidden agenda and wants to introduce the measures for their own personal aims. Using this argument is not recommended, however: it all but ensures that the person (and their community) will become hostile and an even stronger proponent of censorship measures than before.

Useful analogies

These analogies are very useful in conveying the technical set-up of the Internet and the technical issues around censoring it.

IP address: a physical street address, it can lead to several different businesses and individuals (i.e. domains).

Domain name: a name (either business or personal) that identifies a particular business or person under a given physical street address (i.e. IP address).

Domain name resolution: a process of “resolving” a personal or business name to a physical street address (i.e. IP address), so that a package (i.e. data) can be delivered to them.

Deep packet inspection: opening physical mail, breaking the envelope and reading the contents in order to be able to decide whether or not to censor it (as opposed to just reading the addressee and the sender data available on the envelope).

Proxy: asking somebody else to send the package (i.e. data) for you and forward you the answer of the addressee.

HTTPS: Sending encrypted snail-mail.

Man-in-the-Middle: Opening encrypted snail-mail, decrypting, reading, re-encrypting it and re-sending it to the addressee. Usually an attempt is made to do it in a clandestine way, so that neither sender nor addressee are aware of it.

Useful Quotes

A very good collection of quotes useful in anti-censorship debates is available on WikiQuote; it is also worth looking through civil rights and free speech related quotes. Some highlights are below.

“They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.” — Benjamin Franklin

“I disapprove of what you say, but I will defend to the death your right to say it.” — Evelyn Beatrice Hall

“If we don’t believe in freedom of expression for people we despise, we don’t believe in it at all.” — Noam Chomsky

“The Net interprets censorship as damage and routes around it.” — John Gilmore

Source Rys.io: How to effectively argue against Internet Censorship ideas

          +-------------------------------------------------------------------+
          |           (c) 2011 — 2013 Michał "rysiek" Woźniak                 |
          |           ---------------------------------------                 |
          |     all content, unless specified otherwise,  licensed under      |
          |     Creative Commons - By Attribution - Share Alike - 3.0 PL      |
          +-------------------------------------------------------------------+