
Tell Congress: We Want Trade Transparency Reform Now!


The failed Trans-Pacific Partnership (TPP) was a lesson in what happens when trade agreements are negotiated in secret. Powerful corporations can lobby for dangerous, restrictive measures, and the public can’t effectively bring balance to the process. Now, some members of Congress are seeking to make sure that future trade agreements, such as the renegotiated version of NAFTA, are no longer written behind closed doors. We urge you to write your representative and ask them to demand transparency in trade.

TAKE ACTION

Demand transparency in trade deals

Representative Debbie Dingell (D-MI) has today introduced the Promoting Transparency in Trade Act (H.R. 3339) [PDF], with co-sponsorship by Representatives Rosa DeLauro (D-CT), Tim Ryan (D-OH), Marcy Kaptur (D-OH), Jamie Raskin (D-MD), Keith Ellison (D-MN), Raúl Grijalva (D-AZ), John Conyers (D-MI), Jan Schakowsky (D-IL), Louise Slaughter (D-NY), Mark DeSaulnier (D-CA), Dan Lipinski (D-IL), Chellie Pingree (D-ME), Brad Sherman (D-CA), Jim McGovern (D-MA), Rick Nolan (D-MN), and Mark Pocan (D-WI). Representative Dingell describes the bill as follows:

The Promoting Transparency in Trade Act would require the U.S. Trade Representative (USTR) to publicly release the proposed text of trade deals prior to each negotiating round and publish the considered text at the conclusion of each round.  This will help bring clarity to a process that is currently off limits to the American people.  Actively releasing the text of trade proposals will ensure that the American public will be able to see what is being negotiated and who is advocating on behalf of policies that impact their lives and economic well-being.

We wholeheartedly agree. Indeed, these are among the recommendations that EFF has been advocating for some time, most recently at a January 2017 roundtable on trade transparency that we held with stakeholders from industry, civil society, and government. That event resulted in a set of five recommendations on the reform of trade negotiation processes that were endorsed by the Sunlight Foundation, the Association of Research Libraries, and OpenTheGovernment.org.

A previous version of the Promoting Transparency in Trade Act was introduced in the previous session of Congress but died in committee. Compared with that version, this latest bill is an improvement because it requires the publication of consolidated draft texts of trade agreements after each round of negotiations, which the previous bill did not.

Another of our recommendations that is reflected in the bill is to require the appointment of an independent Transparency Officer to the USTR. Currently, the Transparency Officer is the USTR’s own General Counsel, which creates a conflict of interest between the incumbent’s duty to defend the office’s current transparency practices and his or her duty to the public to reform those practices. An independent officer would be far more effective at pushing necessary reforms at the office.

The Promoting Transparency in Trade Act faces challenging odds to make it through Congress. Its next step towards passage into law will be its referral to the House Committee on Ways and Means, and probably its Subcommittee on Trade, which will decide whether the bill will be sent to the House of Representatives for a vote. The Senate will also have to vote on the bill before it becomes law. The more support that we can build for the bill now, the better its chances for surviving this perilous process.

Passage of this bill may be the best opportunity that we’ll have to avoid a repetition of the closed, secretive process that led to the TPP. With the renegotiation of NAFTA commencing with the first official round of meetings in Washington, D.C. next month, it’s urgent that these transparency reforms be adopted soon. You can help by writing to your representative in Congress and asking them to support the bill in committee.

TAKE ACTION

Demand transparency in trade deals



Source link: https://www.eff.org/deeplinks/2017/07/tell-congress-we-want-trade-transparency-reform-now


Librarians Call on W3C to Rethink its Support for DRM


The International Federation of Library Associations and Institutions (IFLA) has called on the World Wide Web Consortium (W3C) to reconsider its decision to incorporate digital locks into official HTML standards. Last week, W3C announced its decision to publish Encrypted Media Extensions (EME)—a standard for applying locks to web video—in its HTML specifications.

IFLA urges W3C to consider the impact that EME will have on the work of libraries and archives:

While recognising both the potential for technological protection measures to hinder infringing uses, as well as the additional simplicity offered by this solution, IFLA is concerned that it will become easier to apply such measures to digital content without also making it easier for libraries and their users to remove measures that prevent legitimate uses of works.

[…]

Technological protection measures […] do not always stop at preventing illicit activities, and can often serve to stop libraries and their users from making fair uses of works. This can affect activities such as preservation, or inter-library document supply. To make it easier to apply TPMs, regardless of the nature of activities they are preventing, is to risk unbalancing copyright itself.

IFLA’s concerns are an excellent example of the dangers of digital locks (sometimes referred to as digital rights management or simply DRM): under the U.S. Digital Millennium Copyright Act (DMCA) and similar copyright laws in many other countries, it’s illegal to circumvent those locks or to provide others with the means of doing so. That provision puts librarians in legal danger when they come across DRM in the course of their work—not to mention educators, historians, security researchers, journalists, and any number of other people who work with copyrighted material in completely lawful ways.

Of course, as IFLA’s statement notes, W3C doesn’t have the authority to change copyright law, but it should consider the implications of copyright law in its policy decisions: “While clearly it may not be in the purview of the W3C to change the laws and regulations regulating copyright around the world, they must take account of the implications of their decisions on the rights of the users of copyright works.”

EFF is in the process of appealing W3C’s controversial decision, and we’re urging the standards body to adopt a covenant protecting security researchers from anti-circumvention laws.



Source link: https://www.eff.org/deeplinks/2017/07/librarians-call-w3c-rethink-its-support-drm


Do Last Week's European Copyright Votes Show Publishers Have Captured European Politics?


Three European Parliament committees met during the week of July 10 to give their input on the European Commission’s proposal for a new Directive on copyright in the Digital Single Market. We previewed those meetings last week, expressing our hope that they would not adopt the Commission’s harmful proposals. The meetings did not go well.

All of the compromise amendments to the Directive proposed by the Committee on Culture and Education (CULT) that we previously catalogued were accepted in a vote of that committee, including the upload filtering mechanism, the link tax, the unwaivable right for artists, and the new tax on search engines that index images. Throwing gasoline on the dumpster fire of the upload filtering proposal, CULT would like to see cloud storage services added to the online platforms that are required to filter user uploads. As for the link tax, they have offered up a non-commercial personal use exemption as a sop to the measure’s critics, though it is hard to imagine how this would soften the measure in practice, since almost all news aggregation services are commercially supported.

The meeting of the Industry, Research and Energy (ITRE) Committee held in the same week didn’t go much better than that of the CULT Committee. The good news, if we can call it that, is that they softened the upload filtering proposal a little. The ITRE language no longer explicitly refers to content recognition technologies as a measure to be agreed between copyright holders and platforms that host “significant amounts” (the Commission proposal had said “large amounts”) of copyright protected works uploaded by users. On the other hand, such measures aren’t ruled out, either; so the change is a minor one at best.

There is no similar saving grace in ITRE’s treatment of the link tax. Oddly for a committee dedicated to research, it proposed amendments to the link tax that would make life considerably harder for researchers, by extending the tax to become payable not only on snippets from news publications but also on those taken from academic journals, whether those publications are online or offline. The extension of the link tax to journals came by way of a single-word amendment to recital 33 [PDF]:

Periodical publications which are published for scientific or academic purposes, such as scientific journals, should n̶o̶t̶ also be covered by the protection granted to press publications under this Directive.

This deceptively small change would open up a whole new class of works for which publishers could demand payment for the use of small snippets, apparently including works that the author had released under an open access license (since it’s the publisher, not the author, that is the beneficiary of the new link tax).

The JURI Committee also met during the week, although it did not vote on any amendments. Even so, the statements and discussions of the participants at this meeting are just as important as the votes of the other committees, given JURI’s leadership of the dossier. The meeting (a recording of which is available online) was chaired by German MEP Axel Voss, who has recently replaced Therese Comodini Cachia as rapporteur. Whereas MEP Comodini Cachia’s report for the committee had been praised for its balance, Voss has taken a much more hardline approach. Addressing him as Chair, Pirate Party MEP Julia Reda stated during the meeting:

I have never seen a Directive proposal from the Commission that has been met with such unanimous criticism from academia. Europe’s leading IP law faculties have stated in an open letter, and I quote, “There is independent scientific consensus that Articles 11 and 13 cannot be allowed to stand,” and that the proposal for a neighboring right is “unnecessary, undesirable, and unlikely to achieve anything other than adding to complexity and cost”. 

The developments in the CULT, ITRE and JURI committees last week were disappointing, but they do not determine the outcome of this battle. More decisive will be the votes of the Civil Liberties, Justice and Home Affairs (LIBE) Committee in September, followed by negotiations around the principal report in the JURI Committee and its final vote on October 10. Either way, by year’s end we will know whether European politicians have been utterly captured by their powerful publishing lobby, or whether the European Parliament still effectively represents the voices of ordinary European citizens.



Source link: https://www.eff.org/deeplinks/2017/07/last-weeks-european-copyright-votes-show-publishers-captured-european-politics


EFF to Minnesota Supreme Court: Sheriff Must Release Emails Documenting Biometric Technology Use


A Minnesota sheriff’s office must release emails showing how it uses biometric technology so that the community can understand how invasive it is, EFF argued in a brief filed in the Minnesota Supreme Court on Friday.

The case, Webster v. Hennepin County, concerns a particularly egregious failure to respond to a public records request that an individual filed as part of a 2015 EFF and MuckRock campaign to track biometric technology use by law enforcement across the country.

EFF has filed two briefs in support of web engineer and public records researcher Tony Webster’s request, with the latest brief [.pdf] arguing that agencies must provide information contained in emails to help the public understand how a local sheriff uses biometric technology. The ACLU of Minnesota joined EFF on the brief.

As we write in the brief:

This case is not about whether or how the government may collect biometric data and develop and domestically deploy information-retrieval technology as a potential sword against the general public. That is just one debate we must have, but critical to it and all public debates is that it be informed by public [records]

The case began when Webster filed a request based on EFF’s letter template with Hennepin County, a jurisdiction that includes Minneapolis, host city of the 2018 Super Bowl.  He sought emails, contracts, and other records related to the use of technology that can scan and recognize fingerprints, faces, irises, and other forms of biometrics.

After the county basically ignored the request, Webster sued. An administrative law judge ruled in 2015 that the county had violated the state’s public records law both because it failed to provide documents to Webster and because it did not have systems in place to quickly search and disclose electronic records.

An intermediate appellate court ruled in 2016 that the county had to turn over the records Webster sought, but it reversed the lower court’s ruling that the county did not have adequate procedures in place to respond to public records requests.

Both Webster and the county appealed the ruling to the Minnesota Supreme Court. In its appeal, the county argues that public records requesters create an undue burden on agencies when they ask that agencies search for particular keywords or search terms.

EFF’s brief in support of Webster points out the flaws in the county’s search term argument. Having requesters identify specific search terms for documents they seek helps agencies conduct better searches for records while narrowing the scope of the request. This ultimately reduces the burden on agencies and leads to records being released more quickly.

EFF would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.



Source link: https://www.eff.org/deeplinks/2017/07/eff-minnesota-supreme-court-sheriff-must-release-emails-documenting-biometric


Australian PM Calls for End-to-End Encryption Ban, Says the Laws of Mathematics Don't Apply Down Under


“The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia”, said Australian Prime Minister Malcolm Turnbull today. He has been rightly mocked for this nonsense claim, which foreshadows moves to require online messaging providers to give law enforcement back door access to encrypted messages. He explained that “We need to ensure that the internet is not used as a dark place for bad people to hide their criminal activities from the law.” It bears repeating that Australia is a member of the secretive Five Eyes spying and information-sharing alliance.

But despite the well-deserved mockery that ensued, we shouldn’t make too light of the real risk this poses to Internet freedom in Australia. It’s true enough, for now, that a ban on end-to-end encrypted messaging in Australia would have absolutely no effect on “bad people”: they would simply avoid the major platforms and their newly weakened encryption in favor of other apps that use strong end-to-end encryption based on industry-standard mathematical algorithms. Instead, it would hurt the ordinary citizens who rely on encryption to keep their conversations secure and private from prying eyes.
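
None of this is hypothetical: the mathematics in question is public and already implemented in countless open source libraries. As a minimal sketch, assuming the PyNaCl library (one of many libsodium bindings; any equivalent library exposes the same primitives), end-to-end public-key encryption takes only a few lines:

```python
# Minimal sketch of end-to-end public-key encryption using PyNaCl
# (pip install pynacl). Illustrative only; real messaging apps add
# key exchange, forward secrecy, and transport on top of primitives
# like these, but the underlying math is equally public.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# Only Bob, holding his private key, can decrypt and authenticate it.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

No statute can repeal the arithmetic this depends on; at most it can push its use off mainstream platforms.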

However, as similar demands are made elsewhere around the world, more and more app developers might fall under national laws that require them to compromise their encryption standards. Users of those apps, who may have a network of contacts who use the same app, might hesitate to shift to another app that those contacts don’t use, even if it would be more secure. They might also worry that using end-to-end encryption would be breaking the law (a concern that “bad people” tend to be far less troubled by). This will put those users at risk.

If enough countries go down this misguided path, which sees Australia following in the footsteps of Russia and the United Kingdom, the eventual result could be a new international agreement banning strong encryption. Indeed, the Prime Minister’s statement is explicit that this is exactly what he would like to see. It may seem like an unlikely prospect for now, with strong statements at the United Nations level in support of end-to-end encryption, but we truly can’t know what the future will bring. What seems like a global accord today might very well start to crumble as more and more countries defect from it.

We can’t rely on politicians to protect our privacy, but thankfully we can rely on math (“maths”, as Australians say). That’s what makes access to strong encryption so important, and Australia’s move today so worrying. Law enforcement should have the tools they need to investigate crimes, but that cannot extend to a ban on the use of mathematical algorithms in software. Mr Turnbull has to understand that we either have an Internet that “bad people” can use, or we don’t have an Internet at all. It’s as simple as that.



Source link: https://www.eff.org/deeplinks/2017/07/australian-pm-calls-end-end-encryption-ban-says-laws-mathematics-dont-apply-down


Payment Processors Are Profiling Heavy Metal Fans as Terrorists


If you happen to be a fan of the heavy metal band Isis (an unfortunate name, to be sure), you may have trouble ordering its merchandise online. Last year, Paypal suspended a fan who ordered an Isis t-shirt, presumably on the false assumption that there was some association between the heavy metal band and the terrorist group ISIS.

Then last month, Internet scholar and activist Sascha Meinrath discovered that entering words such as “ISIS” (or “Isis”), or “Iran”, or (probably) other words from this U.S. government blacklist in the description field of a Venmo payment will result in an automatic block on that payment, requiring you to complete a pile of paperwork if you want to see your money again. This happens even if the full description reads something like “Isis heavy metal album” or “Iran kofta kebabs, yum.”
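
Venmo has not published how its screening works, but the behavior Meinrath describes is consistent with a crude, context-blind keyword match against a blacklist. A hypothetical sketch (all names and logic here are our assumptions, not Venmo’s actual code) shows how such a filter flags kebabs and metal albums alike:

```python
# Hypothetical sketch of naive payment screening; the real
# implementation is not public. Token matching like this cannot
# distinguish a sanctioned entity from a band name or a recipe.
BLOCKLIST = {"isis", "iran"}  # terms drawn from a government list

def flag_payment(description: str) -> bool:
    """Flag a payment if any blocklisted term appears in the memo."""
    words = description.lower().split()
    return any(term in words for term in BLOCKLIST)

print(flag_payment("Isis heavy metal album"))  # True: false positive
print(flag_payment("Iran kofta kebabs, yum"))  # True: false positive
print(flag_payment("concert tickets"))         # False: passes
```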

These examples may seem trivial, but they reveal a more serious problem with the trust and responsibility that the Internet places in private payment intermediaries. Since even many non-commercial websites such as EFF’s depend on such intermediaries to process payments, subscription fees, or donations, it’s no exaggeration to say that payment processors form an important part of the financial infrastructure of today’s Internet. As such, they ought to carry corresponding responsibilities to act fairly and openly towards their customers.

Unfortunately, given their reliance on bots, algorithms, handshake deals, and undocumented policies and blacklists to control what we do online, payment intermediaries aren’t carrying out this responsibility very well. Given that these private actors are taking on responsibilities to help address important global problems such as terrorism and child online protection, the lack of transparency and accountability with which they execute these weighty responsibilities is a matter of concern.

The readiness of payment intermediaries to do deals on those important issues leads, as a matter of course, to their enlistment by governments and special interest groups to do similar deals on narrower issues, such as the protection of the financial interests of big pharma, big tobacco, and big content. It is in this way that payment intermediaries have insidiously become a weak link for censorship of free speech.

Cigarettes, Sex, Drugs, and Copyright

For example, if you’re a smoker, and you try to buy tobacco products from a U.S. online seller using a credit card, you’ll probably find that you can’t. It’s not illegal to do so, but thanks to a “voluntary” agreement with law enforcement authorities dating back to 2005, payment processors have effectively banned the practice—without any law or court judgment.

Another example that we’ve previously written about is the payment processors’ arbitrary rules blocking sites that discuss sexual fetishes, even though that speech is constitutionally protected. The congruence between the payment intermediaries’ terms of service on this issue suggests a degree of coordination between them, but their lack of transparency makes it impossible to be sure who was behind the ban and what channels they used to achieve it.

A third example is the ban on pharmaceutical sales. You can still buy pharmaceuticals online using a credit card, but these tend to be from unregulated, rogue pharmacies that lie to the credit card processors about the purpose for which their merchant account will be used. For the safer, regulated pharmacies that require a prescription for the drugs they sell online, such as members of the Canadian International Pharmacy Association (CIPA), the credit card processors enforce a blanket ban.

Finally, there are “voluntary” best practices on copyright and trademark infringement. These include the RogueBlock program of the International Anti-Counterfeiting Coalition (IACC), launched in 2012, about which information is available online, along with a 2011 set of “Best Practices to Address Copyright Infringement and the Sales of Counterfeit Products on the Internet,” about which no information can be found online. The only way to find out about the standards that payment intermediaries use to block websites accused of copyright or trademark infringement is by reading what academics have written about it.

Lack of Transparency Invites Abuse

The payment processors might respond that their terms of service are available online, which is true. However, these are ambiguous at best. On Venmo, transactions for items that promote hate, violence, or racial intolerance are banned, but there is nothing in its terms of service to indicate that including the name of a heavy metal band in your transaction will place it in limbo. Similarly, if you delve deep enough into Paypal’s terms of service you will find out that selling tickets to professional UK football matches is banned, but you won’t find out how this restriction came about, or who had a say in it.

Payment processors can do better. In 2012, in the wake of the payment industry’s embargo of Wikileaks and its refusal to process payments to European vendors of horror films and sex toys, the European Parliament Committee on Economic and Monetary Affairs made the following resolution:

[The Committee c]onsiders it likely that there will be a growing number of European companies whose activities are effectively dependent on being able to accept payments by card; [and] considers it to be in the public interest to define objective rules describing the circumstances and procedures under which card payment schemes may unilaterally refuse acceptance.

We agree. Bitcoin and other cryptocurrencies notwithstanding, online payment processing remains largely oligopolistic. Agreements between the few payment processors that make up the industry and powerful commercial lobbies and governments, concluded in the shadows, can have deep impacts on entire online communities. When payment processors are drawing up their terms of service or developing algorithms based on industry-wide agreements, standards, or codes of conduct—especially if these involve governments or other third parties—those rules ought to be developed through a process that is inclusive, balanced, and accountable.

The fact that you can’t use Venmo to purchase an Isis t-shirt is just one amusing example. But the Shadow Regulation of the payment services industry is much more serious than that, also affecting culture, healthcare, and even your sex life online. Just as we’ve called other Internet intermediaries to account for the ways in which their “voluntary” efforts threaten free speech, the online payment services industry needs to be held to the same standard. 



Source link: https://www.eff.org/deeplinks/2017/07/payment-processors-are-profiling-heavy-metal-fans-terrorists


Net Neutrality Won't Save Us if DRM is Baked Into the Web


Yesterday’s record-smashing Net Neutrality day of action showed that the Internet’s users care about an open playing field and don’t want a handful of companies to decide what we can and can’t do online.

Today, we should also think about other ways in which small numbers of companies, including net neutrality’s biggest foes, are trying to gain the same kinds of control, with the same grave consequences for the open web. Exhibit A is baking digital rights management (DRM) into the web’s standards.

ISPs that oppose effective net neutrality protections say that they’ve got the right to earn as much money as they can from their networks, and if people don’t like it, they can just get their internet somewhere else. But of course, the lack of competition in network service means that most people can’t do this.

Big entertainment companies — some of which are owned by big ISPs! — say that because they can make more money if they can control your computer and get it to disobey you, they should be able to team up with browser vendors and standards bodies to make that a reality. If you don’t like it, you can watch someone else’s movies.

Like ISPs, entertainment companies think they can get away with this because they too have a kind of monopoly: copyright, which gives rightsholders the power to control many uses of their creative works. But just like the current FCC Title II rules that stop ISPs from flexing their muscle to the detriment of web users, copyright law places limits on the powers of copyright holders.

Copyright can stop you from starting a business to sell unlicensed copies of the studios’ movies, but it couldn’t stop Netflix from starting a business that mailed DVDs around for money; it couldn’t stop Apple from selling you a computer that would “Rip, Mix, Burn” your copyrighted music, and it couldn’t stop cable companies from starting businesses that retransmitted broadcasters’ signals.

That competitive balance makes an important distinction between “breaking the law” (not allowed) and “rocking the boat” (totally allowed). Companies that want to rock the boat are allowed to enter the market with new, competitive offerings that go places the existing industry fears to tread, and so they discover new, unmapped and fertile territory for services and products that we come to love and depend on.

But overbroad and badly written laws like Section 1201 of the 1998 Digital Millennium Copyright Act (DMCA) upset this balance. DMCA 1201 bans tampering with DRM, even if you’re only doing so to exercise the rights that Congress gave you as a user of copyrighted works. This means that media companies that bake DRM into the standards of the web get to decide what kinds of new products and services are allowed to enter the market, effectively banning others from adding new features to our media, even when those features have been declared legal by Congress.

ISPs are only profitable because there was an open Internet where new services could pop up, transforming the Internet from a technological curiosity into a necessity of life that hundreds of millions of Americans pay for. Now that the ISPs get steady revenue from our use of the net, they want network discrimination, which, like the discrimination used by DRM advocates, is an attempt to change “don’t break the law” into “don’t rock the boat” — to force would-be competitors to play by the rules set by the cozy old guard.

For decades, activists struggled to get people to care about net neutrality, while their opponents at big telecom companies said, “people don’t care; all they want is to get online, and that’s what we give them.” The once-quiet voices of net neutrality wonks have swelled into a chorus of people who realize that an open web is important to their future. As we saw yesterday, the public broadly demands protection for the open Internet.

Today, advocates for DRM say that “People don’t care, all they want is to watch movies, and that’s what we deliver.” But there is an increasing realization that letting major movie studios tilt the playing field toward them and their preferred partners also endangers the web’s future.

Don’t take our word for it: last April, Professor Tim Wu, who coined the term “net neutrality” and is one of the world’s foremost advocates for a neutral web, published an open letter to Tim Berners-Lee, inventor of the web and Director of the World Wide Web Consortium (W3C), where there is an ongoing effort to standardize DRM for the web.

In that letter, Wu wrote:

I think more thinking need be done about EME’s potential consequences for competition, both as between browsers, the major applications, and in ways unexpected. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know. That’s the principal concern of net neutrality, and has been a concern when it comes to browsers and their associated standards. It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

EME, of course, brings the anti-circumvention laws into play, and as you may know anti-circumvention laws have a history of being used for purposes different than the original intent (i.e., protecting content). For example, soon after it was released, the U.S. anti-circumvention law was quickly used by manufacturers of inkjet printers and garage-door openers to try and block out aftermarket competitors (generic ink, and generic remote controls). The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected.

This week, Berners-Lee made important and stirring contributions to the net neutrality debate, appearing in this outstanding Web Foundation video and explaining how anti-competitive actions by ISPs endanger the things that made the web so precious and transformative.

Last week, Berners-Lee disappointed activists who’d asked for a modest compromise on DRM at the W3C, one that would protect competition and use standards to promote the same level playing field we seek in our Net Neutrality campaigns. Yesterday, EFF announced that it would formally appeal Berners-Lee’s decision to standardize DRM for the web without any protection for its neutrality. In the decades of the W3C’s existence, there has never been a successful appeal of one of Berners-Lee’s decisions.

The odds are long here — the same massive corporations that oppose effective net neutrality protections also oppose protections against monopolization of the web through DRM, and they can outspend us by orders of magnitude. But we’re doing it, and we’re fighting to win. That’s because, like Tim Berners-Lee, we love the web and believe it can only continue as a force for good if giant corporations don’t get to decide what we can and can’t do with it.



Source link: https://www.eff.org/deeplinks/2017/07/net-neutrality-wont-save-us-if-drm-baked-web


Industry Efforts to Censor Pro-Terrorism Online Content Pose Risks to Free Speech


In recent months, social media platforms—under pressure from a number of governments—have adopted new policies and practices to remove content that promotes terrorism. As the Guardian reported, these policies are typically carried out by low-paid contractors (or, in the case of YouTube, volunteers) and with little to no transparency and accountability. While the motivations of these companies might be sincere, such private censorship poses a risk to the free expression of Internet users.

As groups like the Islamic State have gained traction online, Internet intermediaries have come under pressure from governments and other actors, including the following:

  • the Obama Administration;
  • the U.S. Congress in the form of legislative proposals that would require Internet companies to report “terrorist activity” to the U.S. government;
  • the European Union in the form of a “code of conduct” requiring Internet companies to take down terrorist propaganda within 24 hours of being notified, and via the EU Internet Forum;
  • individual European countries such as the U.K., France, and Germany, which have proposed exorbitant fines for Internet companies that fail to take down pro-terrorism content; and
  • victims of terrorism who seek to hold social media companies civilly liable in U.S. courts for providing “material support” to terrorists by simply providing online platforms for global communication.

One of the coordinated industry efforts against pro-terrorism online content is the development of a shared database of “hashes of the most extreme and egregious terrorist images and videos” that the companies have removed from their services. The companies that started this effort—Facebook, Microsoft, Twitter, and Google/YouTube—explained that the idea is that by sharing “digital fingerprints” of terrorist images and videos, other companies can quickly “use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
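
The participating companies have not released technical details, but the announced design, a shared pool of “digital fingerprints”, can be sketched in a few lines. The sketch below uses an exact SHA-256 hash purely for illustration; we assume the real systems use perceptual hashes that survive re-encoding, which this toy version does not:

```python
# Simplified sketch of hash-based content matching. SHA-256 is used
# for illustration only: it matches exact bytes, so any re-encoding
# defeats it, which is why real systems use perceptual fingerprints.
import hashlib

shared_database = set()  # hashes contributed by participating companies

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def share_removed_content(data: bytes) -> None:
    """One company removes an image or video and shares its hash."""
    shared_database.add(fingerprint(data))

def matches_known_content(data: bytes) -> bool:
    """Another company checks an upload against the shared hashes."""
    return fingerprint(data) in shared_database

share_removed_content(b"<bytes of a removed propaganda video>")
print(matches_known_content(b"<bytes of a removed propaganda video>"))  # True
print(matches_known_content(b"<same video, re-encoded>"))               # False
```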

As a second effort, the same companies created the Global Internet Forum to Counter Terrorism, which will help the companies “continue to make our hosted consumer services hostile to terrorists and violent extremists.” Specifically, the Forum “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.” The Forum will focus on technological solutions; research; and knowledge-sharing, which will include engaging with smaller technology companies, developing best practices to deal with pro-terrorism content, and promoting counter-speech against terrorism.

Internet companies are also taking individual measures to combat pro-terrorism content. Google announced several new efforts, while both Google and Facebook have committed to using artificial intelligence technology to find pro-terrorism content for removal.

Private censorship must be cautiously deployed

While Internet companies have a First Amendment right to moderate their platforms as they see fit, private censorship—or what we sometimes call shadow regulation—can be just as detrimental to users’ freedom of expression as governmental regulation of speech. As social media companies increase their moderation of online content, they must do so as cautiously as possible.

Through our project Onlinecensorship.org, we monitor private censorship and advocate for companies to be more transparent and accountable to their users. We solicit reports from users about when Internet companies have removed specific posts or other content, or whole accounts.

We consistently urge companies to follow basic guidelines to mitigate the impact on users’ free speech. Specifically, companies should have narrowly tailored, clear, fair, and transparent content policies (i.e., terms of service or “community guidelines”); they should engage in consistent and fair enforcement of those policies; and they should have robust appeals processes to minimize the impact on users’ freedom of expression.

Over the years, we’ve found that companies’ efforts to moderate online content almost always result in overbroad content takedowns or account deactivations. We are therefore justifiably skeptical that the latest efforts by Internet companies to combat pro-terrorism content will meet our basic guidelines.

A central problem for these global platforms is that such private censorship can be counterproductive. Users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules if, for example, their post includes an image of one of the more than 600 “terrorist leaders” designated by Facebook. In one instance, a journalist from the United Arab Emirates was temporarily banned from the platform for posting a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s content moderators failed to grasp.

A more fundamental problem is that crafting narrow definitions is difficult. What counts as speech that “promotes” terrorism? What even counts as “terrorism”? These U.S.-based companies may look to the State Department’s list of designated terrorist organizations as a starting point. But Internet companies will sometimes go further. Facebook, for example, deactivated the personal accounts of Palestinian journalists; it did the same to Chechen independence activists on the grounds that they were involved in “terrorist activity.” These examples demonstrate the challenges social media companies face in fairly applying their own policies.

A recent investigative report by ProPublica revealed how Facebook’s content rules can lead to seemingly inconsistent takedowns. The authors wrote: “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.” The report emphasized the need for companies to be more transparent about their content rules, and to have rules that are fair for all users around the world.

Artificial intelligence poses special concerns

We are concerned about the use of artificial intelligence to automatically combat pro-terrorism content because of the imprecision inherent in systems that block or remove content based on an algorithm. Facebook has perhaps been the most aggressive in deploying AI in the form of machine learning technology in this context. The company’s latest AI efforts include using image matching to detect previously tagged content, using natural language processing techniques to detect posts advocating terrorism, removing terrorist clusters, removing new fake accounts created by repeat offenders, and enforcing its rules across other Facebook properties such as WhatsApp and Instagram.

This imprecision exists because it is difficult for humans and machines alike to understand the context of a post. While it’s true that computers are better at some tasks than people, understanding context in written and image-based communication is not one of those tasks. While AI algorithms can understand very simple reading comprehension problems, they still struggle with even basic tasks such as capturing meaning in children’s books. And while it’s possible that future improvements to machine learning algorithms will give AI these capabilities, we’re not there yet.

Google’s Content ID, for example, which was designed to address copyright infringement, has also blocked fair uses, news reporting, and even posts by copyright owners themselves. If automatic takedowns based on copyright are difficult to get right, how can we expect new algorithms to know the difference between a terrorist video clip that’s part of a satire and one that’s genuinely advocating violence?

Until companies can publicly demonstrate that their machine learning algorithms can accurately and reliably determine whether a post is satire, commentary, news reporting, or counter-speech, they should refrain from censoring their users by way of this AI technology.

Even if a company were to have an algorithm for detecting pro-terrorism content that was accurate, reliable, and had a minimal percentage of false positives, AI automation would still be problematic because machine learning systems are not robust to distributional change. Once machine learning algorithms are trained, they are as brittle as any other algorithm, and building and training machine learning algorithms for a complex task is an expensive, time-intensive process. Yet the world that algorithms are working in is constantly evolving and soon won’t look like the world in which the algorithms were trained.

This might happen in the context of pro-terrorism content on social media: once terrorists realize that algorithms are identifying their content, they will start to game the system by hiding their content or altering it so that the AI no longer recognizes it (by leaving out key words, say, or changing their sentence structure, or a myriad of other ways—it depends on the specific algorithm). This problem could also go the other way: a change in culture or how some group of people express themselves could cause an algorithm to start tagging their posts as pro-terrorism content, even though they’re not (for example, if people co-opted a slogan previously used by terrorists in order to de-legitimize the terrorist group).
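
A toy experiment makes that brittleness concrete. The sketch below (invented data throughout; it resembles no platform’s production classifier) trains a tiny bag-of-words model, then shows a re-worded message built from vocabulary absent from the training set slipping past it:

```python
# Toy sketch of distributional shift: a bag-of-words classifier only
# knows the vocabulary it was trained on. All data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "join our glorious struggle",   # flagged (1)
    "send money for the struggle",  # flagged (1)
    "pictures of my cat",           # benign (0)
    "great kebab recipe here",      # benign (0)
    "cheap concert tickets",        # benign (0)
]
labels = [1, 1, 0, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["join our glorious struggle"]))  # [1]: caught
# The same message re-worded with unseen vocabulary: the model falls
# back toward its (mostly benign) prior, and the post slips through.
print(model.predict(["stand with us, comrades"]))     # [0]: evaded
```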

We strongly caution companies (and governments) against assuming that technology will be a panacea for identifying pro-terrorism content, because this technology simply doesn’t yet exist.

Is taking down pro-terrorism content actually a good idea?

Apart from the free speech and artificial intelligence concerns, there is an open question of efficacy. The sociological assumption is that removing pro-terrorism content will reduce terrorist recruitment and community sympathy for those who engage in terrorism. In other words, the question is not whether terrorists are using the Internet to recruit new operatives—the question is whether taking down pro-terrorism content and accounts will meaningfully contribute to the fight against global terrorism.

Governments have not sufficiently demonstrated this to be the case. And some experts believe it is absolutely not the case. For example, Michael German, a former FBI agent with counter-terrorism experience and a current fellow at the Brennan Center for Justice, said, “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” In fact, as we’ve argued before, censoring the content and accounts of determined groups could be counterproductive, actually resulting in pro-terrorism content being publicized more widely (a phenomenon known as the Streisand Effect).

Additionally, permitting terrorist accounts to exist and allowing pro-terrorism content to remain online, including that which is publicly available, may actually be beneficial by providing opportunities for ongoing engagement with these groups. For example, a Kenyan government official stated that shutting down an Al Shabaab Twitter account would be a bad idea: “Al Shabaab needs to be engaged positively and [T]witter is the only avenue.”

Keeping pro-terrorism content online also contributes to journalism, open source intelligence gathering, academic research, and generally the global community’s understanding of this tragic and complex social phenomenon. On intelligence gathering, the United Nations has said that “increased Internet use for terrorist purposes provides a corresponding increase in the availability of electronic data which may be compiled and analysed for counter-terrorism purposes.”

In conclusion

While we recognize that Internet companies have a right to police their own platforms, we also recognize that such private censorship is often in response to government pressure, which is often not legitimately wielded.

Governments often get private companies to do what they can’t do themselves. In the U.S., for example, pro-terrorism content falls within the protection of the First Amendment. Other countries, many of which do not have similarly robust constitutional protections, might nevertheless find it politically difficult to pass speech-restricting laws.

Ultimately, we are concerned about the serious harm that sweeping censorship regimes—even by private actors—can have on users, and society at large. Internet companies must be accountable to their users as they deploy policies that restrict content.

First, they should make their content policies narrowly tailored, clear, fair, and transparent to all—as the Guardian’s Facebook Files demonstrate, some companies have a long way to go.

Second, companies should engage in consistent and fair enforcement of those policies.

Third, companies should ensure that all users have access to a robust appeals process—content moderators are bound to make mistakes, and users must be able to seek justice when that happens.

Fourth, until artificial intelligence systems can be proven accurate, reliable, and adaptable, companies should not deploy this technology to censor their users’ content.

Finally, we urge those companies that are subject to increasing governmental demands for backdoor censorship regimes to improve their annual transparency reporting to include statistics on takedown requests related to the enforcement of their content policies.



Source link: https://www.eff.org/deeplinks/2017/07/industry-efforts-censor-pro-terrorism-online-content-pose-risks-free-speech


Historic Day of Action: Net Neutrality Allies Send 1.6 Million Comments to FCC


When you attack the Internet, the Internet fights back.

Today, the Internet went all out in support of net neutrality. Hundreds of popular websites featured pop-ups suggesting that those sites had been blocked or throttled by Internet service providers. Some sites got hilariously creative—Twitch replaced all of its emojis with that annoying loading icon. Netflix shared GIFs that would never finish loading. PornHub simply noted that “slow porn sucks.”

Together, we painted an alarming picture of what the Internet might look like if the FCC goes forward with its plan to roll back net neutrality protections: ISPs prioritizing their favored content sources and deprioritizing everything else. (Fight for the Future has put together a great collection of examples of how sites participated in the day of action.)

Today has been about Internet users across the country who are afraid of large ISPs getting too much say in how we use the Internet. Voices ranged from huge corporations to ordinary Internet users like you and me.

Together with Battle for the Net and other friends, we delivered 1.6 million comments to the FCC, breaking the record we set during Internet Slowdown Day in 2014. The message was clear: we all rely on the Internet. Don’t dismantle net neutrality protections.

If you haven’t added your voice yet, it’s not too late. Take a few moments to tell the FCC why net neutrality is important to you. If you already have, take a moment to encourage your friends to do the same.

TAKE ACTION

Stand up for net neutrality

Here are just a few examples of what Team Internet has been saying about net neutrality today.

“We live in an uncompetitive broadband market. That market is dominated by a handful of giant corporations that are being given the keys to shape telecom policy. The big internet companies that might challenge them are doing it half-heartedly. And [FCC Chairman] Ajit Pai seems determined to offer up a massive corporate handout without listening to everyday Americans.

“Is this what you want? Does this sound like a path toward better, faster, cheaper internet access? Toward better products and services in a more competitive market? To me, it sounds like Americans need to demand that our government actually hear our concerns, look at our skyrocketing bills, and make real policy that respects us, instead of watching the staff of an unelected official laugh as he ignores us. It sounds like we need to flood the offices of the FCC and Congress with calls and paperwork, demanding to know how giving handouts to huge corporations will help us.”

Nilay Patel, The Verge

“Title II net neutrality protections are the civil rights and free speech rules for the internet. When traditional media outlets refuse to pay attention, Black, indigenous, queer and trans internet users can harness the power of the Internet to fight for lives free of police brutality and discrimination. This is why we’ll never stop fighting for enforcement of the net neutrality rules we fought for and saw passed by the FCC two years ago. There’s too much at stake to urge anything less.”

Malkia Cyril, Co-Founder and Executive Director, Center for Media Justice

“We’re still picking ourselves off the floor from all the laughing we did when AT&T issued a press release this afternoon announcing that it was joining the ‘Day of Action for preserving and advancing the open internet.’

“If only it were true. In reality, AT&T is just a company that is deliberately misleading the public. Their lobbyists are lying. They want to kill Title II — which gives the FCC the authority to actually enforce net neutrality — and are trying to sell a congressional ‘compromise’ that would be as bad or worse than what the FCC is proposing. No thanks.”

Craig Aaron and Candace Clement, Free Press

InternetIRL, presented by Color of Change

“Everyone except these ISPs benefits from an open Internet… that’s it. It’s like a handful of companies. Not only is this about business—and it is about business and innovation—it’s also about freedom of speech.”

Sen. Al Franken

“No matter what, do not get discouraged or retreat into a state of silence and inaction. There are many like me who are listening and the role each of us plays is vital. We are not alone in believing that the FCC should be a governmental agency ‘of the people, by the people, and for the people.’”

Mignon Clyburn, FCC Commissioner

To everyone who has participated in today’s day of action, thank you.

TAKE ACTION

Stand up for net neutrality



Source link: https://www.eff.org/deeplinks/2017/07/net-neutrality-allies-send-16-million-comments-fcc


Stalemate Continues in Negotiations Over European Copyright Filters


This week is an important one in the ongoing negotiations over new copyright rules in Europe—which will have reverberations all over the world. As you may recall, the negotiations center on two worrisome proposals being pushed by publisher and music industry lobby groups for inclusion in a new Digital Single Market Directive: a requirement for mandatory upload filtering by user content platforms (Article 13), and a link tax payable by news aggregators in favor of publishers (Article 11).

The convoluted process of negotiation over new European laws means that not only do three European institutions (the European Parliament, the Council of the European Union, and the European Commission) have to reach an accord on the terms of the Directive, but within the European Parliament itself there are also multiple committees that get to weigh in. The lead committee is the Legal Affairs (JURI) Committee, but it is required to take account of the opinions, and proposed amendments, of the other committees. This week, two of those committees will vote on their opinions and suggested amendments, while the JURI Committee considers its own amendments to the European Commission’s original proposal.

The Committee on Culture and Education (CULT), whose extreme proposals for amendment to the Commission proposal we critiqued in a previous post, will be voting on July 11 on which amendments it will put forward to JURI for inclusion in the Parliament’s final compromise text. Since none of CULT’s suggested amendments to Articles 11 and 13 would improve on the original proposal—in fact, they would make it worse—we are urging Members of the European Parliament (MEPs) who are members of CULT simply to vote for the deletion of those Articles. In particular, as pointed out by European Digital Rights (EDRi, of which EFF is a member), for CULT to support mandatory filtering of uploads on user content platforms would directly contradict the committee’s own opposition to mandatory filtering of terrorist and other extreme content.

On the same day, the Industry, Research and Energy (ITRE) Committee will also vote on its draft opinion and amendments. Its positions on the upload filter and link tax proposals are not as extreme as those of CULT. In fact, its suggested amendment to the Article 11 link tax would gut that misconceived proposal, replacing it with a relatively unobjectionable provision that simply allows press publishers to stand in for journalists in enforcing their existing copyrights in news articles. ITRE’s suggested amendment to Article 13 doesn’t go as far, though; it continues to require platforms to take additional measures such as upload filtering at the behest of copyright holders. We therefore maintain that ITRE should instead vote for deletion of this Article.

Two more European Parliament committees are also weighing in on these controversial proposals. The Internal Market and Consumer Protection (IMCO) Committee voted on its opinion and amendments on June 8, with a recommendation against the Article 13 upload filtering plan—this should hopefully be persuasive, as IMCO has a special cooperative status with JURI on this topic. Unfortunately, IMCO did not also vote against the Article 11 link tax, but instead supported the Commission’s original proposal. Next to vote after this week will be the Civil Liberties, Justice and Home Affairs (LIBE) Committee, which will vote on its opinion and amendments on September 25.

European activists have put together a Save the Meme website which can be used to contact MEPs about the upload filtering and link tax proposals. Today, in advance of the CULT and ITRE votes and JURI’s consideration of its amendments, would be an excellent day for our European members to take advantage of that opportunity and ask their representatives to vote against the Commission’s harmful proposals.



Source link: https://www.eff.org/deeplinks/2017/07/stalemate-continues-negotiations-over-european-copyright-filters