Copyright Law Shouldn’t Pick Winners

Mandatory Filtering Proposals Curb Competition

When looking at a proposed policy regulating Internet businesses, here’s a good question to ask yourself: would this bar new companies from competing with the current big players? Google will probably be fine, but what about the next Google? In the past few years, some large movie studios and record labels have been promoting a proposal that would effectively require user-generated media platforms to use copyright bots similar to YouTube’s infamous Content ID system. Today’s YouTube will have no trouble complying, but imagine if such requirements had been in place when YouTube was a three-person company. If copyright bots become the law, the barrier to entry for new social media companies will get a lot higher.

A Brief History of Copyright Bots

In many ways, the history of copyright bots is really the history of Content ID. Content ID was not the first bot on the market, but it’s the template for what major film studios and record labels have come to expect of content platforms.

When Google acquired YouTube in 2006, the platform was under heavy fire from major film studios and record labels, which complained in court and in Congress that the platform enabled widespread copyright infringement. YouTube complied with all of the requirements that the Digital Millennium Copyright Act (DMCA) puts on content platforms—including following the notice-and-takedown procedure when rights holders accuse their users of infringement. The DMCA essentially offers content platforms a trade—if they do their part to tackle infringing activity, they’re sheltered from copyright liability under the DMCA safe harbor rules. Hollywood agreed to those rules back in 1998, but now it wanted to rewrite the deal.

In response to legal and commercial pressure from content industries, Google developed Content ID, a program that goes beyond YouTube’s DMCA obligations. Content ID doesn’t replace notice-and-takedown; it creates a system for proactive filtering that often lets rights holders remove allegedly infringing content without even having to send a DMCA takedown request. Rights holders submit large databases of video and audio fingerprints, and YouTube patrols new uploads for closely matching content. Rights holders can choose to have YouTube automatically remove or monetize videos, or they can review them manually and decide what they want YouTube to do with them. There’s a built-in appeals process (which includes escalation to a DMCA takedown, with the fair use consideration the DMCA requires), but it has problems of its own.

For better or worse, Content ID changed YouTube. It bought the company some goodwill with big content owners, many of which have now become prolific YouTube adopters.
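YouTube’s actual fingerprinting algorithms are proprietary, but the basic matching idea can be sketched in a few lines. Below is a minimal, hypothetical illustration in Python: each work is reduced to a set of hashes over short overlapping windows, and an upload is flagged when it shares a large fraction of a reference work’s hashes. (Real systems fingerprint perceptual features of the audio and video, not raw bytes, and are far more robust; the names and threshold here are our own invention.)

```python
import hashlib

def fingerprint(data: bytes, window: int = 8) -> set:
    """Hash every overlapping window of the stream into a set of prints.

    A toy stand-in for perceptual fingerprinting: any shared run of at
    least `window` bytes yields identical hashes in both sets.
    """
    return {
        hashlib.sha256(data[i:i + window]).hexdigest()
        for i in range(len(data) - window + 1)
    }

def match_scores(upload: bytes, reference_db: dict) -> dict:
    """For each reference work, the fraction of its prints found in the upload."""
    up = fingerprint(upload)
    return {work: len(prints & up) / len(prints)
            for work, prints in reference_db.items()}

# Hypothetical catalogue submitted by a rights holder.
db = {"song_a": fingerprint(b"la la la chorus verse chorus bridge chorus")}

scores = match_scores(b"intro la la la chorus verse chorus outro", db)
flagged = [work for work, score in scores.items() if score > 0.4]
```

In a real deployment, the flagged list would then feed the rights holder’s chosen policy: block, monetize, or queue for manual review.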

Writing Bots into the Law

But the success of Content ID has led some rights holders to the dangerous notion that filtering alone can end the copyright wars. Now, copyright bots have begun to show up all over the Internet—often in places where they make no sense, like your private videos on Facebook. And it appears that some major content owners won’t be satisfied until web platforms have no choice but to adopt systems like Content ID—in other words, until a voluntary system becomes a mandate.

Over the past few years, lobbyists representing large content owners in both the U.S. and Europe have begun to demand mandatory filtering. These proposals vary, but their goals are the same: a world where social media platforms are vulnerable to massive copyright infringement damages unless they go to extreme measures to police their members’ uploads for potential infringement. The Chinese government has gone all-in on copyright filtering, partnering with Hollywood to scan not just people’s social media posts but even their private devices.

For the record, copyright bots can raise major problems even when they aren’t compelled by law. In principle, bots can be useful for weeding out cases of obvious infringement and obvious non-infringement, but they can’t be trusted to identify and allow many instances of fair use. What’s more, their appeals and conflict-resolution systems are often completely opaque to users and seem designed to favor large content companies. Still, there’s a world of difference between platforms implementing copyright bots as a business decision and being forced to do so by governments. The latter creates a huge, expensive hurdle that a company must clear before it can ever compete in the market.

Narrow Regulations and Broad Patents

It gets worse. When companies are given only narrow space in which to compete and innovate, it becomes easier for incumbents to set legal traps within those boundaries. Microsoft was recently issued a patent called “Disabling prohibited content and identifying repeat offenders in service provider storage systems.” It’s a patent on copyright bots, and the Patent Office issued it even though its claims were far from novel: Microsoft didn’t file it until 2013, a full six years after Google introduced Content ID.

We don’t know what Microsoft plans to do with its patent, but we do know that patents this broad can wreak havoc on a marketplace, casting doubt over standard and obvious business practices. And with both Hollywood and governments pressuring content platforms to implement filtering, it’s easy to imagine a time when a broad patent like Microsoft’s would apply by definition to essentially every platform that tried to enter the market.

It might be tempting to think that software patents on copyright filtering will incentivize innovation in filtering, thus making copyright bots more accessible to small platforms. But a patent as broad and generic as Microsoft’s risks cutting off innovation well short of that goal: overbroad patents blanket an entire field, rarely disclosing any information of value about the underlying technology.

Business regulations should give companies wide latitude to innovate, experiment, and differentiate themselves from competitors. Patents should cover specific, narrowly defined inventions. Narrow regulations and broad patents are a dangerous combination.

Keep Safe Harbors Safe

Safe harbor protections are essential to how today’s Internet works—without them, many Internet companies would simply be exposed to too much legal risk to operate. Safe harbors have given us the entire social media boom and many other Internet technologies that we take for granted every day. So any proposal that makes it more burdensome to comply with safe harbor requirements should be examined closely to make sure that it doesn’t close the market to new competitors. Mandatory copyright filtering is likely to do exactly that. If the kind of laws big media companies are proposing today had been in place 12 years ago, it’s doubtful that YouTube could have survived its early days as a startup. And if those laws get implemented today, new players will need tremendous resources just to get started. Mandatory filtering would create a narrower playing field for Internet businesses and let the most successful players use legal tricks to maintain their advantages. It’s a bad idea.



As a Provider Fought a Secret Surveillance Order, Court Denied It Access to Relevant Law

The U.S. government’s foreign surveillance law is so secretive that not even a service provider challenging an order issued by a secret court got to access it. 

That Kafkaesque episode—denying a party access to the law being used against it—was made public this week in a FISC opinion EFF obtained as part of a FOIA lawsuit we filed in 2016.

The opinion [.pdf] shows that in 2014, the Foreign Intelligence Surveillance Court (FISC) rejected a service provider’s request to obtain other FISC opinions that government attorneys had cited and relied on in court filings seeking to compel the provider’s cooperation. 

The decision was related to the provider’s ultimately unsuccessful challenge to a surveillance directive it received under Section 702, the warrantless surveillance authority that is set to expire this year.

The decision is startling because it demonstrates how secrecy jeopardizes one of the most fundamental principles of our justice system: everyone gets to know what the law is. Apparently, that principle doesn’t extend to the FISC. 

The provider’s request came up amid legal briefing by both it and the DOJ concerning its challenge to a 702 order. After the DOJ cited two earlier FISC opinions that were not public at the time—one from 2014 and another from 2008—the provider asked the court for access to those rulings.

The provider argued that without being able to review the previous FISC rulings, it could not fully understand the court’s earlier decisions, much less effectively respond to DOJ’s argument. The provider also argued that because attorneys with Top Secret security clearances represented it, they could review the rulings without posing a risk to national security. 

The court disagreed in several respects. It found that the court’s rules and Section 702 prohibited the documents’ release. It also rejected the provider’s claim that the Constitution’s Due Process Clause entitled it to the documents.

The opinion goes on: “Beyond what is compelled by the Due Process Clause, the Court is satisfied that withholding the Requested Opinions does not violate common-sense fairness.” This was because the Court believed that the DOJ had accurately represented the rulings in its legal briefs and did not mislead the provider about what those rulings said. 

The court also said that even if the opinions were released, they “would be of little, if any assistance” to the merits of the provider’s arguments.  

The court’s opinion notwithstanding, there is nothing fair about withholding important legal cases—which likely interpreted or created law—from one side in a legal dispute. 

The court’s decision is akin to allowing one party to read and cite to a Supreme Court case while prohibiting the other side from doing the same. It fundamentally disadvantages one side in a legal fight, on top of denying it access to the case to ensure that the party in the know is accurately representing the ruling.

In the case of the provider, the deck was always stacked against its ability to challenge the 702 order. The FISC traditionally only hears from one party—the Executive Branch—and is usually sympathetic to claims of national security. 

Although recent changes to the FISC as a result of the USA Freedom Act have moved in the right direction, including allowing outside parties to argue before the court, the DOJ still has many advantages.

In the case of the provider, the trump card was that the DOJ’s lawyers got to read and rely on cases that the provider never got to see. 

To be sure, the unjust result is not entirely the fault of the FISC. As the ruling points out, Congress has provided little to no recourse for a party challenging secret surveillance orders to be able to obtain documents and FISC rulings that are directly relevant to its case.

With Section 702 due to sunset this year, Congress should recognize that the court system it set up to approve surveillance orders and hear challenges to those orders bears little resemblance to our broader justice system. This inequity corrupts our fundamental democratic principles and is yet another reason Congress must end Section 702.



Qatar's Crisis is About Freedom of Expression

The tiny Gulf country of Qatar is in crisis. Over the past few weeks, members of the Gulf Cooperation Council have systematically sought to isolate and suffocate the country, accusing Qatar of supporting extremism, severing diplomatic ties, and calling upon their allies to do the same. 

It is not only a diplomatic crisis, but a crisis for free expression in an already restrictive region. As some analysts have pointed out, the singling out of Qatar has as much to do with the country’s alleged support of terrorism as it does with neighboring countries’ desire to shutter Al Jazeera, Qatar’s flagship media organization.

Al Jazeera, a comprehensive media outlet funded by the Qatari government with several international satellite television channels, websites, and online video operations, is not exactly a beacon of free expression—it rarely reports negatively on Qatar or other Gulf countries, for example—but it has stood strong in its reporting on the Arab region and much of the world, covering topics that other outlets often ignore.

Although the country restricts access to some websites and outlaws criticism of its rulers, it has nevertheless set itself apart as a regional media leader. Al Araby Al Jadeed (“The New Arab”) and Huffington Post Arabi are just two of the online media outlets to emerge from the country in recent years.

Its Gulf neighbors—namely Saudi Arabia, Bahrain, and the United Arab Emirates (UAE)—offer a much more restrictive online environment, with each blocking numerous websites, including international media. Now, as they seek to isolate Qatar, they’re homing in on its media and using the internet as a means to an end.

Forced closure

It all began just a few days after President Trump’s May 22 meeting with Gulf leaders in Saudi Arabia, when Qatar News Agency (QNA) published comments critical of the United States attributed to the country’s ruler, Emir Sheikh Tamim bin Hamad Al Thani. Al Jazeera claimed QNA’s site had been hacked, but satellite channels from the UAE and Saudi Arabia reported the comments as legitimate and subsequently blocked Al Jazeera’s main website on May 24.

From there, things escalated quickly: on May 25, Egypt blocked access to Al Jazeera and other Qatari-funded news sites, and took the opportunity to also block local independent site Mada Masr. Saudi Arabia and Jordan followed suit by revoking Al Jazeera’s license and closing its offices.

And now, under the pretext of cybercrime (a favored means of repression in the region), Qatar’s neighbors are seeking to prosecute anyone who speaks favorably about the country. The UAE has threatened up to 15 years in prison or debilitating fines for anyone who shows sympathy to embattled Qatar, while Bahrain’s Ministry of Interior announced penalties of up to five years’ imprisonment on its website. SaudiNews tweeted that the government of Saudi Arabia would impose up to five years’ imprisonment for pro-Qatar speech as well, on the grounds of the country’s 2007 cybercrime law, which bans “material impinging on public order.” The kingdom took its restrictions a step further, banning satellite TV from hotels to prevent visitors from watching Al Jazeera. Finally, on June 8, Al Jazeera suffered a massive cyberattack.

These restrictions, as well as restrictions on travel to and from Qatar, are pushing the embattled country into isolation and threatening the economy and livelihood of Qatar’s residents and citizens. But they also set a dangerous precedent in an already extremely restrictive environment for freedom of expression: the use of economic and travel sanctions to shut down a powerful media outlet and further, punish anyone who speaks out against that act.

As a media leader in the region, Qatar has an important role of providing news coverage to citizens in the Gulf and beyond. And while press freedom still has a long way to go in Qatar, further suppression of human rights by members of the GCC is not the answer. EFF condemns the Council’s attempts to sever diplomatic ties with the country and silence Qatari media outlets, like Al Jazeera, under the guise of combating terrorism. Supporting Qatar’s media environment, and helping it become more free, is an imperative.



EFF to USTR: IP Doesn't Belong in NAFTA. For the Rest, Talk To Us.

The dust has barely settled from the collapse of the Trans-Pacific Partnership (TPP), and already a new trade battle is ahead of us: the renegotiation of the North American Free Trade Agreement (NAFTA). President Trump called the controversial 1994 agreement between the United States, Canada and Mexico “the single worst trade deal ever approved in this country.” But compared to the TPP, there’s a lot to like about the original NAFTA from a digital rights perspective: it doesn’t extend the term of copyright, it doesn’t require countries to prohibit DRM circumvention, and it doesn’t include new and untested rules to regulate the Internet.

That could all change soon. The United States Trade Representative (USTR) has called for comments from industry and the public on goals it should promote as it renegotiates NAFTA, which opens a window for the TPP’s proponents to try to use NAFTA to establish at least some of the dangerous proposals they hoped to impose through the TPP – including the TPP’s so-called intellectual property (IP) rules. EFF’s comment explains why that is a bad idea:

Prescriptive IP rules usually fail to account for developments in technology such as the Internet, or changes in business and social practices such as the sharing economy. Including such rules in trade agreements could inhibit the United States from modernizing its own intellectual property rules in the future.

We also lay out specific, real-world examples of how including IP rules in trade agreements can backfire. For example, the U.S.-Morocco FTA bans parallel importation of copyrighted material. Yet in the 2013 Kirtsaeng v. Wiley case, such parallel importation was found legal under U.S. law. Similarly, the U.S.-Australia FTA included a provision extending copyright protection to temporary copies of data stored in computer memory. But since then, a line of appellate decisions has found that copyright protection may not exist in such temporary copies. As we explain:

Enshrining an opposite rule in NAFTA would have caused not only a disconnect between U.S. law and U.S. trade commitments, (possibly rendering it liable to dispute settlement proceedings), the existence of such a rule might have discouraged the emergence of innovative technologies that routinely make such temporary copies in the course of their normal operations. 

In reality, we probably can’t expect copyright rules to be removed from NAFTA altogether. We will face a hard enough task fighting the expansion of such rules. But in the event that such rules are included, they need to be balanced by crucial safeguards such as fair use. If America is to commit itself and its trading partners to upholding high levels of copyright protection, it must also commit them to allowing fair use, to prevent copyright rules from unfairly inhibiting innovation and creativity.

While Canada’s existing fair dealing regime is similar to the U.S. fair use doctrine, Mexico lacks any such concept. Given that Mexico also has the world’s longest term of copyright protection (at life plus 100 years, even longer than in the United States), the inclusion of fair use as a new minimum standard for NAFTA countries would help to balance out this excessive term of protection.

Beyond IP: Net Neutrality, Encryption and Global Orders

IP rules aren’t the only new rules being proposed for NAFTA. Groups such as the Internet Association have also proposed a raft of new digital trade rules [PDF] to promote the free flow of information online. But because many of these proposals have just as much impact on important non-trade interests and values, such as cybersecurity, freedom of expression, and privacy, we’re not convinced that trade rules are the right place for them.

In particular, our submission explains to the USTR why we think that including TPP-like rules on net neutrality, domain names, encryption standards, or limiting the review of software source code, is a profoundly dangerous proposition. Handing over those important and multi-layered topics to the myopic perspective of the trade negotiators and their corporate advisors will produce rules that miss or devalue the perspectives of computer professionals, users, and innovators.

That said, there are a few, narrow, trade-related digital proposals that we think could fit into the framework of a trade agreement, if it was negotiated in a sufficiently open and transparent fashion. At the top of the list is intermediary liability, since Mexico doesn’t currently provide its Internet platforms with a safe harbor from liability for their users’ content, which creates the incentives for providers to censor and restrict their users’ expression online.

We also think that there may be some merit in using trade agreements to address the problem of data protectionism, such as requirements that Internet platforms host data on local servers—provided that these rules contain adequate safeguards allowing countries to protect users’ personal data. And we are also open to considering rules to prevent countries from issuing injunctions against Internet platforms outside their borders that were not joined as parties to the case in which the injunction was issued.

Transparency is key

But before the USTR pursues any of these proposals, it must first reform its trade negotiation practices to make them much more open, inclusive, and transparent. In particular, the USTR should release its text proposals, release consolidated drafts after each round of negotiations, hold a notice-and-comment period and public hearing on its proposals, and reform and open up its trade advisory committees.

You can read EFF’s full submission below.



With New Browser Tech, Apple Preserves Privacy and Google Preserves Trackers

Recently Google and Apple announced plans to respond to complaints about online advertising. Both companies will implement changes to their browsers to neutralize some of the most annoying ad formats, but only Apple has chosen to address concerns around user privacy.

Starting sometime in 2018, Google’s Chrome browser will begin blocking all ads on websites that do not follow new recommendations laid down by the industry group the Coalition for Better Ads (CBA). Chrome will implement this standard, known as the Better Ads Standard, and ban formats widely regarded as obnoxious such as pop-ups, autoplay videos with audio, and interstitial ads that obscure the whole page. Google and its partners worry that these formats are alienating users and driving the adoption of ad blockers. While we welcome the willingness to tackle annoying ads, the CBA’s criteria do not address a key reason many of us install ad blockers: to protect ourselves against the non-consensual tracking and surveillance that permeates the advertising ecosystem operated by the members of the CBA.

Google’s approach contrasts starkly with Apple’s. Apple’s browser, Safari, will use a method called intelligent tracking prevention to prevent tracking by third parties—that is, sites that are rarely visited intentionally but are incorporated on many other sites for advertising purposes—that use cookies and other techniques to track us as we move through the web. Safari will use machine learning in the browser (which means the data never leaves your computer) to learn which cookies represent a tracking threat and disarm them. This approach is similar to that used in EFF’s Privacy Badger, and we are excited to see it in Safari.
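Privacy Badger’s public heuristic gives a feel for how behavior-based blocking works. The sketch below is a simplified illustration of that approach, not Apple’s algorithm (Safari’s ITP uses a more elaborate on-device model): a third-party domain is treated as a tracker once it has been observed setting identifying cookies across some threshold number of unrelated first-party sites, with all learning happening locally.

```python
from collections import defaultdict

# Threshold of distinct first-party sites; Privacy Badger's heuristic
# uses 3, so this sketch borrows that number.
TRACKING_THRESHOLD = 3

class TrackerHeuristic:
    """Toy behavior-based tracker detection, learned entirely in the browser."""

    def __init__(self):
        # third-party domain -> set of first-party sites it tracked on
        self.seen_on = defaultdict(set)

    def observe(self, first_party: str, third_party: str, sets_cookie: bool):
        """Record one page load where a third party set an identifying cookie."""
        if sets_cookie and third_party != first_party:
            self.seen_on[third_party].add(first_party)

    def is_tracker(self, domain: str) -> bool:
        """A domain is a tracker once seen on enough unrelated sites."""
        return len(self.seen_on[domain]) >= TRACKING_THRESHOLD

h = TrackerHeuristic()
for site in ["news.example", "shop.example", "blog.example"]:
    h.observe(site, "ads.tracker.example", sets_cookie=True)
h.observe("news.example", "cdn.example", sets_cookie=False)
```

Because all observation happens in the user’s own browser, no browsing history ever leaves the machine, which is the property the post highlights in both ITP and Privacy Badger.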

Users Can Opt In to Publisher Payments—But Not Out of Tracking

In tandem with its Better Ads enforcement, Google will also launch a companion program, Funding Choices, that will enable CBA-compliant sites to ask Chrome users with content blockers to whitelist their site and unblock its ads. Should the user refuse, they can either pay for an “ad-free experience” or be locked out by a publisher’s adblock wall. Payment is made using a Google product called Contributor, first deployed in 2015. Contributor lets people pay sites in order not to be shown Google ads, but it does not prevent Google, the site, or any other advertiser from continuing to track people who pay into the program.

This approach is consistent with the ad industry’s dogged defense of tracking, and its refusal to honor user signals such as Do Not Track. The industry’s sole response has been to create a system called AdChoices, which offers users a complicated and inefficient opt-out from targeted ads, but not from the data collection and behavioral tracking behind the targeting. By that logic, it is okay to track and spy on people who opt out—as long as you don’t remind them that they are being tracked!

With a vast network of websites that display its ads, and over 50 percent of the browser market, Google has the power to address the ad quality problem by requiring sites to control the types of ads they show or risk losing all income from Chrome users. Google’s motivation is strong because, collectively, ad blockers are undermining its revenue from programs like DoubleClick Ad Exchange, AdSense, and AdWords—or, in the case of Adblock Plus and Adblock, unblocking those ads but demanding payment in exchange.

While Google Chrome has mostly allowed users to install the ad- and tracker-blocking tools of their choice, there is always the risk that Google may seek to neutralize any blocking capability not under its direct control. This is no imaginary threat: in 2014, the Android Play Store banned Disconnect Mobile, a privacy app designed to prevent third-party tracking. And in January of this year, the Chrome store kicked out AdNauseam, an obfuscation and anti-tracking tool that unblocks ads from websites that have adopted EFF’s Do Not Track policy and promised to respect user demands for privacy.

At EFF, we understand that advertising funds much of the media and services online, but we also believe that users have the right to protect themselves against tracking. Advertising is currently built around a surveillance architecture, and this has to change. Until then, users will continue to install browser extensions like Privacy Badger and make use of tracking protection in browsers like Brave, Firefox, Opera and Safari to protect themselves. 

Google and the CBA want to address the visibly annoying aspects of ads while ignoring the deeper privacy issues. Instead, they should take their lead from Apple on this one. Ad quality needs to improve and advertisers must abandon any attempt to hijack our attention with disruptive audio, flashing animation, or screen takeovers. But this alone will not win back the trust of users alienated by an ad system run amok. Users should be given more control over the ads they are shown, and their Do Not Track demands must be honored. The web should be about opening up new possibilities both individually and collectively, but the feeling of being monitored can create unease that information about us could be misused or revealed without our permission. Since the Web has become central to human thought and communication, surveilling it without an opt-out is a fundamental intrusion into human cognition and conversation. Any plan to make ads “better” that lacks a core privacy component is fundamentally broken.



Printer Tracking Dots Back in the News

Several journalists and experts have recently focused on the fact that a scanned document published by The Intercept contained tiny yellow dots produced by a Xerox DocuColor printer. Those dots allow the document’s origin and date of printing to be ascertained, which could have played a role in the arrest of Reality Leigh Winner, accused of leaking the document. EFF has previously researched this tracking technology at some length; our work on it has helped bring it to public attention, including in a somewhat hilarious video.

One of the experts, Rob Graham, used a tool that we created to decode the dots. Whenever someone’s liberty is at stake, we are extra careful in our public statements, but we offer the following thoughts on the situation:

  • The affidavit that led to Winner’s arrest described how the government identified its suspect. The affidavit did not mention the tracking dots at all, but referred only to other sources of information. It’s quite possible that printer dots played no role in this investigation.
  • However the government identified its suspect in this case, it’s worth remembering that forensic techniques are very powerful and can often reveal the origins of documents in unexpected ways.
  • This tracking technology is pervasive in color laser printers, and is a result of secret agreements between governments (the U.S. is not the only one) and the printer industry, dating back more than a decade. Some printer manufacturers openly acknowledge that such a tracking mechanism exists, but offer few other details. The original motivation given for the tracking technology is investigating counterfeiting of currency, although nothing in the technology limits its use to that purpose. Overall, this secret nonconsensual tracking makes it more difficult to publish any kind of document anonymously, which implicates both privacy and speech.
  • Not all printers’ tracking information is readily visible. Some of the documents we obtained about this technology showed that there is a subsequent generation of tracking technology, which apparently works by slightly rearranging dots that the printer is expected to print, rather than by adding new dots. Anyone using a color laser printer should assume that it uses some kind of tracking mechanism, whether or not tracking dots are visible in its output.
  • This technology is one way that governments secretly pressured industry to change products to undermine privacy and anonymous speech when the law did not require it. This should make us all wonder how else the government is working in secret to undermine privacy and speech.  We should insist that companies be transparent about how government requests have affected the design of the products we use, since those designs can have profound implications.
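To make the steganography concrete, here is a toy decoder in the spirit of the tool mentioned above. This is not the actual DocuColor encoding, and the payload layout below is invented for illustration; it just shows the general scheme: each column of the dot grid carries a few data bits plus a parity bit, and the decoded column values spell out fields like a serial number and a timestamp.

```python
def encode_column(value: int, data_bits: int = 7) -> list:
    """Encode one value as a dot column: an even-parity bit, then MSB-first data."""
    data = [(value >> (data_bits - 1 - i)) & 1 for i in range(data_bits)]
    return [sum(data) % 2] + data

def decode_column(bits: list) -> int:
    """Recover a column's value, rejecting columns whose parity dot is wrong."""
    parity, data = bits[0], bits[1:]
    if parity != sum(data) % 2:
        raise ValueError("parity mismatch: dot misread or page damaged")
    return int("".join(str(b) for b in data), 2)

# Hypothetical payload: six serial-number digits followed by day, month, year.
payload = [7, 3, 0, 4, 2, 1, 25, 10, 17]
grid = [encode_column(v) for v in payload]       # what the printer would lay down
decoded = [decode_column(col) for col in grid]   # what a forensic analyst recovers
```

The parity bits matter in practice: scanned dots are faint and easily misread, so a real decoder needs a way to tell a clean read from a damaged one.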



Why California Urgently Needs Surveillance Transparency

A version of this commentary appeared in the San Diego Union-Tribune on May 27, 2017. 

In the summer of 2015, a local resident joined a nationwide project to uncover how police use face recognition devices. He filed a public records request with the city of Carlsbad, which quickly responded that no documents existed because the city didn’t use that technology.

This was demonstrably false: the Carlsbad Police Department had been part of a regional face recognition pilot program for years. Eventually, the city admitted that 14 officers were walking around with special smartphones that capture faces and match them against the county’s mug shot database. 

But Carlsbad could not produce policies, protocols, or guidelines for how and when officers may operate the devices. Nor did the city have a count of how many times the devices were used. The only record available was a technical manual for the face recognition app.

Surveillance technology is rapidly advancing, whether it’s drones mounted with cameras, automated license plate readers (ALPRs) that track our travel patterns, fake cell towers that surreptitiously connect to our smartphones, algorithms that scrape our social media profiles, or devices that digitize our faces. Many of these technologies aren’t limited to gathering intelligence on suspects, and instead collect information on everyone.

The Carlsbad incident raises questions about public trust and high-tech policing. Who should decide which surveillance technologies are appropriate for our communities? Should police have to disclose how technologies that invade our privacy are used and how often they’re abused? 

Individual privacy and public safety are not mutually exclusive; it just takes a robust debate to land on the right balance between the two. This conversation won’t happen unless the rules change so police must obtain approval from the public and our elected officials before deploying invasive spy tech.

A bill now under consideration by the California Senate—S.B. 21—would ensure that police do not acquire surveillance technology without a public process.

Take Action

Tell the California Senate to pass S.B. 21.

Before a law enforcement agency could acquire a new spy technology, it would submit a usage policy for public review during an open meeting. Elected representatives (such as a city council) would have the ultimate authority to approve or reject the technology. In exigent circumstances, police could temporarily bypass the process, but they would need to stop using such temporary surveillance technology and submit proper disclosures after the emergency has passed. 

Police and sheriff departments would publish biennial transparency reports. These disclosures would include: the kinds of data the technologies collect; how many times each technology was deployed; how often the technology helped catch a suspect or close a case; and the number of times the systems were misused.

In 1972, Californians voted to include privacy as an inalienable right in the state’s Constitution. “The proliferation of government snooping and data collecting is threatening to destroy our traditional freedoms,” the authors of the amendment wrote. They warned technology would soon allow police to create “cradle-to-grave” profiles of every American, which then could be used to humiliate us.

One need only look to nearby Calexico. In 2014, police in this border town spent nearly $100,000 from a slush fund of seized assets on sophisticated spy gear. They then allegedly used these systems to run illegal surveillance on city councilmembers with the intent to extort. A U.S. Department of Justice investigation confirmed this corruption, but also found a troubling pattern in which the city approved a network of surveillance cameras, body cameras, and ALPR technology “before implementing the essential fundamentals of policing.”

To head off these kinds of threats to privacy, the Santa Clara County Board of Supervisors has already passed a local ordinance promoting transparency about surveillance technology. The cities of Oakland and Palo Alto, and the Bay Area Rapid Transit board are also considering similar measures in response to growing community concern about unchecked surveillance. 

S.B. 21, legislation by Sen. Jerry Hill (D-San Mateo), would implement statewide standards—an important step for San Diego County, where police technology often flows freely between agencies. The bill also enhances fiscal responsibility by providing policymakers with data to evaluate whether a costly technology is actually as effective as vendors claim. 

As the U.S. government ramps up a new “War on Drugs” and aggressive immigration enforcement, we can expect even more military-grade surveillance technology to flow down to local law enforcement agencies through grant programs, equipment transfers, and federal partnerships. California lawmakers must pass S.B. 21 to put adequate controls in place so that these technologies are operated responsibly, transparently, and with respect for our constitutional rights.



Expansive Protections Against Police Abuses Win Approval in Providence

On Thursday night, the capital of the smallest state in the union adopted a wide-ranging police reform measure with national and historic implications. The Providence City Council voted 13-1 to adopt the Providence Community-Police Relations Act, which had generated controversy for the very same reason that it was ultimately adopted: it protects a sweeping array of civil rights and civil liberties (including digital rights championed by EFF) from various kinds of violations by police officers, all in a single measure. 

Included within the Act are protections to prevent police from arbitrarily adding young people to gang databases, to provide notice to youth under 18 if they are so designated, and to allow adults an opportunity to learn whether they have been included. It also forces police to justify any use of targeted electronic surveillance by imposing a requirement that officers first establish reasonable suspicion of criminal activity. Last but far from least, the Act protects the civilian right to observe and record police activities, which—combined with technology such as cell phones, video, and social media—has recently proven crucial in inspiring a multi-racial social movement responding to long-festering abuses.

Beyond those concerns shared by EFF, the Act also includes a range of further elements protecting civil rights. Visionary measures to address discriminatory profiling prohibit police from considering racial, religious, and gender characteristics when assessing suspects unless “the officer’s decision is based on a specific and reliable suspect description as well.” The Act also prohibits police from inquiring about immigration status, preserving community trust and protecting both families from being torn apart and police departments from being commandeered to do the federal government’s work enforcing non-criminal civil code violations. 

Earlier this spring, the Council unanimously approved a slightly different version of the measure, then known as the Community Safety Act. Only a week later, it delayed its prior decision, deferring until June 1 a final vote on proposed recommendations from a working group it established to bring together stakeholders including community advocates and police officials.

After meeting five times over the course of the past month, the working group issued its recommendations, with the support of Police Chief Hugh Clements and other officers included in the working group. Yet the Fraternal Order of Police remained intransigent in its opposition, issuing formal condemnations of the policy process at the eleventh hour for the second time in only a few weeks.

In the wake of the Council’s approval, Mayor Jorge Elorza pledged to sign the bill into law. But long before it gained the approval of policymakers, a proposal for intersectional policing reforms united community organizations in and around Providence, including Rhode Island Rights, a member of the Electronic Frontier Alliance.

Groups of residents promoted both formal and informal discussions of the issues. They educated their neighbors, drew together a remarkably broad coalition of local groups, and even hosted a street festival “to use music, dance and art to bring attention to injustices and inequalities in our city and encourage people from across Providence to stand behind this legislation so that we can ban racial profiling and build a safer city, specifically for youth, immigrants and people of color.”

The movement for police accountability has drawn viral participation in cities that have driven national news cycles, including St. Louis, Baltimore, and Charlotte. But Providence may now plausibly claim to lead the nation in embracing policy reforms that respond to those social movements.

By working to secure the near-unanimous support of their elected municipal representatives, grassroots groups who championed the new Act have conclusively demonstrated the viability of expansive local reforms combining measures to limit police profiling, surveillance, and retaliation all at once. Where concerned residents in other parts of the country learn from their examples, they might create new policy opportunities for civil rights and civil liberties, and together, even shift the national landscape.



While EU Copyright Protests Mount, the Proposals Get Even Worse

This week, EFF joined Creative Commons, Wikimedia, Mozilla, EDRi, Open Rights Group, and sixty other organizations in signing an open letter [PDF] addressed to Members of the European Parliament expressing our concerns about two key proposals for a new European “Digital Single Market” Directive on copyright.

These are the “value gap” proposal to require Internet platforms to put in place automatic filters to prevent copyright-infringing content from being uploaded by users (Article 13) and the equally misguided “link tax” proposal that would give news publishers a right to compensation when snippets of the text of news articles are used to link to the original source (Article 11).

The joint letter addresses these two muddle-headed proposals by stating:

The provision on the so-called “value gap” is designed to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they want to have any chance of staying in business. …

More and more voices have joined the protest by academics and a variety of stakeholders (including some news publishers) against this [link tax] provision. The Council cannot remain deaf to these voices and must remove any creation of additional rights such as the press publishers’ right.

IMCO Proposal Lowers the Bar for Awful Copyright Policy

Incredibly, since the letter was drafted, the proposals have gotten even worse. In our last post on this topic, we highlighted some of the atrocious amendments to the original text being pushed by the CULT (Committee on Culture and Education) of the European Parliament, notably to give producers and performers an unwaivable copyright-like power to demand additional payments for the use of their work by online streaming services. But the CULT committee doesn’t have a monopoly on bad ideas for European copyright.

One of the other committees, the Internal Market and Consumer Protection Committee (IMCO), will finalize its own recommendations for the amendment of the Digital Single Market Directive in a vote on 8 June. On Wednesday, Member of the European Parliament (MEP) Julia Reda sounded the alarm about a sly move by the EPP Group’s Shadow Rapporteur to IMCO, MEP Pascal Arimont, to propose that the committee accept an alternative “compromise” text that, far from being a compromise, checks off every item on the copyright maximalist wish-list.

As regards the upload filtering mandate, the “compromise” would extend this mandate to cover not only content hosts, but also “any service facilitating the availability of such content,” apparently including search engines and link directories. Only small startups would be exempt from this filtering requirement, and only for a maximum period of five years.

The safe harbor that protects Internet platforms from copyright liability for users’ content would also be abolished for any Internet platform that uses an algorithm to improve the presentation of such content—and it’s hard to imagine any platform that doesn’t do that. As such, it is no exaggeration to say that if this became law, it would no longer be safe to operate a user-generated content website in Europe.

As if this weren’t bad enough, the Shadow Rapporteur’s proposal goes all out in favor of the Commission’s unpopular link tax proposal, extending it even further so that it would cover all uses of news snippets both online and offline, and last for 50 rather than 20 years. The new monopoly right would also be extended to scientific and academic journals, allowing them to demand fees for the use of abstracts of articles. Only the use of single words and bare hyperlinks would be exempted from the link tax, meaning that as few as two words quoted from a news article could trigger liability for copyright-like fees payable to publishing organizations.

A Ban on Image Search

Another senseless proposal published this week, this time not from the Shadow Rapporteur to IMCO but from the CULT committee again, is an amendment that would extend the “value gap” proposal to include a new tax on search engines that index images, as Google and Bing do. This proposed amendment [PDF] provides:

Information society services that automatically reproduce or refer to significant amounts of visual works of art for the purpose of indexing and referencing shall conclude licensing agreements with right holders in order to ensure the fair remuneration of visual artists.

It’s unclear how much support this particular proposed amendment may have, but should it find its way into the final CULT committee report, we can well imagine that just as Google News shut down in Spain following Spain’s implementation of a link tax for news publishers, the closure of image search services won’t be far behind. It’s hard to see that outcome as any less detrimental to artists than it would be to users.

European lawmakers need to draw a line in the sand, and stop giving oxygen to copyright holders’ most fanciful demands. If you are European or have friends in Europe, you can help deliver this message by contacting members of the IMCO committee and of the CULT committee to urge them to oppose such extremist rent-seeking proposals.



Wikipedia Joins the Fight for Fair Use in Australia

Australia’s ongoing debate over the introduction of a new fair use right took a turn last week when Wikipedia joined the fray. The world’s largest online encyclopedia now displays a banner to its Australian users encouraging them to support a joint campaign of Australia’s major digital rights groups to modernize its dated copyright law by legalizing the fair use of copyright works.

As the campaign points out, the adoption of fair use would not harm copyright owners, but would simply authorize many everyday uses of copyright material that are currently technically infringing, such as forwarding emails, backing up movies, and sharing memes or mash-ups. That’s one reason why Australia’s Productivity Commission recommended the adoption of fair use as an improvement to Australia’s patchwork of technologically-specific exceptions, such as a rule that allows format shifting from VHS tapes, but not from DVDs.

Libraries and educators would also benefit. Perversely, under current Australian copyright law, educational institutions are required to pay royalties for copying even freely-available online materials such as publicly accessible webpages for use by students. The adoption of fair use could see an end to such anomalies, with flow-on benefits across Australian society.

Why has Wikipedia, which is hosted in the U.S., jumped into this debate? Because the online encyclopedia provides an excellent example of the opportunity that the fair use doctrine creates for valuable information to be shared, without damaging the interests of creators. For example, in an article on Australian band Crowded House, you can hear a few bars of some of their most well-known tracks, and in a page about Aboriginal artist Albert Namatjira, a small representation of his art can be found.

Would anyone wishing to listen to Crowded House forgo purchasing their album because they can hear a few seconds of the same music on Wikipedia? Of course not, and that’s one of the factors that make Wikipedia’s partial reproduction of their music fair. But because Australia lacks a fair use right, Wikipedia could not be hosted in Australia without risking being found to infringe copyright. It’s high time for this to change, for the sake of Australian users, creators, and innovators alike.

Other countries around the world are recognizing the benefits of fair use. South Africa is currently proposing to introduce a new fair use right into its own copyright law, adding to a growing list of countries that have done the same, including Israel, Malaysia, the Philippines, Thailand, Taiwan, Singapore, and South Korea.

Australians can join this growing movement and support the campaign for fair copyright by emailing their politicians, or by sharing the #faircopyrightoz hashtag on social media.
