SAN FRANCISCO—The European Commission is ramping up pressure on tech companies to more aggressively use automated filtering to scrub “illegal” content from the internet, a move that is drawing criticism from some lawyers and free speech activists in Silicon Valley.

In a communication issued Sept. 28, titled “Tackling Illegal Content Online,” the commission said it “strongly encourages online platforms to use voluntary, proactive measures” to pull down illegal content and to pour more money into “automatic detection technologies.”

Though the document is not a binding regulation or legislative proposal, the commission makes clear that it will monitor the tech industry’s response to its call for action and may take further steps—“including possible legislative measures”—by May 2018.

“Lawyers should be emphatically paying attention,” said Andrew Bridges, who represents tech firms in copyright disputes at Fenwick & West. “I think that any company that provides any kind of platform these days needs to be absolutely on top of this stuff.”

Bridges and digital rights advocates argue that implementing the commission’s proposal would be too costly for tech companies—especially smaller startups—and chill free expression without effectively fixing the problems the EU is targeting.

The push by the EU seems to be part of a larger trend of placing more responsibility on online platforms—and not only in Europe. The U.S. Senate has also proposed creating a carve-out for claims relating to sex trafficking in Section 230 of the Communications Decency Act, which generally shields online intermediaries from liability over the content they host.

The focus of the EU communication is largely on hate speech and online material that incites terrorist violence. But it also explicitly references applying filtering technologies to target material that infringes intellectual property rights, like pirated movies and music.

European cities have been hit by a wave of terrorist violence in recent months, most recently in the UK and Spain. The commission, the EU’s executive arm, released the document after the heads of EU member state governments adopted a statement in late June saying they expect the industry to develop “new technology and tools to improve the automatic detection and removal of content that incites terrorist acts.”

But Daphne Keller, a former senior lawyer at Google who is now the director of intermediary liability at Stanford’s Center for Internet and Society, warns that the commission proposal places too much confidence in the ability of technology to know what is “illegal.”

“The communication buys in wholeheartedly to the idea that expression can and should be policed by algorithms,” Keller wrote in a blog post. “The Commission’s faith in machines or algorithms as arbiters of fundamental rights is not shared by technical experts.”

Pointing to a March 2017 paper on the limits of online filtering, co-authored by experts from Princeton’s Computer Science Department and the advocacy group Engine, Keller added: “In principle, filters are supposed to detect when one piece of content—an image or a song, for example—is a duplicate of another. In practice, they sometimes can’t even do that.”
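To illustrate her point in the simplest possible terms, consider a hypothetical sketch in Python (not drawn from the paper or from any platform’s production system): an exact-match fingerprint catches a byte-for-byte copy of a file but misses even a trivially altered one.

    import hashlib

    def fingerprint(data: bytes) -> str:
        # Exact-match fingerprint: the SHA-256 hash of the file's raw bytes.
        return hashlib.sha256(data).hexdigest()

    original = b"<bytes of a copyrighted song>"   # placeholder content

    # A byte-for-byte re-upload is easy to catch ...
    exact_copy = bytes(original)
    print(fingerprint(original) == fingerprint(exact_copy))    # True

    # ... but any change at all (re-encoding, cropping, a second of added
    # silence) produces a completely different hash, and the match is missed.
    altered_copy = original + b"\x00"
    print(fingerprint(original) == fingerprint(altered_copy))  # False

Real filtering systems rely on more forgiving perceptual fingerprints designed to survive such edits, but, as Keller notes, even those systems sometimes fail at the basic task of recognizing a duplicate.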

The commission doesn’t call out any companies by name, but it describes the online platforms it has in mind as “search engines, social networks, micro-blogging sites, or video-sharing platforms.” The proposal is almost surely aimed at Google, Facebook, Twitter, and YouTube.

Representatives for Google and Facebook declined to comment directly on the commission’s communication and instead pointed to public posts and comments previously made by company officials about fighting terrorism. Twitter did not respond to a request for comment.

To some degree, the major tech companies already appear to be responding to the call to filter the material they host more actively.

Google General Counsel Kent Walker, in a speech to the UN on Sept. 20, underscored the large volumes of footage that are uploaded to YouTube every hour and described efforts to pull down extremist videos more quickly—saying 75 percent of the videos that had been removed in recent months “were found using technology before they received a single human flag.”

Jeremy Malcolm, an attorney and senior global policy analyst at the Electronic Frontier Foundation, said the problem with using a fully automatic filter is that determining whether online content is illegal often depends on context; the one general exception is child pornography, he noted.

Malcolm gave the example of scholars posting terrorist videos online to work with other academics to analyze them. Filters would have a difficult time distinguishing an academic’s repost from the original propaganda. “We normally recommend that there should be a court order to take something down,” Malcolm said, “and it should definitely not be an automated process doing that.”

Keller is especially worried that legitimate content would be too difficult to restore once it has been taken down, and she cites data on how the counter-notice system has performed under the U.S. Digital Millennium Copyright Act.

“A key takeaway is that while improper or questionable [takedown] notices are common—one older study found 31 percent questionable claims, another found 47 percent—reported rates of counter-notice were typically below 1 percent,” she wrote. “That’s over 30 legally dubious notices for every one counter-notice.”