Hello, What’s Next readers! In this edition, AI + crypto = good, while AI + cars = disaster. Plus, civil liberties groups score a win to shed more light on surveillance, lots of fretting over virtual reality litigation, and the darkest startup pitch ever.

Something you think I should cover here? Let me know at bhancock@alm.com or reach me on Twitter @benghancock.

Want to receive What’s Next straight to your email? Click here to sign up.

Protocol: The AI Tool of SEC Enforcement Dreams?

If you’re a regular reader, then you know the Securities and Exchange Commission is paying close attention to initial coin offerings (ICOs), those digital token sales that are part IPO, part early-stage seed funding, part something else entirely. The agency has filed civil actions over ICOs it alleges are outright scams, and also warned other seemingly legit companies that they still might be violating securities laws in conducting an ICO. Last week, it was reported that the Ether cryptocurrency sank in value after an SEC official said the agency has “dozens” of ongoing ICO investigations (never mind that those two things are not really related).

If you’re an investor — or, you know, a plaintiffs’ lawyer — you might say that’s all well and good. Wouldn’t it be nice, though, if one could just automatically tell which ICOs were scams and which ones were for real? That’s the idea behind “IcoRating,” a tool that its creators call a “deep-learning system for scam ICO identification.”

In a research paper released this month, a team of researchers led by Chinese startup Shannon.AI explains how it thinks it can spot a fraud ICO pretty much at the click of a button, and why that’s important for investors and the growing crypto ecosystem. The 11-page paper gets pretty dense in parts, so I caught up with co-author William Wang of the University of California, Santa Barbara’s computer science department to have him explain it to me.

What you might not glean just by reading the paper is that identifying which ICOs are “scams” is actually a pretty labor-intensive, subjective human exercise. A couple of members of the research team with expertise in software and finance spent months combing through “white papers,” code repositories on GitHub, crypto project websites, and biographical information for more than 2,000 ICOs. And at the end, they determined whether the project was a scam, and then labeled (or “annotated”) it as such using a computer program.

Wang concedes that this might be controversial, but says it’s not all that different from how we clean our inboxes and declare what is junk or not. The study also looked at how the ICO performed after the launch to help inform its analysis. After the annotation, the program then pulled out data points in order to find patterns, essentially training its algorithm to pick up the common signs of a scam. Some of the technology is based on established machine learning principles, Wang explained, while other parts — such as assessing the backgrounds of a project’s founders — are more novel and specific to ICOs.
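The pipeline Wang describes — hand-label past projects, extract data points, train a model to spot the patterns — can be sketched in miniature. To be clear, the features and toy examples below are hypothetical stand-ins of my own; the paper’s actual system is a deep-learning model trained on the team’s more than 2,000 annotated ICOs.

```python
# Toy sketch of the annotate-then-train pipeline described above.
# The three features and six hand-labeled examples are hypothetical;
# the real IcoRating system uses far richer signals (white-paper text,
# GitHub activity, founder backgrounds) and a deep-learning model.

# Each example: ([has_github_repo, whitepaper_pages, founders_verifiable], is_scam)
labeled_icos = [
    ([1, 40, 1], 0),  # active repo, substantial white paper, known team
    ([0,  3, 0], 1),  # no code, thin white paper, anonymous founders
    ([1, 25, 1], 0),
    ([0,  5, 1], 1),
    ([0,  2, 0], 1),
    ([1, 30, 0], 0),
]

def predict(w, b, x):
    """Classify a project: 1 = scam, 0 = legit."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(data, lr=0.1, max_epochs=10000):
    """Fit a simple perceptron to the hand-annotated examples."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in data:
            err = y - predict(w, b, x)
            if err:
                mistakes += 1
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        if mistakes == 0:  # every training example classified correctly
            break
    return w, b

w, b = train(labeled_icos)
# Score a new, unseen project: no repo, 4-page white paper, anonymous team.
print("scam" if predict(w, b, [0, 4, 0]) else "legit")
```

Wang’s hope of moving beyond a “binary” verdict would map, in this sketch, to swapping the hard threshold for a probability output (logistic regression, say) so investors see a risk score instead of a yes/no label.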

“I know there are a lot of different opinions about cryptocurrencies and ICOs, but essentially we believe we need to differentiate” between what’s a scam and what’s legitimate, Wang said. In the future, the project hopes to move away from making a “binary” categorization, and instead tell investors what level of risk they’re taking on. “The question is, can we use AI to protect investors, which I think is very reasonable,” Wang added.

>> Takeaway: So could this tool help regulators spot bad behavior in a nascent market? “I think with computational tools,” Wang said, “at least it will give them a better idea of what’s going on.”

Read the white paper: https://arxiv.org/pdf/1803.03670.pdf

HT to The Next Web. For a more humorous take on cryptocurrency investing, check out this recent episode of John Oliver’s Last Week Tonight.


On The Radar: 3 Things to Know

1. A woman in Arizona died after being hit by an Uber self-driving car. Now, her family and the parties involved face novel issues around legal liability. • Police in a Phoenix suburb said the Uber vehicle was in autonomous mode with a driver behind the wheel on Sunday night when it struck a woman who was crossing the road outside a crosswalk, Ross Todd and Cheryl Miller write for The Recorder.

• Attorneys watching the case said multiple parties could face liability, including Uber and Volvo, the maker of the car. “The argument I would expect to see is that this is a vehicle and we’ve always judged vehicles by a strict liability standard,” said Todd Benoff at Alston & Bird.

>> Think Ahead: The vexing legal questions likely won’t get to court this time. Ryan Calo, who teaches robotics law and policy at the University of Washington School of Law, wrote on Twitter, “Uber will settle this driverless crash. And fast. It’s not ready for a test case.”

2. A front opens up in the legal battle over surveillance secrecy as FISC rules that civil liberties groups have standing to sue. • The U.S. Foreign Intelligence Surveillance Court of Review ruled on Friday that the American Civil Liberties Union and other public interest groups have legal standing to challenge the sealing of portions of the court’s own opinions, Buzzfeed’s Zoe Tillman reports.

• “Movants need not show that they are ultimately entitled to access the materials in question. Instead, they need only show that their claim is not immaterial nor wholly insubstantial and frivolous,” the court ruled.

>> Takeaway: This doesn’t mean that the groups will actually get the documents they’re seeking. But they can try. “We look forward to finally being able to make our case that the public has a right to see important court decisions concerning the legality of the government’s expansive surveillance activities,” ACLU attorney Patrick Toomey told Buzzfeed.

3. The Section 230 bill is moving through the Senate today, and Ron Wyden is making a last-ditch effort to shield internet companies. • The Senate voted yesterday to take up the bill known as FOSTA, which would amend Section 230 of the Communications Decency Act to allow sex trafficking-related civil and criminal claims against internet platforms. Debate was set to kick off this afternoon.

• Sen. Wyden, the Oregon Democrat who helped author Section 230, doesn’t like the bill. But he’s offering an amendment that seeks to ensure that tech companies won’t be hit with lawsuits because they moderate the content on their platforms. It’s an attempt to clarify what critics say is vague language around the scope of liability for third-party content.

>> Takeaway: The amendment seems to face long odds. It’s confronting opposition from a slew of trafficking and law enforcement groups, and the internet companies seem to have given up the fight over this issue already.


Virtually Legal: VR/AR Companies More Anxious

Notorious B.I.G. said it best: “Mo Money Mo Problems.” That might be the best way to sum up the results of a new virtual and augmented reality industry study by the law firm Perkins Coie, which found increasing worry among executives over legal risks related to privacy, health and safety, and intellectual property.

In a report released Tuesday, the law firm said 44 percent of the individuals it polled responded that “consumer privacy/data security” was a legal risk of concern to their organization. That’s a big jump up from just 15 percent the last time Perkins Coie conducted the poll in September 2016. “Product liability/health and safety issues” were also cited as a concern by 42 percent of respondents, up from 18 percent in the last survey.

Jason Schneiderman, a partner at the firm’s Palo Alto, California, office and a contributing author to the report, interpreted the results as a sign of a “more mature marketplace” for VR, AR, and “mixed reality” (MR) technologies.

In the earlier stages of the industry, many companies were simply trying to get their technology off the ground, Schneiderman said. “But once some of them start to get legs and actually develop something, then they get to sort of move downstream on their worry list.”

>> Think Ahead: Could this be the new hot area for patent litigation in the tech industry? Among the 140 respondents, 61 percent said patent litigation was most likely to drive intellectual property-related disputes in VR/AR. Read the full survey.


Dose of Dystopia

We can back up your brain… but we’ll have to kill you first.

If that sounds like the darkest startup pitch you’ve ever heard, you’re not alone. But yes, apparently this is a real thing. According to the MIT Technology Review, an MIT grad named Robert McIntyre is taking his idea for “preserving brains in microscopic detail using a high-tech embalming process” to the startup accelerator Y Combinator. “Nectome” offers the possibility that one day, your brain could be uploaded to a computer simulation.

Here’s the thing, though: in order for that to be even theoretically possible, the brain has to be preserved while it’s still, uh, fresh. But don’t worry, killing you to preserve your brain is totally legal. “The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is ‘100 percent fatal,’” the magazine writes. McIntyre added: “That is why we are uniquely situated among the Y Combinator companies.”

Unique, indeed.


That’s it for this week. Keep plugged in with What’s Next!