
The ticket-challenging chatbot has expanded into 1,000 areas of law, across all 50 states, but questions loom around providing legal services without attorneys.

When news broke last year that Stanford student Josh Browder’s “robot lawyer” DoNotPay was able to help appeal over 160,000 parking tickets across London and New York City, the legal community had barely heard of a “chatbot.” Since then, chatbots have expanded across the legal industry in ways that have both assuaged and stoked attorneys’ concerns about the technology.

DoNotPay has blown up in the year since its launch. The chatbot service last week announced that it now offered users over 1,000 different free legal services across practice areas through chatbots hosted on both Facebook Messenger and its own website. Further, the technology is now available to users in all 50 states. Browder told Legaltech News that DoNotPay would make the set of drag-and-drop tools he uses to create bots publicly available “so that any lawyer, activist, charity or person in the world can create one of these bots.”

But what happens when anyone can build a bot? In a legal services industry with a growing number of providers cutting lawyers out of the equation, questions arise about the quality of legal services and the ethics of having non-attorneys handle legal issues.

Life Saver or Tech Terror?

DoNotPay’s rapid expansion is part of Browder’s attempt to meet the needs of users who weren’t sure which of their legal problems they could bring to the chatbot for help. He began building new chatbots that could handle different legal needs, among them a chatbot that can help refugees in the United States apply for asylum status. Volunteer paralegals and other legal professionals provided much of the research and documentation behind DoNotPay’s new chatbot functionalities.

As DoNotPay’s uses have expanded, attorney concerns have ratcheted up, especially when it comes to legal disputes with life-altering consequences. Speaking about the use of chatbots for immigration law, Reid Trautz, director of the Practice and Professionalism Center at the American Immigration Lawyers Association, told Legaltech News that while technology like Browder’s could likely help many people, the lack of attorney oversight could have devastating consequences for asylum seekers at risk of being deported back into violent situations.

“This isn’t a parking ticket. Asylum is really about life and death issues,” Trautz noted. “It’s not about filling out the form. It’s about putting the right information into that form,” he later added.

Attorney concerns over chatbots also came through on a panel called “The Rise of Legal Chatbots” at this spring’s FutureLaw conference at Stanford University, where chatbot founders, including Browder, took some significant heat from many attorneys in the audience. Joshua Lenon, attorney in residence at matter management group Clio, acted as the contrarian on the panel. “We are about to enter a reign of tech terror,” he said, noting that funding into chatbots may be taking dollars that otherwise would go toward courtroom innovations.

And while chatbots are incredibly scalable and accessible, they can potentially direct users to incorrect information and mislead users into thinking there’s a real-life attorney sitting on the other side of the chat.

However, Lenon said that chatbots, if appropriately used and overseen by legal professionals, can be a powerful tool for helping people access information and guidance about some legal processes, even high-stakes legal issues. As an example, he pointed to the refugee crisis, telling LTN that while “the refugee problem is not as necessarily clear cut as a chatbot can portray it to be,” the bots are “uniquely placed to help direct those people.”

“Often, the only technology [refugees] have access to is mobile. There is a big dilemma to how to best get proper guidance to people in dire straits with limited access to not just legal representation, but any information period,” he added.

Peter Swire, professor of law and ethics at the Scheller College of Business at the Georgia Institute of Technology and senior counsel at Alston & Bird, said that chatbots themselves fall into something of an ethical gray area.

“There are some ways to provide information that are clearly legal. You are allowed to write a book about Georgia law. That book can be searchable online. Merely providing a search capability is unlikely to be unauthorized practice of law,” he explained.

However, when providing information can be viewed as advising, things become more problematic. “As the service becomes more interactive, it starts to look more like providing advice. At some point, the interaction would be so pervasive that it would seem the same as speaking with an attorney,” Swire said. “At that point, there is a strong case that they’ve crossed the line and are now furnishing legal services or advice.”

Browder noted that while some have raised ethics questions about chatbots and regulations around the unauthorized practice of law, including those that came to the fore following a recent joint opinion from New Jersey Supreme Court committees on the use of online legal services providers like LegalZoom.com Inc., such rules apply much differently to a free service.

“Everyone has a right to create products and free speech. As long as you’re not charging people, the regulations really don’t apply,” he said. While Browder is exploring some ways to monetize the system he’s created, including corporate sponsorship of some bots, he insists that DoNotPay will always remain a free service for users.

Swire pointed out that while bar association regulations around unauthorized practice of law may hinge on payments, state statutes around unauthorized practice in Georgia do not mention payment as a factor.

“Where there is interactive advice, responding to questions by the client, that could easily violate Georgia law and not merely state bar rules,” Swire said.

Questions of Quality

Even the most ethical approach to chatbot design doesn’t particularly matter if the technology falls flat, however. I tried DoNotPay’s search function from my computer in Atlanta but had significant difficulty getting connected with the service I was looking for. My inquiries about employment discrimination, for example, only offered me a chatbot that could help me cancel a contract made at home. When I asked Browder about it, he said that this could be due in part to DoNotPay’s jurisdiction zoning, which would limit me to seeing only the bots I could use under Georgia’s fairly restrictive consumer protection laws, but he also noted that DoNotPay’s search function is still very much in beta.

DoNotPay’s chatbots themselves, however, seem to be fairly effective. Once the search finally surfaced a bot that matched my request, one of DoNotPay’s chatbots helped me draft a strong, well-cited and appropriately toned letter requesting extended maternity leave. A writer for The Guardian similarly used the service to successfully draft a cease-and-desist letter to telemarketers using his personal information.

Though I was unable to test these procedures, Browder said many of these chatbots give people the ability to scale the legal aggression and tone of the letters they produce as needed. DoNotPay’s chatbots can help users prepare a polite, friendly request for maternity leave, but also send more strongly worded requests, or send notices to a regulator. Browder said that while chatbots operate on a linear workflow, “it’s people’s ultimate choice” in how they’d like to proceed through their legal issues.

Miriam Gutman, a staff attorney at Atlanta Legal Aid Society, also tested out DoNotPay’s search function and chatbots, but saw limited application of the technology for the majority of the clients she works with, in large part because many people aren’t even sure how to describe potential legal concerns. (Full disclosure—I attended college with Gutman.)

“The people I speak to often have trouble figuring out the legal issue they need help with. Sometimes clients will just tell me they got a letter in the mail from another attorney and they don’t understand [it] at all. Then I need to look at the document before I even know what topic to ask about,” Gutman said.

When users can’t find a chatbot that meets their request, DoNotPay directs them to email the company directly. Browder said DoNotPay staff will then follow up within 24 hours of the original inquiry, suggesting legal tech or community resources that may better fit their problem.

“We’ll never tell you your issue is not important or too small. Lots of chatbots say, ‘Sorry I didn’t understand, we can’t help you,’ and that discourages a lot of people,” he said.

Even where legal needs can be diagnosed and DoNotPay can provide a resource, Gutman finds that the technology literacy and access required to use DoNotPay effectively is still fairly out of range for many of Atlanta Legal Aid’s clients.

“It seems like it’s access to justice for middle-class people, which is great and important, but some of my clients, especially a lot of my clients with disabilities, would not be able to work this bot at all, even if they had the exact problem this bot is good for,” she said.

There’s no question that unmet legal needs are prompting a great deal of innovation, but with any innovation comes critical scrutiny. Even though questions remain about chatbots’ methods, tools like DoNotPay seem poised to take on a huge role in serving at least some of those unmet needs.