Tech-n-law-ogy

Swapping Bots for Lawyers Via App: DoNotPay

I know it has been a fearfully long while since I last posted here. I forgive anyone and everyone for abandoning the empty, echoing halls of AdvocatesStudio for greener, more fertile, and more frequently updated tech blog pastures. Yet, here I am, writing again, prompted by the thought that suing someone may be as simple as downloading an app. DoNotPay, a robot lawyer chatbot app, is now promising to help people file suits in small claims court, no JD required. And, because I like the cheap and free around here, DoNotPay currently is free for users. That is a lot cheaper than the hourly rate charged by the average lawyer.

Developer Josh Browder, now barely of drinking age here in the US, created DoNotPay as a means of automating the process of challenging parking tickets, inspired mainly by his own excessive collection of tickets amassed shortly after receiving his driver’s license. The chatbot – a conversational interface that prompts a user for information that the AI can then leverage to provide answers or take actions – allowed users to select one of several defenses to the ticket, enter details, and send an appeal generated by the app to the appropriate legal authority. Browder taught himself to code at the age of 12, and his efforts certainly haven’t been wasted – the first version of the bot in 2015 reportedly saved UK drivers approximately 2 million pounds in two months’ time. Buoyed by his early success, Browder has allegedly claimed his app may “take down” the legal profession, which undoubtedly will be applauded by a couple of people.

Following on the parking ticket win, Josh added new beta functionality to the app in 2017 on the heels of the massive Equifax data breach – he apparently also was swept up in the breach (notice a trend here?). DoNotPay offered the ability to sue Equifax in small claims courts throughout the U.S., up to the applicable small claims jurisdictional limit, which ranges from $2,500 to $25,000 depending on the state. The new functionality basically assisted the user in preparing the forms necessary for the small claims action; you still had to serve the Complaint and attend the hearing. After you entered your name and address, the app generated the necessary papers to institute a small claims action in PDF format that could be printed and filed. Providing any assistance in the process, though, is a benefit to users unfamiliar with local small claims practice who might otherwise not bother to navigate the legal maze. And, as with the parking tickets, users reported some success using the app to secure awards from Equifax.

Within the past week, Browder has again tweaked the app, now permitting users to create documents to sue anyone in small claims court. And the bot is now available via mobile application – previously, the tool was strictly web-based. An Android app is coming, Browder promises. There are additional new features, and this might be where Browder monetizes – users can find deals on fast food by filling out surveys, find deals on prescription and over-the-counter drugs, make appointments at the California Department of Motor Vehicles, and check on class action settlement eligibility.

The app can be used to help fight bank fees and dispute transactions, secure refunds from companies like Uber, and fix credit reports. Like the beta version, the bot asks for a name and address, claim size (to see if it is within the jurisdictional limit of the applicable state), and then generates a demand letter, creates the filing documents, offers information on how to serve the suit, and even generates suggested scripts and questions that users can leverage at the hearing.

The new app doesn’t stop there – DoNotPay also recently acquired Visabot to assist with green card applications and other visa filings. While Visabot charged for some of its services, Browder is offering those formerly paid services, like all DoNotPay services, for free.

Does DoNotPay violate state laws on the unauthorized practice of law? Good question, and one that is not yet resolved. My thought is that, if the information DoNotPay provides is targeted information that is freely accessible in the public forum, albeit in a guided interface that helps users cut through the swathes of irrelevant, confusing or downright unhelpful information, perhaps that is not the same as providing legal advice. However, as I haven’t used the app myself yet, I cannot comment on whether any of the tools cross the line. I also cannot comment on the accuracy of the information provided by the app. Browder certainly maintains that he has been addressing concerns and making updates to improve information and to ensure compliance with applicable laws.

Browder also maintains that the information users provide to the app is protected – per DoNotPay’s privacy policy, user data is protected with 256-bit encryption, and the company purports not to access PII or case information.

Some may cynically claim that apps like this make an already litigious system worse. However, the fact remains that those who are most likely to use such an app are most likely the under-served segments of legal services in our society. Perhaps opening those doors a little wider may encourage some positive behaviors on the part of institutions that have benefited from that lack of access. Particularly in the area of immigration these days, such assistance, in any form, may be vital and life altering.

It is not clear how long the app will remain free. For now, Browder is seed funded with $1.1 million from investors and micro donations from customers. Browder’s stated intention is that basic legal services will remain free, but inevitably, he may need to add charges for some services in order to keep the app going.

You can download the app yourself on the App Store – feel free to report back on your experience. Would love to know how our new Robot overlords handle the complexities of small claims court.

Categories: Tech-n-law-ogy

Extracting command line arguments from Node.js using destructuring

If you’ve worked on a Node.js command-line program, you were probably faced with the extraction of command line arguments. Node.js provides all command line arguments in the process.argv array. However, the contents of the array aren’t what you might expect.

What’s in process.argv?

The first two items in process.argv are:

  1. The path to the executable running the JavaScript file
  2. The path of the JavaScript file being executed

So the first command line argument is the third item in the array. For example, consider the following command that runs a Node.js program:

node index.js --watch

The contents of process.argv will look something like this (depending on your system and file locations):

  1. /usr/bin/node
  2. /home/nzakas/projects/example/index.js
  3. --watch

While the first two items in the array might be useful to some, chances are that you’re only interested in --watch. Fortunately, you can use JavaScript destructuring to pick out just the command line arguments you want.

Using destructuring to extract arguments

Using JavaScript destructuring, you can separate the process.argv array into pieces and only use what you need. For example, this code separates the array into its three parts:

const [ bin, sourcePath, ...args ] = process.argv;

console.log(args[0]);   // "--watch"

Here, the bin variable receives the Node.js executable path, sourcePath receives the JavaScript filepath, and the rest element args is an array containing all of the remaining command line arguments.

You can take this one step further and just omit bin and sourcePath if you have no use for them:

const [ , , ...args ] = process.argv;

console.log(args[0]);   // "--watch"

The two commas at the beginning of the pattern indicate that you’d like to skip over the first two items in the array and store the remaining items in the args array. You can then further process args to determine what to do next.
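As a minimal sketch of that further processing (the --watch flag here is just a hypothetical example, not part of any real CLI), you might check whether a particular flag was passed:

```javascript
// Skip the executable path and script path; keep only the real arguments.
const [ , , ...args ] = process.argv;

// A minimal check for a hypothetical --watch flag.
const watchMode = args.includes("--watch");

console.log(watchMode ? "Watching for changes..." : "Running once");
```

For anything more involved than a flag or two, a dedicated argument parser is usually a better fit, but for simple scripts a couple of array checks like this are often all you need.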

Conclusion

While the process.argv array is a bit confusing at first, you can easily slice off just the information you’re interested in using JavaScript destructuring. Destructuring assignment is ideally suited for extracting just the information you want from an array.


Detecting new posts with Jekyll and Netlify

This blog has long featured the ability to subscribe by email, so you could get an email notification when a new post was published. I’ve used various services over the years to achieve this, first with FeedBurner and later with Zapier. As I’m a do-it-yourself kind of person, I never liked relying on external services to determine when a new post appeared on my blog. When I moved this blog from the dynamic WordPress to the static Jekyll[1], I figured I would never be able to build my own system. Still, it seemed like a waste to have a service keep polling an RSS feed to see if it changed. After all, I know when my blog is being built…why can’t I just check for a new post then? It took me a little while and several iterations, but eventually I figured out a way.

Step 1: Creating a data source

Most services that check for new blog posts use RSS feeds to do so. I didn’t want to use the RSS feed for two reasons:

  1. Parsing RSS is a pain
  2. Bandwidth concerns - My RSS feed is quite large because I include full post content

So I decided to create a small JSON file containing just the information I was interested in. This file lives at /feeds/firstpost.json and contains metadata related to just the most recent post on the blog. Here’s the Liquid template:

---
layout: null
---
{
    {% assign post = site.posts.first %}
    "id": "{{ post.url | absolute_url | sha1 }}",
    "title": {{ post.title | jsonify }},
    "date_published": "{{ post.date | date_to_xmlschema }}",
    "summary": {{ post.content | strip_html | truncatewords: 55 | jsonify }},
    "url": "{{ post.url | absolute_url }}"
}

This file includes just the information I need for any new blog post notification, which might include emails, tweets, Slack messages, etc. I’m using the absolute URL for the blog post as a unique identifier, but you can use anything that is sufficiently unique. (You can always add or remove any data you may need if this dataset doesn’t fit your purposes.)
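After a build, the rendered file ends up as plain JSON. The values below are purely illustrative (the title, date, hash, and URL are invented for the example), but the shape matches what the template produces:

```json
{
    "id": "c2543fff3bfa6f144c2f06a7de6cd10c0b650948",
    "title": "An example post",
    "date_published": "2018-10-09T00:00:00-04:00",
    "summary": "The first 55 words of the post, with HTML stripped...",
    "url": "https://example.com/blog/2018/10/an-example-post/"
}
```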

Credit: This format is loosely based on JSON Feed[2] and the code is partially taken from Alexandre Vallières-Lagacé’s Jekyll JSON Feed implementation[3].

Step 2: Deploy the data source

This is very important: the data source must already be live in order for the detection script to work correctly. So before going on to the next step, deploy an update to your site.

Step 3: Create the new post detection script

The new post detection script checks the live data source against the one on disk after running jekyll build. If the id of the most recent post is different between the live and local versions of firstpost.json, then there is a new post. Here’s the detection script:

"use strict";

const fs = require("fs");
const fetch = require("node-fetch");

(async () => {

    // fetch the live data source
    const response = await fetch("https://humanwhocodes.com/feeds/firstpost.json");
    if (response.status !== 200) {
        throw new Error("Invalid response status: " + response.status);
    }

    const currentFirstPost = await response.json();
    console.log("Current first post is ", currentFirstPost.id);

    // read the locally built version of the data source
    const newFirstPost = JSON.parse(fs.readFileSync("./_site/feeds/firstpost.json", { encoding: "utf8" }));
    console.log("New first post is ", newFirstPost.id);

    // compare the two
    if (currentFirstPost.id !== newFirstPost.id) {
        console.log("New post detected!");

        // do something for new posts
    }

})();

This script uses node-fetch to retrieve the live data source and then compares it to the local data source. If the id is different, it outputs a message. How you respond to a new post is up to you. Some options include:

  • Send an email notification
  • Post a tweet
  • Post a Slack message
  • Emit an event to AWS CloudWatch (this is what I do)

The most important part of the script is that it needs to be executed after jekyll build and before the site is deployed.

Step 4: Updating Netlify configuration

One of the advantages that Netlify[4] has over GitHub pages for Jekyll sites is the ability to modify the build command. The easiest way to do that is by using a netlify.toml file[5] in the root of your site. In that file, you can modify the build command. Here’s an example:

[build]
  command = "jekyll build && node _tools/newpostcheck.js"
  publish = "_site"

The command entry specifies the build command while publish indicates the directory into which the built web site files should be placed (most Jekyll builds use _site, and this is Netlify’s default). The command should be updated to run the new post detection script after jekyll build.

Note: You must have a package.json file in the root of your repository to have Netlify install Node.js and any dependencies (such as node-fetch) automatically.
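A minimal package.json for this purpose might look like the following (the package name is a placeholder and the version range is an assumption; node-fetch is the only dependency the detection script above requires):

```json
{
  "name": "my-blog",
  "private": true,
  "dependencies": {
    "node-fetch": "^2.6.0"
  }
}
```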

Step 5: Deploy to Netlify

The last step is to deploy the changes discussed in this post. When Netlify builds your site, the new post detection script will be executed and you will be able to respond accordingly. It’s a good idea to run the script once with a new post and observe the logs just to make sure it’s working correctly before hooking up notifications.

Conclusion

The advantages of using a static site generator (such as Jekyll) sometimes mean giving up a bit of convenience as it relates to changes on your site. While dynamic solutions (such as WordPress) might offer more hooks, static solutions are often capable of similar functionality. New blog post notifications are important for most blogs, and being able to achieve them using Jekyll is one more vote in favor of static sites.

While this post focuses on Jekyll and Netlify, the same approach should work for any static site generator and any deployment system that allows you to modify the build command.

References
  1. Jekyll (jekyllrb.com)
  2. JSON Feed (jsonfeed.org)
  3. jekyll-json-feed (github.com)
  4. Netlify (netlify.com)
  5. The netlify.toml File (netlify.com)