AI: useful legal tool, or potential malpractice trap?

Artificial intelligence has arrived in our daily lives. The technology seems to have reached the point where users can prompt it to draw pictures, compose songs and write documents with just a few short lines of instruction. (A human wrote this blog post, if you were wondering.)

The makers of software like ChatGPT say it can make work like legal research faster and easier. But the technology is still quite new and prone to mistakes. A lawyer who relies on it as a crutch instead of a starting point for independent research can make embarrassing errors that harm their client’s case.

Passing off AI-invented case law as the real thing

In one of the first examples of this, a New York attorney with 30 years of experience is facing sanctions after using case law references generated by ChatGPT while working on a personal injury lawsuit. After he included those citations in court filings, it came to light that ChatGPT had made them up.

At a recent hearing, the lawyer admitted that he could not find the “court rulings” ChatGPT had generated anywhere else, but said he did not understand at the time what the AI platform really was. He thought it was a “super search engine” and denied intentionally trying to deceive anyone in the personal injury case. His defense attorneys said their client acted carelessly on his own client’s behalf, but not maliciously.

Negligence, not intent

The question of whether he intended to submit fake citations may matter in deciding whether this attorney keeps his license to practice in New York state and federal courts. But in a legal malpractice case, proof of intent is not necessary. What matters is whether your attorney acted negligently, falling below the standard of due care a lawyer owes their client.