Cohen says he hasn’t “kept up with emerging trends.”
Michael Cohen, the former lawyer for Donald Trump, admitted to citing fake, AI-generated court cases in a legal document that wound up in front of a federal judge, as reported earlier by The New York Times. A filing unsealed on Friday says Cohen used Google’s Bard to perform research after mistaking it for “a super-charged search engine” rather than an AI chatbot.
Michael Cohen was working for Trump precisely because he couldn't get a proper lawyer job elsewhere. Good lawyers steer clear of clients who will ask them to commit crimes.
The impending doom of the fascist right is the only thing keeping me voting for the Dems. If we had ranked choice I'd be so much happier voting every election.
If you lived here, you might begin to understand the level of nationally fucked the literate half is aware of daily. Then again, you seem like a decent person, so I wouldn't wish that on you. 😅
Problem is that LLM answers like this will find their way into search engines like Google. Then it will be even harder to find real answers to questions.
Why is there not an automated check for any cases referenced in a filing, or required links? It would be trivial to require a clear format or uniform cross-reference, and this looks like an easy niche for automation to improve the judicial system. I understand that you couldn’t interpret those cases or the relevance, but an existence check and links or it doesn’t count.
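The existence check described above really is simple to sketch. Here's a minimal Python illustration: the citation regex covers only a couple of common reporter formats, and `KNOWN_CITATIONS` is a made-up in-memory stand-in (a real system would query an actual citation database or court docket API), but the shape of the check is the point.

```python
import re

# Hypothetical index of verified citations; a real checker would query
# an authoritative citation database instead of a hard-coded set.
KNOWN_CITATIONS = {
    "573 U.S. 208",
    "384 F. Supp. 3d 1007",
}

# Matches a few common reporter citation formats,
# e.g. "573 U.S. 208", "999 F.3d 123", "384 F. Supp. 3d 1007".
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def check_filing(text):
    """Return (found, missing) lists of citations in a filing's text."""
    found, missing = [], []
    for cite in CITATION_RE.findall(text):
        (found if cite in KNOWN_CITATIONS else missing).append(cite)
    return found, missing

filing = "Plaintiff relies on 573 U.S. 208 and on 999 F.3d 123."
found, missing = check_filing(filing)
print(found)    # citation in the index passes the existence check
print(missing)  # unknown citation is flagged for human review
```

This wouldn't judge relevance, just existence, exactly as the comment proposes: every flagged citation goes to a human, and anything the index can't confirm "doesn't count."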
I assume that right now it doesn't happen unless the other side pays a paralegal for a few hours of research.
This is what you get when the political system favours lies over truth.
The more these people lie and get away with it, the more it becomes the culture. China-level Big Brother oppression is only a decade or so away if this keeps up.
The problem is that breathless AI news stories have made people misunderstand LLMs. The capabilities get a lot of attention; the limitations, not so much.
And one important limitation of LLMs: they're really bad at being exactly right while being really good at looking right. If you ask one to do an arithmetic problem you can't do in your head, it'll give you an answer that looks right. But check it with a calculator, and you'll find the only thing right about the answer is how it sounds.
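That failure mode is easy to demonstrate. In this sketch the "LLM answer" is an invented illustrative value, not output from any real model: it has the right number of digits and plausible leading digits, so it looks right, but only doing the arithmetic reveals it's wrong.

```python
# A plausible-looking but wrong answer of the kind an LLM might produce
# for "what is 7,358 * 6,419?" (invented value for illustration).
llm_answer = 47_233_902

# The actual product, which any calculator gets exactly.
actual = 7_358 * 6_419

print(actual)              # the exact product
print(llm_answer == actual)  # False: close enough to fool the eye, still wrong
```

The gap between "sounds right" and "is right" is exactly why the answer has to be checked against a source of truth rather than taken at face value.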
So if you use it to find cases, it's gonna be really good at finding cases that look exactly like what you need. The only problem is, they're not exactly what you need, because they're not real cases.
While the individuals have a responsibility to double check things, I think Google is a big part of this. They're rolling "AI" into their search engine, so people are being fed made up, inaccurate bullshit by a search engine that they've trusted for decades.
Google may not be showing an "AI" tagged answer, but they're using AI to automatically generate web pages with information collated from outside sources to keep you on Google instead of citing and directing you to the actual sources of the information they're using.
Here's an example. I'm on a laptop with a 1080p screen. I went to Google (which I basically never use, so it shouldn't be biased for or against me) and did a search for "best game of 2023". I got no actual results in the entire first screen. Instead, their AI or other machine learning algorithms collated information from other people and built a little chart for me right there on the search page and stuck some YouTube (also Google) links below that, so if you want to read an article you have to scroll down past all the Google generated fluff.
I performed the exact same search with DuckDuckGo, and here's what I got.
And that's not to mention all the "news" sites that have straight up fired their human writers and replaced them with AI whose sole job is to just generate word salads on the fly to keep people engaged and scrolling past ads, accuracy be damned.
This is one of the emergent threats from AI: disseminating lies with the plausible deniability of "What, I didn't lie, the computer did it." Health insurance companies are already killing people by having AI deny claims. I predict AI will be used to commit genocide scot-free.
You mean like what's happening in Gaza right now? You think all those weapons of war made in the last half decade don't have AI routines programmed into them? You think the Iron Dome works like Space Invaders with people clacking buttons, rather than an aimbot shooting before you can even comprehend there's a target to shoot at?
All AI is doing is amplifying problems that already exist. Too many people lack media literacy, and too many people resort to anger and opposition when they don't understand something.