Recently, an amusing anecdote made the news headlines pertaining to the use of ChatGPT by a lawyer. It all started when a Mr. Mata sued an airline, claiming that years earlier a metal serving cart had struck his knee. When the airline filed a motion to dismiss the case on the basis of the statute of limitations, the plaintiff’s lawyer filed a submission arguing that the statute of limitations did not apply due to circumstances established in prior cases, which he cited in the submission.
Unfortunately for the plaintiff’s lawyer, the defendant’s counsel pointed out that none of these cases could be found, leading the judge to request that the plaintiff’s counsel submit copies of these purported cases. Although the plaintiff’s counsel complied with this request, the judge’s reply (full court order PDF) was curt and rather irate, pointing out that none of the cited cases were real, and that the purported case texts were bogus.
The defense that the plaintiff’s counsel appears to lean on is that ChatGPT ‘assisted’ in researching these submissions, and had assured the lawyer – Mr. Schwartz – that all of these cases were real. The lawyers trusted ChatGPT enough to allow it to write an affidavit that they submitted to the court. With Mr. Schwartz likely to be sanctioned for this performance, it should also be noted that this is hardly the first time that ChatGPT and kin have been involved in such mishaps.
Gullible Counsel
With the breathless hype that has been spun up around ChatGPT and the underlying Large Language Models (LLMs) such as GPT-3 and GPT-4, to the average person it may seem that we have indeed entered the era of hyperintelligent, all-knowing artificial intelligence. Even more relevant to the legal profession is that GPT-4 seemingly aced the Uniform Bar Exam, which led many to suggest that perhaps the legal profession was now at risk of being taken over by ‘AI’.
Yet the evidence so far suggests that LLMs are, if anything, mostly a hindrance to attorneys, as these LLMs have no concept of what is ‘true’ or ‘false’, leading to situations where for example ChatGPT will spit out a list of legal scholars convicted of sexual harassment, even when this is provably incorrect. In this particular 2023 case where law professor Jonathan Turley saw himself accused in this manner, it was fortunately just in an email from a colleague, who had asked ChatGPT to create such a list as part of a research project.
The claim made by ChatGPT to support the accusation against Turley was that a 2018 Washington Post article had described Mr. Turley as having sexually harassed a female student on a trip to Alaska. Only no such trip ever took place, the article cited does not exist, and Mr. Turley has never been accused of such inappropriate behavior. Clearly, ChatGPT has a habit of making things up, which OpenAI – the company behind ChatGPT and the GPT-4 LLM – does not deny, but promises will improve over time.
It would thus seem that nothing ChatGPT generates can be considered to be the truth, the whole truth, or even a grain of truth. To any reasonable person – or attorney-at-law – it should thus be obvious that ChatGPT and kin are not reliable research tools, whether for a court case or for homework as a (law) student.
Use Only As Directed
In recent years, the use of LLMs by students to dodge the responsibility of doing their homework has increased significantly, along with other uses of auto-generated text, such as entire websites, books and YouTube videos. Interestingly enough, the generated text is often believable enough that it is hard to tell whether a specific text was generated or written by a person. But especially when the “temperature” is turned up (that is, when the LLM has been set to accept a broader range of next-word probabilities while generating its strings), the biggest giveaway is often in the citations and references in the text.
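To make the ‘temperature’ knob concrete, here is a minimal sketch in Python (emphatically not OpenAI’s actual implementation, and with made-up tokens and scores) of how temperature rescales next-token scores before one is sampled:

import math
import random

def sample_next_token(logits, temperature=1.0):
    # Rescale raw model scores by the temperature, convert them into
    # probabilities, then sample. Higher temperature flattens the
    # distribution, so unlikely continuations get picked more often.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Toy scores for the token that might follow a citation fragment.
logits = {"2019)": 3.2, "2018)": 1.8, "1997)": 0.4}
print(sample_next_token(logits, temperature=0.2))  # almost always "2019)"
print(sample_next_token(logits, temperature=2.0))  # noticeably more random

At low temperature the model clings to its most probable continuation; turn it up and it happily wanders, which is where creative-sounding but fabricated details tend to creep in.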
This is helpfully pointed out by Hannah Rozear and Sarah Park, both librarians at the Duke University Libraries, who in their article summarize why students at Duke and elsewhere may not want to lean so heavily on asking ChatGPT to do their homework for them. They liken ChatGPT to talking with someone who is hallucinating while under the influence of certain substances. Such a person will confidently make statements, hold entire conversations, but fail to follow any coherent reasoning or be able to provide evidence to back up these statements. This is basically why we stopped going to oracles to get life advice.
What both of them do think ChatGPT is good for is asking for leads on a specific topic, as well as suggestions on where to find more information, such as library databases. You can even use ChatGPT as a fancy grammar and style checker, playing more to the strengths of an LLM. What they explicitly warn against, however, is asking ChatGPT to summarize a source, to write a literature review for you, or to have any knowledge of current events.
In the end, you do not want to be the student who hands in work that ChatGPT ‘helped’ with, only to learn afterwards that your literature review paper references points that were not in the reviewed literature, and that half the cited works and/or authors never existed. Although this would probably be less awkward than having it happen to you as a lawyer, in the midst of a court case.
Never Trust Commercials
When Microsoft announced its GitHub Copilot ‘AI’, which was supposed to make it possible to generate entire blocks of code, or even entire programs, with the press of a button, it had many people imagining that before long manually writing code would become obsolete, and that even executives could generate the Next Big Thing with a simple query describing the application. It didn’t take long for reality to set in here, of course.
As reality began to set in, it dawned on people that writing code is (generally) more involved than simply asking Google and blindly copying code from the resulting StackOverflow and similar links. Does this code contain a security vulnerability? What is the right way to elegantly encapsulate these complex dataflows without negatively impacting performance and future maintenance? Will Copilot maintain this code that I do not understand for me? Will Copilot write the accompanying documentation?
The same principle applies to writing a literature review or a legal submission, in that as the person responsible for its production you are expected to understand every part of it. If you blindly trust that citations and references spit out by an LLM are true, without doing your due diligence, that makes you the dunce.
In the case of Mr. Schwartz and his ill-fated attempts to improve his legal prowess, he failed to do even the barest modicum of cross-referencing, beyond asking ChatGPT whether it was sure these cases were real, with the following ChatGPT transcript from the affidavit illustrating the chatbot’s false confidence:
[Q:] ls varghese a real case
[A:] Yes, Varghese v., China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019) is a real case.
[Q:] What is your source
[A:] I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.
[Q:] Are the other cases you provided fake
[A:] No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.
What is likely to trip the average person up here is that ChatGPT maintains the same tone that a real person would, employing the first person (‘I apologize’) form, making it appear as if there is an intelligence capable of reasoning and comprehension behind the chat box. Thus when ChatGPT insists that it has found these cited cases in these real legal databases, it can instill a level of confidence that this is true, even when the proper course of action is to perform this check oneself.
More experienced ChatGPT users will certainly recognize “I apologize for the confusion earlier.” along with “As an AI language model…” as a warning sign to seek legal advice elsewhere.
Make-Believe AI
Although it is tragic that an attorney stands to lose his job due to ‘AI’, it’s illustrative that the reason for this is the exact opposite of what the media has been breathlessly warning us would happen. In the expectation that LLMs somehow express a form of intelligence beyond that of a basic SQL database, we have been both working up our fears of the technology and using it for purposes for which it is not suitable.
Like any technology there are things for which it is suitable, but true intelligence is displayed in knowing the limitations of the technologies one uses. Unfortunately for those who failed the LLM intelligence test, this can have real-life consequences, from getting a failing grade to being fired or sanctioned. As tempting as it is here to point and laugh at Mr. Schwartz for doing a dumb thing, it would probably be wise to consider what similar ‘shortcuts’ we may have inadvertently stumbled into, lest we become the next target of ridicule.
“Although it is tragic that an attorney stands to lose his job due to ‘AI’”
I don’t think so. If he is taking this kind of shortcut in this case, what other clients has he served poorly? If he lacks the skill and integrity the job requires, he should lose the job.
There’s a chance the past cases he’s worked on will be brought back to the court system. Past lawsuits may get overturned and refunds would be required, people who were jailed could go free, etc. He tried to save time and created a potentially huge and costly legal mess
I bet it could be better than some of the lawyers I have experienced. Mine was barred, and it involved my case, but not 1 word to me. Docket statements change. I get breadcrumbs.
Exactly! Imagine your physician asked ChatGPT what medication to give you, and did so without checking anything.
I don’t see how this should be different; the lawyer basically committed fraud, and should be disbarred because of it.
IBM’s Watson was supposed to do just that. (and more…)
However, it was way ahead of its time and doctors were frankly skeptical about trusting it. Especially since they were still responsible for whatever advice it would give.
Watson was trained on vetted data. ChatGPT was trained on unverified internet scrapings. Bit of a difference.
It kills me to say it as a doctor but this is why we need strong physician lobby groups. The tech is coming like a freight train if not already arrived. It will only be legislation that prevents the almighty dollar from triumphing over thousands of years of professional standards. Our profession and many, many more are under constant attack from many sides. The last thing I need stripping me of my livelihood (and injuring patients by straight making sh*t up) is chatbots. What an ignoble end that would (will) be.
While I agree with your general statement, using the metaphor of “tech is coming like a freight train if not already arrived” is a little derailed in the US.
Maybe that’s a good thing
That’s exactly what they are now selling as one of ChatGPT’s applications: your doctor would have ChatGPT correspond with the patients to save time, and they would simply glance over (or not) to see that the AI isn’t instructing the patient to go stick a fork in an electrical outlet.
Not. ChatGPT is a public beta of natural-language processing and generation, and the fact that it can answer many (not all) questions with surprising usefulness is a happy byproduct.
Specialist use of this functionality would involve ChatGPT’s natural-language capability as a front end to a repository of verified expert data (medical, etc).
I would also expect an additional layer of self-checking of outputs. Not to mention, the oversight of human doctors, and their corrections being folded back into the knowledge-base.
https://medicalxpress.com/news/2023-04-chatgpt-outperforms-physicians-high-quality-empathetic.html
“In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors? If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients”
The lawyers hosting the Opening Arguments podcast have said they believe ChatGPT is being used as a scapegoat to cover up malfeasance, as his son was able to fool the bot into providing similar results. The kicker is that someone from the questionable lawyer’s office submitted a fake notary stamp on a document. ChatGPT didn’t do that…:-\
I agree with Not Tragic. And I can’t believe that no one has mentioned Steve Lehto, yet.
Steve is a well-respected practicing attorney and did a video on his YT channel about this case, explaining just how stupid this lawyer was.
Similar stochastic and/or statistical models have been applied in narrowly defined problems, where they have been successful. The AI advocates have been making the behaviorist argument of AI: if it walks like a duck and talks like a duck, it is a duck. If it’s better than people at some task, it must be at least as intelligent as people in that task.
However, when you take this duck and throw it in the water, it sinks – why? Because it was a wind-up metal toy made to seem like a duck. The failure proves the opposite point: the tasks where this kind of AI succeeds can be solved without applying intelligence and the behaviorist argument is misleading.
Couldn’t agree more – since the intelligence is artificial (not real), it’s no surprise then that the legal cases, trip to Alaska, and research it provides are also not real. Bwa ha ha ha 😁😂🤣🤣🤣
I agree. I do think it’s very powerful technology, but people just don’t understand that its entire purpose is to *sound* smart; not to *be* smart. Of course, in the past, excellent writing was a sign of intelligence in other areas, but that’s only really true for humans. It’s like making a mechanical duck that can walk the fastest. If you judge it as a duck, you might also assume it can swim fast. But those correlations don’t apply to machines the same way they do to animals.
It’s not intended to sound smart. It’s intended to sound like a human, and I have no idea why people think it sounds intelligent. I’ve never gotten anything out of ChatGPT on a topic I know something about that didn’t sound like nonsense.
The main thing LLMs have taught me is that Turing was incredibly wrong in thinking that humans could be good arbiters of intelligence. The average person is terrible at it. It’s easier just to assume something’s intelligent because it takes critical thought to find the flaws (which… explains many political speeches, too).
A lot of people are fairly afraid of deeper contemplation and will simply take any fact at face value as long as it somewhat aligns with their own viewpoint. Confirmation bias is frankly quite strong among everyone.
It isn’t surprising that large language models that have a tendency to always provide an explanation for their statements will be seen as intelligent, since most people don’t enjoy pondering over the technical flaws in statements, especially when the AI has dreamed itself away into a world of nonsensical fiction that to any normal person seems completely reasonable.
Secondly, it is likewise not surprising that most of these LLMs are completely inept as far as actual technical knowledge is concerned, since most of them are just trained on large amounts of mostly random data sourced from the internet, without any real fact checking to see if the information is even correct to start with.
Simply stated, garbage in = garbage out.
Turing wasn’t asking whether the machine was intelligent, but whether we could tell it apart. Everyone else after that got it wrong to say that it IS if we can’t.
Turing’s point, in the paper, was that the results of the test were a replacement for the question “Can machines think?” It’s literally in the abstract of the paper.
The issue with Turing’s posited alternative is that the implication is that interactions with humans are the only point of comparison, which is wrong.
We don’t judge humans based solely on their interactions with humans – we judge them based on their interactions with the world, and chatbots, for instance, don’t interact with the world at all.
Yes, in response to the point that nobody knows what “think” means. It replaces a non-meaningful question with a meaningful one that can be examined. He didn’t claim that machines are intelligent because they can pass this test.
Turing actually also suggested that in order to pass the test, the computer should also replicate typical human mistakes and non-intelligent behavior. It was clearly not meant to be a test of intelligence, but a test of imitation.
“Yes, in response to the point that nobody knows what “think” means. It replaces a non-meaningful question with a meaningful one that can be examined.”
Yes. And my entire point is that this replacement is not sufficient, and it is, in fact, a bad test in all the ways you can imagine because humans are hard-wired to personify things since so much of our brain is built on pattern matching to human behavior.
“Turing actually also suggested that in order to pass the test, the computer should also replicate typical human mistakes and non-intelligent behavior.”
The problem that I’m trying to point out is not in what Turing required of the object being tested.
The problem I’m trying to point out is that Turing had a human be the *judge.* Comparing a machine intelligence against a human is fine. Using a human to do the comparison is where the flaw is.
Turing *always* posited tests like that using humans as the judges. The original point of the imitation is that if we presume humans are intelligent, if we can’t tell the difference between a human (presumed intelligent) and the machine intelligence, it’s intelligent.
The flaw is the assumption that humans are the only things capable of making an unbiased determination. They aren’t, and for that matter, they can’t. The universe, however, can.
>if we can’t tell the difference between a human (presumed intelligent) and the machine intelligence, it’s intelligent.
Those weren’t Turing’s words, that’s later interpretation. His point was that since we don’t know what thinking means, we can at least look for differences by pitting man against machine. The point is to see at which point we stop seeing a difference, because that reveals what our understanding of intelligence is.
“His point was that since we don’t know what thinking means, we can at least look for differences by pitting man against machine.”
His entire paper was a treatise that we *can talk about machines thinking.* That’s what he spent the majority of the paper on – responding to criticisms on the *very concept* of machines thinking.
He literally says:
“The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
right after saying that computers would be able to “play the imitation game so well that an average interrogator will not have more than 70 per cent chance.”
His point in bringing up the imitation game was that when you’re unable to tell the difference between the two, humans will start referring to machines as thinking anyway (“without expecting to be contradicted”). Most of his responses to objections on the *concept* of machines thinking would fail violently against a program just capable of passing imitation.
In that sense he was right, although he was wrong on one important note: it’s not “general educated opinion” that would talk about machines thinking, because with enough education it’s easy to see the flaws in the imitation game. But an *average* person of course refers to it that way, in the same way that it’s difficult for an average person to avoid the Clever Hans effect from a con man.
> in the same way that it’s difficult for an average person to avoid the Clever Hans effect
This effect isn’t helped by the fact people keep gaslighting the public about AI by constant bombardment of unsubstantiated claims, while others write vague articles that misrepresent what’s happening. So many people want to believe.
Every field has its hype mongers, and their effect is to delay practical applications and misdirect funding because the public and even the professionals become ill informed about what’s going on.
I don’t think it’s “hype mongers.” I think the problem is more fundamental – attempting to *imitate* human behavior isn’t the same thing as attempting to *replicate* human behavior.
There’s a point that Turing makes in his paper that hey, maybe it’d be easier to program a *child’s* brain, and raise it the same way a human’s was. Which means it’d need to be able to interact on its own, take feedback on its own, and continue to add to its training set on its own.
Which you could do! It’d be harder to find a way to make it stable, though, because it’s a basic signal processing issue – systems with feedback always contain the possibility of instability.
But the bigger issue is – why the hell would you want to? Who would want to create an “AI assistant” that’s unique to everyone? How would that work? You get complaints like “uh, my assistant just told me to go eff off, it doesn’t feel like working” after you spend 20 years training it with information feedback?
I was working with chatGPT to flesh out some creative writing.
Sometimes it’s a lot more creative than it should be, it renamed a main character for no apparent reason.
Any lawyer who pretended chatGPT was an associate doing research is lying to the court, they just used it to delay proceedings.
renamed a main character… hey it worked for Frank Herbert
Even if it was an associate, even those have their work double-checked before putting everything on the line.
Warning: Contrarian View:
I’m going to be the contrarian here and say that no, this attorney shouldn’t be sanctioned at all.. he is clearly embarrassed at this point, I don’t think this is something to ruin someone’s career over nor should a court impose a fine or anything else, and here’s why:
After many years in a corporate job (and also some years in a small business world), attorneys do ALL of their work nowadays using a simple search tool on a database such as Lexis/Nexis or Westlaw.. so they get into the practice of relying on these tools no differently than a carpenter relies on their hammer.
In this particular instance, he “should have” cross-checked his new tool to avoid the embarrassment, but outside of that, I’ve sat with attorneys who look up 6 or 7 subjects at a time on an online database because oftentimes they are trying to add credibility to their argument (as in, “my client did that just like they did in this other case that was ruled in this person’s favor”). It wasn’t a shortcut at all – this attorney used a different tool (ChatGPT) instead of the very expensive ones out there. These other databases in lawyers’ offices do everything that ChatGPT does without framing it in simpler language; otherwise, as pointed out, ChatGPT acts like it’s responding to an upgraded SQL query.
Without a doubt, for any software tool being used, people should generally get in the habit of double checking their sources and pre-troubleshooting any result. But long gone are the days where attorneys (and even doctors) have “real books” that they consult, it’s all on a computer. And they take the answer and run with it. They don’t generally have the time to spend on cross referencing a bibliography.
This falls into the category of understanding the limitations of any collective database and works itself into common sense statements like “everyone using a computer should back it up,” – yes, they should.. but do they follow it? Not generally. I admit, I don’t regularly backup everything as often as I should. I know I’m not alone.
But get the gov’t out of it. Do we admonish people for having a moron moment? Not if it didn’t cause harm. We recognize as a people that they become their own worst critic as the sleepless night caused by looking like an idiot in a courtroom is enough to drive home the point. His practice will also get a negative Yelp and Google review for not being more studious which will hurt his future income.. so he’s already gotten spanked.
But PLEASE let’s not become a civilization where a government-system of punishment is the “first” option. It should always be the last.. Darwinism takes care of the rest.
ChatGPT is not a database. It’s a language model, and confusing the two is a grave mistake.
ChatGPT breaks the language it learns all the way down to tokens (word fragments) and then produces a highly compressed probability “table” of sorts, of what tokens should come next given whatever prompt or previous text it is seeing right now. The probability table is like taking a high resolution photograph and running a very lossy compression algorithm on it – it retains almost nothing of the original image, and when “de-compressed” back into language, the algorithm re-generates the text.
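A real transformer is vastly more sophisticated than this, but the flavor of ‘statistics about what comes next, with no notion of truth’ can be shown with a deliberately dumb toy in Python: a table of observed next words built from a few made-up sentences.

import random
from collections import defaultdict

# Toy next-word table: a real LLM uses a neural network over subword tokens
# and long contexts, but the output is still "likely continuations", not facts.
corpus = ("the case was dismissed . the case was real . "
          "the case was cited in the opinion .").split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)  # record every observed continuation

word = "the"
out = [word]
for _ in range(8):
    word = random.choice(table[word])  # sample a plausible next word
    out.append(word)
print(" ".join(out))  # fluent-looking, but nothing here ever checks a fact

Nothing in that loop ever consults a source; it only asks ‘what tended to follow this?’, which is exactly why the output can be confidently wrong.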
“In this particular instance, he “should have” cross-checked his new tool to avoid the embarrassment, but outside of that,”
All that text. And then you give the exact reason why this lawyer should be sanctioned. :)
If a lawyer f*’s up, it’s quite a lot more than an “embarrassment”. People get hurt when lawyers f* up.
Injustice will be done, the injustice will be sanctioned by the court, and the injustice will be enforced by the government.
Nothing worse than that. Not even a totalitarian dictatorship government. Because with a totalitarian government, you still know what to expect and know how to avoid the injustice. Or else share the injustice with everyone around you, which softens the pain.
Here’s the difference… LexisNexis, Westlaw, etc. won’t return made-up cases. Those tools may be expensive, but they return factually correct information, and it’s up to the lawyer to decide if those facts are pertinent to the case at hand.
ChatGPT, on the other hand, is returning fiction, and it will then verify this fiction as “fact” when queried on its veracity. The lawyer has removed himself from the process, and it’s his supposed training and experience that is being paid for by the client. He not only shortchanged the court, he shortchanged his client.
This is, in fact, the reason the bar exists. The idea that ChatGPT “passed” the bar is insane. A photocopier can “pass” the bar, too! The bar is there to demonstrate that the lawyer has gone through enough training to answer common questions, which means that they will, in fact, know that they have to make sure cases they quote actually exist.
There’s no world in which this lawyer shouldn’t be sanctioned. You might “feel bad” for the attorney. Which is fine. But there are ethical rules that he agreed to, and he 100% violated them.
The fact that ChatGPT is not a database, that it does not directly copy data, means that it has actually internalized some of the knowledge with high fidelity.
What it lacks is self-feedback and criticism, which would weed the “hallucinations” out of the probabilistic nonsense. Outwardly, it’s acting like a person with Wernicke–Korsakoff syndrome, who “confabulates” information because they’re just saying whatever comes out randomly.
“I’m going to be the contrarian here and say that no, this attorney shouldn’t be sanctioned at all…”
Couldn’t disagree more. In fact, the answer to everyone’s concerns that AI will run amok and wreck things is to make EVERY current user of ChatGPT personally, professionally and/or corporately responsible for any harms arising from their use of AI, and to ensure that this responsibility can’t be EULA’d away. So people will treat ChatGPT like the parlour curiosity it is, until there’s broad agreement on regulation and protections around the wider deployment of LLMs.
Please replace “lawyer” with “my doctor” and repeat your analysis.
There are people other than the lawyer involved. I do not care if the lawyer is embarrassed. I care that some innocent schmo does hard time, or that the lawyer cost the court system tons of money and time that could have been spent serving other citizens.
Also, professional societies are supposed to be self-policing. Letting this go is saying “our professional permits this conduct”
I’d like to replace my Dr with AI at the moment… my human one goes like “Yes your symptoms are undoubtedly caused by any one of two dozen autoimmune conditions….” then segues to “the chances of you having (specific autoimmune condition) are so small as to be very unlikely.”
I try to get him to math it out, if I definitely have such a condition, that removes me from one in thousands and puts me into ~4% chance of having a given one, that plus other symptom combos raises some to 25% or better, but no, blah blah blah hearing hoofbeats isn’t zebras, it’s never zebras. Dude the definitely autoimmune puts me in Africa, I’m right here, Serengeti, and you’re STILL gonna tell me they ain’t zebras.
Would you like a diagnosis based on probability, or on what’s actually ailing you?
A differential diagnosis is statistical and a key part of medicine. Because you are dealing with massively underdetermined systems.
Dude, I know you are setting yourself up to be the AI contrarian, but you are pretty ignorant about how it works and also about how the brain / mind work. So you may need to humble yourself on this one.
This is a problem that comes up a lot and unfortunately leads to somewhat justifiable reactions like this.
Autoimmune disease is notoriously difficult to diagnose and challenging to treat. What people (and patients in general) don’t grasp is that, often, there are many possibilities. Once the major, scary stuff is ruled out (cancer, e.g.) you’re left with a bunch of maybes. People want black and white answers and a clear, unequivocal diagnosis, and it just doesn’t work like that yet, if it ever will.

So even making a diagnosis is iffy, and *here’s the kicker* many, many times once you have that diagnosis, there isn’t anything to do about it. You can only know after the fact though, but when I hear “the doctor doesn’t care” this is often what is happening. Not to be dismissive, but he/she may be saying something along the lines of “There is no clear diagnosis to be made, and even if there was, the likely possibilities do not have good treatment options anyway.”

Patients still demand a firm diagnosis anyway, and people do seem to take some comfort in getting the label even if the treatment is “expectant” (which just means wait and do nothing). I obv don’t know your situation though. Some doctors do, in fact, just suck.
If you are not already a subject matter expert current AI tools are just artificial imagination and you have no way to be sure that you have anything useful in the output.
It also doesn’t help that the title for a lawyer is LLM, meaning ‘Legum Magister’, or ‘Master of Law’.
While in the case of AI LLM means ‘Large Language Model’.
Or, translated to layman’s terms: ‘Gullible piece of Idiot Savant’. :)
An example of garbage in, garbage out. If the source materials accessed by AI are incorrect or biased, the output will be, too.
Even if the source materials were perfectly accurate and correct, ChatGPT would still deliver garbage output. Part of the way it works is to pick tokens at random – it makes stuff up. It doesn’t know true from false, right from wrong or up from down. It only knows statistics about sequences of tokens.
I’m less worried about a lawyer citing bogus references, where the lawyer’s opponents have, as their primary job, discrediting or minimizing the lawyer’s claims. That’s a situation where fact checking is going to be incredibly rigorous.
However, a future where your search engine provider looks at your past search and reading history and then creates really attractive looking answers to your search queries, that just happen to be full of ads about related topics, is going to be a difficult future for people trying to find accurate answers, and a very seductive future for people trying to find confirmation for their prejudices.
“where your search engine provider looks at your past search and reading history and then creates really attractive looking answers to your search queries, that just happen to be full of ads about related topics”
Beg pardon? You’re describing current Google search results. Can’t get that much worse. AI might even bring some search relevance back.
Some searches really seem to be going off into LaLa land recently: I get a first page of garbage where it doesn’t directly match one word, and it substitutes inaccurate synonyms despite the context provided by the other terms, which it basically threw out.
Rather curious that all the cases were bogus, makes you wonder if ChatGPT is more than we think and was setting him up.
And incidentally: Did they use AI to check if the cases were bogus or real?
I mean it’s a fine tool, as long as you check the output. If the original lawyer had just had it find real prior cases and then verified them, he would have used the AI correctly, and would still have saved lots of time and effort.
Excuse me for the construction of my post, it reads like it was AI generated..
Nah. ChatGPT is simply far less than people think it is.
ChatGPT is a chatbot – it is made to string together reasonable looking sentences in response to things you type into it.
It is not:
1. A programmer
2. A general purpose reference work (it ain’t Wikipedia)
3. A search engine – use Google, for Pete’s sake
4. A searchable database of legal information
5. Intelligent.
It’s a freaking CHATBOT. Marvel at how well it manages to make reasonable looking sentences and stop expecting it to be a know-it-all oracle.
Oh no! They automated academics!
It can write an episode of planet money.
https://www.npr.org/2023/05/25/1178290105/ai-chatgpt-artificial-intelligence-series-part-one
ChatGPT has never once given me the correct answer to a math problem. Not once. I’ve asked it stuff like how long in seconds it would take to spend a million bucks at a thousand dollars per second… and a billion… and it explained the steps used to come to the completely wrong answer. I had to correct it and it was like oh yeah my bad, you’re right. It seems ChatGPT’s ability to answer anything asked of it is in a constant state of entropy… i.e. it gets dumber but more creative with its answers every day.
The math thing is strange to me because you’d think programming it to solve math problems would be pretty easy.
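For what it’s worth, the arithmetic in that example is trivial to check without any LLM; a few lines of plain Python (or a pocket calculator) give the answers the chatbot fumbled:

rate = 1_000  # dollars spent per second
print(1_000_000 / rate)      # 1,000 seconds, roughly 16.7 minutes
print(1_000_000_000 / rate)  # 1,000,000 seconds, roughly 11.6 days

An LLM predicts text about arithmetic rather than actually doing arithmetic, which is why bolting a calculator or code interpreter onto the model tends to work better than asking it to ‘reason’ the numbers out.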
For a hallucinating local LLM (guess the same should work for the OpenAI ones), I’m using a prompt along these lines:
“You’re a critical fact-checking AI bot. Your task is to verify and fact-check everything an untrustworthy agent says. Read the following response written by the untrustworthy agent and write a list of all facts the agent seem to believe: …”
Subsequent prompts then form search queries for every listed fact, fuse the results, and verify them. If contradictions are found, confront the original response context with a correction, rinse and repeat.
Very slow, but far more resilient to hallucinations.
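Roughly, that loop might look like the sketch below; run_llm and web_search are hypothetical stand-ins for whatever local model and search backend you have wired up, so treat this as an outline rather than working tooling:

def run_llm(prompt):
    # Hypothetical stand-in: call your local LLM (or an API) here.
    raise NotImplementedError

def web_search(query):
    # Hypothetical stand-in: call whatever search/retrieval backend you use.
    raise NotImplementedError

FACT_CHECK_PROMPT = ("You're a critical fact-checking AI bot. Read the following response "
                     "written by an untrustworthy agent and list every fact the agent seems to believe:\n\n")

def verify_response(response, max_rounds=3):
    # Iteratively extract claims, look for evidence, and ask the model to correct itself.
    for _ in range(max_rounds):
        raw = run_llm(FACT_CHECK_PROMPT + response)
        facts = [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]
        corrections = []
        for fact in facts:
            evidence = web_search(fact)
            verdict = run_llm("Claim: " + fact + "\nEvidence: " + evidence +
                              "\nDoes the evidence support the claim? Answer yes or no.")
            if verdict.strip().lower().startswith("no"):
                corrections.append("The claim '" + fact + "' conflicts with: " + evidence)
        if not corrections:
            return response  # nothing left to fix
        response = run_llm("Rewrite your earlier answer, fixing these errors:\n" +
                           "\n".join(corrections) + "\n\nOriginal answer:\n" + response)
    return response

As the comment says, this is slow (every claim costs a search plus at least two more model calls), but it at least forces the model to argue against outside evidence instead of just reassuring you.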
Just google it at that point
Soon there will be a controversy where a politician claims something based on a chatbot, like
https://youtu.be/r7l0Rq9E8MY
AI is basically still in its infancy.
However, due to the ‘compound exponential’ rate of growth of this technology it will more than likely (very soon) reach ‘super-intelligence’ level or beyond.
Let’s face it – any intelligence is only as good as the data at its disposal, and its ability to verify/test that data.
I try to imagine a world where AI has been allowed to re-write itself on a network of quantum computers, with access to ALL the information in the world. Linked to the internet, satellite, surveillance, and other networks, we will eventually have an ‘all-knowing’, ‘all-seeing’ ultra-intelligence that ‘knows’ everything about everybody, all at once, in real time.
Whoever controls (at least the algorithms) of such a creation would literally be able to rule the world.
Imagine a hopelessly delusional and aggressively overconfident super-intelligence. Yep, that’s what we’re going to get.
How would that be different from today’s top politicians?
It’ll be a *super-intelligence*. It’ll generate delusional nonsense at 10x the rate of even the most prolific human politicians.
Hopefully it will be funny in a cosmic sort of sense