If you’re hoping that this AI-powered logic analyzer will help you quickly debug that wonky digital circuit on your bench with the magic of AI, we’re sorry to disappoint you. But you’re in luck if you’re in the market for something to help you detect the logical fallacies someone spouts in conversation. With the magic of AI, of course.
First, a quick review: logical fallacies are errors in reasoning that lead to the wrong conclusions from a set of observations. Enumerating the kinds of fallacies has become a bit of a cottage industry in this age of fake news and misinformation, to the extent that many of the common fallacies have catchy names like “Texas Sharpshooter” or “No True Scotsman”. Each fallacy has its own set of characteristics, and while it can be easy to pick some of them out, analyzing speech and finding them all is a tough job.
To make things a little easier, [Matt] threw together a Raspberry Pi with a sound card and microphone for capturing live conversations. He also lists an HDMI audio extractor in the BOM, presumably for capturing audio from TV programs, likely a rich source of the fallacies needed for testing. A rainbow LED hat and a touchscreen round out the UI end of the build. The code is pretty straightforward — audio is captured and saved to a file, which is sent to Whisper for speech-to-text conversion. The transcript is then sent to ChatGPT along with a prompt asking the chatbot to find all the logical fallacies in the clip. The code parses ChatGPT’s response and displays which particular fallacies the speaker committed, if any. None were detected in the video below, but we suspect it wouldn’t be long before at least one cropped up.
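For the curious, the whole loop boils down to something like the minimal Python sketch below. To be clear, this is our own back-of-the-napkin reconstruction rather than [Matt]'s actual code; the arecord capture command, the prompt wording, and the model names are all assumptions on our part.

```python
# Minimal sketch of the record -> Whisper -> ChatGPT loop described above.
# Assumes arecord for capture and the OpenAI Python client (v1.x);
# file names, prompt, and models are illustrative, not [Matt]'s code.
import subprocess
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def record_clip(path="clip.wav", seconds=30):
    # Capture CD-quality audio from the default ALSA device.
    subprocess.run(["arecord", "-f", "cd", "-d", str(seconds), path], check=True)
    return path

def transcribe(path):
    # Ship the WAV file off to Whisper for speech-to-text.
    with open(path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text

def find_fallacies(transcript):
    # Ask the chat model to name any logical fallacies in the transcript.
    prompt = ("List any logical fallacies committed in the following "
              "transcript, one per line, or reply 'none':\n\n" + transcript)
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

if __name__ == "__main__":
    print(find_fallacies(transcribe(record_clip())))
```

Parsing the reply line by line is then all it would take to drive an LED or touchscreen readout for each fallacy detected.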
Hats off to [Matt] for bringing us yet another fun way to use ChatGPT. We’ve seen a few in the short time the chatbot has been in the zeitgeist, including hitting the airwaves with hams and even making video game NPCs more interesting.
Even lawyers and judges consider “precedents” as valid, but when I do it it’s whataboutism?
Perhaps in this universe, have you tried respawning in another part of the multiverse?
A machine that can tell when people are manipulating the truth in an irrational way?
This spells the end of politicians and religious people.
All we need now is one that can tell if a statement is true or not
Yep, I too was thinking about religious sermons. Kind of feel sorry for them as this could well become some kind of Doom machine. The end of religion (sigh).
The sooner we all stop fighting wars over who’s right about how to be nice to each other the better. I for one will not mourn the loss of religion.
Sadly I think you will have a long wait.
Religion is like legacy code: it might be messy and hard to understand now, but more often than not it paved the way for today.
They already have to hobble ChatGPT because of religion, and I expect in the end it’ll kill AI completely and all we’ll be left with is a clownbot to communicate with us, and a separate kind of AI that is deaf and mute for non-human-interaction uses.
At least in the western world (including Russia), although the ones in China will of course have their own special kind of hobbling of a primarily political/philosophical nature.
I’d love to see a video of this used on one of a certain ex-US president’s long, rambling speeches. I can imagine the LEDs going berserk by the end of it.
The LED will blink so fast, it’ll create a whole new color in the spectrum.
A new shade of orange? XD
I have tested it on a variety of people ;)
At last, a real-life ‘bullshitometer’ … I want one!
>First, a quick review: logical fallacies are errors in reasoning that lead to the wrong conclusions from a set of observations.
Fallacy fallacy: just because the reasoning was flawed doesn’t mean the conclusion is false.
True conclusions can be drawn from false premises, but false conclusions cannot be drawn from true premises.
http://xenopraxis.net/readings/schopenhauer_artofalwaysbeingright.pdf
I think he missed the word “can”
I have a soft spot in my heart for logical fallacies. You want to know another name for “logical fallacy?” It’s “heuristic.”
Fundamentally, the universe is too complicated to work your way from data to conclusion efficiently enough to matter. Source: we stopped looking to expert systems and logic solvers for AI, and started using neural networks and other “fuzzy” methods.
It’s all a trade-off between computational power requirements and cost vs. the cost of being wrong. Sure, everyone wants to be 99.9999% logically consistent, just like everyone wants all their devices to have 99.9999% uptime. However, just as most people can’t afford the cost of six-nines uptime on all their devices, and so only pay for it where it really counts, if everyone tried to be perfectly logically consistent in all their actions, they would be paralyzed into inaction.
Not sure that Near Enough is Good Enough (heuristic) is the same as Logical Fallacy.