ChatGPT has been put to all manner of silly uses since it first became available online. [Engineering After Hours] decided to see if its coding skills were any chop, and put it to work programming a circular saw. Pun intended.
The aim was to build a line following robot armed with a circular saw to handle lawn edging tasks. The circular saw itself consists of a motor with a blade on it, and precisely no safety features. It’s mounted on the front of a small RC car with a rack and pinion to control its position. [Engineering After Hours] has some sage advice in this area: don’t try this at home.
ChatGPT was not only able to give advice on what parts to use, it was able to tell [Engineering After Hours] how to hook everything up to an Arduino and even write the code. The AI language model even recommended a PID loop to control the position of the circular saw. Initial tests were messy, but some refinement got things impressively functional.
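For reference, the control strategy it suggested is conceptually simple. Below is a minimal sketch, in Python, of the PID logic the Arduino code would implement — illustrative only, not the code from the video, and the gain values are placeholders:

```python
# Minimal PID position loop -- a sketch of the technique, not the video's code.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured, dt):
        """Return the actuator command for one control step."""
        error = setpoint - measured                  # how far the saw is off target
        self.integral += error * dt                  # accumulated error (I term)
        derivative = (error - self.prev_error) / dt  # rate of change (D term)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Each loop iteration: read the line sensor, compute the correction, and
# drive the rack-and-pinion motor with the result. Gains must be tuned.
pid = PID(kp=1.2, ki=0.05, kd=0.3)
```

Tuning those three gains is where "messy initial tests" usually come from: too much P and the saw oscillates, too little and it lags behind the line.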
As a line following robot, its performance is pretty crummy. However, as a robot programmed by an AI, it does pretty okay. Obviously, it’s hard to say how much help the AI had, and how many corrections [Engineering After Hours] had to make to the code to get everything working. But the fact that this kind of project is even possible shows us just how far AI has come.
No one seems to be talking about how these chat “AIs” are just repeating what they learned.
Nothing new or novel is coming out of this. It is just a lazier way to search for stuff.
Granted, it is “searching” a highly varied sum of many technical bits of information, but it isn’t giving you anything NEW that you couldn’t learn from reading the top 20 search results.
Which is also part of the problem.
Some of the wrong answers it gives are because the question or answer was misinterpreted. But many are because the pool of things that got fed in is just wrong information from the internet.
If I want to check my sources, I can follow the citations.
How do you do that with a convincingly argued answer from a chatbot?
You can’t treat it like a person. I can’t say “ok I trust this professor to give me good info on X because they are an authority”.
So, their answers fall into “random commenter/blogger on the internet” levels of trust. Which doesn’t account for much.
(Yes; I see the irony…)
99.999% of humans are just repeating what they learned. You’re not Isaac Newton. Most employers/other people seeking minds for tasks will not care if it’s parroting. It’s functionally the same in all but the most transcendental cases (you and I have never accomplished the transcendental in our lives).
The big secret of AI is not that it’s “merely” the Chinese room experiment. The deep dark Lovecraftian secret is that YOU are the Chinese room experiment.
Also: “ok I trust this professor to give me good info on X because they are an authority” — have you seen these trusted professors’ Twitter accounts? They are still just random bloggers on the internet in nearly every case. They will also get replaced by AI eventually.
The difference is that humans, in general, ‘know’ what they learned and can then apply it to everyday life, modifying it to fit the conditions at the time — and even rejecting what we have learned over time. We aren’t just parrots repeating what we heard. For example, changing a tire: at some point in our lives we are told how to do it, and we can then intuitively go out and change a tire because we know what it ‘means’. The AI would have to be ‘trained’ to do that and still wouldn’t ‘know’ what it is doing. At least that is how I think of it.
Agreed with the above. Just a ‘twist’ on the search engine concept. I have no problem with that… as long as we treat it as such and not as more than it is.
ChatGPT is not going to the big book of how to program an automated edger and looking up the chapter on circular saws. Have you even experimented with it? It puts together novel arrangements of concepts all the time. The concepts are connected, but they haven’t been arranged in a particular way before. To me, it seems the human brain just has more inputs and has many, many more connections to strengthen knowledge over time. I am sure ChatGPT can unlearn something as more evidence accrues that breaks a connection. The only reason you “know” anything is that it has been strengthened and confirmed by multiple inputs. In other words, we’re done for, there isn’t much time left now.
Knowledge is also generated by logical inference and generally by thinking. Not everything you know is based on someone or something telling you that.
What robots like ChatGPT do is simply copy; they do not analyze the information and synthesize knowledge, because they have no idea what it means.
Humans can analyze and synthesize information based on their learned knowledge – that’s how you come up with new ideas – ChatGPT merely copies and combines.
Humans *comprehend* things, though; even if we repeat stuff like 2+2=4, we know what it means — as far as I can tell, AI is pretty much a heap of probability that spits out whatever it thinks most likely / looks most right, no matter how stupid.
I’m still convinced that AI is not getting *better* it’s just the scaling up of computing power & data-sets that allow it to do pretty much the same stupid stuff faster on a much larger data-set.
Or to put it another way; more monkeys, more typewriters.
Pssst, it’s free content to put ads on.
But you can trust a “random commentator/blogger” IF they spell out their reasoning, and IF it works when you follow it through, and IF there aren’t major other factors at work which have been missed out — and you can trust them fairly well IF their conclusions match your observations. Maybe the age of AI could teach people to trust based on following chains of reasoning for themselves, and replicating work afterwards to verify it, rather than falling into the trap of assuming an authority is always correct. Regrettably, I fear that natural stupidity will complement artificial intelligence in the worst way possible: people will still have faith in authority opinions even as the authority becomes ever more divorced from reality.
The main problem is that 90% of the content is people posting code that doesn’t work and asking why – and the AI can’t distinguish between the question and the answer.
Also, much of the internet is already bot-generated junk made to catch clicks on search engines, which also gets included. There’s just so much noise that you cannot trust the AI to generate anything useful.
I’m imagining a future where kids use ChatGPT to write all their high school essays, that Turnitin dot com’s AI analyzes to make sure it’s never been submitted before, and no human actually ever reads it, and on the one hand that’s an educational disaster, and on the other, it’s perfect training for when they later use ChatGPT to provide automated cover letters for their resumes, which the corporate AI will scan for keywords, and no human actually ever reads it…
Tried ChatGPT on one of my kids’ homework assignments. It couldn’t even write an acrostic poem, which is pretty much the bottom of the barrel of English homework.
Agree with you…
Simply GIGO (Garbage In, Garbage Out).
Man, I’ve been having a hard time just getting the chatbot to give me information at all. I’m surprised it didn’t find that saw blade offensive or consider the act of cutting grass insensitive and hurtful. Creating the narrative for the AI is all the work. Do that, and I’m sure it will transcend time and space.
Using it to have conversations is like a weapon to liquefy brain matter.
It forgets rules.
It is woke.
It doesn’t always give relevant information.
It reminds you of OpenAI policy more than it gives a valid response.
But I still think they have something here.
I don’t like my information curated for me, and that’s what it is doing. An unbiased chatbot will eat Skynet for breakfast.
While that’s all correct, I think the point is missed: this tech is now mainstream and will succumb to media HYPE and marketing.
ChatGPT just refactors Siri/Cortana/Google with an interface that asks “how” vs. “what”, and tries to reduce 1,000 results (great for a researcher) to a single answer (great for a practitioner). We are all moving from a research mindset to a trust mindset. And that’s the dangerous part we (tech devs) need to get right.
As for hype marketing, ChatGPT itself is the new Bitcoin, and we know how that’ll end.
AI is going to come for all laptop jobs first (it already ate 99% of translation jobs). If you’re reading this while working from home on a laptop, get ready, because you WILL be superfluous. No backsass on this one, it’s happening.
Everyone works on a laptop. Are you excluding no one from this proclamation?
Maybe he’s just saying 99% of jobs are for NPCs and can easily be replaced by a non-human worker. Or maybe it will just force people to find passions, goals, interests… versus following the cashing-and-banking learning system, which gets you a 4-year degree and a laptop when you graduate. If someone can make it through college, they’ve just had their firmware flashed successfully. A college graduate is no different from an AI. There is no independent thought.
People really just type out anything on this internet.
The problem with the Internet is that it is filled with random data that has no basis in fact. Forums are the best place to search for misinformation. It’s not deliberate; it is a product of the great unwashed thinking that they ‘know’ something because they heard something similar on TV, or it was the subject of one of their LSD trips (they dreamt it, so it must be true). It seems as though the common modus operandi is “if you don’t know, make something up”, and then blog about it or offer it as sage advice on a forum. It is amazing how many views certain YouTube videos get by reinventing perpetual motion or repackaging other people’s fantasies, perpetuating this source pool of utter tripe. Training an AI by feeding it the unfiltered Internet is bound to end in tears.
Once we can extract code from the brain, we will get life experience. I have over 10,000 hours in sanding floors. Once you can extract that information and compare it with 3 other dinosaurs, you get the perfect AI hardwood floor guy.
It’s not about refining information (the current ChatGPT model); it’s about finding information that has already been refined.
Then we will have the “it’s okay, brain” movement, where it’s fine to have a fat brain, a gluttonous brain…
Brains that can walk the line for 30 years will be sought after. Brains that are at the head of Apple or Microsoft, Google, Tesla.
Basically iZombie, the TV show.
We have something like a 1 in 11 chance of currently being in a simulation.
It’s good for simple stuff, to save yourself some time messing about on Stack Overflow, for example. That’s about the limit, though. I tested it out a few weeks ago and asked it something along the lines of “create a ROS 1 SLAM package from first principles, written in Python, that subscribes to laser scans from the /laser topic and produces a costmap”.
Its response was a Python script that called Gmapping as a library.
Technically correct, but in reality not what I asked for or wanted. It would probably have worked, but you could just use the Gmapping package directly without having to make any custom packages to interface with it.
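For context, the bare skeleton of the node that was actually asked for looks something like this — a reconstruction under rospy, not ChatGPT’s output, and the /costmap topic name is my own placeholder:

```python
#!/usr/bin/env python
# Skeleton of a first-principles ROS 1 SLAM node -- structure only.
# The hard part (scan matching, pose estimation, map updates) is omitted.
import rospy
from sensor_msgs.msg import LaserScan
from nav_msgs.msg import OccupancyGrid

class SimpleSlam:
    def __init__(self):
        rospy.init_node('simple_slam')
        self.costmap_pub = rospy.Publisher('/costmap', OccupancyGrid, queue_size=1)
        rospy.Subscriber('/laser', LaserScan, self.scan_callback)

    def scan_callback(self, scan):
        # A real implementation would match this scan against the map,
        # update the estimated pose, and mark occupied cells.
        grid = OccupancyGrid()
        grid.header.stamp = rospy.Time.now()
        grid.header.frame_id = 'map'
        self.costmap_pub.publish(grid)

if __name__ == '__main__':
    SimpleSlam()
    rospy.spin()
```

Everything between the subscriber and the publisher is the actual SLAM work — which is exactly the part ChatGPT punted on by wrapping Gmapping.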
I think anything more complicated than asking it to structure some data, given an example, isn’t realistic. You might get lucky, but I suspect all the examples of it doing anything complicated took a lot of coaching – probably more time spent than just writing the code yourself.
This is a fad now, and we should stop with the daily “AI did this” reports, or at least shelve them to a separate section for the true enthusiasts. These videos are similar to “look how easy it is to build X” videos on TikTok. Nothing is as easy as it is presented; lots of extra work goes on behind the camera, and the results are not guaranteed.
For example, there was a post a few weeks ago about an e-ink frame that displays AI-generated portraits. I tried said AI – it could not generate anything even close to acceptable quality. So the original poster probably spent a considerable amount of time refining or editing the results, at which point are they truly AI-generated?
The hype machine is in full swing though.
Well, anything creative would benefit from a detailed analysis. Just think how complicated it is to create a movie or a AAA video game, and all we ever see is the reviews.
Why is everybody using ChatGPT to help them write code? There are already coding-optimized apps built on GPT, available from either GitHub or Azure.
It’s maddening to see all this attention given to the chatbot when there are better products available for specific tasks. Do I need to write an article on how to use GitHub Copilot, and why the chatbot sucks?
Would probably have gone with a weed-eater-type string cutter instead of the saw blade. Either way, this thing needs to be closely watched while it’s doing its job. Little fingers can be curious, as can neighborhood pets. If you can’t trust it to work independently, and I don’t think I could, you might as well edge the lawn yourself. Besides, mowing is the time sink, not edging.
I’ve said this before, and I’ll say it again: SOMEBODY needs to try running their business using a chatbot. I was kind of hoping that Supplyframe might have a go at using it to run Hackaday for a week or two.
They already are.
How do you think twitter is run?!
The difference between ChatGPT and a real programmer is that the real programmer has no clue whom he stole the code from. He picked up ideas left and right all his life and puts them to good use.
ChatGPT, on the other hand, knows pretty damn well whom it stole it from. :)
As a real programmer, I know exactly who I stole it from. Their username is in the bottom corner of their post on SO. /s
Why are we all up in arms about how this AI or that AI is nothing more than a hat trick, when we’re missing the real point: we’ve made the process and results user friendly.
Think about what it really takes to create any AI: understanding a great deal of math, from linear algebra to statistics to trig and everything in between; understanding how to create models and paths that use that math to predict outcomes; combining that knowledge of outcomes to create iterative code that utilizes multiple CPUs and/or GPUs; creating datasets to be analyzed that provide a known mathematical probability model to use on different yet similar datasets; confirming that the outcomes from the unknown data validate within predictability parameters; and spitting out the result in a form that the warm meat bags can understand. That’s an oversimplification, but you get the point: all that learning to parrot what everyone else is saying isn’t what’s being sold — it’s the ability to parrot autonomously that’s valuable. I feed my AI system better curated inputs, and I know I can parrot something more valuable. Want the secret formula to Coke? Train an AI on the inputs and expected outputs and let it do its thing. Make it so some rando can do it at the push of a button without all the background work, and you’ve got a product.
There was a book I was pointed to called “The Invisible Computer”, recommended on this very site. I bought it and read it, and the first few pages go into the idea that the computer is something to be interacted with in a way that the user doesn’t know they’re using one. When we see sci-fi movies and shows performing real-time analysis and doing things at the push of a couple of buttons, we first think “it’s not that easy,” but the second thought should be “but it should be.” Sure, I could learn how to create PCB schematics that optimize power distribution, reduce noise, and stay cool to the touch, but I could also have an AI learn from the 50 brightest minds and their work to parrot a masterpiece every time I need one, so I can focus on the function and goal I was trying to solve, not the grunt work to get there.
ChatGPT isn’t impressive to us because we know how the sausage was made, but we should celebrate it because it enables the other 90% of the population to do something great without needing a PhD, even if it’s only used for parlor tricks.
Sure it didn’t do great but hey – at least it’s not Bard.
This reminds me of the kids’ game – I think it’s called Rumor. Anyway, the first kid tells the second kid something; nobody else except that kid can hear it. Then that kid tells the next kid in line what he thought the first kid said, and so on until the rumor has passed through the full line of kids and the last kid says what he heard out loud for all to hear. Then the first kid tells the whole group what the original rumor was. It’s usually pretty funny to see how much what the first kid said has changed by the time it reaches the last kid.
The AI chatbots will be fairly authoritative when they’re taking their information from humans who have actually applied some knowledge and are posting it. The problem will be when the AI chatbot regurgitates the postings of other AI chatbots, and so on down the line. The AI chatbot isn’t contributing any real authoritative estimation of the validity of what it recommends, since it is only like the kid in the middle of the chain: repeating flawed information and damaging it a bit more as it is passed along to us unsuspecting folks at the end of the chain.
That game is called Chinese whispers.
I don’t know if I actually believe this story. I’ve tried to use ChatGPT quite a few times to write Arduino code, and I don’t think it’s ever made something that even compiles.
Not to mention the fact that you can often ask the same question twice and get completely contradictory answers. And in addition to not compiling, the code sometimes would, if it did work, do the opposite of what was asked.
I’m not surprised, this is what passes for AI these days. Sure it’s neat that it can be conversational, but it’s not accurate. And I’ll take accurate over conversational any day!
RC cars are the worst for this kind of thing. They lack many things: a stable platform, maneuverability, traction, and low-speed gearing.
There is a reason professional lawnmowers don’t use ‘car’ style steering, preferring instead ‘tank’ style steering – also favored, not coincidentally, by line following robots such as the Roomba.
Chatbots lack common sense.
Unless the trick was to see if the chatbot could make a bad idea work?
If you are engineering a system, you want it properly documented. ChatGPT doesn’t understand its own output, so you have undocumented, untested code from an untrusted source. It is also incapable of learning new things. That doesn’t mean it cannot be a useful tool, but it won’t replace humans.
If my recent experiments with using ChatGPT to help me write unit tests are any indication, an actual human had to provide plenty of help in making the code work. I was learning Jest and trying to write a unit test for a function that takes an array of URLs for images and streams them into files using axios, fs.createWriteStream, and a promisified finished. My usual searching failed to help, and even SO didn’t have any answers. ChatGPT didn’t actually give me any usable code, but it at least gave me something close enough that I had something to research.
I’m currently working on a school family project with my son. He wanted to make a piston. We set up all the Python code to make the piston display on a 3.5″ Raspberry Pi screen, showing the four strokes, and then made it turn a stepper motor to spin an actual demo in real time with the displayed strokes. All the code was pretty much ChatGPT. I did some minor stuff like rerouting the X display to my desktop for testing, but overall it was all done by ChatGPT with requests from me and my son.
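Not their actual code, but the stepper half of a project like that boils down to a loop like this — the pin numbers, the step/dir driver, and the 200 steps-per-revolution figure are all assumptions for a typical hobby setup:

```python
# Sketch of syncing a stepper to a displayed crank angle -- illustrative only.
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21   # hypothetical BCM pins for a step/dir driver
STEPS_PER_REV = 200          # typical 1.8-degree stepper, no microstepping

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
GPIO.output(DIR_PIN, GPIO.HIGH)  # always spin the same direction

def step_to_angle(target_deg, current_step):
    """Pulse the driver until the motor matches the crank angle on screen."""
    target_step = int(target_deg / 360.0 * STEPS_PER_REV)
    while current_step < target_step:
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.001)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.001)
        current_step += 1
    return current_step
```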