This week, the hackerverse was full of “vibe coding”. If you’re not caught up on your AI buzzwords, this is the catchy name coined by [Andrej Karpathy] that refers to basically just YOLOing it with AI coding assistants. It’s the AI-fueled version of typing what you want into StackOverflow and picking the top answer. Only, with the current state of LLMs, it’ll probably work after a while of iterating back and forth with the machine.
It’s a tempting vision, and it probably works for a lot of simple applications, in popular languages, or generally where the ground is already well trodden. And where the stakes are low, as [Al Williams] pointed out while we were talking about vibing on the podcast. Can you imagine vibe-coded ATM software that probably gives you the right amount of money? Vibe-coding automotive ECU software?
While vibe coding seems very liberating and hands-off, it really just changes the burden of doing the coding yourself into making sure that the LLM is giving you what you want, and when it doesn’t, refining your prompts until it does. It’s more like editing and auditing code than authoring it. And while we have no doubt that a stellar programmer like [Karpathy] can verify that he’s getting what he wants, write the correct unit tests, and so on, we’re not sure it’s the panacea that is being proclaimed for folks who don’t already know how to code.
Vibe coding should probably be reserved for people who already are expert coders, and for trivial projects. Just the way you wouldn’t let grade-school kids use calculators until they’ve mastered the basics of math by themselves, you shouldn’t let junior programmers vibe code: It simultaneously demands too much knowledge to corral the LLM, while side-stepping any of the learning that would come from doing it yourself.
And then there’s the security side of vibe coding, which opens up a whole attack surface. If the LLM isn’t up to industry standards on simple things like input sanitization, your vibed code probably shouldn’t be anywhere near the Internet.
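To make the input-sanitization point concrete, here’s a minimal, purely illustrative sketch (Python with the standard-library sqlite3 module) of the kind of bug a vibed codebase can silently ship: string-interpolated SQL versus a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection attempt

# BAD: string interpolation lets the input rewrite the query itself.
# f"SELECT * FROM users WHERE name = '{user_input}'" matches every row.

# GOOD: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection string matches no actual user
```

If the LLM happens to emit the interpolated version because that’s what dominated its training data, nothing looks wrong until someone hostile finds the endpoint.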
So should you be vibing? Sure! If you feel competent overseeing what [Dan] described as “the worst summer intern ever”, and the stakes are low, then it’s absolutely a fun way to kick the tires and see what the tools are capable of. Just go into it all with reasonable expectations.
“Vibe coding” is for losers.
https://xkcd.com/1081/
It’s so great to see so many omniscient, open-minded users on the Hackaday comments section.
You must be new here.
It is wonderful to experience pioneering days once again.
So true. It’s been a lot of fun. We’ll see how it all shakes out, but right now is great.
It’s never about the tools. It’s how you use them.
A form of prototyping.
Definitely. The pushback and the misuse that justifies the pushback are familiar.
It wasn’t that long ago we still had devs saying that REAL programmers use vi or other pure editors because IDEs with stuff like linting and tab completion were a crutch that enables bad programmers. Of course, plenty of students starting out with heavy IDEs ended up having no idea how the toolchain works under the hood so they’re married to that IDE until they outgrow it.
And there’s still plenty of folks saying that higher level languages like Python are worthless because REAL programmers use C++ or C. Of course, people building large applications with Python and Javascript and hitting performance bottlenecks shows that sometimes those high level languages are being misused.
Ah, these debates take me back. When I was a lad, we didn’t have IDEs, linting, or AI whispering sweet nothings into our code. We did our debugging by toggling LEDs and watching serial output dribble in at 1200 baud. If you wanted graphics, you poked values directly into video memory and prayed to the gods of offset alignment.
Our editors didn’t autocomplete — they barely completed what you typed. And if you wanted a loop, you wrote it in assembly, by hand, on paper first. We didn’t have Stack Overflow. We had the manual, and if that didn’t help, you went to the pub and asked a bloke with a beard and a Commodore 64.
So yeah, maybe Vibe is a bit like giving a toddler a backhoe — but let’s not forget every new tool starts off looking like cheating until it becomes the new normal. Doesn’t mean we have to like it, but we might as well understand it.
Is it really coding if you’re not threading magnet wire through rope core memory modules?
Scoff….
Copy con: program.exe
Then enter op-codes and data with Alt-keypad.
Yep. I remember the early days of ECAD on the Mac where more experienced engineers were profoundly skeptical about its utility – until I turned in perfect schematics with automatically generated netlists, tied into Excel BOMs.
AI is a tool for early adopters, think Photoshop 0.9.
That’s what she said!
I was talking to my neighbour’s kid, a CS student. He told me about an ESP32 based project that he did to detect his room door being opened, he used an IR proximity sensor for that. The ESP32 hosted a webpage which got updated with the info whenever the state changed. His laptop ran a python script which read the webpage and played a “welcome” audio file when the door was opened. A cool project. A bit immature, but sure, why not?
I asked him what python library he was using to make the HTTP request…silence… “chatgpt wrote it for me, I don’t know”
Yes, a 3rd-year CS student doesn’t know what an HTTP request is, or even what library he’s using. Because “chatgpt wrote it for him”.
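For context, the polling half of that setup is only a handful of lines. Here’s a rough sketch of what such a script might look like; the closed/open page format, the callback, and the poll interval are all my own assumptions, not the student’s actual code.

```python
import time

def watch_door(fetch, on_open, poll_s=1.0, max_polls=None):
    """Poll a status source; call on_open() on each closed->open transition.

    `fetch` returns the page body as a string, e.g. for an ESP32 at a fixed IP:
        lambda: urllib.request.urlopen("http://192.168.1.50/").read().decode()
    `on_open` would play the "welcome" audio in the original project.
    """
    last = "closed"
    polls = 0
    while max_polls is None or polls < max_polls:
        state = "open" if "open" in fetch().lower() else "closed"
        if state == "open" and last == "closed":
            on_open()
        last = state
        polls += 1
        time.sleep(poll_s)
```

The point stands either way: if ChatGPT hands you this, you’ve still only written a dozen lines you don’t understand.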
The future is going to be bright. Probably blindingly so.
Just wait till they’re introduced to calculators.
Yep, headed down the road to Idiocracy. Entering the wonderful era of ‘let the AI do it’.
The only IDEs I ‘use/used’ were Delphi and now Lazarus. I still do all my Python and C/C++, Rust, C#, etc. programming with a text editor like notepad++ or geany on the Linux side. Just no need for IDEs.
Even Pico programming with a text editor, and loading is done with rshell if Python, or if ‘c’, then cmake on the command-line and then drop onto the presented USB folder.
Sometimes it really feels like I’m the only one using Geany when everyone around me uses vscode and similar. It’s nice to see someone else mentioning it.
LLM discussion aside, I hope this is satire or something. Proudly proclaiming you don’t use IDEs does not make one seem more competent, let alone actually be more competent. Quite the opposite. Do you also manually screw in every screw and manually drill every hole? Tools are just tools.
the thing is, AI hardly harmed that at all. the tradition was already entrenched to pull in libraries with massive nested dependencies for even the most trivial functionality. iow, 2 years ago they knew the name of the library and it was awful. today, they don’t know the name of the library but it isn’t any more awful.
It’s not even the libraries.
He’s putting up a web page and having a python script poll that page (likely w something like a fixed IP address for the ESP32 and a looped request).
If you’re going there, hiding the complexity of HTTP from yourself is the right answer.
But hammering on a webpage?
The whole thing is Rube Goldberg.
There are, at least, a half dozen better ways.
He asked the AI the wrong questions.
"Vibe coding should probably be reserved for people who already are expert coders, and for trivial projects. "
…seems like a bit of a contradiction to me. Why would an expert coder waste their time on completely trivial projects anyway? I’m inclined to believe the author does not understand how to use LLMs for coding non-trivial projects. A non-trivial project will have plenty of trivial code, boilerplate stuff, that the LLM can fly through. There might be a few things where the guys on Stack Exchange or wherever have not come up with solutions, but this might well represent a tiny fraction of the whole code, even when programming an auto ECU.
I’ve been professionally writing software for 20 years. I have tons of trivial projects. Projects that are one-offs, conceptual “I wonder if”, throw-away type things. And I’ve used AI to write some of that code. I used it because it was convenient, and I knew that I didn’t care very much what the output was.
I think the point being made here is that AI code generation gives you just enough to be dangerous. You know how to get out what you want, but you don’t know the pitfalls. The security point mentioned above being chief among them.
Well said.
On “industry standards on simple things like input sanitization”: that’s a training data problem. Too much GitHub code without it.
Uh, there’s always some kind of project to do at home that’s ultimately trivial and for fun, automation or some such thing.. Being an “expert coder” doesn’t mean you only work on the most difficult projects at all times. I don’t understand your perspective here.
Untrue. It’s bad at even that XD
You’ll spend as much time trying to get good boiler plate as just typing it yourself lol
It only takes a few seconds afterall
Guilty party here. Learned to code in the 70s but didn’t make a career out of it. The AI generated stuff is much quicker and more clever than I am.
Can relate. Twelve years as a COBOL programmer, jumped to sysadmin and never looked back. Whatever code I need, I find it easier to AI it and just evaluate the code to check if it works as expected.
If and when it works. Recent example, something simple: I want to send hex data from a captured string of data using C# and the SimpleTCP library.
AI goes on to barf out how to write a TCP/IP library from the effing ground up… when I’m using a library that already handles that and has done so for almost a decade.
It’s as simple as choosing the correct model. Use the wrong tool, get wrong answer. All it took was 2 clicks and copy pasting your words in the input box. https://chatgpt.com/share/680b34e9-55e4-8006-b083-80248c0f4619
Oddly (or maybe not?) I’ve made a lot of use of GPT 4o on my project over the past couple months not for the coding but for debating and working out library and documentation details – kept having to delete its memory to start again and eventually got to a point where I could paste in the working text and user library (it no longer needed to see the back-end) and it would be fully lucid and understand the system and be able to even make good editorial suggestions on how the heck to document it and in what order to introduce stuff to the reader.
So basically I used it as a test to tell when I’d gotten to a point where the whole system was cohesive and comprehensible enough and I didn’t have any gaps left to explain in the README.md — in hopes that this means a reader and (python using) user can make sense of it all as well and start doing stuff with it.
Sort-of a giant whole-project style and readability checker. Though this isn’t much tested against actual human readers/users at this point (I’m just starting to test my install and getting started instructions to verify them and write up a “quick-start” guide now).
If nothing else, I wound up with an… unusually complete…. README.md that I wouldn’t have composed without a lot of pointed nudges from the AI about what might actually need to be explained to the reader (and even where I should go make functions in the library issue informative error messages where silent failures would lead to much frustration).
An astonishingly higher-than-average number of drivers believe they drive better than average. I wonder if coders fall prey to this same delusion.
If AI is informed by history, is it capable of not repeating it? Or is it doomed to repeat it because that is all it knows? Is coding with AI an automated way to get the average of all code?
Are we required to form our prompts like experts to get great code? If we prompt like a novice, will we get code with hello world still in the comments?
Could an incredibly well written vulnerability be posted enough times to be propagated into AI models and repeated in production?
… and most drivers are better than the average. Which makes perfect sense when you understand the stats. It’s not a normal distribution. I suspect the same is true of coders, even if “high on alcohol and cocaine whilst joy riding a stolen car running away from police with no licence or insurance” isn’t a factor.
Or for a simpler to intuit example, the vast majority of humans think they have an above average number of legs.
Nah, they’re probably just deluded?
I believe there are plenty of non-professional hobby programmers who craft simple programs for themselves that aren’t internet exposed and who can benefit from the AI generated code. I often explore my own indicators and graphical tools for Thinkscript to aid my own stock research and analysis. Thinkscript isn’t overly complex but I previously would have to see other examples or Google for them to be able to put a new tool together. Now I use Grok and check the code it gives. I can much more rapidly craft what I need using Grok. And I can instantly tell if it’s working or not. I would NOT vibe code if writing an application for Charles Schwab. But it sure has been useful for my personal needs.
If you are producing 4K lines of code in a day and it is solving a task that you can’t find any FOSS equivalent of, is it vibe coding, or something else?
Debugging code is harder than writing code — hell, reading someone else’s code is harder than writing code. So how exactly does an LLM make things easier?
Because it can debug code.
the whole LLM deal is that you can talk to it, you can engage it in conversation. it doesn’t just churn out code. you can ask it to find the bugs in a segment of code. you can ask it to explain how the code works, or how it would write a missing segment. most importantly, you can ask it to try again. that’s when it’s most surprisingly competent, when it is responding to suggestions for improving its own results.
it’s scattershot at everything it does, and it is as confident when it is wrong as when it is right. that’s frustrating, but we’ve all had coworkers with those attributes.
i’ve never used it for anything other than asking it how smart it is. but the free public ones, you can definitely paste in like a 20 line function and ask it to find the bug and sometimes it will go straight to the problem. i’d like to know if the paid versions are any use on larger code bases but my guess is that they are.
I needed a method to determine whether a point was inside of a polygon determined by four points for some collision thing I was doing on a twitch stream overlay I made for my grow stream. I know how to do this mathematically and how to code it. It’s trivial.
I asked an LLM to do it out of curiosity, and at first it gave me a (perfectly working) function that returned whether a point was in a rectangle determined by the min and max x,y coords of each point, which was not what I asked. I told it so, and it apologized, told me I wanted to construct triangles and then determine inclusion in one or the other, and gave me a (perfectly working) function to do so.
This took virtually no time at all. Considerably faster than if I had just written it out myself. To someone who knows how to code and tell whether the output is correct or not, this is a nearly perfect way to generate trivial code that would otherwise waste your time. This does highlight the point, though, that you need to know whether what it provides you as a solution is actually correct or not.
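For anyone curious, the triangle-splitting approach the LLM landed on fits in a few lines. This is my own illustration of the technique, not the commenter’s actual code: split the quadrilateral into two triangles, then use cross-product signs to test inclusion in each.

```python
def cross_sign(p, a, b):
    # z-component of cross product (b - a) x (p - a):
    # tells which side of the line a->b the point p lies on
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def point_in_triangle(p, a, b, c):
    d1 = cross_sign(p, a, b)
    d2 = cross_sign(p, b, c)
    d3 = cross_sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    # p is inside (or on an edge) iff all signs agree
    return not (has_neg and has_pos)

def point_in_quad(p, a, b, c, d):
    # split the quad (a, b, c, d, in order) into triangles abc and acd
    return point_in_triangle(p, a, b, c) or point_in_triangle(p, a, c, d)
```

This works for any convex quad (and simple concave ones, given consistent vertex order), which is exactly the kind of boilerplate geometry an LLM tends to handle well once you state the problem precisely.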
We are approaching peak enshittification.
It sure seems that way. No critical thinking skills needed (or wanted). Just vibe it and hope for the best. No thank you. I enjoy programming. Why I made a career out of it. And the challenge.
Given that most things you write are inherently trivial components, and the actual difficult, novel parts are far fewer, why wouldn’t you want to use an LLM so you can spend more of your time on interesting code?
why would you think it lowers critical thinking??? that goes to the person, not the tools they use… most of the comments written here just look irrational and nonsensical, with a big bias that is pretty common among old people facing the fact that the world they knew is coming to an end to leave room for a new, better one. “Oh, back in my day…” style… Can’t you see? maybe instead of focusing on others’ critical thinking abilities, why not focus on self-criticism.
AI is just making so many dull and tedious processes so fast and easy, I just can’t understand how you can’t see that…
it would be such a relief to know there is an upper limit, an apex, a descent on the other side
And that’s why it doesn’t really work, as you go on to iterate.
If someone wants to waste time bickering and fighting with machine in hopes it finally gives them the right thing, that’s their prerogative. Meanwhile, in the same time, I’ll have already written the entire thing and another project. AI code generation, especially vibe coding, is about as unproductive and inefficient as you can get.
It is a major time waster in my experience. It’s not even useful as a code-generation tool, let alone a full code replacement a la vibe coding. At best, it’s a search engine for pulling up docs. As soon as the novelty of LLM chatbots and AI filters wear off, generative AI will be largely forgotten to the depths of time (just like crypto), save for one or two that find a niche (probably only ChatGPT will be around).