Although generative AI and large language models have been pushed as direct replacements for certain kinds of workers, plenty of businesses actually trying this have found that the new technology can cause more problems than it solves when given free rein over tasks. While this might not be true indefinitely, the real use case for these tools right now is as a kind of assistant for certain kinds of work. For this they can be incredibly powerful, as [Ricardo] demonstrates here, using Amazon Q to help with game development on the Commodore 64.
The first step was to generate code that would show a sprite moving across the screen. The AI first generated code in all caps, as was the style at the time of the C64, but in [Ricardo]’s development environment this caused some major problems, so the code was converted to lowercase. A more impressive conversion came in the next steps, as the program needed to take advantage of the speed of assembly language. With the code converted to 6502 assembly that could run on the virtual Commodore, [Ricardo] was eventually able to show four sprites moving across the screen after several iterations with the AI, as well as change the style of the sprites to arbitrary designs.
Although the post is a bit over-optimistic about Amazon Q as a tool specifically for developers, it might have some benefits over other generative AIs, especially if it’s capable of the chore of programming in assembly language. We’d love to hear from anyone with real-world experience with this and whether it is truly worth the extra cost over something like Copilot or GPT-4. For any of these generative AI models, though, it’s probably worth trying them out while they’re in their early stages. Keep in mind that there’s a lot more than programming that can be done with some of them as well.
I find, in my work, that the first, and possibly hardest, part of the programming work is understanding the problem and its interrelated factors, and what kind of approach you want to take to the solution. I really don’t think AI will ever substitute for a human.
Oh, it will probably give you a solution, but it’s very unlikely right now to be anywhere near optimal, or even as good as what a slightly dim human could do.
The optimal solution isn’t even a fixed target, since it’s necessarily based on the desires and priorities of the person or company asking for it. Strictly as a relatable, topical example, Biden and Trump would ask for very different things in a new software suite for ICE. Their priorities are simply not the same.
It may not be optimal, or what a slightly dim human is capable of with sufficient research, but it may be good enough. And if the goal is tinkering with an unfamiliar architecture in between projects you have more time and intellectual energy to devote to, then it’s quite possible that this sort of approach could be the optimal one. I realise that it’s possible to trap yourself into only approaching problems this way, and one’s mileage will vary considerably in terms of results, but I’ve been contemplating doing something similar for briefly tinkering with some bits of vintage hardware that I really can’t justify devoting the time to learn properly. Also, there is some potential value in using this sort of effort as an exercise in the efficient use of existing code and technical documentation with LLMs, to coax out more useful responses and good-enough code in other similar contexts.
Perhaps I’m letting my ADHD and limited engineering talents show, and attempting to make a virtue out of a vice, but shortcuts to a superficial familiarity with many types of project are something I find more engaging than a perfectionist slog in the pursuit of more elegant solutions to a few.
Ah, see, I’d feel cheated. I mean that’s what hobbies are for, killing time in an enjoyable manner. Learning how to go down to bare metal on something that I have no real reason to is my hobby. Like I can do everything building a new house from the concrete to the shingles, make beer, wine, liquor, black powder, dynamite, or a katana. From scratch. And I mean all the way from scratch. Like growing the barley or making the chemicals that I can’t go out and pick up from nature. Picking up a new language or a new architecture is just another aspect of that for me.
It comforts me to know that if I were ever transported back to 1060 AD that I could rule the world in under a decade.
But would you have unlimited rice pudding?
I think perhaps where we differ is in the extent to which we intrinsically value the acquisition of skills. I’m only interested in doing so as a necessary evil to realise an idea, be that a crucial problem in my life or a passing fancy. As soon as that idea has been realised, my enthusiasm for practicing the skill diminishes greatly, and as soon as there is an easier option (in an end-to-end sense, and mindful of the sunk cost fallacy) I’m more interested in that. Any perfectionism on my part is in the realisation of the idea, and the ideas which I tend to have (or at least those which are of any merit) are abstract rather than concrete. If I found myself playing with 6502 assembler, it would be much more likely to be in the service of telling an elaborate joke, or because a restoration job took an unexpected turn, than out of a fundamental desire to acquire a comprehensive understanding of the architecture or a foundation for future work of the same type. In fact (and while I certainly respect the skills and preferences of those who feel otherwise), both of those fill me with a certain degree of dread; I’d certainly need to seek out a renewed prescription for stimulants in order to maintain sufficient enthusiasm.
To get the AI started on the programming project, you have to be able to tell it what you want. My point above is that that in itself can be a major hurdle. You can’t tell it what you want if you haven’t even figured that out yet. AI could do the small, non-nebulous things like give you a 6502 routine to multiply two 24-bit variables; but the overall project won’t be so simple.
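That kind of small, well-specified routine plays to their strengths. As a rough illustration, the shift-and-add technique a 6502 multiply routine would typically use (the CPU has no multiply instruction) can be sketched in Python; the function name and the 24-bit framing are just for illustration:

```python
def mul24(a, b):
    """Multiply two 24-bit values by shift-and-add, the way a 6502
    routine would: one conditional add and two shifts per bit."""
    a &= 0xFFFFFF
    b &= 0xFFFFFF
    result = 0
    for _ in range(24):          # one pass per bit of the multiplier
        if b & 1:                # low bit set: add the multiplicand
            result += a
        b >>= 1                  # shift multiplier right
        a <<= 1                  # shift multiplicand left
    return result                # product fits in 48 bits

print(hex(mul24(0x123456, 0xFF)))
```

On a real 6502 the same loop is done on multi-byte values with ROL/ROR and ADC, which is exactly the sort of self-contained chore an LLM has a fair chance of getting right.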
History doesn’t repeat, but it rhymes. I remember when a hot topic of discussion was whether C compilers would ever generate code as good as (in terms of execution speed and compactness) what a person could produce.
I used an AI to validate an electronics design: analyze it, provide estimates for current draw, recommend resistor values, and provide formulas for each that I could validate.
Bouncing ideas off an AI works wonders for collecting your thoughts and restricting your problem space to the most likely positive results.
Worth it.
Never say never; the field is in its infancy, after all.
But I agree with the idea. The best results I’ve had were when I wrote out the structure or functions with comments and had the AI fill them in.
Worked really well.
Also, when working with existing codebases, asking it to rewrite functions or add new ones for specific ends.
The fact it can write some code from scratch already is stupendous.
AI for professors: have you heard about vector databases? The concept is famous in mathematics; it is called mapping. Basically, language is a mapping itself: when I say “the cookie is good”, “cookie” is an object and “good” is a property of that cookie. So the big question is how to map natural language to a programming language. Well, as I said, “cookie” is an object, which means it is already defined elsewhere. So before doing anything, people must RAG the topic for the AI, but to do RAG properly a person must practically be a professor in that topic. Back to the concept: if you are good enough to write a document detailing all of the technology, the AI can map it to the object you query. The result might be fragmentary, so the user must integrate the information and prompt again, and the result will come. AI was inspired by ODEs from physics, so derivatives and integrals of language might work.
I spent a lot of time creating two things: a fundamental abstract object and an implementation. Once everything is defined, I tell the AI to do the rest. The good point of having an LLM is that two steps, the systematics and the lexicon, are already ready. So when I ask for an object, the AI looks for possibilities matching my requirements and pulls out the function (the functions are already used in the mapping), so I just need to create a knowledge graph for management.
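The mapping idea described here, with queries resolved against already-defined functions through a knowledge graph, can be sketched as a simple lookup. Everything below (the functions, the graph, the query words) is a made-up toy, not anyone’s real system:

```python
# Toy "knowledge graph": query terms mapped to already-defined functions.
def blink_led():
    return "GPIO toggled"

def read_sensor():
    return "sensor value"

knowledge_graph = {
    "blink": blink_led,
    "led": blink_led,
    "sensor": read_sensor,
}

def resolve(query):
    """Map a natural-language query onto known objects; unknown terms
    come back unresolved, so the user must refine and prompt again."""
    hits = [knowledge_graph[w] for w in query.lower().split()
            if w in knowledge_graph]
    return [f() for f in hits] or ["<no mapping - refine the query>"]

print(resolve("please blink the led"))
```

The real work, as the comment says, is in writing the definitions well enough that the lookup lands on the right objects.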
Because of the LLM I don’t have to use a high-level language; everything from internet programming to hardware can be done on Linux. And here is the critical point: when I opened a lot of firmware to learn how to write some, I noticed that all of it follows the structure of a kernel, reflecting its author. The author always has experience on some OS platform, so the design of the firmware is a modified version of the OS they have the most experience with. Because of that, if I inherit an OS kernel, I can make anything. So what is the code? An algorithm, which is, simply put, a design fine-tuned for a specific system. And from here I can use AI, lol, because I already have abstract objects inside an OS.
So how about lower-level languages such as C and ASM? The answer is that they can still be used, but the instructions get wild. About instructions: to avoid unpredictable errors, some software has a debug system, which is also a set of rules. If that were applied to lower-level languages, AI could program in them (I still haven’t had time to attempt it), but it’s unnecessary, because in doing so we would rewrite the entire programming field :v
Okay, I think that’s enough for everyone to program at home. Some projects don’t need security or strict debugging, so Linux is enough for any home automation.
I’ve already got the solution for intelligently generating assembly code – it’s called a compiler.
“Intelligently”.
Ask me about the hours I lost on -fstrict-aliasing recently. Or don’t, because I ended up just punting and turning that optimization flag off, which made the code work as intended.

gcc c portable transparent high-level code generating machine better than platform-specific operating system assembler software technology?
If necessary, of course.
gcc c will usually get the job done without resorting to machine code?
I’ve been a software engineer for 30 years, and program predominantly in Python for ML nowadays, but my main experience is with C++, C#, embedded C, and Java. I can tell you right now that every LLM I have used, from Llama on my PC (96GB with a 4090 GPU), to the built-in Copilot on GitHub, to OpenAI’s GPT-4, etc., has been absolute garbage at writing code. Yes, for a minor batch file or a quick single-function algorithm they can work, but take Python code for example: it’s pretty unlikely an LLM can actually work out what installed packages I have, and therefore the code it gives me is often out of date due to the lack of backward compatibility in Python APIs/packages. For larger projects in C++ or C# (even Java) they are awful at seeing the bigger picture/design, due to classes being split into files, and for C++ two files (header and source). The way these companies have closed them off in training, then allowed only a small context size, means they will stay garbage. For the last year of programming, I would say for every 10 minutes saved by using an LLM for boilerplate code, I spend at least an hour redesigning the garbage suggestions. Then I spend another hour fixing obscure bugs and weird edge cases, and this is in code that actually compiles. What I find hilarious is that people like Bill Gates (a rich man who thinks he is clever because he created a monopoly company) actually said LLMs would replace all software engineers within 5 years, and he said that 2 years ago. The hilarious thing about all this is that they are not even AI; they possess no intelligence at all, they are just probability machines.
Exactly, they lack any intelligence, understanding, or reasoning. They are pretty much just very fancy predictive text. The code they produce isn’t based on any reasoning; it is based on patterns “learned” from training data.
They don’t even understand what any of it means, or what any word means, and they don’t need to for how they are designed to work. But it does mean that they can’t reason, and that hallucination is a problem: since a model has no understanding of what it is saying, it has no way to know whether it is right.
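A toy version of that “fancy predictive text” makes the point: a bigram model just counts which word followed which in its training data and emits the most frequent continuation, with no understanding anywhere (the corpus here is made up):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word: pattern matching, not reasoning."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in the corpus
```

Real LLMs work over tokens with billions of parameters instead of a count table, but the output is still a probability over continuations, not a checked fact.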
We are still a long way away from actual AI, everything now is just taking older ideas to the extreme and we aren’t really that much closer to real AI.
I needed to write a reply on Hack a Day to expound my hatred for the gen AI fad, so I clicked over to my new AI tool Engagr, a ChatGPT comment writing tool that didn’t actually write this comment. Instead, it gave me a recipe for pickled herring and called me a few slurs. I think I learned something, but I’m not sure what.
Hummm. I think what I did with Claude was a little bit more impressive:
Prompt #1:
Create a Space Invaders clone in Shadertoy.
Prompt #2:
Implement:
Player control (using iMouse)
Collision detection
Scoring system
Game over conditions
Prompt #3:
Make the invaders look like the real invaders.
Worked close to a charm in Claude Sonnet.
Cannot post a picture here and don’t have Claude Sonnet anymore.
So you can try yourself.
How about less LLM AI stuff on here
While AI (ChatGPT) may not be able to write fully functional programs (yet), I have found it to be a big time saver. I have been writing code for a long, long time and welcome the new coding assistants.
A few examples where ChatGPT provided 90% of the code to get me started in a new environment, or helped generate/improve code in existing applications:
Fusion360 – “compose a script to save meshes for all objects and bodies that are visible and end with ‘#'”
Corel Draw – “write a macro to change the text and background color of objects based on a text file”
Visual Studio – paste code from a function “optimize and reduce # of lines, and add comments”
ffmpeg – “provide a cmd line: rotate video 90deg, crop to x y starting at x1 from 1:20 to 2:23”
Bash-python – “here is an EFI script, convert to python running in UEFI shell”
Adafruit Trinket – “in Arduino IDE using a Trinket M0 analog input with 12-bit resolution, write a sample program to sample the analog pin every 50ms and produce a rolling average. Generate two values: an average integer and a scaled floating value using two external resistors with a maximum of 28V. Also include resistor values and scaling factor for maximums of 13V and 25V. Recommend an optional capacitor value at the analog input to filter noise.”
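The divider math behind that last prompt is easy to check by hand. Assuming a 3.3 V ADC reference and 12-bit resolution (what a Trinket M0 would use), the scale factor is just the divider ratio times volts-per-count; the resistor values below are illustrative, not the ones ChatGPT recommended:

```python
ADC_REF = 3.3        # Trinket M0 (SAMD21) ADC reference voltage
ADC_MAX = 4095       # 12-bit resolution

def scale_factor(r_top, r_bottom):
    """Volts at the divider input per ADC count."""
    divider = (r_top + r_bottom) / r_bottom
    return divider * ADC_REF / ADC_MAX

def rolling_average(samples, window=16):
    """Rolling average over the last `window` samples, one value per sample."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# e.g. 100k over 13k divides 28 V down to about 3.22 V at the ADC pin
print(scale_factor(100_000, 13_000))
```

This is the sort of formula you can (and should) validate independently of whatever the LLM asserts.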
I like writing code but now would rather have a working solution sooner.
I don’t have an AWS account, so I thought I’d try this with ChatGPT. It was able to create a “display a C64 sprite on screen” program in the first pass, which was impressive. The sprite looked like garbage. Changing the sprite to a square took 7 tries, as it couldn’t correctly count the numbers in the DATA statements. “Update the example to move the sprite across the screen” worked, well, except when it got to 255. When I asked it about the additional register it couldn’t give me a correct answer that worked. Bottom line: yes, it will “assist”, but only if you have an understanding of the code itself. It felt kinda like training the guy who is supposed to replace you at work, but doesn’t have a f*ing clue.
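For anyone wondering about that “additional register”: the VIC-II’s sprite X coordinate is 9 bits, with the low byte in $D000 for sprite 0 and the ninth bit in the shared MSB register $D010, which is why movement breaks at 255. The split is trivial, as this Python sketch shows (the function is just for illustration; the register addresses are the real ones):

```python
def sprite0_x_registers(x):
    """Split a 9-bit sprite X position (0-511) into the VIC-II's
    $D000 low byte and the sprite-0 bit of the $D010 MSB register."""
    low_byte = x & 0xFF          # goes in $D000 (53248) for sprite 0
    msb_bit = (x >> 8) & 1       # set/clear bit 0 of $D010 (53264)
    return low_byte, msb_bit

print(sprite0_x_registers(300))  # past column 255, so the MSB bit is set
```

Forgetting to touch $D010 is a classic C64 beginner bug, so it is no surprise the model tripped over it too.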
Two use cases where we desperately need AI, because humans are for the most part dim-witted.
1) Correctly interpreting car ECM (Engine Control Module) codes and data provided by the OBD (On-board Diagnostics) port of every car made since the 1980s.
2) Setting up and repairing Linux installs.
Don’t bother replying until you provide a working solution to either problem. I will assume your reply is just another dim-witted human or a dysfunctional AI agent mouthing off.
For point 1, why do you think it is humans being dim-witted? The codes mostly don’t tell you the root cause; they tell you what fault was detected. So in order to fix it you need to find the actual problem, and I don’t think an AI could do much more than suggest potential issues.
I attempted to get several LLMs to write Motorola 6800 code. It didn’t go well. It started out OK but went sideways pretty quickly; I ended up with 6809, 68K, x86, and Z80 code. If the code isn’t out there, then there’s nothing to base its library on.
So these AIs will only give you mediocre, average code from their sources. And as another poster noted, getting them to understand what’s going on when you have more than one source file is a nightmare.
You still need an engineer to pull together anything more complicated than a basic shell-script-type program.
I’m not saying it isn’t useful, but when a CEO starts dropping people for AI, they’re in for a world of hurt, time, and expense.
I have seen multiple articles and videos about using AI to write code for Arduino and ESP32. They were always pretty easy examples, and very very similar (sometimes identical) to examples you could find in the first results on Google. So from what I have seen, it doesn’t really write the code itself; it just pastes in bits of its training data.
Unfortunately, the CEOs will be fine.
There are dozens of different concepts flying under the flag of “AI”, and neural nets and LLM are just a tiny fraction of the effort. I’ve been messing with “AI” since 1974, and I play in automation and robotics fields the most.
IMO, the current crop of AI tools is not appropriate for better coding. The same amount of computing power used for training and generative AI will massively increase software development power when it is hooked to discovery of solutions using Genetic Algorithm tools and evolutionary development. Right now that is very time-consuming, and the solutions are unpredictable and mostly non-verifiable, but I have hope in seeing cool systems developed without human intervention before the end of my life.
A superficial, but thought-provoking book on this, is Kevin Kelly’s “Out of Control”. Grow your programs instead of writing your programs?
I’m not a programmer, but that didn’t stop me from using AI to write all the code for an Android app I wanted. I used AI for everything from code to icons generated with AI, and I compiled it all on my phone into a working APK. The app was a simple budgeting program, to keep track of specific purchases and expenses, so I’m able to see where I’m spending most. Sure, I could have bought something that does the same thing, but I preferred a very old-school, simple look. I couldn’t have done it without AI.
Yes, there were many moments of frustration, because I don’t know what the code should look like, but in the end I have a cool, working budgeting app that I designed and an AI coded.
Don’t ask GPT to code ARM64; its results are pretty inaccurate.
My AI assistant and I were able to run an assembly file that writes “I count” back to the terminal after 5 script iterations to address my machine’s API (the assistant taught me so). I made 2 copies and added a typo bug to 1, so 3 scripts in 1 directory. We then iterated through 14 Makefile versions to arrive at a functional Makefile that assembles, links, and runs all scripts in the directory and ignores the bugged one. Next we will build out a scalable assembly template that generates patterned samples for further automation learning, to then factor in, piece-wise, a generic genetic model for medicine discovery.