One of the fun things about writing for Hackaday is that it takes you to the places where our community hang out. I was in a hackerspace in a university town the other evening, busily chasing my end-of-month deadline, as no doubt were my colleagues at the time too. In there were a couple of others: a member who’s an electronic engineering student at one of the local universities, and one of their friends from the same course. They were working on the hardware side of a group project, a web-connected device which, with a team of several other students, they were creating from sensor to server to screen.
I have a lot of respect for my friend’s engineering abilities. I won’t name them, but they’ve done a bunch of really accomplished projects, and some of them have even been featured here by my colleagues. They are already a very competent engineer indeed, and when in time they receive the bit of paper to prove it, they will go far. The other student was immediately apparent as being cut from the same cloth: as people say in hackerspaces, “one of us”.
They were making great progress with the hardware and low-level software while they were there, but I was saddened by their lament over their colleagues. In particular, it seemed they had a real problem with vibe coding: they estimated that only a small percentage of their classmates could code by hand as they did, and the result was a lot of impenetrable code that looked good but often simply didn’t work.
I came away wondering not how AI could be used to generate such poor quality work, but how on earth this could be viewed as acceptable in a university.
There’s A Difference Between Knowledge and Skill

I’m going to admit something here for the first time in over three decades: I cheated at university. We all did, because the way our course was structured meant it was the only thing you could do. It went something like this: a British university has a ten-week term, which meant we had a set of ten practicals to complete in sequence. Each practical related to a set of lectures, so if you landed one in week two which related to a lecture in week eight, you were in trouble.
The solution was simple: everyone borrowed a set of write-ups from a member of the year above, who had got them from the year above them, and so on. We all turned in well-written reports which, for around half the term, we had little clue about, because we’d not been taught what they did. I’m sure this was common knowledge at all levels, but it was extremely damaging, because without understanding the practical to back up the lectures, whatever the subject was slipped past unlearned.
For some reason I always think of poles and zeroes in filters when I think of this, because that was an early practical in my first year when I had no clue because the lecture series was six weeks in the future. I also wonder sometimes about the unfortunate primordial electronic engineering class who didn’t have a year above to crib from, and how they managed.
As a result of this copying, however, our understanding of half a term’s practicals was pretty low. But there’s a difference between understanding, or knowledge, and skill, or the ability to do something. When, many years later, I needed to use poles and zeroes, I was equipped with the skill as a researcher to go back and read up on them.
That’s a piece of knowledge, while programming is a skill. Perhaps my generation were lucky in that all of us had used BASIC and many of us had used machine code on our 8-bit home computers, so we came to university with some of that skill already in place, but still, we all had to learn the skill of programming in a room full of terminals and DOS PCs. If a student can get by in 2025 by vibe coding, I have to ask whether they have acquired any programming skill at all.
Would You Like Fries With Your Degree?
I get that university is difficult, and as I’ve admitted above, I and my cohort had to cheat to get through some of it. But when it affects a fundamental skill rather than a few bits of knowledge, is that bit of paper at the end of it worth anything at all?
I’m curious here. I know that Hackaday has readers who work in the sector, and I know that universities put a lot of resources into detecting plagiarism, so I have to ask: they will surely know students are using AI to code, so is this something the universities themselves view as acceptable? And how could it be detected if not? As always, the comment section lies below.
I may be a hardware engineer by training and spend most of my time writing for Hackaday, but for one of my side gigs I write documentation for a software company whose product has a demanding application that handles very high values indeed. I know that the coding standards for consistency and quality are very high for them and companies like them, so I expect the real reckoning will come when the students my friends were complaining about find themselves in the workplace. They’ll get a job alright, but when they talk to those two engineers will the question on their lips be “Would you like fries with that?”

In Germany, at a lot of universities the programming exams are done on a piece of paper, with zero electronics involved. This is kind of damaging to the ones who actually can code but forget stuff like imports and some semicolons, because the IDE would normally tell them. But those who have no idea how to code will be visible.
In France, they have a new Prime Minister, Lecornu, who claimed a law degree he never finished.
Some of our ministers in Germany got sacked because of cheating on their PhDs. I didn’t grow up here, and at my uni there were quite a few cases of cheating. Some got sacked; most got better grades than me. I couldn’t do it, I started laughing out loud… learning the subject was easier.
Ooof, that’s rough.
I am one of those oddities that for the most part uses a text editor to write code (there are exceptions, but for Python, Go, and C I only use a text editor). So I often miss semicolons and such, and then when the compiler complains I go “oh yeah”.
Notepad++ user here, or at least was until I killed off my last win10 machine.
I don’t know why but I find it easier to use than most IDEs.
I guess I’m going to need to learn a new approach now..
I did that in high school. Write down C by hand on an exam sheet.
The compilers at the time were less than helpful, throwing up a bunch of unrelated errors and not pointing out what might be amiss by even attempting to detect it, so you quickly learned to check all your semicolons and brackets were in place before doubting your code. That meant the hand-written test was a doozy.
Here in the Netherlands, I’ve had several exams like that, but we were always allowed to keep whatever notes we wanted. Some students even printed out C reference documents and whatnot, hoping it’d help them.
In the end the teacher didn’t even grade the work based on absolute correctness, but just on how well you understood it and whether your solution would’ve worked after making it compilable.
Of course this kind of exam stresses out students who don’t know what to expect, but in my experience these exams were actually the fairest ones to determine one’s programming skills. Later in life you probably end up in front of a whiteboard during a job interview, and you’ll be prepared.
In Poland too.
What was the prof’s answer when I asked him why the hell we had to write the code for our C++ assignments on A4 paper, scan it, and then e-mail it to him? It’s pure nonsense.
“This way you’ll memorize the syntax.”
F… you. It’s been 15 years and I still haven’t touched C++ since then.
I’ve done C, assembly for various MCUs, Python, Java, some Bash, and now I’m transitioning into farming and a small-time machining workshop. I just can’t get myself to ever again do
#include <iostream> or something. I always get reminded of missing semicolons…
Obligatory XKCD…
https://xkcd.com/3160/
I need to try this…
Perhaps the new skill on the block is getting AI to generate good code.
Once this is mastered, we might have better-than-human code?
That’s not a skill, any more than entering prompts to make AI ‘art’ makes someone an artist. If a machine is anywhere in the productive process, it cannot be art.
So the digital pixel-art image I personally created using my computer as the canvas and then printed on my printer to frame it and hang it on the wall is not considered art to you, since there are two involved machines in the productive process. Confusing.
I really hope you mean the creative process.
It’s not playing a key role in that example, it’s simply replicating something a human can do by hand. The point is that you are still being creative.
The ultimate test is creativity. You can imagine a car painted in the style of Van Gogh, for example; a computer could not on its own, and would need to be shown that to reproduce it (which is what AI ‘art’ is doing: just stealing, copying and pasting).
Is the creativity not in the origination of the idea, then? Because if it’s not, then you’ve negated the value of pretty much every piece of art and code that was inspired by another…
Ugh, posted too soon by accident, that’ll teach me to not put my phone in my pocket before I’ve finished.
I agree that AI art is wrong but from a philosophical point of view, it’s as much a valid piece of art as many others (financial value, value to the art world and human culture notwithstanding) and thus disagree that the machines or tools used negate the value of the output as art or even valid code.
If that were the case then any piece of art, music etc drawn, painted, sculpted etc after the first person saw a cow then used a brush, a chisel, pastel, charcoal etc was of zero value because the artist used the same tools and subject as someone who’d gone before.
So there has to be artistic value in the idea and the formulation of the technique used to bring it to life.
Therefore you could argue that the person who ‘configures’ the AI with a prompt and iterates on that configuration to achieve their vision is the artist no?
The masters used to take apprentices who learned techniques and often copied works of the masters, does that make their later work less valid?
I thought art was defined if someone liked it.
The test I like to use on art is whether there’s intent in it. Art, no matter how produced, conveys some idea or emotion or message, some point, that comes from the artist. Art communicates on some level. Think of Marcel Duchamp’s found-object art: “…an ordinary object could be elevated to the dignity of a work of art by the mere choice of the artist”
The way in which AI is not producing art is in the fact that while the user may give it a prompt to produce something, the choice of what comes out is made not by a mind with an intent but essentially a random number generator, so the outcome is the choice of no-one.
An artist can still pick from randomly generated images and say “This is it”, but the choice there becomes a question of whether the piece was chosen simply because it was offered by the machine. To make it art, the artist – the person – needs to define the meaning.
In this sense, when a person just picks from the output of an AI generator and says “I like this”, they’re not producing art but Kitsch: something which looks like it has meaning or intent but it’s really void and doesn’t communicate anything. It merely pretends to be art.
For instance, Duchamp taking an ordinary urinal, turning it on its back and signing it “R. Mutt” was basically flipping the bird at the art establishment and saying “Look what I can do – what are you gonna do about it?”.
And that’s what made it art. There was a point in it, why it was chosen. Someone else playing the same game and plopping a piece of old lavatory furnishing on a pedestal would be an imitation without original meaning from the artist, so the trick doesn’t work anymore as-is. At least you have to tell the same joke better if you want to pull it off.
Or, think of James Burke walking down the length of the Saturn V rocket lying on the ground, talking about rockets, and ending his walk and talk with the actual Saturn V launching in the background. One take, perfectly executed, never to be replicated. What’s so impressive about that was that they managed to pull it off just so, and that puts you in awe of the audacity – what a great trick that was! Then you realize you’re looking at the rocket going up, in that state of emotion, and realize that was exactly what they wanted to communicate. Do you see what I see when I look at this rocket going up? Maybe you didn’t get it at the moment, but you felt it all the same.
If it was done using film trickery and clever cuts, or later with computer graphics, it wouldn’t have worked because of the lack of authenticity, even if the intent was there. With AI, everything it makes lacks authenticity because it can randomly generate just about anything, and it lacks intent because it is randomly generated, so you see in it what you want to see and feel what you want to feel by coincidence. The “message” doesn’t come from any artist, but from you, the viewer, which is the point of fake art or Kitsch. In the same sense, people passing off AI generated imagery as art is like selling an empty bottle of expensive wine – if you buy it, you have to supply the wine yourself.
Sounds simple at first, but is it? Aren’t we biological machines too, to a certain extent?
And if not, how exactly do we define the difference here? These things can get philosophical really quickly.
I mean, we could jokingly say that in the future LLMs will do the “liberal arts”, so humans can focus entirely on field work (hard manual labor). ;)
Please see above. A machine doesn’t understand; it cannot create. Try to get an AI to imagine a pink elephant and make an image of that without a pink elephant being in its training set – you can’t.
Hi, no offense, but did you even mentally comprehend what I tried to imply here?
Proving consciousness or a soul is a near-impossible task.
We can’t even do that for ourselves so far.
Yet at the same time, we agree on the general idea that the people around us have consciousness, too.
So from a philosophical, ethical, or moral point of view, there’s the question of whether we shouldn’t speak in favor of the accused when in doubt.
Granting an A.I. rights “when in doubt” is more humane than denying them, after all.
PS: There’s an episode of TNG that mentions the matter.
https://www.youtube.com/watch?v=WkO9yDCW2n4
Voyager had one, too, when the doctor fought for his rights as an author.
https://www.youtube.com/watch?v=m6Bi7tJK-j8
Sure, that’s “just” science fiction. But the fundamental dilemma is the same.
At which point does a “set of algorithms” become a consciousness?
And how does someone figure that out?
And at which stage exactly is young human life sentient?
These are some of the questions facing our society.
Another circumstance is, I think, that LLMs have only just started.
Who knows how they will influence the world in the next few years, after they have passed through a few generations…
I’m a bit worried, to be honest.
It could change the value or whole meaning of work and change human society as such.
Because the current model of capitalism is basically based on needs, goods, and human exploit… err, labor.
Your pink elephant example is badly flawed: try asking someone who’s never seen or heard of an elephant to draw one. They can’t, because it’s not in their ‘training set’ either.
@CJay UK
Indeed, that is an often-given reason for why medieval bestiary and map marginalia are so entertainingly bonkers: a description of the creature as interpreted by the artist, who likely didn’t even get to talk to the person who wrote the descriptions, as the map was likely made from data from many sources, many of whom were already dead…
Dude, according to your definition no one has done anything in the last 80-100 years lol. What makes an artist? Anybody can draw. Do racecar drivers need to run the races instead to earn your merit? Glad you are not in charge of the world lol. It must be exhausting walking everywhere and not touching most of the world around you… Just wow. Now, if you had been all up in arms about people not attributing their work to AI when applicable, I would totally have your back. But in your world people that use word processors are not writers, DJs should be yelling louder and not using antennas, etc. Thanks for that, it was a fun mental exercise :)
then most art does not exist at all because any implement that could be used to create it could probably be considered a machine, such as a pencil, an ink pen, or definitely a camera.
Tell me more about how you make your pencils. And paper.
For purposes of the current discussion, you need not detail how you made the saw blades to fell the trees.
But does coding have to be art, or is it enough for it to be engineering?
Your remark evoked another thought… I wonder… Ultimately, what good is a programming “language” to an AI?
If we allow AI to optimize workflow to its logical conclusion, why wouldn’t it ultimately produce binaries directly from our design prompts, as opposed to source code?
I get it… source code lets you intervene to understand/approve the underlying mechanism that the AI came up with, but it also presumes there is a human programmer interested in reviewing the code.
People are lazy and corporations are greedy, never missing an opportunity to seize on a cost savings by eliminating a human someplace. Put another way, corporations are perfectly content with the production of mediocrity so long as it benefits stock price.
I think we’re looking at a future filled with crapware.
“If we allow AI to optimize workflow to its logical conclusion, why wouldn’t it ultimately produce binaries directly from our design prompts, as opposed to source code?”
Because we ask it to, and because we know it’s flawed, so we want to be able to intervene. I do wonder if one might be able to produce binaries for something like a C64 or Atari VCS, where there are far fewer levels of abstraction between the language and the bare-metal hardware…?
The difference between a programming language and bare-metal instructions is basically just one or two layers of abstraction to make the logic easier for a human. So if the AI can’t write good source code, it almost certainly can’t write good machine code either, as the two are so closely related.
The only way to write great machine code and terrible C/Java/Python/COBOL (etc.) is if the AI can get the logic of the program flawless, knows about all the variations in hardware it would need to write good machine code for the targeted device, and somehow doesn’t know anything at all about how to translate that logic into the more human-readable language.
Had a discussion recently. Basically, what it came down to was that programming as a profession would become more architect than anything else: a higher level, which AI would be more suitable for.
Well, in my university I think it is basically necessary to cheat in some subjects. Although they have systems to stop people from cheating – and I know, because I talked with the guys who set those systems up, and I have used them in some subjects – they don’t even stop people from accessing an LLM webpage from the browser while doing the practical exam. Like, you’d better not get caught, but they don’t even try to block DNS requests if you do it.
If you don’t cheat, the exam becomes really hard to pass in the time you have to do it. I consider myself a good programmer, I like to write VHDL and C, and I understand everything they show us in the university.
But when you start the test and the compiler decides not to work because you set the project up wrongly, or you are not used to using fcking Windows and it starts behaving strangely in the exam, you had better be prepared.
I will admit it: I have uploaded the projects to my GitHub, and downloaded them from the internet in the middle of the exam. They don’t disallow it. But looking at the university’s published statistics, I can see that most people don’t prepare themselves like this for the exam, and then they fail. I have seen people crying in the exam because shit stops fcking working or the keyboard stops writing, and in an hour you barely have time to finish the assignment. I have to say, the exams in the subjects I am talking about are of the “all or nothing” type: your code can be perfect, but if it doesn’t upload to the uC for whatever reason, you will fail the subject and the professors won’t even look at your code.
I guess the strategy they have taken is: if they can’t stop cheaters from cheating, they will let everyone cheat… although it is not allowed, and most people try to play fair.
Many people say that learning BASIC in the ’80s did not help with higher-level languages. It’s as if BASIC corrupted something in the brain so you cannot understand C++, or so it goes. Me included. I was pretty good in Java, though.
I had the interesting experience of starting college later in life right before LLMs exploded, so I got to experience the before and after, having never written a line of code before. I’ve tried ‘vibe coding’ but found it’s like every overhyped LLM trick: it looks kind of cool at first, but it falls apart when you try anything complex. That said, I use LLMs when coding; they handle the routine stuff well enough with quality checks in place. I tell new students that if you know how to ‘code’, LLMs are great; if you don’t, they are a nightmare.
The thing is, most of these students will be fine in industry. I consult with one of the largest companies on earth, and they constantly push their managers to increase AI use and adoption, going as far as setting weekly use quotas. The students who figure out how to meet the end objective with LLMs will pass and eventually become product managers with mediocre engineering skills. It’s up to the schools to make the assessments that ensure engineering skills are being cultivated. Like in your example, students have always and will always flow like water to the state of least effort, using whatever tools get them there. Is any of this good or bad? I have no idea. I’m personally a data engineer who hates LLMs and uses them extensively, which doesn’t make a lick of sense, so I don’t know.
Admittedly I use AI to assist ME in coding. It’s more for taking my previous code and tweaking. I am always guiding the build and NEVER have I had AI code anything that worked without some amount of adjustment.
Will AI coding be obvious? Yes. It does not work well out of the box for me, and unless you’re doing some basic things, it will show itself. For third-party integration it is far from useful without a conductor to guide the ideas and results.
Vibe coding, like Google/StackOverflow reliance, is a gift and a curse. To the newbies it feels like a library of answers; to professionals it is shortcutting and lazy. Unmaintainable code is always the result of this strategy. Vibe coding should be considered plagiarism, much like copying and pasting Wikipedia articles without citation is. The point of coding is not always runnable code, but to demonstrate knowledge and proficiency gained.
IMHO, wrong task.
AI, build me a house for under $200K in the location of my choosing, at the size of my needing. While you’re at it, find me a job as CEO of some kind of large corporation with no responsibility for its actions, or win me a lottery, your choice.
It’s a large language model, not a genie.
Though replacing CEOs with an LLM and some generative deep fakes seems possible short-term, and could be a VC’s wet dream.
And it might well actually be more functional, as the largest body of text about CEO-type folks is going to be about how awful they are, and how they need to stop trying to micromanage the coal-face workers whose jobs they don’t actually understand… and the second largest is likely the notes of the discussions the board and investors have had. But being an LLM, the two don’t actually have to have anything to do with each other, as it doesn’t have that ego or an actual ‘grand design’ it wants to achieve, nor any real logical consistency.
Eliza was a chatbot too, and no psychotherapist… ;)
– That didn’t stop her from doing her duty, though.
Anyway, the current development leads to LLMs becoming a “super app”.
Search engines are already transforming from research tools into service tools.
Rather than just finding information, they will start to provide it.
So you can ask Google to book an airplane flight reservation, for example.
The downside is that the sources (websites etc.) will no longer see clicks/users because of this.
So they may vanish in the process.
We have no idea how big the influence of LLMs is right now, I’m afraid.
The “old” internet as we know it is dying rapidly, and it’s not just a fad.
And none of that equals the reality-bending wish-granting that Samee implied expecting his AI agent to perform.
True vibe coding (adjust the prompt until something works but don’t look at any of the code) is a real problem, but AI-assisted coding is the way of the future. Asking for boilerplate code to access an unfamiliar API, asking for options that might optimize a section, and AI pair programming where the AI suggests some code or double-checks yours for errors are all good uses. Learning to effectively use these techniques should be included in any modern computer degree program.
I’d prefer to look up an example in the manual or an online tutorial. This AI vibe coding will only add to the huge volume of crap code/apps out there. How the blazes can you secure the code when you don’t know what it is doing? This laziness, or lack of ability/knowledge, is unlikely to be coupled with any rigorous testing for security or even robust functionality.
Fie on vibe coding!!!
I’ll believe the AI assist has real merit for that when it is correct and the documentation is poor enough that you can’t find what you need by searching it directly. For me at least, poor documentation was the case some years ago with the ALSA audio stuff in Linux, trying to learn how to fix a machine that has a common sound chip but whose pinouts are wired in hardware to the “wrong” things. IIRC the worst part was the ‘Headphone’ mixer being mapped to the speakers while the autodetect of a plugged-in headset still worked – so it would auto mute/unmute the wrong mixers by default! Trying to figure that out and create the right ‘model’ (I believe the documentation called it) for this particular way of wiring up the chipset was hell, to the point that I don’t think I ever fixed the configuration; I just invoked alsamixer manually every time I needed to. So had the LLM existed I might well have tried it, and if it could summon a correct answer…
But when you need to access an API, the documentation for that API should contain better ‘boilerplate’ code, as it will at least in theory be correct for the version in use, and also have no chance of LLM hallucinations making up garbage…
Using the LLM as a “regular expression for the ignorant” search, I can just about see the value: if you don’t know the right terms to search, even mastery of regular expressions won’t get you to the right answer fast, and without that mastery but knowing the right terms you could have much more searching to do, as the term no doubt turns up in more contexts… But once you have something with documentation, you should be able to search it yourself, even with the dumbest of searches, and get better results than the LLM of today.
I like your well-balanced take on the topic.
Navigating the consequences of easily-available generative AI is a big struggle for most everyone I know in education. Can LLMs be a useful tool for learning in theory, and not just a way to avoid actual learning and work? Yes, in fact. Will they be used as such in practice? Time will tell…
For programming, I see the situation as something akin to the impact of the electronic calculator on mathematics (albeit with a much more complex, sophisticated and ethically fraught calculator). There’s a reason we still insist on teaching basic math by hand in school before we let kids anywhere near a calculator. Once you intuitively understand the fundamentals, then you can learn how to properly use the tool.
So much “vibe coding” is like trusting the answer from the calculator without even being able to tell if the answer is orders of magnitude away from what it should be.
Am I the only one who’s kinda happy that LLMs are making college assignments, and a lot of academic institutions, lose their importance? I loved being in college, loved interacting with professors and other lecturers. What I didn’t like was the pointlessness of a lot of it.
Assignments were rigid and not very fun. They were okay for teaching the subject material, but I would have preferred more open-ended problems. Something like “here’s what you need to do, do it however you feel like”. Projects were always a blast.
I didn’t cheat in college.
I did however use AI to build a server process to fetch data from an API, do some basic manipulation on it and republish it to a Redis data store and an MQTT server so it can be consumed by a pico W and display sports scores for live games based on subscribing to a team.
I’ve built this process multiple times, and each time the AI does a better job off the hop. Yesterday’s results were a 98% solution: functional code without bugs, basic functionality, only missing a few optimizations.
I’m perfectly capable of writing the optimizations myself, but I’m having more fun corralling the AI into giving me perfect results.
You do you, but I’ll be sitting over here getting good results.
And I have the skills to debug or rewrite it as-required.
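For the curious, the shape of that pipeline is roughly the following. This is a minimal sketch rather than the commenter’s actual code, and the endpoint URL, Redis key, and MQTT topic names are all hypothetical:

```python
# Sketch of a fetch -> Redis -> MQTT relay, assuming the requests,
# redis, and paho-mqtt packages. All names here are made up.
import json
import time

import requests
import redis
import paho.mqtt.client as mqtt

API_URL = "https://example.com/scores/live"  # hypothetical endpoint
TEAM = "example-team"

store = redis.Redis(host="localhost", port=6379)
client = mqtt.Client()  # paho-mqtt 1.x style; 2.x wants mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("localhost", 1883)
client.loop_start()

while True:
    scores = requests.get(API_URL, timeout=10).json()
    game = scores.get(TEAM)  # the "basic manipulation" step
    if game:
        payload = json.dumps(game)
        store.set(f"scores:{TEAM}", payload)       # republish to the data store
        client.publish(f"scores/{TEAM}", payload)  # and to MQTT subscribers
    time.sleep(30)  # poll interval
```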
Hi. I think you’ve captured a sentiment that many people share but have a hard time putting words to.
A question I sit with is how to teach this skill. Is it a natural side effect of years of hand craft before the advent of AI? Or is it teachable through careful introduction of advanced LLM tooling after demonstration of competency in foundational modules?
I use these tools in the same way, to implement processes that I have already been building for years. I find this kind of work to be rewarding and productive, but that is because I value building and iterating over the actual task of writing code.
However, sometimes I use it for novel tasks, and often it falls flat due to my inability to properly guide it, for lack of expertise in an area I haven’t previously studied.
I am both hopeful and cautious, depending on the day.
+1 and then some to Mark!!
I’m less worried about AI grads because IMO the skill level of college grads with engineering degrees has been pretty abysmal for decades, at least here in the US.
From the late 1990’s through the early 2000’s I was interviewing 10-20 EE job candidates a year. My part in the interviews was to evaluate their technical skills. I had to make my quiz easier and easier until I started with a simple resistor divider, 5V, 2k, 3k, GND, “what is the middle voltage”. Probably half of EE graduates said “2.5V” and when I said no, look again, many were not able to figure it out and started guessing. There was no understanding of the most basic principals.
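For anyone who wants to check the arithmetic, a quick worked version (a sketch assuming the 2k resistor on the 5V side and the 3k to ground, as described):

```python
# Voltage divider: 5 V across 2k then 3k to GND.
# The middle node sits at Vout = Vin * R2 / (R1 + R2).
vin, r1, r2 = 5.0, 2000.0, 3000.0
vout = vin * r2 / (r1 + r2)
print(vout)  # 3.0 V, not 2.5 V
```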
I think you mean “principles”.
🤣
Degree or not, nobody’s getting a CS job anymore. That’s why I just got a CDL.
That’s good for the next 0-10 years, but by 2035 most commercial truck driving will be automated. There will still be jobs in the industry, but they won’t be the truck-driving jobs you just trained for.
That said, if you keep your truck-related skills up-to-date you should be fine. Same goes for programmers and other CS careers: If we keep our skills up-to-date with what the market will pay us to do, we will be fine.
I just want to circle back to this in the original post
So… Your university had ten lectures and ten lab sessions in a given course, and they couldn’t be bothered to coordinate lab#1 with lecture #1, etc?
This went on for years?
Everyone just shrugged and thought it was normal?
Not trying to cast shade here, but it kinda seems your university needs to take Course Design 101.
Yeah, the problem isn’t that she (“they” for those of you in Rio Linda) cheated, it’s that the course sucked and no one spoke up for years. If that course were a vehicle, it would not pass an MoT.
I had to look up vibe coding, and then realized I have always done it with the same Java code (just for fun: someone in a local RBTC lecture mentioned how he created a timer using ChatGPT, and so I spent the weekend doing that, including getting instructions on how to set up Java on my computer).
“AI, Make Me A Degree Certificate”
I envision a robot smashing me in a press between two sheets of absorbent media, hanging the flattened me on a clothesline until I’ve desiccated, shearing me into a nice 8-1/2 x 11 rectangle, running me through a laser printer and… for good measure, embossing a gold star at the lower right.
“There,” says the AI, “I made you a Degree Certificate.”
OK, you’re a banana
The people getting degrees by cheating with AI will end up with degrees for jobs that can be replaced by AI. They’ve basically figured out how to use cars to become kings of the buggy whip market.
AI is in the process of devaluing the ‘degree as a nonspecific employment certificate’ industry. There’s a category of students, teachers, curricula, certification bureaucracy, and HR departments that could be replaced by AI slop without anyone noticing a difference. They represent a job market that will disappear, just like the rooms full of people with adding machines from the 1950s.
The people who think they can use expert systems with LLM front ends to compensate for lack of skill and experience are missing the fact that ‘lack of skill and experience’ is an easy commodity to replace. The people who think expert systems with LLM front ends can replace skill and experience have missed the last 45 years of expert systems generating complex alternatives rather than simple answers.
Case in point: three pieces of code do the same thing. One uses extra memory to minimize runtime, the second uses extra processing to minimize memory footprint, the third is bone-stupid simple. Which one do you choose? Does your answer change if the code is called from a loop that busy-waits to the end of a 500ms interval? What if it needs to run every 10ms, sending and receiving 64-byte packets over a 115200 baud serial connection?
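That last constraint is worth working through, since the arithmetic is the part a naive answer skips (a quick check, assuming 8N1 framing, i.e. ten bits on the wire per byte):

```python
# 64-byte packet over 115200 baud, 8N1: start + 8 data + stop = 10 bits per byte.
bits_per_byte = 10
packet_ms = 64 * bits_per_byte / 115200 * 1000
print(f"{packet_ms:.2f} ms one way")        # ~5.56 ms
print(f"{2 * packet_ms:.2f} ms both ways")  # ~11.11 ms: it doesn't fit in 10 ms
```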
Meanwhile, the vibe coders are asking Reddit “why doesn’t this VAX 11/780 code run on my ESP32?”
These days, where memory and processing power are typically cheap, I default to “bone-stupid simple to maintain.”
That means if I (or an AI) can make the code 5 lines that even a typical entry-level programmer could understand by looking at it, I’ll generally prefer that to 50 lines of moderately complex code that relies on a couple of not-obvious-to-the-newbie idioms, even if the 5 lines are really library calls that are 1% as space-efficient and 1% as time-efficient as the 50-line version.
But if memory, execution time, power consumption, or any other factor is anywhere close to being “not free” or if this code will run so much that the wasted time/energy/memory add up to something even approaching “I need to care about resources” then I’ll scrap the 5-line version and optimize for something other than understandability/maintainability.
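To make that tradeoff concrete, here is a hypothetical pair in Python: both count word frequencies, the first optimized for readability, the second for memory footprint on huge inputs:

```python
from collections import Counter

# The "bone-stupid simple" version: readable at a glance,
# but it loads the whole file into memory at once.
def word_counts_simple(path):
    with open(path) as f:
        return Counter(f.read().split())

# The "I need to care about resources" version: streams line by line,
# so a multi-gigabyte file never has to fit in memory.
def word_counts_streaming(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(line.split())
    return counts
```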
“Cheating” is just another word for “technology”.
If you cut wood by hand, a table saw is cheating.
If you sculpt pottery by hand, slip-casting is cheating.
If you farm wheat by hand, using a combine is cheating.
How about we start focusing on results, instead of arbitrary purity tests?
I’d have to disagree. If you cut wood, be it by axe, saw, table saw, or CNC milling machine, you understand the method, know how it works, and know the quality of the result you can attain. The only difference is the amount of effort required to get that result with that tool. So, knowing this and the tool you have at hand, you’d design in a way you can actually make, and all the end results are actually made of wood.
Whereas these LLM cheaters are taking a product nobody really understands and creating something that resembles what they intended just enough to function, with no idea of why that way works or whether something else would be better. You wanted something made of wood, but instead you have something that loosely resembles wood; perhaps good enough for the moment, but not the same at all – like the cheapest end of the IKEA lineup, where most of the ‘wood’ is cardboard hexagons with a thin veneer and a few battens in it for the fixings and edges, vs an actual wooden bit of furniture. (Nothing actually wrong with that sort of IKEA furniture, as long as you know what you are buying, but in this analogy it wasn’t what you set out to create at all: you got a very pale imitation of the furniture that would last a lifetime or four, and you didn’t know any better.)
How you got your piece of paper, and whether you really earned it, may not matter much, because in the tech industry entry-level hiring has collapsed, with 2025 seeing a 73% decrease in hiring rates, according to the Ravio 2025 Tech Job Market Report. If you want a job, then get a team together for a startup, and if you are lucky the lot of you will get a big bonus and be hired when a larger company buys the controlling share of your product. If you fail, try again, or go and learn a trade where you can earn good money, if you don’t mind hard work.
To take it out of the context of academia: somebody at the residential low-voltage (alarms, cameras, A/V, automation, etc.) company I work for used AI to generate the SOP for our network installs… blocking out DHCP reservations is a fine idea. It looks good on the surface. But then you start asking questions like “why would we ever need 10 routers?” (.1 to .10 is the router block, .11 to .20 is switches, .21 to .30 is for APs, etc.) …a router per switch and a switch per AP?
In fairness, humans do like round numbers: it’s much easier to remember that the switches start at a logical, predictable value, so you can find the switch you need while only having to remember the one or two significant digits that ID the particular switch, plus the rules that define the device blocks.
As long as you are not going to hit the allocation limits, or have some other reason like integrating with the existing infrastructure, I would probably do that sort of setup too. I might not give every type of device its own block; you can have .1-.20, or more likely count backwards from the other end for the misc devices you don’t expect to need loads of (hopefully that lets the misc device group get as big as it needs to be, while you can still add the next logical block for other devices you do have multiples of without clashing).
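For illustration, the sort of plan being described might look like this; the ranges are hypothetical, sized for a house rather than the AI’s router-per-switch fantasy:

```python
# Hypothetical address plan for a 192.168.1.0/24 residential install.
# Reserved gear sits low, misc statics count back from the top,
# and everything else lands in the DHCP pool in the middle.
ADDRESS_PLAN = {
    "router":      (1, 1),       # one gateway, not ten
    "switches":    (2, 9),
    "aps":         (10, 19),
    "dhcp_pool":   (20, 219),
    "misc_static": (220, 254),   # counted back from the top
}

def block_for(host_octet):
    """Return which block a .N address falls into."""
    for name, (lo, hi) in ADDRESS_PLAN.items():
        if lo <= host_octet <= hi:
            return name
    return "unallocated"

print(block_for(7))    # "switches"
print(block_for(42))   # "dhcp_pool"
```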