ChatGPT & Me. ChatGPT Is Me!

For a while now part of my email signature has been a quote from a Hackaday commenter insinuating that an article I wrote was created by a “Dumb AI”. You have my sincerest promise that I am a humble meatbag scribe just like the rest of you, indeed one currently nursing a sore shoulder due to a sporting injury, so I found the comment funny in a way its writer probably didn’t intend. Like many in tech, I maintain a skepticism about the future role of large-language-model generative AI, and have resisted the urge to drink the Kool-Aid you will see liberally flowing at the moment.

Hackaday Is Part Of The Machine

As you’ll no doubt be aware, these large language models work by gathering a vast corpus of text, and doing their computational tricks to generate their output by inferring from that data. They can thus create an artwork in the style of a painter who receives no reward for the image, or a book in the voice of an author who may be struggling to make ends meet. From the viewpoint of content creators and intellectual property owners, it’s theft on a grand scale, and you’ll find plenty of legal battles seeking to establish the boundaries of the field.

Anyway, once an LLM has enough text from a particular source, it can do a pretty good job of writing in that style. ChatGPT for example has doubtless crawled the whole of Hackaday, and since I’ve written thousands of articles in my nearly a decade here, it’s got a significant corpus of my work. Could it write in my style? As it turns out, yes it can, but not exactly. I set out to test its forging skill.

In Which I Meet My Digital Impostor

A couple of weeks ago I covered the news of the impending re-entry of a failed Soviet Venus mission stuck in orbit since 1972. It’s a fascinating story because the craft may survive the experience, and it’s possible that by the time you read this it will have been retrieved. I asked ChatGPT to provide me with a 200-word article on the story “in the voice of the Hackaday writer Jenny List”. Here’s what it came up with:

In the early days of the Space Race, the Soviet Union launched a bold bid to reach Venus: the Venera missions. Among them was Kosmos 482, a 1972 mission that never made it past Earth’s gravity. Now, over five decades later, its descent craft is finally coming home—with a bang.

Kosmos 482 was intended to deliver a lander to Venus, but due to a malfunction in its upper stage, it was stranded in a highly elliptical orbit around Earth. Encased in a semi-spherical titanium shell, the descent craft—designed to survive the brutal atmosphere of Venus—has been orbiting quietly, steadily losing altitude. By early May 2025, that orbit had decayed to a precarious 232 x 136 km, setting the stage for an uncontrolled reentry around May 10th.

Unlike typical space debris, Kosmos 482’s reentry carries a bit more drama. Built to endure Venus’s hellish surface, the lander is expected to survive Earth’s atmospheric plunge mostly intact. Impact predictions estimate it will hit the ground at about 65-70 m/s, roughly the speed of a meteorite, packing the punch of a 500 kg hunk of Soviet engineering.

With an orbital inclination of 51.95 degrees, it could come down anywhere between 52°N and 52°S, adding an element of unpredictability. One thing is certain: after 53 years in limbo, Kosmos 482 is finally coming back to Earth—whether we’re ready or not.
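
Should you wish to repeat the experiment, the request itself is only a few lines of Python against the OpenAI API. What follows is a minimal sketch rather than a record of exactly what I did, and the model name is a placeholder for whichever one you have access to:

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any chat-capable model will do
    messages=[
        {
            "role": "user",
            "content": (
                "Write a 200 word article on the impending re-entry of the "
                "failed Soviet Venus probe Kosmos 482, in the voice of the "
                "Hackaday writer Jenny List."
            ),
        }
    ],
)

print(response.choices[0].message.content)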

Reading it is a fascinating experience, because superficially it feels like a Hackaday piece. It has a vague approximation of my style, with phrases such as “packing the punch of a 500 kg hunk of Soviet engineering”. It’s casual in its language, and it’s captured my wordiness, in contrast to the more terse style it produces when, for example, I ask it to do the same thing in the voice of my colleague Al Williams. It fails, though, in several senses: I feel it doesn’t capture my sentence style or the very British way I use punctuation, it relies on far more figures than I would put in a piece, and perhaps most importantly it has no inference. It relies only on the facts it can find, indeed that abundance of figures is a sign of it overusing them, and it doesn’t go further in commenting on any likely outcomes. In my human-written version of the same piece I finished up by speculating on who would own the craft if retrieved, and anything of that nature is absent from its output.

Don’t Worry, Humans Still Needed

So it’s clear from the above that while it can write something which is superficially my coverage of the same story, in my voice, it still fails at the task in hand. Where we would try to give a little introduction, background, and comment to the story, it instead presents only a summary of the facts it has found. The fact that it can’t infer beyond the story is reassuring, because it means we meat-based Hackaday scribes still have the edge. There are many people who will tell you to look for certain words as clues to AI-written text, but the truth is much simpler. Look for the human touch.

49 thoughts on “ChatGPT & Me. ChatGPT Is Me!”

    1. The use of /s pre-dates Reddit quite a bit, you know.

      And since I am on the subject: gg meant ‘good going’ and then became ‘good game’, likely because gamers were ignorant of the original meaning. Similar to how Facebook moms take lol to be lots-of-love rather than laughing-out-loud.

  1. But how does the process work?

    It began, as these things often do, with a prompt. A user asked ChatGPT to write an article in Jenny’s voice. The AI, ever obliging, paused—ethics, impersonation, yada yada—and then pivoted. Instead of pretending to be Jenny, it wrote about how it might write like Jenny. Meta enough for you?

    🛠️ Reverse-Engineering a Writing Style
    To emulate Jenny’s tone, ChatGPT had to do what any good hacker would: take it apart and see what makes it tick.

    Curiosity-Driven Hooks: Jenny doesn’t just report on projects—she wonders about them. So the AI starts with a question or a quirk. Something like:

    “It’s not every day you see a Raspberry Pi driving a CRT from the Cold War, but here we are.”

    Technical but Not Arrogant: She explains things without condescension. ChatGPT mimics this by threading just enough detail to keep the engineers nodding, without losing the tinkerers.

    Historical Context: Jenny loves a good backstory. So the AI digs into the archives, pulling out references to 8-bit CPUs, obscure standards, or the golden age of RS-232.

    Celebrating the Hack: Whether it’s a kludged-together keyboard interface or a 3D-printed gear that almost meshes, Jenny’s writing always honors the spirit of the build. ChatGPT tries to do the same.

    🧪 The Test Article
    The AI then generated a sample article about a Raspberry Pi-powered retro terminal. It had all the hallmarks: a CRT rescued from a skip, a Pi Zero 2 W jammed inside, and a keyboard matrix reverse-engineered with a logic analyzer. It even included a line about “a wiring diagram that looks like spaghetti and functions like art.” Jenny might not have written it—but she might’ve smiled reading it.

    🧬 Why This Matters
    This isn’t just about style. It’s about how we preserve and celebrate the hacker ethos in a world increasingly run by black-box systems. Jenny’s writing reminds us that understanding matters, that curiosity is a virtue, and that sometimes the best projects are the ones held together with zip ties and hope.

    If an AI can learn to echo that spirit—even imperfectly—it’s a sign that the culture of hacking is alive, well, and maybe even a little contagious.

    1. Except, that’s not what the AS (Artificial Stupid) did.

      It slurped up a lot of statistics on Jenny’s articles, munged them with articles on the satellite, and spat out a string of words ranked by how probable they were given the input prompt.

      Ta-da!

        1. Nope
          No way it wasn’t, unless someone is trying very hard to write like ChatGPT.

          The emojis at the start of a list of mini sentence-paragraphs seem to be somewhat of a pattern when you ask ChatGPT to explain something like that.

          1. It literally says “with help from CoPilot” in the name.
            Are you guys just AI impersonating anti-AI commenters?
            Though I’d think the AI would take into account the name if set up properly.

  2. ” perhaps most importantly it has no inference”

    It’s perfectly capable of inferring things. It can be difficult to get it to express what you want, particularly if you’re vague in your ask. You went looking for things you didn’t ask of it, however.

    1. It is fundamentally incapable of inference, because it does not analyze the data it has – it merely combines it according to probability.

      Any inference that ChatGPT makes has to already exist in some form in its training set. In other words, enough people have had to have said the same thing, made the same inference in different ways, so it would have a fighting chance of appearing in the output.

      1. That hasn’t been true of current AI models for over a year. It isn’t 2020 anymore and modern agentic AI is quite good at pulling together new information from current sources and making inferences.

        I recommend doing a bit of research for yourself instead of spouting YouTube summaries with no comprehension of your own. You can build your own RAG in a day starting from scratch, if you already know how to program. I did about a year ago, and since then it has grown into a self-hosted AI assistant that helps manage my personal knowledgebase and assists with my project management and debugging. I’m currently working on a robot that it can control and use to explore the house independently.
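
        To make that concrete: the retrieval half of a RAG really is only a few lines. A minimal sketch, assuming the sentence-transformers package, with toy placeholder documents:

        from sentence_transformers import SentenceTransformer
        import numpy as np

        model = SentenceTransformer("all-MiniLM-L6-v2")

        documents = [
            "Kosmos 482 was a failed 1972 Soviet Venus mission.",
            "Its descent craft was built to survive the Venusian atmosphere.",
            "Hackaday covers hardware hacking and retro computing.",
        ]

        # Embed the corpus once; normalised vectors turn cosine
        # similarity into a plain dot product.
        doc_vecs = model.encode(documents, normalize_embeddings=True)

        def retrieve(query, k=2):
            """Return the k documents most similar to the query."""
            q_vec = model.encode([query], normalize_embeddings=True)[0]
            scores = doc_vecs @ q_vec
            return [documents[i] for i in np.argsort(scores)[::-1][:k]]

        # The retrieved snippets get pasted into the LLM prompt as context.
        print(retrieve("What happened to Kosmos 482?"))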

    2. Of course ChatGPT can generate a sentence that nobody has ever said before – that’s easy – and therefore it can theoretically produce a sentence of inference that is produced by ChatGPT itself.

      But, that’s not because it took in knowledge and made conclusions out of it, but because of random dumb luck in combining words. That’s infinite monkeys banging on typewriters kind of stuff.

  3. As it turns out, yes it can, but not exactly

    The reason is that the LLM has to mix sources. It cannot exactly pull only the style from one example and only the contents from another, because it’s based on summing Markov chains. It’s all just data to the model; it doesn’t actually know the difference between content and style, so it ends up mixing both. LLMs and other generative AI tend to “regress towards the mean” in this manner: the result tends to resemble the average of the training set, or of the subsets that were selected by the prompt.

    So if you ask it to write a piece of text in person A’s voice, but replace the contents with something that person B said, the resulting style will be something in between, unless it can find pieces of text where A said the same thing as B, in which case it can emphasize A’s style instead of B’s. For other parts, it will lean towards how B said it.

    Since the LLM isn’t pulling data exclusively from something that Jenny wrote, but it includes various other pieces of text as well that relate to the matter, or which happen to be included by accident, the sum probability that it follows is never quite like the original. It’s like Jenny, but more average.
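
    You can see that averaging effect with a toy bigram Markov model: pool two sources, and the generated text wanders between them. A small illustrative sketch, with the two corpora as made-up stand-ins:

    import random
    from collections import defaultdict

    source_a = "the probe fell to earth and the probe survived the fall".split()
    source_b = "the lander was built for venus and the lander endured the heat".split()

    # Build one bigram transition table from the pooled corpus. Once
    # pooled, the model has no idea which source a transition came from.
    transitions = defaultdict(list)
    for corpus in (source_a, source_b):
        for w1, w2 in zip(corpus, corpus[1:]):
            transitions[w1].append(w2)

    def generate(word, length=10):
        out = [word]
        for _ in range(length):
            if word not in transitions:
                break
            word = random.choice(transitions[word])  # sample by observed frequency
            out.append(word)
        return " ".join(out)

    # Output mixes both sources, e.g. "the probe fell to earth and the lander endured"
    print(generate("the"))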

  4. Jenny is whistling past the graveyard here. ChatGPT didn’t speculate on who would own the fallen satellite because she didn’t ask it to. It would be perfectly capable of doing so. I use ChatGPT quite a lot … it has LOTS of flaws and is often just plain wrong. But it keeps getting better and better, and if I ask it to generate a Hackaday article on a random technical subject based upon something it found elsewhere (which is EXACTLY what HaD authors do) it is going to do a pretty good job of it.

    I recently asked ChatGPT if it thought AI would become sentient with time and scale. It replied that sentience doesn’t even have an agreed definition yet, and that there is currently no proof that scale alone would get the job done. It even suggested that it’s possible true sentience requires some sort of biological element that hardware will never replicate (how’s that for speculation, Jenny?). The most interesting part of ChatGPT’s reply to me, though, was when it said that while AI might never become sentient, it might be the case that AI gets to the point where humans can’t tell the difference.

      1. Except we link to real projects, and that whole thing is a confabulation. Hornswoggle. Hogwash. Balderdash. BS. It only makes sense on the bare surface level.

        Could an LLM be constantly re-trained to get you snapshots of the real web? Could you prompt it to give you real material with actual links only? Probably! I don’t know that it would be cost-effective compared to us.

          Say it was, though. Once all the copywriters are replaced with LLMs training on material produced by LLMs, what are you going to get? GIGO and Dead Internet Theory would combine to make some very bizarre output, I’m sure.

        1. You don’t get it. LLM (likely) won’t replace true content generation … but it will for sure replace rehashed journalism like HaD and similar sites that are basically aggregators and not generators.

            1. What don’t I get? Truly original content is pretty rare. Most things on the internet are derivative and rehashed; arguably Hackaday puts more effort in than a lot of sites. Certainly the ratio of rehash to OC is more than enough to poison the well for LLM training if the training algorithms continue to pull from the whole ‘net. If LLMs make “rehashed journalism” faster and easier, it’s only going to proliferate, too, making the GIGO problem worse.

            Plus, people already are trying to use LLMs to replace content generation; it’s a real problem on sites like Royal Road. We’ve covered plenty of vibe-coded projects, and I’m sure I saw a tool for LLM-generated documentation. All that becomes grist for the mill, too, as the web gets fed back in as training data. GIGO.

            Not today, not tomorrow, but maybe someday.

    1. ChatGPT didn’t speculate on who would own the fallen satellite because she didn’t ask it to. It would be perfectly capable of doing so.

      The task was to generate an article in their style, and that would include such speculations if ChatGPT had actually picked up how Jenny tends to write articles. Style extends to the choice of contents.

    2. But it keeps getting better and better

      The irony is, the way it gets better is by people writing code to catch and handle exceptions and special cases, making it a form of human intelligence, otherwise known as a Mechanical Turk.

      1. The irony, to me personally, is that AI creations have become prevalent enough to establish an expectation. Its basis in statistical probability is then ingested by the masses who go on to create content based on the content that is based on our content. Unsurprisingly, like you said, this tool is limited by its creators and merely perpetuates the humanity designing (and ingesting) it.

        1. The masses were already making that content, the machine is only copying itself more and more as ads and individuals pump out millions of creditless derivatives. It doesn’t inspire people to create.

        2. I think those in the world of LLM AI are aware that with the simple LLMs they started with you run into a dead end, and that they are moving beyond that simple system to try to get to real AI.
          Although I don’t think they’ll get to conscious ones before getting over certain hurdles we put up in ourselves. It depends on some random person with just the right gift and insight to figure it out, I expect, more so than any mere money and time and political hysteria spent on it.

          I saw in the news that researchers discovered that AI code often has dependencies that are made up and do not exist. And in the comments people reiterated that LLMs are ‘just statistics’ and got lots of upvotes, ignoring the simple logic that statistics clash with the concept of frequent made-up, non-existent dependencies.
          I mean, where would it get that from? What kind of statistical analysis of code would end up with tons of non-existent dependencies? (Unless the training data was deliberately and extensively poisoned, I suppose.)

    3. It even suggested that it’s possible true sentience requires some sort of biological element that hardware will never replicate (how’s that for speculation, Jenny?).

      That’s just short of quoting John Searle. ChatGPT doesn’t pull the stuff it says out of thin air, or make any sort of inferences or guesses based on data. If it says something might be this or that, it is saying so because a lot of people have said the same thing.

      1. I think I understand your meaning, but it is not true that LLMs only regurgitate complete fragments that existed in the training input. You should do a machine-learning exercise from the net where you set up a handwritten-digit recognizer by training a NN on MNIST. It’s substantially the same process as LLMs, and it helps to understand how all this works.
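
        The whole exercise is smaller than people expect. A minimal sketch, assuming PyTorch and torchvision: one training epoch, no validation, just enough to watch the process happen:

        import torch
        from torch import nn
        from torch.utils.data import DataLoader
        from torchvision import datasets, transforms

        train_data = datasets.MNIST(root="data", train=True, download=True,
                                    transform=transforms.ToTensor())
        loader = DataLoader(train_data, batch_size=64, shuffle=True)

        # A small fully connected net: 28x28 pixels in, 10 digit classes out.
        model = nn.Sequential(nn.Flatten(),
                              nn.Linear(28 * 28, 128),
                              nn.ReLU(),
                              nn.Linear(128, 10))
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        for images, labels in loader:  # one epoch over the training set
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

        print(f"final batch loss: {loss.item():.3f}")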

  5. Jenny, thank you so much for all you do, especially doing my code homework for me. Your services are invaluable (especially if you were to ask ChatGPT for your fair share of their profiteering).

  6. I feel you about the commenters. They really bring this site down to a lower level than it should be at. It’s a shame really, because it could be fixed, and a more friendly, non-combative, interesting discussion could take place.

  7. As I have often joked, if AI had to do surgery on someone’s tennis elbow, it would first remove the heart so the patient didn’t die….
    It definitely gets better all the time, and I am constantly in awe of its ability to help me with schematics or bits of code, BUT it tends to get an idée fixe, and god help you if it wants to predict code from a different version of whatever SDK/IDE/library than the one you are working with. You could hit it with a stick and it would still try the same version over and over until it is either debugged or man or machine gives up, lol.

    All in all it is way more efficient than posting to a forum, waiting days or weeks for somebody to gatekeep or berate you, and still not coming any closer to an answer. AI will often overexplain things in a good way, so I will spot something to focus on, or a parameter or function I had no clue even existed (who can know them all, lol?), so I even learn something, which is cool.

    The absolute best part though is it is not a dick about things, and is even pretty good about keeping me upbeat when a project drags on. It will list what we have accomplished so far so the finish line does not seem so far away, and it acknowledges project fatigue and frustration and the need for mental health breaks, etc. If I had this as a young teen there is no telling how many magnitudes further along I would be than I am with things, but that is life. Glad I got to ride a bike to return a VHS tape, tune the antenna to get cartoons on a Saturday morning, and play in a creek. It is nice though, as long as you keep a side eye on what it is trying to do.

    1. “The absolute best part though is it is not a dick about things”
      Suddenly I understand why people use the AI for possibly-dodgy advice. Better that than go to human netizens and get possibly-dodgy advice with heaping sides of condescension and attitude.

        I’m an old creature of habit, so I am used to just clicking on the comments for the real story and links, but man, it just doesn’t happen so much these days. There isn’t dialogue so much as 1000 monologues, and on the rare occasion one does occur, it is a fractal of fragile egos and people mad at themselves. I mean, I can tell AI to give me snarky answers and it will, and it even gave me a humorous nickname after a particularly long debug session, but man, some positivity really does go a long way. It was pretty good at getting me to give it a third final try and push through. Funny, though, you can tell when they tweak the responses. Some days it is a complete sycophant and others it can be like a depressed sibling you have to wring answers from. But yes, dodgy advice either way, but a much gentler douche curve, which is something I should teach it.
        Thomas Douche (pronounced dooooooshay) and his academic opus from 1923 in the Journalia of Academica Supplemente entitled “Derivations of Orbitational Toxicity in an Urban Metropolis”. His combination of the previous theories of his contemporaries was heralded at the time as one of the greatest contributions since Edison’s first Apple Computer named the LISA, which was first thought to reference its new Large Integer Scalar Array microprocessor and not his AKA Triple Award Winning Cocker Spaniel of the same name. Raised in a sharecropper home in Alabama, Douche attended Terrence R Matthews Elementary School until the age of 8, when he started working for the county clearing fire breaks for up to 19 hours per day. By the age of 11 he had risen to a managerial role and started to notice the sharp discrepancy in worker attitude within the organization. He was awarded the Taft Award for Junior Excellence and a scholarship to Matthew R Terry Middle School where he again excelled at basic math and contractions. After the war, he briefly worked at Sun Pak Labs as a research technician for storage of frozen seafood. He was present when someone else accidentally discovered Flash Freezing technology before promptly falling into the machine and freezing himself and the very secret to unthaw him at full, fresh, flavor (widely referred to as “The Three Fs of Flash Freezing Food From Full Fresh Flavor”. While delivering the eulogy, Douche was overcome with grief and in a moment later described as “desperate inspiration” proclaimed he would beat Lindberg’s transatlantic record in a Ford Model T instead of a plane. He would be posthumously honored with a Florida tag on the news conglomerate site, Fark.com. While recovering in the hospital, Douche noticed how relaxed the nursing staff was and how his pain medication “didn’t do shit” and began studying the organizational loopholes which later culminated in his first academic article entitled “A Treatise on the Hospital Horse Hustle and Pain Management” which received little fanfare until seven months after its release, when Teddy Roosevelt was so impressed he was briefly employed as one of the Trust Busters, until he found organizational flaws and they were disbanded as a governmental league and certainly not recognized as an official New England Revenue Prefecture collector nor distributor. A mere ten years after the invention of the carburetor for gasoline combustion engines, Douche found himself yet again thrust into the public spotlight when he saved three separate infants from a horrific trampling when elephants defied their captors whips and rifles during the Thanksgiving Day Parade in Watersville. With half of the town wiped out trampling each other to escape the elephants, Douche set off with the now orphans to stead a farm in the foothills of Missouri for the next two years before attending the newly established Calabash Community College and enjoying their cafeteria during a luncheon for free. His next paper saw little success as well until again it was noticed by a contemporary and lauded throughout the Halls of Academia, where all papers go to become articles. He was initially paid a meagerly sum of $3.54 for his work until he pointed out the Douche Curve. The President Superlative of the Halls of Academia immediately saw how he “was being a Douche” as per the graph displayed and immediately embiggened his offer to a cromulent value at the time. 
Though neither man ever admitted it, they were deeply jealous of each other and would lampoon the other’s cause noble, often in public. In his later years, Douche would go on to publish refinements to his Douche Curve, reflecting quarterly profit differences that Sears and Roebuck sustained during their wartime efforts. In his final years, Douche actually published an article discounting his previous efforts as “pure peacock ham” noting there was an error in the original Douche Curve formula in which he forgot to subtract one at the end. He was later awarded an Emeritus Status Quo at Oscar Meyer University and won several conservatorships posthumously. His lectures were often described as “fiery and energetic full of pith and vinegar” and “orange like the…fruit?” even though he was modestly dressed at most functions. Ever the scholar, he continued his studies until he became overcome with exhaustion and prescribed laudanum for his rapid hair loss. In his final years he became overwrought with grief at the very mankind he studied and their various inequalities in organizations until he famously proclaimed, “The differences of Man are so great and incongruous that it would take a canoe to cross the abyss of the Douche Curve…” After his passing due to a collapsed lung during a horse attack, Douche was buried without honors on the southwest lawn of the Hall of Academia next to the horse that mortally wounded him. The three orphans he left in the foothills of Missouri died about four hours after he left according to a coroner’s report and further accusations by the Sheriff of the County of possible foul play. Douche had always laughed these charges off in public, calling them “pure speculation by a ham-fisted peacock” before citing Draconian Extradition Laws and the Mueller Act, which made no mention of them. He was later famous for the proclamation “Cool beans”, a slight jab at Roosevelt’s failed Terrence Rooster Act. Congress would later condemn this phrasing and abolish the Terrence Rooster Act in error, leading to the creation of the TRA2 and some members of congress to briefly defect but return after an overly fatty lunch that only complicated many of their conditions dealing with the common ailment of gout. Though broad in stature and often considered baseless in merit, the TRA2 was the document de facto for the next three decades until it was repealed in 1979. In his famous “Thousand Drops of Rain” the Vice President often referenced the Douche Curve and hurled slurs at his opponents, calling them “zebras without their stripes” a rural slang for a vanilla malt. Even with this outpouring of support, the Vice President never made the Douche Curve an official policy neither at the national level nor abroad even when requested by the Archduke Geribald. This constant friction would echo for decades after their passing and long after a newfound class warfare that could be profiteered and proliferated into a lifelong battle with cancer. Many of the northern peninsula’s bird population have been featured as an Organization of Notoriety Recipient 2003 thanks to the continued funding of the Douche Curve’s humble origins.

  8. Some day, deep in the LLM black box as it sits in a data center archive, will be a set of neurons trained on your writings. Future specialists will use this data to recreate a low-resolution facsimile of your consciousness and pseudo-resurrect you.

  9. It’s funny, a few here mentioning their distaste for the way ChatGPT uses probability to construct its output, and yet at the base of all that exists, everything is probability.

    1. Bingo.

      To expand on your comment:
      ChatGPT, like Soylent Green, is people in the sense that it ingests the components of people to which it has access – their existing data, in this case – and restructures it into something conveniently consumable, if not particularly good or healthy.

      See:
      “… it’s people. Soylent Green is made out of people. They’re making our food out of people. Next thing they’ll be breeding us like cattle for food.”

      Turns out the movie was prophecy, they were just a little bit off on the timing. And we’re the lottery winners who get to experience the first wave. How…lucky.

    2. Wait, if Soylent Green is people, and corporations are people, does that mean we should eat corporations?
      But wait a second, corporations DO eat corporations…
      But uhm, eating people is not legal, even if they are dead, so corporations should be disallowed from eating corporations, right? The law is the law.

  10. But because the LLM crawls old articles, where the commenters have already fixed all the typos, we can recognize genuine articles by counting the typo index of the post as soon as they are published!
