ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations


In the communications surrounding LLMs and popular interfaces like ChatGPT, the term ‘hallucination’ is often used to refer to false statements made in the output of these models. This implies that there is some coherency and an attempt by the LLM to be cognizant of the truth, while also suffering moments of (mild) insanity. The LLM is thus effectively treated like a young child or a person suffering from a disorder like Alzheimer’s, giving it agency in the process. That this is utter nonsense and patently incorrect is the subject of a treatise by [Michael Townsen Hicks] and colleagues, as published in Ethics and Information Technology.

Much of the distinction lies in the difference between a lie and bullshit, as eloquently described in [Harry G. Frankfurt]’s 1986 essay and 2005 book On Bullshit. Whereas a lie is intended to deceive and cover up the truth, bullshitting is done with no regard for, or connection with, the truth. Bullshit is intended only to serve the immediate situation, reminiscent of the worst of sound-bite culture.

When we consider the way that LLMs work, with the input query used to find a probability fit across the weighted nodes that make up the model’s vector space, we can see that the generated output is effectively that of an oversized word-prediction algorithm. This precludes any possibility of intelligence, and thus of cognitive awareness of ‘truth’. It means that even if there is no intent behind the LLM, it is still bullshitting, if only the soft (unintentional) kind. Once we take into account the agency and intentions of those who created the LLM, trained it, and built the interface (like ChatGPT), however, we enter hard, intentional bullshit territory.
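
To make the word-prediction framing concrete, here is a minimal sketch in plain Python of the loop at the heart of autoregressive text generation. The toy probability table is obviously not how a real model stores anything (that lives in billions of learned weights), but the loop has the same shape: predict a distribution over next tokens, sample one, append it, repeat. Nothing in it ever checks the output against the world, only against what plausibly comes next.

import random

# Toy "model": for each context word, a distribution over possible next words.
# A real LLM computes these probabilities from billions of learned weights,
# but the generation loop below is essentially the same.
NEXT_WORD_PROBS = {
    "the":  {"moon": 0.4, "cat": 0.4, "facts": 0.2},
    "moon": {"is": 0.9, "landing": 0.1},
    "cat":  {"is": 1.0},
    "is":   {"made": 0.5, "flat": 0.3, "real": 0.2},
    "made": {"of": 1.0},
    "of":   {"cheese": 0.6, "rock": 0.4},
}

def generate(prompt_word: str, max_tokens: int = 6) -> str:
    words = [prompt_word]
    for _ in range(max_tokens):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if not dist:
            break  # no continuation in the toy table
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the moon is made of cheese": fluent, confident, unverified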

It is incidentally this same bullshitting that has led to LLMs being partially phased out already, with Retrieval Augmented Generation (RAG) turning a word prediction algorithm into more of a fancy search machine. Even venture capitalists can only take so much bullshit, after all.

99 thoughts on “ChatGPT And Other LLMs Produce Bull Excrement, Not Hallucinations”

    1. Depending on the man, you might encounter some logical thinking. Rare, but possible. Are people still talking about that by the way? I thought that was like ten years ago

  1. Worth pointing out that a RAG still uses an LLM. The only difference is the RAG uses a vector database to pull relevant documents and feed them into the LLM alongside your prompt as additional context.
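
    For readers who haven't seen that flow spelled out, here is a minimal sketch of the retrieve-then-generate pipeline described above. The embed() and llm() functions are hypothetical placeholders for whatever embedding model and LLM endpoint you would actually call, and a tiny in-memory list plus cosine similarity stands in for the vector database.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder for a real embedding model (e.g. a sentence transformer)."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))  # deterministic fake vector
        return rng.standard_normal(384)

    def llm(prompt: str) -> str:
        """Placeholder for a call to whatever LLM is being augmented."""
        return f"<answer generated from a {len(prompt)}-character prompt>"

    documents = [
        "The 555 timer can run as an astable multivibrator.",
        "RAG feeds retrieved documents to the model as extra context.",
        "Frankfurt distinguishes lying from bullshitting.",
    ]
    doc_vectors = [embed(d) for d in documents]  # the "vector database"

    def answer(question: str, k: int = 2) -> str:
        q = embed(question)
        # Rank documents by cosine similarity to the question embedding.
        scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
        top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)[:k]
        context = "\n".join(documents[i] for i in top)
        # The LLM still does the generating; retrieval only changes what it sees.
        return llm(f"Use only this context:\n{context}\n\nQuestion: {question}")

    print(answer("What does RAG actually change?"))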

  2. “The LLM is treated like a young child or a person suffering from Alzheimer’s”

    That’s an apt comparison in more ways than one. We constantly excuse toddlers’ nonsense because we value them without question, and we’re never going to take them back to the store for a refund.

    But with LLMs, it’s still an open question whether they’re any use in the first place, and pompous chat about “hallucinations” aims to skip right over that discussion. It’s what hucksters call “talking past the sale”: you pretend the mark already agreed to buy the car, and get them talking about what they’ll do once they own it, until they forget they never actually said yes.

    It’s the exact same reason AI grifters love to make noise about The Danger of When AI Rules the World. Because that “When” avoids the question of “if”.

    1. That’s indeed a very good point. All this talk about how ChatGPT may or may not go Skynet on us tomorrow and AIs will soon crush us underneath their metal exoskeletons helps to drown out the absolute stream of flying excrement that is the actual use of LLMs and diffusion models in reality.

      We’ll probably have real artificial intelligence some day, but today we just have a lot of natural & artificial idiocy.

      1. My private little conspiracy theory is that AI risk doomers are actually creating negative hype (any publicity is good publicity) to cover up another looming AI winter. Insiders have realized they’ve squeezed most of the juice out of this particular technique and are looking for an off-ramp that isn’t the usual disillusionment cycle that follows every tech bubble. “We slowed down to save the world! Not because we hit diminishing returns..”

        Does it have real uses? Yes, even though a lot of them are gimmicky or fraud-y, LLMs can do a lot of cool stuff. Will it bridge to a reliable artificial mind? Probably not. We’ve made super auto-complete which authors infinite reddit posts, and also a machine that makes infinite square-aspect-ratio images of pixiv anime girls. Both are starting to get boring. They probably won’t ever seize power through a dramatic cyber coup d’etat, but they will dilute media and academia with massive amounts of fluff output. Now what?

        1. I tried to get an LLM to reply to your comment with references to wider conspiracy theories such as climate change denial, QAnon, chemtrails, etc., but it refused and came up with this sarcastic comment of its own: ‘Maybe we can focus on fostering a more constructive conversation about AI development, acknowledging both its potential and the need for responsible practices.’

          1. And therein lies another problem with today’s “AI” tools. If everything they are allowed to output is supposed to be PG rated and brand safe, well that really limits their uses. I get how and why OpenAI and Friends don’t want bad actors using their tools to make their bad actions too easy, but I’ve had plenty of innocent and completely reasonable requests get the “I’m sorry Dave, I can’t let you do that” treatment that it turns me off using them for any “creative” purpose.

        2. It is much simpler than that: the AGI hoax, AI doom, etc. is about inflating the semiconductor bubble and triggering regulatory capture. The profiteers are the usual suspects: big corporations who want to control the narrative, the business and the technology. The rest is just noise.

    2. > But with LLMs, it’s still an open question whether they’re any use in the first place

      We’ve literally discovered new medicines with LLMs churning through possibilities faster than human researchers ever could alone.

      LLMs are also remarkably good at improving code quality, rewriting code to be more readable and more easily optimized by compilers, as well as generating useful documentation to assist junior devs entering a mature codebase. I’ve been using LLMs to clean up old, undocumented code, giving it meaningful variable names, clearer structure, and comments that make it easier to see how each function fits into the larger codebase.

      LLMs are also useful for processing data and finding connections. I’ve been working on a personal LLM RAG that uses my personal knowledgebase as the primary source of truth: years’ worth of professional, hobby, and personal notes, articles, and links to other sources. I’m able to ask the LLM things and it pulls up notes I forgot I wrote, and it does a good job of pointing out redundancy and gaps in my knowledgebase.

      The people who are questioning the value of LLMs lack the skills to actually make use of them and think chat bots are all LLMs are good for. It’s like looking at RC planes and wondering if planes are any use. Or looking at someone typing 80085 on a calculator and wondering what computers are good for. Ignorance of how technology works doesn’t mean the technology lacks value. It means you’ve fallen behind human advancement.

      1. You’re assuming that true intelligence is based on the information YOU have acquired within the confines of YOUR world. There are many many people on this planet who are much more intelligent than you, though they know nothing of coding.

      2. Correct. On the one hand, AGI is just a hoax, but LLMs have value for certain purposes. 30 years ago, the Internet was just a useless piece of technology for people who didn’t have a clue about what was going on.

      3. People write this kind of stuff, but there’s never a link to a full description or a place to see the results or to download the resulting code and compare it to the original form.

      4. It is not at all like your examples.

        It is like being given a box labeled “New book generator” which contains 100 books, a pair of scissors, and a glue stick.

        Your own project shows that you already know the real “use” for this tech.
        It is a fancy search engine that ALWAYS returns other people’s work, whether or not permission to use that work was given.

    3. Yeah, I was taken aback seeing the very human word ‘hallucination’ being used rather than the actual word: ‘error’. It’s not hallucinating, it’s not working. It’s broken. You need to put in a ticket.

    4. Abortion should be legal to the 87th trimester or 4 trimesters after they last ask their parents for money, whichever is later.

      They are just helpless, useless clusters of cells at that age.

  3. The more appropriate term would be ‘confabulation’. While I lack any expertise in this area, here’s my understanding of what we seem to be discovering:

    LLMs excel in creative writing but can mislead in technical tasks. Their extensive vocabulary and linguistic abilities are valuable for rewriting provided texts, but when tasked with coding, they often make significant errors.

    With LLMs, the key lies in the specificity of the question. By presenting a text or code to evaluate, you’re posing a clearer query. Vague, general questions that leave everything to the model can lead it astray, resulting in expected failures.

    For those struggling, exploring tutorials on ‘Prompt Engineering’ can be helpful. The one available at W3 Schools serves as a solid starting point: https://www.w3schools.com/gen_ai/chatgpt-3-5/index.php

    It’s important not to hastily underestimate such potent technology if it initially falls short of expectations. While far from omnipotent, it does have specific applications and usefulness.

    1. Surely, this cannot be what qualifies as “prompt engineering”, right? I know it’s supposed to be a starting point, but it’s just instructions in English?

      I thought it was more about optimising the available token space to pack in as much context as possible, even if that means just keywords separated by commas. I also vaguely remember that in stable-diffusion there were optimizations where you could increase the weight given to a particular token so the generated image was influenced more by that keyword (roughly as in the sketch below).

      I may be mistaken, I have no idea about prompt engineering. It’s new to me.
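
      For what it’s worth, here is a rough sketch of how that token weighting tends to be implemented in Stable Diffusion front ends: the token’s embedding is scaled by the requested weight before it is used to condition the image model. The (keyword:1.4) syntax and the scale-the-embedding approach shown here are assumptions modeled on popular web UIs, not something defined by the base model itself.

      import re
      import numpy as np

      def parse_weighted_prompt(prompt: str):
          """Split 'a cat, (sunset:1.4), watercolor' into (token, weight) pairs."""
          pairs = []
          for chunk in prompt.split(","):
              chunk = chunk.strip()
              m = re.fullmatch(r"\((.+):([\d.]+)\)", chunk)
              pairs.append((m.group(1), float(m.group(2))) if m else (chunk, 1.0))
          return pairs

      def fake_embed(token: str) -> np.ndarray:
          """Tiny stand-in for the real text encoder's embedding of one token."""
          rng = np.random.default_rng(abs(hash(token)) % (2**32))
          return rng.standard_normal(8)

      prompt = "a cat, (sunset:1.4), watercolor"
      pairs = parse_weighted_prompt(prompt)
      # Weighted conditioning vectors handed to the image model instead of the plain embeddings.
      conditioning = [fake_embed(token) * weight for token, weight in pairs]
      print(pairs)  # [('a cat', 1.0), ('sunset', 1.4), ('watercolor', 1.0)]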

      1. MBAs.
        Liberal arts is not the only part of the university taking money for babysitting…
        Education has been notorious for giving an average 4.0 GPA to the worst students on campus for more than half a century (USA).

        Sadly, many schools will give BAs in the sciences while removing any math or science from the program, making them almost polysci/communications easy. Don’t let them fool you. ‘What’s the first derivative of X^2?’ Even the ‘calc for business majors’ types should be able to answer that. But a BA in physics? No chance.
        This is deliberate. The BA parts of the school recognize that they have devalued that degree. They now want some credibility from the BS parts. So they can continue to sell ‘blackout drunk’ (profs) for $50k/year.

    2. “LLMs excel in creative writing” — do they now? Ask any LLM to write a short story, and it’ll be boring and/or derivative.

      They are good at non-creative writing, although even there they often don’t quite get the style right. I suspect that’s going to be easier to fix than the lack of intelligence and creativity (and math, logic, joined-up thinking…).

      1. Good meaning it can pass as a middling hack human writer. Good enough to simulate nearly all the dreck you probably read throughout an average day (like what I am typing right now), but it definitely won’t ever write something actually worth reading. It will be a great tool for flooding the world with an even greater proportion of mediocrity.

          1. I realise my previous explanation may not have sufficiently emphasized the critical need for precise instructions when generating prompts for AI, and for refining those as necessary.

            Many still perceive AI as being merely derivative. It can only execute commands as effectively as the clarity of its instructions allows. It is a tool that functions akin to a calculator: simply instructing it to ‘do math’ yields little without knowing the correct sequence of inputs. Mastery of these inputs can lead to astonishing results.

            To quote Ray Kurzweil, a pioneering figure in AI: Given its current achievements and inevitable future trajectory, ‘do you really want to miss out?’ Will you choose to remain in the dismissive, ‘luddite’ category, and be overtaken by those that did no more than READ THE MANUAL?

          2. That’s a tell.

            ‘Do you really want to miss out?’ is _grifter_ speak.

            When you hear it, put your hand on your wallet and back away. Keep eyes on the MFer.
            Same as ‘We’re all in this together’. Just back away slowly. Treat like brown bear.

  4. I’ve been thinking about LLMs and uses for them for a year and a half or so now, like everyone else around here probably. The best I’ve come up with is dialogue generation for procedural games. Basically, routing everything that Dwarf Fortress knows about a character (their feelings, beliefs, relationships, health, surroundings, everything) through an LLM, along with the template-based text it already has. Make the LLM rephrase the monotonous dialogue that the game generates in the character’s own voice, remembering everything they’ve said in the (recent) past too; a rough sketch of the idea follows below.

    That’s about the best I’ve come up with: a pure-fiction application, not trying to pretend they know things like all the grifters are doing.
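
    A minimal sketch of that dialogue idea, with a hypothetical llm() call standing in for whatever local or hosted model you would actually wire up, and a made-up character record standing in for what the game engine already tracks:

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to a local or hosted language model."""
        return "<rephrased dialogue in the character's voice>"

    character = {
        "name": "Urist",
        "mood": "grief-stricken",
        "trait": "terse",
        "recent_events": ["lost a pet war dog", "finished a masterwork door"],
    }
    template_line = "I have been unhappy lately. I lost a pet recently."

    prompt = (
        f"Rewrite this game dialogue in the voice of {character['name']}, "
        f"who is {character['mood']} and {character['trait']}. "
        f"Recent events: {', '.join(character['recent_events'])}. "
        f"Keep the meaning; change only the phrasing.\n"
        f"Line: {template_line}"
    )
    print(llm(prompt))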

      1. LLMs are capable of simple logic, but they’re spotty at it because stochastic gradient descent is inefficient at developing cognitive structures. It is definitely capable of it, though. LLMs can generalize from the training data and have exhibited wide-ranging emergent capabilities.

        Using LLM output for training LLMs, a form of synthetic data, is a real thing already. Merits are debatable, but it has some uses (especially large model generating fine-tuning data for a small, efficient one.)

    1. That’s about the only white-hat use I can think up as well. There are black-hat uses: they might not display reliable logic or originality, but they are definitely good enough to simulate an arbitrarily large number of humans posting online, so they can be used to create prompted and directed social pressure. Almost certainly they are already being used this way.

      Of course this isn’t new. You could just hire a big team of people to do this before.

  5. It seems that recursive word predictors can easily be Turing-complete (with the usual limited vs. infinite memory consideration). Therefore it is not that straightforward to claim that the architecture is fundamentally incapable of intelligence.

    And in a sense, the nonsense output does have a relation to truth within the context of the conversation. It does not have that much relation to the truth as expressed by training data, let alone some unbiased truth.

    But I agree that LLMs are unlikely to be an effective architecture for a complete general AI. Instead they’ll just be the natural language interface to other computational functions. This is already visible in the development. In addition to RAG, there are web searches, compilers and domain-specific AI models being integrated. The most difficult part will be intelligent coordination of these different functions.

      1. That’s going to depend entirely on the definition of ‘intelligence’ used.

        Answers will range from ‘Done’ to (That’s just navel gazing middle school philosophy BS/not answerable/42/turtles all the way down/prove you are).

  6. More ignorant anti-AI propaganda. Who at Hackaday is pushing this agenda? RAG uses LLMs. LLMs solved natural language processing and are a major technological breakthrough. Of course, it’s being hyped to a ridiculous degree and there’s nothing wrong with shooting down inappropriate applications.

      1. The AI-related articles seem to me to have a pretty condemnatory and hostile tone. Of course, it’s good to discuss the limitations of a technology, and LLMs have many. But this article is just trying to suggest that the established terminology for one of those limitations isn’t nasty enough. What hallucinations are called is irrelevant; this is clearly editorializing against AI, if not propagandizing against it.

        1. No, it is not irrelevant what “hallucinations” are called. Calling them hallucinations is an agenda to try to make LLMs seem much more than what they are, to fool buyers into thinking they are actually an AI, to humanize LLMs. That’s the propaganda here.

          LLMs are nothing but statistics with a little bit of randomness thrown in. LLMs can’t know the facts, and relying on them for facts is dumb.

    1. Your premise that HAD readers are ignorant is hard to believe.

      Saying that the position is propaganda, and then deducing that there’s an agenda at Hackaday is also hard to believe.

      Additionally, as mentioned above, everyone here has been using the LLMs so saying that LLMs have solved natural language processing is a bit of a stretch. We use the LLMs daily and see exactly how useful/useless they are.

      1. It’s pretty clear where the HaD bias is, and it’s boring.

        I’ve successfully used LLMs to get work done in a fraction of the time, including verification of their work. The amplification of their poor performance characteristics is just that, an amplification.

        We’ve spent the past decade dealing with a news media that chooses to either present falsehoods or pretend they serve unbiased news while turning down the volume on certain things and cranking the volume on others. Let me know what you think of mainstream media using these techniques.

        Right now I’d be happier if hack a day never covered LLM or AI image generators ever again.

        1. Assuming there’s some sort of “HaD bias” is one way to avoid engaging with a topic, especially given it’s false. We’re not a hivemind, we’re standalone writers, and each of us writes up what they consider noteworthy – which differs massively between writers. As far as conspiracies go, this is a wacky one to take as true.

          Remember, every little thing like this that you haphazardly assume to be true, only serves to poison your mind and distract it from the complexity of what’s actually going on.

      2. It is not my premise that HAD readers (or writers, for that matter) are ignorant, and nothing in my comment should be read that way.

        If you think LLMs aren’t useful, I wonder what you’re using them for? Factual information retrieval is not what they should be used for, but many people seem to be treating them like magic google and having a bad time when the LLM confidently makes stuff up.

    2. +1 ….. it’s so easy to fall into the trap of pooh-poohing new, incredible technology. I myself call it ‘CrapGPT’ just to remind myself of how we tend to go negative on stuff that threatens many of our deeply held beliefs on a subconscious level. In reality, people write this sort of article when their own belief in the superiority of human intelligence is threatened but not understood, and not just on a day-to-day level, but buried very deep in the subconscious. It’s a bit like when people say they’re not racist/biased, but deep down there’s a part of them that is unexplored and falls short of expected standards when tested. I’m not saying the author is racist BTW, just biased in a way that is common to most of us – even HAD readers like myself.

    3. “LLMs solved natural language processing”

      I’m not sure what you even mean by that? That a stochastic parrot can somehow determine meaning, intent, the use of sarcasm, metaphor, simile, unique cultural references?

      That’s just as silly as assuming there’s an “anti-AI agenda”.

      1. I used to think the metaphor of a “stochastic parrot” might be useful, but then I read the paper and realized the author had definitely never owned a parrot.

        If you want to pick a specific example, I’d say intent analysis was pretty well solved by LLMs.
        I do think the future is ~1-10B parameter models fine-tuned for specific applications and not so much computationally-inefficient massive models, but the former are still considered LLMs.

  7. Good comments. I would add that although the concerns are legitimate, there are steps you can take to limit the inaccuracies. I have developed several useful apps by forcing the LLM to use only my knowledge files (clean txt preferred), using effective prompts, and carefully designing my custom GPT instructions to get the results I want.
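
    As a minimal sketch of that “use only my knowledge files” approach, assuming the OpenAI Python client (v1 style; the model name, file name, and question here are just placeholders) and a single clean .txt knowledge file small enough to fit in the context window:

    from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

    client = OpenAI()
    knowledge = open("my_notes.txt", encoding="utf-8").read()  # the clean txt knowledge file

    instructions = (
        "Answer using ONLY the knowledge text provided below. "
        "If the answer is not in it, say you do not know. Do not guess.\n\n"
        f"KNOWLEDGE:\n{knowledge}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute whichever one you use
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": "What did I conclude about the sensor drift issue?"},
        ],
    )
    print(response.choices[0].message.content)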

    1. This is exactly ‘the’ problem with LLMs (in general), even if you didn’t state it as such :). Because you can force the LLM to use only ‘a set of data’, i.e. carefully selected, you get the results ‘you’ want. Great! …for you! But now apply that ‘globally’. A government, a company, a group of individuals, can force big LLMs to use only the data they want to train them with… and then present the output as gospel to the masses… See maybe a problem here in general? Not saying LLMs can’t be used, but there is a HUGE huge potential for abuse.

    2. That’s the good part of all neural networks.
      You train them until you’re happy with the results.

      Leaves behind no embarrassing:
      ‘if (account.balance < Poor) Account.CheckholdDays += 7;'
      BofA hated having that found.
      Now they just 'adopt AI' and let it maximize fees.

  8. ChatGPT is bullshit?

    Well, the fact that in certain contexts it turns out to be useful means the world as it is needed a “bullshit as a service”.

    Maybe instead of automating bullshit we should cut useless entropy? If the document you’re writing can be made by copy-pasting from ChatGPT, perhaps you shouldn’t be writing it in the first place?

      1. No one is trying to ban hammers and screwdrivers.

        If there were morons going around saying that screwdrivers are too dangerous for individuals to own, and that we needed to transition to a model where I would hire a specially licensed screwdriver operator from Microsoft (with the tool chained to their wrist) and pay by the turn… I would definitely be interested in countering that bogus narrative before someone convinces a congresscritter that we need to regulate “screwdriver precursors” like metal rods and have tight oversight of grinders and heat-treating ovens that could be used to make screwdrivers.

        1. What about the UK? From what I’ve heard, a number of screwdrivers have been stolen by the police and bragged about as “seized weapons” because their owners couldn’t immediately prove and convince the officer that they had a “reason” to possess them at the time. Is that applicable to the AI thing? Maybe not. But it’s almost never true that no one anywhere would like to ban something.

        1. Japanese Industrial Standard (JIS) for the win!

          Throw away all your non JIS Phillips screwdrivers!
          They’re yesterday’s news.
          Do you really want to miss out?

          Don’t believe the anti-JIS propaganda!

          Just because AI gets massively hyped every 20 years or so by smarmy sales weasels.
          Autonomous driving has been at 90% done (and holding) for a decade. If that doesn’t mean it will be done ‘real soon now’ I don’t know what does.

  9. ChatGPT is incapable of basic logic and basic arithmetic. It even fails to do basic inversion of conditions, and it cannot count things. It just strings together millions of answers to common questions into a grammatically correct structure that happens to be factually correct quite often, and even when it is not accurate it can still be useful.
    LLMs will lead to increased productivity in some areas, but also lower quality. Once LLMs become more mainstream, people will start to write more like AI, and AI will be trained on things generated by LLMs and by people who write like LLMs. This will create an undesired regurgitation feedback loop. We need more creative people, and LLMs will never replace them.

  10. Fuzzy logic is the name of the machine learning game, but also its limitation. Fuzzy logic is great for noisy data, which makes it excel at dealing with vision, audio, and lots of unsorted data, but it sucks at doing binary logic and cannot give you an absolute answer.

    It is the old comparison that humans suck at computer tasks, such as computing with absolute accuracy, while computers suck at human tasks, such as recognizing a cat. Now we have created AI which can do human tasks but, counterintuitively, sucks at computer tasks.

    Which is where the “hype” derails for me. A lot of companies are trying to put these AIs to use for computer tasks, selling the idea that the AI will supercharge them, but often it just makes them unreliable, and that goes against why we use these machines in the first place. I don’t want an LLM to guess the answer to the equation, or to guess what the answer is to a question. I want cold hard computation telling me the answer or pointing me to a source that can tell me.

  11. I think LLMs are a victim of their own success (and ultimately of humans’ own cognitive vulnerability).

    Well it speaks so well, so it must be right.

    But it’s a language model, not a truth model. It generates text which could follow a prompt.

    I’m sure I’m being too simplistic or entirely naïve.

    But why do we expect more than we put in?

    1. > Why do we expect more than we put in?

      Because AI grifters have primed the general public to have such expectations.

      If they didn’t, the venture capital would dry up and they’d all have to get real jobs.

      Just look at the teary-eyed hand-wringing from some of those same grifters further up in the comments. A year ago they were going around Twitter shilling NFTs and cryptocurrency. When the bottom fell out, of course they’d move on to the next grift.

  12. Another one failing to understand that hallucinations are the best and the most important feature of LLMs. You people are so hilarious when your coping spills out.

    Look. LLMs are a perfect tool for solving engineering problems. Any engineering problem is a constrained optimisation in a morphological box, and the only missing bit was a source of somewhat-reasonable chaos to drive this optimisation. LLMs filled the gap. Human engineers are on the way out.

    I already have LLMs designing mechanical parts with vaguely specified requirements better than any human.

  13. Outputs cannot exceed inputs. AI could be useful where exactness or correctness is of negligible importance. For assisting people with disabilities, AI could have many useful applications. But applying complex knowledge to queries requires you to have parity of understanding in order to prompt correctly and receive data, which needs multiple passes of corrections if it’s usable at all. Since crypto and the cloud deflated, AI companies have found a means to continue to monetize their data centers through LLM subscriptions.

    To be cynical on AI is prudent.

  14. I’m looking forward to the day when people stop being overly hostile to this technology and see it for what it is: a tool. The same way a calculator is. The same way a computer is. Washing Machine. Garage Door. Etc. Any given model is not perfect of course. You have to be smart about how you use it. But I, for one, have gotten a lot more done on certain projects than I would’ve without one.

    I’d love to hear from historians about how backlashes to other new technologies compare to this one. I know in the past I’ve read some things about the backlash to cars, and the crazy rules created because of it, like someone having to walk in front of a car carrying a flag somewhere. I know there was a big to-do about radio being treated as piracy when people started transmitting music over it. And VCRs got compared to the Boston Strangler by Jack Valenti. All of it was nonsense then, as it is now.

    1. From my own digging into the history of technology, there isn’t any apparent certainty about the outcome of anything. It is good old survivorship bias that makes it tempting to assume a norm when there isn’t one. Technology has a long history of both supposed fads becoming entrenched in our daily lives (e.g. computing itself) AND supposed revolutions that ended up falling apart (e.g. blockchain technology).

      In general, machine learning is a particularly complex beast, one I would liken more to the invention of the triode and the transistor. It is itself a massive milestone in technological approach, but its true impact is determined by what you make with it.

      In regard to generative AI in particular, it seems to be presently stuck in the same limbo between success and failure as blockchain was. It is a fascinating solution that can be interesting for some individuals, but you do have to find the problems it is the perfect solution for, at great scale, to go from something with potential to something truly in common use. That is the present struggle, as everyone tries to come up with great applications to justify the resources used, with mixed results, while a bunch of so-called “tech bros” treat it as another speculative market to get rich off of.

  15. …”This precludes any possibility of intelligence and thus cognitive awareness of ‘truth’.”…
    Hmm, I suspect that a sophisticated prediction model is exactly what animal (and human) cognition is.

  16. You might be interested in this. We argue it is not bullshit but “botshit”.

    As per Table 2 in the paper, and slides 12 and 13 in the deck (links below), botshit is generated when users uncritically use chatbot content contaminated with hallucinations.

    Here is the publisher’s version of the article that coined the term ‘botshit’:
    https://doi.org/10.1016/j.bushor.2024.03.001

    Here is a pre-print free version: https://dx.doi.org/10.2139/ssrn.4678265

    And here is an accompanying slide deck: https://tinyurl.com/4jf7jcm6

  17. From: bpayne37
    Sent: Sunday, June 30, 2024 7:18 PM
    To: Ted Geoca
    Subject: s&p 500?

    Hello Ted,

    How is the s&p 500 computed?

    Reason is that high reliance on high-tech companies, we read.

    We are following, with field trips, server hardware/software technologies and data center power consumption.

    best,
    bill

    Hi Bill,

    I hope things are going well for you. The S&P 500 is an index of 500 stocks that is market-cap weighted: the bigger a company’s market capitalization, the larger its weighting in the index. When the public starts buying up these tech stocks, they rise in value, forcing institutional investors to increase their weightings of the stocks to match the S&P 500. Because the S&P 500 is the largest stock index and is the benchmark most investors use, this becomes a self-fulfilling move pushing these mega-caps ever higher. Until it all falls apart.

    Regards,
    Ted

    Falls apart?
    1 Nuclear-powered data centers?
    2 Hackers?
    3 Buggy, malware-vulnerable, ~unmaintainable software requiring expensive endless updates/upgrades?

    Google News sent today:

    “Laid-off tech workers advised to sell plasma, personal belongings to survive.”
