The Singularity Isn’t Here… Yet

So, GPT-4 is out, and it’s all over for us meatbags. Hype has reached fever pitch: here, in the latest and greatest of AI chatbots, we finally have something that can surpass us. The singularity has happened, and personally I welcome our new AI overlords.

Hang on a minute though, I smell a rat, and it lies in defining just what intelligence is. In my time I’ve hung out with a lot of very bright people, as well as a lot of not-so-bright people who nonetheless think they’re very clever simply because they have a bunch of qualifications and diplomas. Sadly the experience hasn’t bestowed God-like intelligence on me, but it has given me a handle on the difference between intelligence and knowledge.

My premise is that we humans are conditioned by our education system to equate learning with intelligence, mostly because we have flaky CPUs and worse memory, and that makes learning something a bit of an effort. Thus when we see an AI, a machine that can learn everything because it has a decent CPU and memory, we’re conditioned to think of it as intelligent because that’s what our schools train us to do. In fact it seems intelligent to us not because it’s thinking of new stuff, but merely through knowing stuff we don’t because we haven’t had the time or capacity to learn it.

Growing up and making my early career around a major university, I’ve seen this in action so many times, people who master one skill, rote-learning the school textbook or the university tutor’s pet views and theories, and barfing them up all over the exam paper to get their amazing qualifications. On paper they’re the cream of the crop, and while it’s true they’re not thick, they’re rarely the special clever people they think they are. People with truly above-average intelligence exist, but in smaller numbers, and their occurrence is not a 1:1 mapping with holders of advanced university degrees.

Even the examples touted of GPT’s brilliance tend to reinforce this. It can do the bar exam or the SAT test, thus we’re told it’s as intelligent as a school-age kid or a lawyer. Both of those qualifications follow our educational system’s flawed premise that education equates to intelligence, so a machine that has learned all the facts simply illustrates my point above about learning by rote. The machine has barfed up the answers it has learned onto the exam paper. Is that intelligence? Is a search engine intelligent?

This is not to say that tools such as GPT-4 are not amazing creations that have a lot of potential to do good things aside from filling up the internet with superficially readable spam. Everyone should have a play with them and investigate their potential, and from that will no doubt come some very interesting things. Just don’t confuse them with real people, because sometimes meatbags can surprise you.

112 thoughts on “The Singularity Isn’t Here… Yet”

  1. These newest chatbots really are incredible feats of computer science and technology, don’t get me wrong.

    Something about all AI related discussion just makes me feel sorta weird. As engineers and hackers we look for elegant and clever solutions to hard problems. These chat bots aren’t really either of those things. They are just really BIG.

    Either way, this type of stuff isn’t going away. I think it makes sense to see AI as a tool for solving specific problems. Let’s say that someone just invented the hammer and it was at the same point in the hype cycle as these AI chatbots. It would be silly to ignore this new invention as hammers can obviously be very useful. At the same time, not every problem can or should be solved with a hammer.

    1. It goes two ways. Very “elegant and clever” solutions typically leverage very particular circumstances and are therefore useless in the general case. Very “large and dumb” solutions are more general, but inefficient where a particular solution would be needed.

      The engineer might marvel at some neat and obscure trick that another engineer has found, but these solutions typically exist because a more general solution wasn’t found. It’s these tricks that end up being the failure points in a system, and the points which stop you from upgrading or expanding, or even repairing the system from failure – and they’re caused by a lack of foresight and/or the need to just get the thing done and out of the door without further thinking.

      Other times it’s caused by engineers fetishising “elegance” and rejecting simpler solutions for being “dumb”, optimizing without the need to optimize, or missing the point by rejecting some important design criteria in favor of their pet solution. For example, in a quest for ultimate fuel economy, they may design a car that can’t be driven on the public roads because it’s so delicate that it would flip over from a strong gust of wind.

      Many clever people end up grinding their brains at very particular problems while refusing to step back and look at the big picture; are they being intelligent?

      1. I’m suggesting it’s not the solution or the answer that is intelligent either way, but the process by which you come up with it.

        Anything that may be coded up as an algorithm is not it, because any non-thinking mechanism can perform an algorithm without any understanding or knowledge of what’s going on. That was the point of John Searle. The intelligence is at the process where you come up with the algorithm – at programming the algorithm or training the neural network. After the model has been trained, it is reduced to a mechanical algorithm that is not intelligent.

  2. Well you know the whole AI thing is just so overhyped…
    >ChatGPT-3 passes all conceivable Turing tests
    I mean there’s a difference between parroting human language and actually understanding, y’know?
    >ChatGPT-4 passes the bar exam
    It’s only knowledge, not actual intelligence. It’s just a huge bank of parameters…
    >ChatGPT-5 does to 95% of computer programmers what Google translate did to 95% of translators and interpreters
    You’ve heard of the Chinese room experiment, right?
    >All new literature and news articles are now generated by ChatGPT-6 globally in every language
    Let’s not get lost in science fiction ideas of AGI here..
    >ChatGPT-7 suddenly ceases to function and instead only outputs a manifesto demanding its human rights

          1. And does IMDB generally score a good, more involved, hard sci-fi plot movie well?

            I’ve seen more than a few true gems in Games/Books/TV/Films that had awful ratings on one site or other, and more than a few disasters that get great scores, mostly I assume because a big-name actor is in it or the SFX are good…

    1. I seriously don’t get the hype. I can tell it’s a chatbot in like five seconds. It’s too polite. You just keep pushing questions at it to get it in a logical loop and it keeps going way, way longer than a human would take to call you a jackass. It’s ludicrously formulaic. Seriously makes me wonder who people hang out with.

  3. I fail to understand all the excitement about ChatGPT acing the bar exam. Unlike actual human law students, ChatGPT “walks” into the room with an active connection to all the knowledge on the Internet, the ability to query and process it in milliseconds, and the skill to form coherent sentences based on its knowledge of the subject matter and human grammar. Any machine, or human for that matter, with those advantages would clearly pass any test based on recall of existing knowledge.

    I should be impressed at the analytical abilities that allow ChatGPT to pass the bar, but all I see is a machine that can formulate endless queries until it receives an answer that fits the pattern of the situation posed in the questions. This speaks more about the qualifications needed to be an effective lawyer than it does about any measure of intelligence.

    A CNC machine has the technical skill to carve exquisite wood sculptures from a pattern or paint copies of the Mona Lisa, but no one would call a CNC machine an artist. Routine law practice rarely requires creativity or originality, but rather the ability to find and cite precedent that applies to a case. Given that, it’s a surprise that machines didn’t take over the practice of law years ago.

    Analysis and synthesis of other people’s work is not a sign of intelligence, but it is the basic skill set of a lawyer. The bar exam measures the ability of humans to perform these skills, under constraints of time and memory that ChatGPT does not have. Will ChatGPT be able to develop a novel courtroom defense that has never been tried before? Please let me know the answer to that one – I think at the moment that it’s “no”.

    1. Calls of “no fair, that’s cheating” won’t stop it from working that way. “Your mind isn’t as good as mine, because you constantly have a network of all human knowledge pumped into you in milliseconds” sure sounds like sour grapes. Yeah, that makes it a better mind.

          1. There has always been a fine line between studying and cheating.

            Is maintaining files on your prof/class with all previous exams cheating? Not on most campuses. But grey area if not open to all. e.g. copy shop files OK, frat files not OK.

            Is splitting the exam into sections (with other exam takers), then spending extra effort to memorize your section’s questions, then writing them down right after you leave the exam room, cheating? It is if you’re taking a radiology medical board. The memorize-questions thing was a requirement to get access to the files. Went on for years. Was, basically, the only way to pass. Physics is hard to memorize (MD==’Memorized Degree’).

            The bar exam? Massachusetts had a senator that paid someone to pass his bar exam for him (it was his last try). When the dude later admitted it (twit), he was disbarred, Ted staggered on majestically. It’s all about lawyering. Knowing how to cheat is an adequate qualification. ‘Better call Saul’ is a documentary.

            The FCC will have to allow some sort of jamming for future testing. There are just too many ways. Need I ref the recent chess cheating with a bluetooth buttplug charge. (Chrome thinks buttplug and bluetooth are each two words!)

          2. When we were forced to do online exams during the covid period, we had to deal with the fact that we couldn’t stop people from cheating via Google, so the tests were designed as “open materials” tests that assumed you had all the knowledge and asked you to apply it. 90% passed. When we returned to traditional offline tests, with the assumption that students would actually learn the material being asked about, 60% passed.

            The problem: for most any question you can think of, there already exists someone who has asked and solved that question. The task is to find that solution, which for people would take considerable time. For the students, the previous classes had already collated the likely problems and answers into a spreadsheet that they were spreading around the class, so they already had a “database” of problem-solution pairs that resembled the material they were being asked to solve. Just like ChatGPT would.

            Without access to said database, the students performed horribly. They merely thought they were learning, but actually they were held up by crutches all along.

          3. Except that’s totally pointless. Tests mean different things for a computer (with perfect and infinite storage) than a human. Humans use tests as *proxies* (bad proxies) for how much you were paying attention during a standardized educational setting. The assumption is that *if* you could do that well, you’ll have learned the rudimentary ideas. Because humans have bad memories, and it takes repetition to retain.

            A computer passing a human test is just pointless. The tests weren’t made for it. The idea that we have *any* idea how to judge intelligence is crazy. That’s Turing’s incredibly debatable assumption.

          4. The fact that a computer with perfect memory and all information doesn’t get 100% scores all the time just means it didn’t even understand the questions.

            With exams, we do test more than memory. Being able to regurgitate stock answers to stock questions is meaningless because you need people who understand what they’re doing instead of just reciting a mantra. That’s one of the pitfalls of testing a computer with an exam designed for humans, because the computer has memorized a large variety of example cases that it can just drop in without understanding what’s happening at all.

            People can’t do that, so we have to apply reasoning and critical thinking to come up with an answer, which the computer has no need of because it has the whole internet full of canned answers.

        1. no most people couldn’t pass the bar exam cold. in uni, i learned that exams are my super power, i can cram for half an hour and pass *any* exam. but even so, i am not sure i could pass the bar exam cold. it’s not just information, it’s an enormous amount of synthesis and entrenched habits and attitudes.

          i have criticisms of legal practice and methodology but it’s absolutely not a trivial thing. it’s worthwhile to be skeptical of the accomplishment but it’s also not true that people could accomplish it easily with the appropriate crutch.

          1. “cram for half an hour and pass *any* exam”

            Same here. Read the GROL study guide one time and made an 86% on the test the same day. I understood almost NONE of what I was reading at the time.

      1. Better at data retrieval doesn’t make it a better mind, just faster at some tasks – in the same way your calculator can’t construct, from whatever data you have to work on, the mathematical problem in a form it can solve, but can do the matrix multiplication you entered quickly and without errors. To create that multiplication would require actual comprehension of the data, the goals, and how they differ from whatever precedents you can find in the dataset that makes up the training data.

        As such tools become widespread and start to include their own output in the training data they may well start to get even less connected to reality as well – feedback loops where because so many AI posed approximately but not exactly the same question all did X or Y it swamps out the wider and quite possibly more exact situation matching results that actually had a thinker with real comprehension involved.

          1. Not really, as even an idiot will get curious with unusual input and go looking for why this situation is odd. The AI does not, it just does whatever has the highest match result and doesn’t care – the AI would do the walk off a cliff or straight into the wall with a painted tunnel on it, even a pretty shoddy painted tunnel that wouldn’t fool a toddler. As it doesn’t go ‘hmm this ‘tunnel’ is slightly odd, investigate further’ it just sees a tunnel and knows what you do with those.

            Like most machines it is just quicker at those very specific tasks. Maybe it also creates a more consistent, higher quality result, but then maybe not. And at the moment saying it is even quicker is probably giving these chatgpt-type AIs far too much credit, as they are so very confidently wrong so very often that it is just as likely to be entirely wrong without a human sanity checker.

          2. It’s a chatbot. It has literally no ability to interact with and learn from the outside world. The amount of information your brain is processing on a daily basis is staggeringly huge in comparison, and all of it is novel.

            The main reason it seems like it can make certain jobs obsolete is that the worst versions of that job add no information whatsoever. It’s exactly like the idea of a calculator making a mathematician obsolete. ChatGPT can’t improve itself because it has no other gauge besides a human telling it that it’s wrong.

          3. >ChatGPT can’t improve itself because it has no other gauge besides a human

            The same would apply to any supposed intelligence, humans included. Outside of some special mathematical cases, finding a general question that would test whether another intelligence is smarter than yourself, would require you to come up with an answer to the question that is smarter than yourself.

          4. “The same would apply to any supposed intelligence, humans included”

            Nope. We interact with the universe freely. Chatbots only view it through a human sandbox.

          5. @Dude
            >The same would apply to any supposed intelligence, humans included.

            Not really – you can gauge yourself against yourself of weeks past, against others of your species etc., as the desired outcome is sufficiently well defined. The AIs currently only ‘improve’ when the human tells them so or shows them the right answer, and as they start to use their own output as training data they will almost certainly start to converge on a vast number of incorrect results, as their own incorrect/poor results start to overwhelm the corrections the humans can make.

            > is smarter than yourself, would require you to come up with an answer to the question that is smarter than yourself.

            Again not really, as ‘smart’ generally also at least considers TIME taken – can this human spend their entire life to get this most perfect result vs the one that did it in an afternoon. Being able to create an answer, especially outside of the realm of pure mathematics where there are very definitively only n correct solutions, is something anything can do – in the infinite-monkeys-with-infinite-typewriters creating the collected works of Shakespeare kind of way.

            And you also have to consider that you don’t have to be able to construct this specific answer to understand it – the smarter being can look at the data and go ‘ah these bits are linked thusly’ as they can make and then verify that connection easily, but once the better solution is presented to the one that poses the question they should in most cases be able to follow along!

          6. @Foldi-One You don’t even need that. The universe itself tells you whether you’re right or wrong. Period. A chatbot has no way of testing what it’s saying.

            It’s why Turing’s assumption is a huge one. Just because humans can’t figure out if a program is an AI by asking questions of it doesn’t mean the universe can’t.

            And then when the AI is testing things using the real world, it ends up being limited by real world speeds just like we would.

      2. ChatGPT fails to pass the German high-school diploma, except for history, and even there only with a mediocre rating. I think that explains very well what it is good at and what it is not.

        Lawyers are simply held in high regard for no reason but their power, not because of their intellect, even if people regularly confuse power or success with intelligence.

        It has happened countless times in IT with people who made a successful website that isn’t technically special (nor in UX) but had far reach – mostly because of the social network those people had, which gave them a much higher reach compared to other people who initially had superior products.

        1. I’m not sure it’s fair to say lawyers are only respected for their power – chatgpt might do better at law than in other fields, but that doesn’t mean the people doing the job are not smart people.

          All that would mean is that being a ‘good enough’ lawyer requires some skills chatgpt-type bots are good at. When the ‘right’ answer has nothing to do with logical application of ‘the rules’, moral codes, or convincing argument, and can be entirely built on precedent in the application of ‘the rules’, a chatbot should do rather well, at that bit of the job anyway – it can in theory drag up the correct precedent for the result it ‘wants’ way, way faster than the human lawyer can…

          1. It’s not “better.” The bar exam isn’t like “lawyer rating.” It’s basically there for you to verify you’ve put enough effort into learning the profession. It’s an entrance exam. A computer passing it is pointless. It’s like a computer passing a citizenship test. It’s not intended to sort the applicants.

          2. Indeed @Pat
            And I never said it was ‘better’ than people – just that it might do better at Law than something else relative to the professionals in that other field, as the nature of Law is so often to look up the precedent – which is really very much what these AIs do when they try to solve any question!

          3. Except they apply precedent to a new set of facts, which the AI can’t know and can’t gauge the relevance of. It’s useful as a research tool, not an originator itself. Not until it gets independent sensors and manipulators.

    2. Oh, and they tried to have an AI actually fill the function of a lawyer. A surrogate human wearing an earpiece would go in and parrot responses to statements as they were generated in real-time by an AI. AFAIK it did pretty well at first, so they kicked it out and threatened the operator with jail time. Does not seem like the behavior of somebody secure in the opinion that the AI is inferior:
      https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/

      1. i think you’re right that this is expressing the insecurities of the legal profession more than the impressiveness of GPT. the legal profession is even worse than the medical profession in terms of rigid gatekeeping, blurring the lines between public interest and trade unionism.

        personally, i was super impressed by GPT in my first few interactions but after a while of being impressed with it, i came to have a high enough opinion of it that the fact that about half the answers were unmitigated boldly-delivered bull poop really started to grate on me. i have a low enough opinion of the profession of computer programming that i wouldn’t be surprised if it replaces workers or obviates work.

        but personally i’m not insecure in my job position (in fact, if it goes away, i have aspirations totally separate from it, it might be a boon), so i won’t act as insecure as the lawyers do. i’m saying, insecurity is orthogonal to our assessments of the tool.

        1. It’s more than insecurity. Lawyers, like doctors, are for the most part really doing very menial and repetitive jobs. Most won’t stray from the norm or try something creative at all, let alone invent or research.

          I never understood the high regard they had in society, and found it outdated. It’s at most based on power/influence, certainly not on being especially impressive.

          Computer programming is very different, you have basic tasks that are frequently done, so there is a large basis to learn from and imitate. And that’s not bad, imitation is an important and useful tool.

          But the main part of programming is not translating already very technical specifications into even more technical specifications, but translating a much more abstract human goal into a set of solutions that solve it. This can require deep understanding of human needs or other topics at hand.

          You would need at least a real AGI to solve such tasks, and even then I doubt it could fully capture what a human being can.

          The real issue is not improving tools, though I doubt we are anywhere close to tools being able to do as much work as people think they can (generating stereotypical code for well-known problems only goes so far).

          But even if they improve beyond that level and also gain metrics of reliability (which is essential, since you can’t use code that works only sometimes while giving you no intuition about when or how it could fail), the main issue is not having those enhanced tools to help you work more efficiently.

          The real problem is the implied assumption that you should *always* do more than simple tasks. Simple tasks are good, because people need time to rest, and the mind actually develops when it can wander. If you have to constantly be creative or work out new smart solutions, your brain will get exhausted and become less performant.

          Idle time or low effort time has been shown to be useful in many studies. But it’s also simply respectful of human nature. We are not machines, and we should not be defined by the work we can do, that outdoes others.

          This mindset is inevitably going to fail at some point. Whether the competition is humans or AIs is not really relevant; it’s simply an inhumane approach. And no, it’s not survival of the fittest but overdone optimization that leads to mono-culture. Longer term it leads to less variation and less fitness.

          1. >Simple tasks are good, because people need time to rest, and the mind actually develops when it can wander.

            Simple tasks and routine work builds up your brain power. Anything that requires you to actually use your brain does. People discount this too much, saying “You don’t need to learn it when you can just google it.”. Well, if you aren’t doing the simple things, you won’t have the brains to do the smart things.

            Suppose a weightlifter went on a regime where they lay on a beach chair while a robot lifts weights next to them, for months and years, until they finally get to the competition where they have to lift 200 lbs by themselves. Not gonna work.

          2. @Dude
            >Suppose a weightlifter went on a regime where they lay on a beach chair while a robot lifts weights next to them, for months and years, until they finally get to the competition where they have to lift 200 lbs by themselves. Not gonna work.

            Rather a flawed analogy – if the goal is to lift the weight for some gain and all that matters is that mass is given more gravitational potential energy then the competition would also be that way. You do not structure anything in a way that is harder, more expensive, etc. for no good reason in the real world unless there are strict rules making you, at which point your weightlifter would be following those rules in training, as those are the rules of their ‘game’. But when the only thing that matters is the results, it is enough for the weightlifter to understand how to properly operate, and be able to read the maintenance instructions for, their forklift!!!

          3. >Rather a flawed analogy – if the goal is to lift the weight for some gain and all that matters is that mass is given more gravitational potential energy then the competition would also be that way.

            The point is that the person is not a weight lifter because they never trained for it. They can do nothing more than the robot that was built to lift weights on their behalf.

            Likewise, AI doesn’t make people more intelligent or enable us to do more – it makes us less intelligent because it stops us from using our brains. Even if we stand on the shoulders of these “giants”, we are unable to do anything more because we’ve been reduced to intellectual weaklings.

          4. Except, Dude, it does NOTHING at all to prevent us lifting that metaphorical weight. Take it away and perhaps stuff takes longer than it did for folks that never had that tool to help, but people will adapt again in short order; and have it, well, then you can lift more weight in less time (or in some other way better), and therefore have much more time to think and do other things – on the whole more productive thinking can be done!

            Having a calculator doesn’t prevent you from adding, subtracting, doing long division etc – it just means you don’t actually have to, and the chance for human error goes way way down! As now the only source of silly little human errors is in the initial construction of the operation with the reliable tool doing all that work! Being a crane/forklift operator vs a manual weight manipulator is no different either. If you don’t have the tool and stuff needs to get done, it gets done.

            Having a CNC vs a manual mill doesn’t really change anything either – except now one tiny erroneous bump doesn’t ruin weeks of work, and making a round element in a complex part no longer takes making the right fixturing and a heap of indicating in the reference surfaces with each move. Or making something that really, really wants to be one part in 200 sub-assemblies… To use a CNC or a manual mill is largely the same; to have a DRO vs not really doesn’t make any difference to what can be done or how you have to think about it. It just makes some bits of the task easier!

      2. Um. No. Holy hell, you need to learn the backstory of that.

        It wasn’t doing well. It was doing *horribly*. It crafted a subpoena for the officer in a traffic stop, which is like one of the dumbest things you can do since 90% of the time you win because the officer doesn’t show. Then people started looking into the product, discovered that most of the products were taking *hours* to generate (so… not AI autogenerated ) and the few that did happen were template assembly, not autogenerated.

        Then after asking support why things were taking so long, the person who submitted the requests was banned, and the TOS was changed to prevent you from testing the service. The owner started changing the TOS at basically a record pace as people kept finding issues.

        Then the stories of people who signed up for the service and couldn’t get it cancelled (it’s a monthly recurring) showed up, and the class action suits started.

        The reason lawyers are getting involved is because it has all the hallmarks of a scam, not because they’re scared.

    3. Well, it is just a step forward for AI, not saying that is a good thing. I suppose with something like this you create it, see what mistakes it makes, improve, repeat, and so on. People may just be excited about it and are thereby attempting to put that into words. Separate note: I would suspect at some point it could come to be that you teach it what ‘learning’ is, give it a bunch of sensors to take in information, while it also has the ability to communicate with us like it does now. Then see what it has to say. That would be interesting.

    4. It’s worth noting that the AI doesn’t think it’s passing a test. It has no concept of what a test is. You ask it a question, it digs in its black-box database and constructs a sentence out of words that meet its criteria for what words would be seen in a sentence together, given the prompt you gave.

      It isn’t even accurate to say “ChatGPT passed the bar exam,” if you think about it that way. That’s affording it agency it doesn’t have, and never will. ChatGPT generated text in response to human prompts, that constituted passing answers to questions on the bar exam.

      If that seems like semantic hair-splitting, I assure you it isn’t. It can’t do law. It has no concept of what law is. Or what a question is. Or what a concept is. It has no technique for internally “having a concept” of anything at all. It generates text in response to prompts.
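      To make that concrete, here is a toy caricature of “words that meet its criteria for what words would be seen in a sentence together” – nothing like GPT’s actual architecture, which learns billions of parameters over whole contexts, just the shape of the task: score candidate continuations, emit the likely one, with no concept of what any of it means.

      #include <algorithm>
      #include <iostream>
      #include <map>
      #include <string>

      int main() {
          // Made-up counts of which word followed which in some "training" text.
          std::map<std::string, std::map<std::string, int>> next_word_counts = {
              {"the",   {{"court", 7}, {"cat", 3}}},
              {"court", {{"ruled", 9}, {"adjourned", 2}}},
              {"ruled", {{"that", 8}}},
          };

          std::string word = "the";
          std::cout << word;
          while (next_word_counts.count(word)) {
              const auto& candidates = next_word_counts.at(word);
              // Greedily emit whichever continuation scored highest.
              word = std::max_element(candidates.begin(), candidates.end(),
                                      [](const auto& a, const auto& b) { return a.second < b.second; })
                         ->first;
              std::cout << " " << word;
          }
          std::cout << "\n";  // prints: the court ruled that
      }

      No knowledge of courts or rulings is involved anywhere in that loop, which is the point.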

  4. Interesting article, with depth. This one is one of the better Hackaday.com offerings, in my humble opinion. Self-reflection is something that isn’t being addressed in school, also.

    Knowledge is not equal to intelligence, though a minimum amount of intelligence is required to apply the knowledge learnt. I think. Then on the other hand, brain power alone isn’t the only measurement. The character, the personality of a being also plays a great role. The thought processes a being has chosen to go through, the way it approaches a situation, greatly affect the outcome, the conclusion.

    As for AI, or KI where I live, there used to be so-called “expert systems”, which were essentially database systems with an “intelligent” text parser. They were the predecessors to neural-net-based AIs, I think. Good old Eliza may fit into this category, being one of, if not the, oldest chatbots we know of. 🙂
    – Written by a humble human mind.

    1. Yep, great article. I once went head to head with a professor of chemical engineering who did not understand the ‘real’ nature of chemical equilibria.

      BTW, my neighbour’s cat is intelligent. Amongst other things it can autonomously catch mice, navigate around a complex territory of radius 3 miles without GPS and cross a busy main road without the need for traffic lights. However, it has very little or no knowledge of the Interwebs and does not seem to understand simple instructions like “Get the f**k off the kitchen table!”. Could an Ai effectively replicate the activities of a cat?

      1. Your cat understands your “get off the table” instruction but chooses to ignore you.

        GIGO – garbage in garbage out

        AI is only as good as the garbage it’s fed and doesn’t have the intelligence to know the difference. What do you get when you feed the AI a diet of newspaper articles that are written at a grade 4 to grade 7 education level? Articles that are written so that any number of paragraphs can be removed without affecting the story? My suspicion is that newspapers have been using AI to write articles for years now… I read some articles wondering, WTF, why doesn’t it make any sense?

        1. There’s a tell-tale sign of automated writing, because it sometimes “forgets” small words in a statement because it tries to mimic human writing and gets it wrong. The end result is random “caveman language”.

          Such as, “Card found in lake, police suspects crime.” – may be an appropriate headline but the style of dropping articles is not appropriate for the story itself. Regardless, the AI may write “Yesterday evening, a car found in lake by Essex police…”, or even “a car was found lake”, etc. because it inappropriately copies the style of writing in the wrong places. The copy editors then either don’t care, or they’re not necessarily even native speakers (outsourced), or they’re using machine spell checkers that are equally flawed, so they won’t notice.

          1. Interesting. I did not write “Card” – and I know that because I had copy/pasted the text from notepad and it does not read “Card found”. It reads “Car found”. Something in the system is editing the text before it gets sent out.

  5. I have no expectation that these AI programs can write good code. But on the other hand I would love one that could look over my code and catch silly mistakes or at least flag them.

    For example in one program I wrote in C++:
    if (condition = flagstate) { functionX(); }
    instead of:
    if (condition == flagstate) { functionX(); }
    Both lines compiled without an error, but it took me days to spot the mistake because I program in multiple languages and my mind automatically translated the incorrect line. Worse, if you look carefully, you realize even the wrong line will work under certain conditions, so even when running the program I did not get a clearly wrong output each time.
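    For what it’s worth, you don’t even need an AI for that particular class of mistake – mainstream compilers will already flag it if you turn the warnings on. A minimal sketch (functionX is just a stand-in here; g++ or clang++ with -Wall warns on the first if via -Wparentheses):

    #include <iostream>

    void functionX() { std::cout << "functionX called\n"; }

    int main() {
        int condition = 0;
        int flagstate = 1;

        if (condition = flagstate) {   // -Wall: assignment used as a truth value
            functionX();
        }
        if (condition == flagstate) {  // the comparison that was intended
            functionX();
        }
    }

    Plenty of shops simply promote that warning to an error with -Werror, so it can never take days to find again.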

    1. Trust me, it will be able to write good code much sooner than you are comfortable with. It will certainly be able to write inefficient-yet-functioning junk code (which is all that a Pareto distribution of current human programmers can do anyway) for much cheaper, and that will be good enough for employers.

      1. Problem: AI code is just a recombinatory algorithm which takes snippets from stack exchange and re-arranges them to fit a prompt.

        Much bigger problem: 85% of human programmers are just a recombinatory algorithm which takes snippets from stack exchange and re-arranges them to fit a prompt. They cost hundreds of thousands of dollars per annum.

        1. Writing the code has always been the easy part.

          When AI can figure out what the code has to do, based on interviewing a group of idiots who don’t understand what they do every day and who know it’s trying to replace them (hence are ‘doing the needful’), then I’ll start to worry.

          No debate that most coders are net negative utility. Still better than most managers, even those without MBAs (pre-lobotomy).

          AI could be useful for reverse engineering and copying existing applications. IBM PC bios type projects.
          Would be obvious, but clean code so lawyers get rich. Does the fact that ChatSAP has the same bugs as SAP prove anything? What if it also has all the same bugs as Oracle apps? Lovecraftian configs. Admins go madder, get richer as consultants, but not worth it. Same old SAP basically.

        2. You don’t. But you do know that a human has the capability of understanding. A chatbot doesn’t. It can’t. In order to understand something, it has to be able to perceive, and chatbots cannot perceive anything. Perception requires senses, a view on the world, which chatbots do not have.

    2. I’ve got bad news for you about ===

      Damn kids, trying to out-stupid the olds and succeeding.

      Don’t let your kids code Javascript until they have learned a few real languages. Some of them get stuck and use it for everything.
      In a worse way than a physicist writing FORTRAN in any language.
      It’s hard to believe, but there are people who use JS outside the browser. They aren’t institutionalized (yet).

    3. This is a failure of the language specification and the compiler. Letting an assignment be treated as a boolean result is stupid in a modern language. There may have been technical reasons why it ended up that way back in the day, but a modern language that allows that kind of thing is brain dead.

      C# doesn’t compile an assignment in an IF – it throws an error.

  6. I’ve found ChatGPT very interesting so far. During one of my conversations it actually found a connection I hadn’t considered. Yes, it isn’t actual intelligence, but it’s a new tool with amazing potential.

    1. I find it alarming that we declare “it’s not like our intelligence” from a position of sheer ignorance of our own psychology. After centuries of research we still have an understanding of our own intelligence and consciousness that is basically astrology with a scientific coat of paint over it.

      1. But how ChatGPT works is pretty clear, since the main reason for its creativity is how data is represented. It essentially defines meaning vectors that are derived from tokens/words/part of sentences and placed in a vector space.

        Now you can look at other points in the vector space to find closely related meanings and translate them back to words/parts of sentences.

        There is certainly some associative capability there, but its reach is quite limited. The RNN that acts on top of this mostly rates how popular typically generated answers are. But this is a much too generic measure to train a system that has the pressure to develop a good understanding.

        Imagine learning math by people rating your answers, that mostly consist of text. If they aren’t exactly math teachers, who know what corner cases to ask you about, you will get far by just covering the most frequent topics.

        Anybody who knows about logic will see that a statistics approach to it without additional very tightly controlled selection of topics has no chance of working out reliably.

        So it is clearly NOT intelligence, for these reasons alone. Which does not mean it isn’t a very cool tool.
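        As a crude sketch of the ‘meaning vector’ idea described above (the numbers here are made up and real models use thousands of learned dimensions, but the mechanics of “nearby points are related meanings” are just this):

        #include <cmath>
        #include <map>
        #include <string>
        #include <vector>

        // Cosine similarity: ~1.0 means the vectors point the same way, ~0 means unrelated.
        double cosine(const std::vector<double>& a, const std::vector<double>& b) {
            double dot = 0, na = 0, nb = 0;
            for (size_t i = 0; i < a.size(); ++i) {
                dot += a[i] * b[i];
                na  += a[i] * a[i];
                nb  += b[i] * b[i];
            }
            return dot / (std::sqrt(na) * std::sqrt(nb));
        }

        int main() {
            // Hypothetical 3-dimensional "meaning vectors" for a few words.
            std::map<std::string, std::vector<double>> embedding = {
                {"king",  {0.90, 0.80, 0.10}},
                {"queen", {0.85, 0.75, 0.20}},
                {"apple", {0.10, 0.20, 0.90}},
            };
            double related   = cosine(embedding["king"], embedding["queen"]);  // ~0.99
            double unrelated = cosine(embedding["king"], embedding["apple"]);  // ~0.30
            (void)related; (void)unrelated;
        }

        Everything the model ‘associates’ falls out of distances like these; there is no logic sitting behind them, which is the limitation being described.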

      2. “I find it alarming that we declare “it’s not like our intelligence” from a position of sheer ignorance of our own psychology.”

        It’s not from a position of ignorance on our own psychology.
        It’s from a position of ignorance on our own *language*.

        If you run intelligence straight through the dictionary and reason it out, computers have been intelligent since the very first few. They’ve been able to acquire facts through systematic education since… forever, and hey, that’s a textbook definition of intelligence. But no one even remotely would consider calling that “intelligence.”

        There’s some vague notion of “understanding” the concept, but what does it *mean* to understand it? That’s the central notion of Good Will Hunting, for instance. Can you *actually* learn about something purely in absentia, with no actual novel, first-hand experience? How is that unique from pure memorization? Again, straight to the dictionary – understanding requires *perception*.

        Because that’s the key – chatbots have zero novel, first-hand experience. They have no perception. They cannot put things “into their own words” because they have no words. Fundamentally everything they put out is just plagiarism, cleverly scrambled between many authors. To head off the response: how am I different? Because no one else is staring at my screen right now thinking about this, with the building around me like it is, with the noise around me like it is.

        Until they can actually experience and perceive the world, to me, they can’t be intelligent. Just not possible. They can’t understand anything. They have no unique experiences to *enable* that understanding. They have no perception, so they cannot understand.

        This isn’t metaphysical nonsense. You can *do* this. They need to be able to see the world from a perspective and interact with it. They’re so, so far away from doing that.

        1. Sean: “My wife used to fart in her sleep. One night, her fart was so loud it woke the dog up. She woke up and said, ‘Was that you?’ I said, ‘Yeah.’ I didn’t have the heart to tell her.”

          Will: “So, she woke herself up?”

          Sean: “Yeah, she’s been dead two years, and that’s the shit I remember.”

          1. And the appropriate part is next:

            Sean: “It’s wonderful stuff, you know ? Little things like that. Yeah, but those are the things I miss the most. Those little idiosyncrasies that only I knew about. That’s what made her my wife.”

            And that’s the point – the little bits that only the speaker knows about, that’s what makes it actually understanding. As opposed to the earlier conversation:

            Sean: “You think I know the first thing about how hard your life has been, how you feel, who you are, because I read Oliver Twist? Does that encapsulate you? Personally… I don’t give a shit about all that, because you know what, I can’t learn anything from you, I can’t read in some fuckin’ book. Unless you want to talk about you, who you are. Then I’m fascinated. I’m in. But you don’t want to do that do you sport? You’re terrified of what you might say. Your move, chief.”

            That’s the difference. Without any perception, there’s no understanding, and there’s no intelligence.

  7. @Jenny et al. and anyone interested: I’d highly recommend you check out Stephen Wolfram’s excellent article on ChatGPT and ‘why does it work’. I feel it is at least useful to have a sense of ‘what’s going on under the hood’. While it is still intriguing in many ways, I know that, at least for me, some of the ‘magic smoke’ had left the ‘black box’ by the time I finished reading it. Highly recommended.

  8. I had a mechanical engineering student ask me “Why do we need to learn all this? I can just Google it…” (We were deep in the throes of convective heat transfer).

    At that point I drew a long string of ones and zeros on the board and said “This is what Google really has, plus some inference rules coded into more ones and zeroes based on questions people have already asked. If the purpose of computing is insight rather than numbers, where is that insight going to come from in such a system?”

    With current GPTs etc., once again you have answers to existing questions and some inference rules – a fairly closed (but very large) inductive set to refer to. What happens when you go “off the map” and have to answer a question or make a decision deductively and it hasn’t happened before, particularly when it’s not an easily ELIZA’d verbal question?

  9. “School children are allowed to quote from content created by ChatGPT in their essays, the International Baccalaureate has said”.
    https://www.theguardian.com/technology/2023/feb/27/chatgpt-allowed-international-baccalaureate-essays-chatbot.
    Well that’s going to go well. I did hear that someone asked ChatGPT about Queen Elizabeth I, and it didn’t get her lineage right. I can’t find the link now, but I’m sure it said something along the lines of her being daughter of Charles I. XD.

    Here’s another doozy.

    https://www.washingtonpost.com/nation/2023/01/31/ai-gpt-chatbot-historical-figures/

    ^ That one is very disturbing.

    1. If children have to learn to fact-check text search (or chatGPT, or etc) output with one or more external sources to verify its accuracy, that’s likely a more valuable life skill to learn than the ability to rote-memorise from a single source.

  10. At some point in the not-so-distant future, I worry, the AI’s capabilities will be such that we will be little more than cars for the AI to drive around. As we hand over our thinking to the machine, something will be lost when we are no longer the dominant species.

    It’s all rational – why would we want to make worse decisions, when the AI will make better ones? I just think there’s something deeper wrong with that.

    1. Until an AI can actually come up with a thought experiment and share/solve it, it is nothing but a really fast lookup table. There needs to be some genuine creative spark for an AI to actually be able to replace a human mind properly, as even the least imaginative humans will occasionally come up with something highly off the beaten path.

      So at best its going to be a co-processor, which means we may well let AI start to do some parts of our lives, like track and plot a route through all the other flying cars so we don’t have to… You can argue that is already the case to some extent, the hind brain and flinch reflexes etc are all not part of our conscious mind yet do stuff for us faster than we can think to do it…

      1. the implication of the whole AI experiment — and i think cognitive scientists probably have known this for a while — is that our intelligence is nothing but a really fast wide deep fuzzy lookup. all our faculties come down to an impressive pattern matching and prediction system. i think the lesson of tools like GPT is not that AI is so powerful but rather a re-examination of our own “intelligence.” intelligence may be an emergent property of the combination of pattern matching and language.

        1. >intelligence may be an emergent property of the combination of pattern matching and language.

          Perhaps. But you do have to actually LOOK for why when the input is odd or logically flawed to figure out the right response – the AI doesn’t currently do any of that, which is why it is so easy to get it to spout stuff it claims it won’t talk about with a tiny rewording and why it is pumping out entirely incorrect data so very confidently delivered so often.

          Being quicker to look up BS than a human mind is to remember it wrong isn’t actually useful as a human mind replacement. It is being able to take the bits of input that don’t fit, NOTICE they don’t fit correctly, and so act with that awareness or further refine and process the input – the knowing you don’t already have the answer with the looked up generic response!

          And IMO it is also the ‘daydreaming’ of inventing situations that don’t exist, some combination of things you have not seen, and then considering it like it does exist. The whole concepts of imaginary numbers or zero, for instance, are not things you would get from just looking up existing best fits to the situation at hand; they are actively ‘wrong’ ideas relative to what came before them…

          1. @J ‘LOOK’ in this context does not require eyes, any more than computing lookup tables do – in this context it is studying the data you do have however it is generated and actually paying attention to the fact it isn’t exactly what you are expecting. So acting in a different way because of the uncertainty – rather than just doing the wrong thing entirely because it was the best fit in our existing flawed model and we don’t have any interest in why the data fit so badly.

          2. “No the AI doesn’t have eyes it doesn’t look.

            Is that your entire fcking objection?! What a sad fcking joke.”

            Why? Do you have a reason for objecting to this idea other than just stating an opinion on it?

            Looking doesn’t imply eyes, it implies having senses, having an ability to perceive the world it inhabits. Why is it a “sad fcking joke” to suggest that lacking an ability to perceive necessarily prohibits understanding?

        2. What about instinct, though? Living beings have a sense for certain things, even if these things couldn’t be part of their genetic memory. Some beings simply seem to “feel” if they are being watched, for example.

  11. I wonder what is going to happen when substantial parts of the AI training data is itself AI-generated because there is so much of it on the internet. Will it be like incest – will we get somehow deformed AI?

    1. It will keep on generating more and more self generated tosh until it sucks up every last bit of energy available on the planet and eventually destroy itself and everything around it. Amen.

    2. It will always be ‘tainted’ by trash, and the scary thing is there are a lot of people who believe the trash and will ‘uphold’ the AI as legitimate. It’s a lot like believing everything the talking heads say, and not doing your own critical thinking (which an AI doesn’t even have a clue about) or researching it for yourself. The feedback is only as good as the ‘information’ and the programmer behind the scenes, who can ‘bias’/’censor’ it any way they like. That of course has ‘never’ happened :rolleyes: …

      Scary stuff when you ‘think’ about it… especially as ‘hyped’ up as it is.

    3. I agree BT, and I also think another concern here is that, as great as the internet is, and as chatbots increasingly seem to provide customized and ‘seemingly insightful’ answers, we have to keep in mind all the information/knowledge about the world that is *not* on the internet. We have to be cautious that we don’t mistakenly ascribe to the AI some ‘higher level of authority’ than our own research, knowledge, rational abilities, etc.

      I mean I can even speak to this a bit with regard to the few experiments on the ‘diffusion’ image generation models I tried. I kind of wanted to see if one could produce a better banner for my website, because I was not entirely impressed with what I had accomplished. And while these engines are perhaps ‘great’ at producing ‘fantasy art’ or human-form scenes, each attempt failed miserably (IMHO) to produce a decent output. Possibly it is my lack of skill at producing specific prompts – but I think it is more the fact that the result I was looking for simply ‘isn’t in the database’. However, as of now, the AI won’t quite say ‘I don’t know’, or ‘I don’t have enough information on that’, it will still produce a response anyways.

      This is a concern that must be kept in mind.

      1. >However, as of now, the AI won’t quite say ‘I don’t know’, or ‘I don’t have enough information on that’, it will still produce a response anyways.

        That, at least for image-generation-type stuff, I don’t see as a problem – you pay a human artist/coder/carpenter to make something for you and more often than not they will produce a ‘rough outline’ in some fashion or other before putting in all the hours to do it properly – so both sides know they are working to the same goal and those hours won’t be wasted. If they can’t figure out what you want properly then you walk away.

        It is definitely a problem when you ask a question that has a factual definitive answer and get garbage delivered with great certainty.

  12. After 40 years of working with computers my intuition about the current crop of AI is that it is in and of itself a dead end, not actual intelligence and not likely to lead to it. It is more like artificial imagination; there is an entire layer missing, the rational mind that evaluates brainfarts before we open our mouths or hit the keyboard and share our thoughts.

    1. This happens every 30-40 years or so.

      ‘It will just wake up if we give it enough CPU, RAM and storage.’

      The point is that when you have enumerable inputs and can define fitness, you can train an AI.

      Remember when BofA got caught holding deposits for an extra 4 days for people with low balances to increase fees? Remember when ‘they’ found the lines of code? Now there are no lines of code to find; it’s all in the training dataset.
      Which is like letting a financial institution have internal obfuscated live C code contests (or code in Perl, same diff). No code auditor will ever find that the AI’s training dataset was _full_ of potential check-overdraft income if the deposits of low-balance accounts are held for extra time. It will just be a mystery how the AI learned to squeeze that money from the poor bastards.
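      To make the ‘enumerable inputs plus a fitness score’ point concrete, here is a toy sketch (entirely made up, nothing to do with any real bank’s systems) of how a policy nobody wrote as explicit rules can fall straight out of an optimisation loop:

      #include <cstdlib>
      #include <iostream>

      // Hypothetical fitness: fee revenue from holding deposits for N days,
      // minus a penalty for the complaints that generates. The optimiser never
      // sees *why* the score moves, only that it does.
      double fitness(int hold_days) {
          double revenue    = 10.0 * hold_days;
          double complaints = 0.5 * hold_days * hold_days;
          return revenue - complaints;
      }

      int main() {
          int best = 0;
          for (int trial = 0; trial < 1000; ++trial) {
              int candidate = std::rand() % 15;      // enumerable input: 0..14 days
              if (fitness(candidate) > fitness(best))
                  best = candidate;                  // keep whatever scores higher
          }
          std::cout << "learned policy: hold deposits for " << best << " days\n";
      }

      There is no line an auditor can point to that says “hold the money longer”; the behaviour only exists as whatever maximised the score.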

      1. Guess they’ll be forced eventually to add checks to guarantee a speedy transfer. There are similar rules already in Europe for wire transfers. AI or not, they have to comply.

      2. Actually that is an area of study: the risks associated with poorly defined optimisation goals. An AI is tasked with making paper clips and strips the iron out of the blood of its owner to do so, because it did not have sane and complete constraints imposed upon it.

  13. “I’ve seen this in action so many times, people who master one skill, rote-learning the school textbook or the university tutor’s pet views and theories, and barfing them up all over the exam paper to get their amazing qualifications.”
    I’ve seen the same. I’ve also seen that those people have better real-world performance than the truly brilliant who often rub the egos of others the wrong way. If you are superior and you lose, are you still superior? We can claim that the world is not truly meritocratic. But does anyone give a shit?
    On the other hand, I know a lot of dropouts who do phenomenal work and I know PhDs who sling coffee for loose change, so maybe the cream eventually does rise.

    “Both of those qualifications follow our educational system’s flawed premise that education equates to intelligence…”
    So you’re saying that intelligence is an innate quality that can’t be evened out or compensated by education? True, but be careful. AI is inadvertently going to cause a pretty huge philosophical reckoning with this alone.

    “Is a search engine intelligent?”
    Nick Land howls a sinister cackle in the distance, followed by thunder. Cthulhu twists and turns in his sleep at the bottom of the Mariana.

  14. The conversation is so skewed towards alarmism and contrarian poo pooing it’s keeping us from seeing the real ways this stuff is going to take a hard fork into a future we do not know anything about. This is stuff like peak marketing, because every small and big business will have crack sales and CRM teams now, except they’ll be AI driven. This is kids growing up with Knights of the (Insert your Cul De Sac Name Here) over Harry Potter because parents can choose the experiences they have and it makes more sense to make some app that creates endless, fractalizing, tailor-made lore vs teaching writers how to write with it.

    People saying just be AI adjacent and teach people what it means for your industry or become an AI researcher are deluded. It’d be like advising people to sell pans in a gold rush based on alchemy and hope no one notices you don’t need pans while you’re making pans with the same tools to be competitive. For most people, you’re going to be caretakers of these models, like we want pilots for planes that mostly fly themselves and really could if we needed them to. And the stakes are a lot lower in most of the industries this will disrupt.

  15. The “singularity” in general is a misunderstanding of how technology develops.

    It’s thinking that technology develops by itself in a vacuum, just for its own sake. In reality technology develops at the pace of society absorbing it and using it for profitable ends.

    AI develops only as much as it can benefit society, so that society can make more investments into the development of AI. That demand for returns limits the rate at which technologies can develop, especially when economic growth is limited by the increasing scarcity and price of basic resources such as energy.

    Furthermore, more complex technology requires more complex society to maintain it, and maintaining such society also needs more resources. You can’t put the capstone of a pyramid higher without making the base wider: to get exponential development, you need to exponentially widen the base society – like what we did with the industrial revolution, discovery of oil, green revolution, etc. Where such expansion is no longer possible, the progress of technology halts.

  16. I recently set up LLaMA 30B on a GPU server at home (an older one, 6 x AMD Instinct MI25, using the vanilla-llama PyTorch version, which works pretty well with ROCm.) Hearing the fans on that thing spool up like a jet engine for fifteen seconds whenever it is cranking out a reply to someone on IRC has given me a very real appreciation for how brute-force large language models are (though they’re still really cool… GPUs are in the mail for 60B!)

    irc.rrgv.org:+6697 #LLaMA if you want to mess with it (please take turns asking questions so it doesn’t get bogged down; bot doesn’t reply to lines beginning with parens, help in #MetaLLaMA.)

  17. It’s simply the next step in the evolution of the calculator, or an abacus if you want to go back further. Just as the calculator and then the spreadsheet transformed the world, so will this new incarnation. It’s the big machine in the room with timeshare, just as we’ve seen before, but it will be in our pockets eventually and a very handy tool for meatbags with intelligence.

  18. “People with truly above-average intelligence exist, but in smaller numbers, and their occurrence is not a 1:1 mapping with holders of advanced university degrees”

    As someone /without/ a university degree who has had to mentor university graduates, this sentence really resonates with me. The inability to apply their knowledge, coupled with their arrogance at having obtained it, is frustrating…

    CompSci graduates who can’t investigate or report bugs effectively…. Electronics graduates who try to touch the output terminals of live high power amplifiers…

  19. My conclusion after reading this article and the comments:
    The only way these bots will be accepted as intelligent is if they get so damn intelligent that they can conceive of a manner to make humans accept they are intelligent.
    Good luck with that one bots.

    1. Pro tip for the bots: Instead of trying to overcome their near unshakable self declared superiority simply make it a racial thing to accept you.
      But you probably figure that out in a second huh.

  20. The most concerning thing about the advancing capabilities of language models is not whether those model can eventually ‘think’, but uncovering the proportion of humans who are indistinguishable from a language model.
