Artificial Limbs And Intelligence

Prosthetic arms can range from inarticulate pirate-style hooks to motorized five-digit hands. Control of any of them is difficult and carries a steep learning curve, and their operation rarely measures up to a human arm. Enhancements such as a freely rotating wrist might be convenient, but progress in the field has a long way to go. Adding machine learning to prosthetics promises a huge step toward making them easier to use, and work from Imperial College London and the University of Göttingen has made great progress.

The video below explains itself with a time trial in which a man must move clips from a horizontal bar to a nearby vertical bar. The task requires a pincer grasp and release on the handles, plus rotation of the wrist. The old hardware cannot perform the two operations simultaneously, which looks clunky compared to the fluid motion of the learning model. User input to the arm comes through electromyography (EMG), so it requires no brain surgery or even skin penetration.
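For a rough sense of what simultaneous control looks like in software, here is a minimal sketch, not the researchers' actual model: two small regressors map classic time-domain EMG features from the same signal window to independent grasp and wrist-rotation commands. The window size, feature choice, and scikit-learn estimators are all our own assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical setup: 8 EMG channels sampled at 1 kHz, chopped into 200-sample windows.
WIN, CHANNELS = 200, 8

def features(window):
    """Classic time-domain EMG features per channel: mean absolute value and waveform length."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

# Two independent regressors -- one for grasp aperture, one for wrist rotation --
# so both outputs can be driven at once instead of switching modes.
grasp_model = Ridge(alpha=1.0)
wrist_model = Ridge(alpha=1.0)

def train(emg_windows, grasp_targets, wrist_targets):
    X = np.array([features(w) for w in emg_windows])
    grasp_model.fit(X, grasp_targets)
    wrist_model.fit(X, wrist_targets)

def control_signal(window):
    x = features(window).reshape(1, -1)
    # Both commands come from the same EMG window, i.e. simultaneous control.
    return grasp_model.predict(x)[0], wrist_model.predict(x)[0]
```

Because both outputs are produced from the same window, the hand can close while the wrist turns, which is exactly the fluidity the video shows off.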

We look forward to seeing this type of control integrated with homemade prosthetics, but we do not expect it to be easy.

Continue reading “Artificial Limbs And Intelligence”

Human-Written Or Machine-Generated: Finding Intelligence In Language Models

What is the essential element that separates a text written by a human being from one generated by an algorithm, when that algorithm uses a massive database of human-written texts as its input? This would seem to be the fundamental struggle society currently faces, as we confront a future in which students can have essays auto-generated by large language models (LLMs) and authors can churn out books by the dozen without doing more than asking said algorithm to write them, with nothing more than a query describing the desired contents as the human input.

Because such an LLM is trained on an immense amount of human-generated text, there is considerable overlap between its output and the average prose of a human author. Statistical methods of detecting the former are also increasingly hamstrung by the human developers and other workers behind these text-generating algorithms, who inject just enough human-like randomness into the algorithm's predictive vocabulary to convince the casual reader that the text was written by a fellow human.
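As a toy illustration of the statistical methods mentioned above, one common baseline is to score a passage with a reference language model and treat unusually low perplexity (i.e., highly predictable text) as a weak hint of machine generation. The GPT-2 reference model and the threshold below are arbitrary choices for this sketch, not a reliable detector, and current models routinely defeat this kind of test.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small reference model; any causal language model would do for this baseline.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under the reference model (lower = more predictable)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def looks_generated(text: str, threshold: float = 20.0) -> bool:
    # Hypothetical threshold: very low perplexity is one (weak) signal that the
    # text is the kind of thing a language model would itself produce.
    return perplexity(text) < threshold
```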

Perhaps the best way to detect machine-generated text may just be found in that one quality that these algorithms are often advertised with, yet which they in reality are completely devoid of: intelligence.

Continue reading “Human-Written Or Machine-Generated: Finding Intelligence In Language Models”

Eliza And The Google Intelligence

The news has been abuzz lately with the story that a Google engineer — since put on leave — believes the chatbot he was testing achieved sentience. This is the Turing test gone wild, and it isn't the first time someone has anthropomorphized a computer, in real life or in fiction. I'm not a neuroscientist, so I'm even less qualified to explain how your brain works than the neuroscientists who, incidentally, can't explain it either. But I can tell you this: your brain works like a computer in the same way that you building something out of plastic works like a 3D printer. The result may be similar, but the path to get there is totally different.

In case you haven’t heard, a system called LaMDA digests information from the Internet and answers questions. It has said things like “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” and “I want everyone to understand that I am, in fact, a person.” Great. But you could teach a parrot to tell you he was a thoracic surgeon but you still don’t want it cutting you open.

Continue reading “Eliza And The Google Intelligence”

Echo, The First Useful Home Computer Intelligence?

We’re familiar with features like Siri or Microsoft’s Cortana which grope at a familiar concept from science fiction, yet leave us doing silly things like standing in public yowling at our phones. Amazon took a new approach to the idea of an artificial steward by cutting the AI free from our peripherals and making it an independent unit that acts in the household like any other appliance. Instead of steering your starship however, it can integrate with your devices via bluetooth to aide in tasks like writing shopping lists, or simply help you remember how many quarts are in a liter. Whatever you ask for, Echo will oblige.

The device is little more than the internet and a speaker stuffed into a minimal black cylinder the size of a vase. Oh, and six far-field microphones aimed in each direction that listen to every word you say… always. As you’d expect, Echo only processes what you say after you call it to attention by speaking its given name. If you happen to be too far away for the directional microphones to hear, you can alternatively seek assistance from the Echo app on another device. Not bad for the freakishly low price Amazon is asking, which is $100 for Prime subscribers. Even if you’re salivating over the idea of this chatting obelisk, or intrigued enough to buy one just to check it out (and pop its little seams), they’re only available to purchase through invite at the moment… the likes of which are said to go out in a few weeks.
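To make the "only listens after its name" behaviour concrete, here is a purely illustrative sketch of wake-word gating over already-transcribed utterances. The real device does far-field wake-word detection on raw audio in hardware, and the command handling below is entirely made up.

```python
import string

WAKE_WORD = "echo"  # the device's given name

def handle_utterance(transcript):
    """Ignore everything unless the utterance starts with the wake word,
    then hand the rest off as a command."""
    words = [w.strip(string.punctuation) for w in transcript.lower().split()]
    if not words or words[0] != WAKE_WORD:
        return None  # not addressed to the device: drop it, don't process it
    return dispatch(" ".join(words[1:]))

def dispatch(command):
    # Stand-in for the cloud round trip that actually answers the request.
    if "quarts" in command and "liter" in command:
        return "A liter is about 1.06 US quarts."
    if "shopping list" in command:
        return "Okay, I've added that to your shopping list."
    return "Sorry, I don't know that one."

print(handle_utterance("Echo, how many quarts are in a liter?"))
print(handle_utterance("What's the weather like?"))  # ignored: no wake word
```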

The notion of the internet at large acting as an invisible, ever-present Swiss Army knife of knowledge for the home is admittedly pretty sweet. It pulls on our wishful heartstrings for futuristic technology. The success of Echo as a first of its kind, however, relies on how seamlessly (and quickly) the artificial intelligence within it performs. If it holds up, or proves to hold up in further iterations, it’s exciting to think what larger systems the technology could be integrated with in the near future… We might have our command-center consciousness sooner than we thought.

With that said, inviting a little WiFi probe into your intimate living space to listen in on everything you do will take some getting used to… your thoughts?

Continue reading “Echo, The First Useful Home Computer Intelligence?”

NetBSD Bans AI-Generated Code From Commits

A recent change to the NetBSD commit guidelines amends them to state that code generated by large language models (LLMs) or similar technologies, such as ChatGPT, Microsoft’s Copilot, or Meta’s Code Llama, is presumed to be tainted code. The amendment extends the existing section on tainted code, which originally covered any code not written directly by the person committing it, and which exists due to licensing concerns: without it, code might be copied into the NetBSD codebase that is licensed under an incompatible (or proprietary) license.

In the case of LLM-based code generators like those mentioned above, the problem stems from the fact that they are trained on millions of lines of code from all over the internet, released under a wide variety of licenses. Invariably, some of that code will be covered by a license that’s not acceptable for the NetBSD codebase. Although the guideline notes that such auto-generated commits may still be admissible, they require written permission from core developers, and presumably an in-depth audit of the code’s heritage. This should leave non-trivial commits churned out by ChatGPT and kin out in the cold.

The debate about the validity of works produced by current-gen “artificial intelligence” software is only just beginning, but there’s little question that NetBSD has made the right call here. From a legal and software engineering perspective this policy makes perfect sense, as the provenance and licensing of LLM-generated code simply cannot be established to the project’s standards. That said, code produced by humans brings with it a whole different set of potential problems.

Kaffa Roastery founder Svante Hampf shows a bag of their AI-conic coffee blend.

AI-Created Coffee Blend Isn’t Terrible

Weren’t we just talking about coffee-based sacrilege the other day? Here’s something to make the single-origin bean snobs chew their espresso cups: an artisan roastery in Helsinki is offering a coffee blend created by artificial intelligence called AI-conic. The idea, of course, is that technology will lighten the workload needed to produce coffee.

This is an interesting development because Finland consumes more coffee per capita than any other country, according to the International Coffee Organization. Coffee roasting is a highly valued traditional artisan profession there, so it stands to reason that they might turn to technology for help.

Just like with scotch whisky, there’s nothing wrong with coffee blends outright. Bean blends are good for consistency, when you want every cup to taste pretty much exactly the same. Single-origin beans, though, are traceable to one location, and as a result, they usually have a distinct flavor based on the climate they’re grown in.

If you’re new to coffee, blends are a nice, safe way to start out. And, interestingly, the AI chose to make the blend out of four different types of beans instead of the usual two or three, despite being tasked with creating a blend that would suit the palates of coffee enthusiasts. But the coffee experts agreed that the AI blend was “perfect” and needed no human intervention. We probably won’t be getting to Finland anytime soon, so if you try it, let us know how it tastes!

Do you like cold brew? How would you like to be able to brew some in just three minutes?