Since ELIZA was created by [Joseph Weizenbaum] in the 1960s, its success has led to many variations and ports being written over the intervening decades. The goal of the ELIZA Archaeology Project, run by teams at Stanford, USC, Oxford, and other universities, is to explore and uncover as much of this history as possible, starting with the original 1960s code. As noted in a recent blog post by [Anthony Hay], most of the intervening ‘ELIZA’ versions seem to have been inspired by the original rather than being accurate replicas or extensions of it. This raises the question of what the original program really looked like, a question which wasn’t answered until 2020 when the original source code was rediscovered.
Continue reading “The ELIZA Archaeology Project: Uncovering The Original ELIZA”
Human-Written Or Machine-Generated: Finding Intelligence In Language Models
What is the essential element that separates a text written by a human being from one generated by an algorithm, when said algorithm uses a massive database of human-written texts as its input? This would seem to be the fundamental struggle society currently grapples with, as we face the prospect of a future in which students can have essays auto-generated by large language models (LLMs) and authors can churn out books by the dozen, with nothing more than a query describing the desired contents as the human input.
Because such an LLM is trained on an immense amount of human-generated text, there is a definite overlap between its output and the average prose of a human author. Statistical methods of detecting machine-generated text are also increasingly hamstrung by the developers and other human workers behind these text-generating algorithms, who inject just enough human-like randomness into the algorithm's predictive vocabulary to convince the casual reader that it was written by a fellow human.
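To make that concrete: the statistical methods in question usually boil down to measuring how "predictable" a passage looks to some reference model, i.e. its perplexity. The toy sketch below is our own illustration, not any particular detector's code; it scores text against a crude unigram model built from a reference corpus. Real detectors use full language models, but the principle is the same: text that scores as suspiciously predictable gets flagged.

```python
import math
from collections import Counter

def unigram_model(reference_text):
    """Build a toy unigram probability model from a reference corpus."""
    words = reference_text.lower().split()
    counts = Counter(words)
    total = sum(counts.values())
    vocab = len(counts) + 1  # crude Laplace smoothing so unseen words aren't zero
    return lambda w: (counts.get(w, 0) + 1) / (total + vocab)

def perplexity(text, model):
    """Exponentiated average negative log-probability: lower means 'more predictable'."""
    words = text.lower().split()
    nll = -sum(math.log(model(w)) for w in words) / max(len(words), 1)
    return math.exp(nll)

reference = "the quick brown fox jumps over the lazy dog " * 50
model = unigram_model(reference)
print(perplexity("the quick brown fox", model))        # low: looks like the reference
print(perplexity("zygomorphic quasar bivouac", model)) # high: unlike the reference
```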
Perhaps the best way to detect machine-generated text is to look for the one quality these algorithms are so often advertised as possessing, yet in reality are completely devoid of: intelligence.
Continue reading “Human-Written Or Machine-Generated: Finding Intelligence In Language Models”
AI On The Hunt For Better Batteries
While certain dystopian visions of the future have humans power the grid for AIs, Microsoft and Pacific Northwest National Laboratory (PNNL) set a machine learning system on the path of better solid state batteries instead.
Solid state batteries are the current darlings of battery research, promising a step-change in packaging size and safety among other advantages. While they have been working in the lab for some time now, we have yet to see any large-scale commercialization that could shake up the consumer electronics and electric vehicle spaces.
With a starting set of 32 million potential inorganic materials, the machine learning algorithm was able to select the 150 most promising candidates for further investigation. This smaller subset was then fed through a high-performance computing (HPC) pipeline to winnow the list down to 23. After eliminating previously explored compounds, the scientists were able to develop a promising Li/Na-ion solid-state battery electrolyte that could reduce the lithium needed in a battery by up to 70%.
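Microsoft and PNNL haven't published their pipeline here, but the funnel described above (a cheap learned score over millions of candidates, followed by expensive calculations on the survivors) has a simple general shape. The sketch below is purely illustrative; the candidate names and scoring functions are stand-ins.

```python
import random

# Stand-in candidate pool; the real screen started from ~32 million inorganic materials.
candidates = [f"material_{i}" for i in range(100_000)]

def ml_stability_score(material):
    """Cheap surrogate score (hypothetical); a trained ML model would go here."""
    return random.random()

def expensive_simulation(material):
    """Stand-in for the costly HPC calculations run only on the short list."""
    return random.random()

# Stage 1: the cheap model ranks everything, keep the top 150.
shortlist = sorted(candidates, key=ml_stability_score, reverse=True)[:150]

# Stage 2: expensive physics-based filtering winnows the shortlist to ~23.
finalists = sorted(shortlist, key=expensive_simulation, reverse=True)[:23]
print(len(finalists), "candidates left for the lab")
```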
For those of us who remember when energy materials research often consisted of digging through dusty old journal papers to find inorganic compounds of interest, this is a particularly exciting advancement. A couple more places where technology can help out in the sciences: robots doing the work in the lab or on the surgery table.
FedEx Robot Solves Complex Packing Problems
Despite the fact that it constantly seems like we're in the midst of a robotics- and artificial intelligence-driven revolution, there are a number of tasks that continue to elude even the best machine learning algorithms and robots. The clothing industry is an excellent example, where flimsy materials can easily trip up robotic manipulators. But one such task that seems like it might soon be solved is packing cargo into trucks, as FedEx is trying to do with one of their new robots.
Part of the reason this task is so difficult is that packing problems, much like “traveling salesman” problems, are surprisingly complex. The packages are not presented to the robot in any particular order, and need to be placed efficiently according to weight and size. This robot, called DexR, uses artificial intelligence paired with an array of sensors to get an idea of each package's dimensions, which allows it to plan stacking and ordering configurations and ensure a secure fit among all of the packages. The robot must also be capable of quickly adapting, re-ordering or re-stacking packages if any of them shift during stacking.
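FedEx hasn't shared DexR's planner, but the flavor of the underlying problem is easy to demonstrate with the classic first-fit-decreasing heuristic, here stripped down to a single dimension (package volume versus container capacity). The real robot also has to juggle weight limits, 3D geometry, and packages that shift mid-stack.

```python
def first_fit_decreasing(volumes, bin_capacity):
    """Greedy 1-D bin packing: place each item (largest first) into the
    first bin that still has room, opening a new bin when none fits."""
    bins = []  # each bin is a list of item volumes
    for item in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + item <= bin_capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

packages = [4.0, 8.0, 1.5, 4.2, 2.1, 1.0, 5.0, 7.0]
print(first_fit_decreasing(packages, bin_capacity=10.0))
```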
As robotics platforms and artificial intelligence continue to improve, it's likely we'll see a flurry of complex problems like these solved by machines instead of humans. Real-world tasks are often more complex than they seem; as anyone with a printer and a PC LOAD LETTER error can attest, even handling single sheets of paper can be a difficult task for a robot. Interfacing with these types of robots can be a walk in the park, though, provided you read the documentation first.
Humans And Balloon Hands Help Bots Make Breakfast
Breakfast may be the most important meal of the day, but who wants to get up first thing in the morning and make it? Well, there may come a day when a robot can do the dirty work for you. This is Toyota Research Institute's vision with their innovatively trained breakfast bots.
Going way beyond pick and place tasks, TRI has so far taught robots how to do more than 60 different things, using a new method to teach dexterous skills like whisking eggs, peeling vegetables, and applying hazelnut spread to a substrate. Their method is built on a generative AI technique called Diffusion Policy, which they use to create what they're calling Large Behavior Models.
Instead of requiring hours of coding and debugging, the robots learn differently. Essentially, the robot gets a large, flexible balloon hand with which to feel objects, their weight, and their effect on other objects (like flipping a pancake). Then a human shows it how to perform a task, and the recorded demonstration is handed off to an AI model. After a number of hours, say overnight, the bot has a new working behavior.
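Diffusion Policy itself trains a denoising diffusion model over action sequences, which is more than we can squeeze into an aside, but the demonstrate-then-train loop described above reduces to supervised learning on recorded (observation, action) pairs. The PyTorch snippet below is a deliberately simplified behavior-cloning stand-in, not TRI's code, with random tensors playing the role of the demonstration data.

```python
import torch
import torch.nn as nn

# Pretend demonstration data: sensor readings from the "balloon hand" and the
# human teleoperator's joint commands (all randomly generated stand-ins here).
obs = torch.randn(1000, 32)      # 1000 recorded timesteps, 32 sensor values each
actions = torch.randn(1000, 7)   # 7 joint commands per timestep

# A small policy network that maps observations to actions.
policy = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 7),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# "Overnight" training: regress the demonstrated actions from the observations.
for epoch in range(100):
    pred = policy(obs)
    loss = loss_fn(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final imitation loss:", loss.item())
```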
Now, since TRI claims that their aim is to build robots that amplify people and not replace them, you may still have to plate your own scrambled eggs and apply the syrup to that short stack yourself. But they plan to have over 1,000 skills in the bag of tricks by the end of 2024. If you want more information about the project and to learn about Diffusion Policy without reading the paper, check out this blog post.
Perhaps the robotic burger joint was ahead of its time, but we’re getting there. How about a robot barista?
Continue reading “Humans And Balloon Hands Help Bots Make Breakfast”
Programming A Poker Game With GPT Help
Although ChatGPT generated a huge amount of hype about completely replacing white-collar workers when it was first released to the public, the general consensus now is that it won't outright replace anyone just yet; rather, people who know how to use it as a tool will replace those who don't. Getting started with it is not too hard, either, but you'll of course need a project to work on to familiarize yourself with the tool. [Volos Projects] gave himself the challenge of writing a poker game using ChatGPT not as the opposing player, but as a co-designer, in order to learn more about it as an assistant.
The poker game is being built on an ESP32 board with a built-in AMOLED screen. Five buttons are wired to the microcontroller to allow the player to select which cards to discard and which to keep. The bet for each hand can be raised or lowered, much like on the tabletop poker games often seen in bars and restaurants. To program it, though, ChatGPT was used to help design the code at each step of the way: first describing the overall goal, and then building each function one by one, such as shuffling the deck, dealing the hand, and replacing discarded cards with new ones.
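The actual project runs as C++ on the ESP32, but the function-by-function workflow described above is easy to picture. The Python sketch below shows the kind of scaffolding those first few prompts tend to produce (shuffle, deal, replace discards); it is our own illustration rather than code from the project.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "SHDC"

def shuffled_deck():
    """Build a 52-card deck and shuffle it."""
    deck = [r + s for r in RANKS for s in SUITS]
    random.shuffle(deck)
    return deck

def deal_hand(deck, n=5):
    """Deal n cards off the top of the deck."""
    return [deck.pop() for _ in range(n)]

def replace_cards(hand, deck, discard_indices):
    """Replace the cards the player chose to discard with fresh ones."""
    for i in discard_indices:
        hand[i] = deck.pop()
    return hand

deck = shuffled_deck()
hand = deal_hand(deck)
print("dealt:", hand)
print("after draw:", replace_cards(hand, deck, [0, 2]))
```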
For anyone who hasn’t yet explored using ChatGPT to help design their programming projects, this effort goes a long way to showing just how useful a tool it can be. For more complex tasks, though, it does take a little bit of knowledge on the part of the user because ChatGPT can often turn out nonsense or factually inaccurate information, but at least in a programming environment you’ll generally find out quickly when that happens. It’s not just a useful tool for writing programs, either. It can accomplish a lot of ancillary tasks related to programming as well, even if it’s not writing the code directly.
Thanks to [Peter] for the tip!
Next-Gen Autopilot Puts A Robot At The Controls
While the concept of automotive “autopilots” is still in its infancy, pretty much any aircraft larger than an ultralight will have some mechanism to at least hold a fixed course and altitude. Typically the autopilot system is built into the airplane's controls, but this new system instead replaces the pilot, in a manner reminiscent of the movie Airplane.
The robot pilot, known as PIBOT, uses both AI and robotics technology to fly the airplane without altering the aircraft. Unlike a normal autopilot system, this one can be fed the aircraft’s manuals in natural language, understand them, and use that information to fly the airplane. That includes operating any of the aircraft’s cockpit controls, not just the control column and pedal assembly. Supposedly, the autopilot can handle everything from takeoff to landing, and operate capably during heavy turbulence.
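KAIST hasn't published PIBOT's software stack, so the snippet below is only a loose, hypothetical illustration of the "read the procedure, then actuate the matching control" idea: checklist lines get matched against a small table of known cockpit actuations. Every name and mapping in it is made up for the example; the real system uses a language model to interpret the manuals.

```python
# Hypothetical mapping from checklist phrases (as parsed from a flight manual)
# to cockpit actuations the robot arm knows how to perform.
ACTION_TABLE = {
    "flaps": lambda value: f"set flap lever to {value}",
    "throttle": lambda value: f"advance throttle to {value}",
    "landing gear": lambda value: f"move gear handle {value}",
}

def execute_checklist(checklist_lines):
    """Very rough sketch: match each checklist line against known controls."""
    for line in checklist_lines:
        text = line.lower()
        for control, act in ACTION_TABLE.items():
            if control in text:
                value = text.split("...")[-1].strip() if "..." in text else "as specified"
                print(act(value))
                break
        else:
            print(f"no known actuation for: {line!r}")

execute_checklist([
    "Flaps ... 10 degrees",
    "Throttle ... full",
    "Landing gear ... up",
])
```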
The Korea Advanced Institute of Science and Technology (KAIST) research team that built the machine hopes it will pave the way for more advanced autopilot systems. Although it has only been tested in simulators so far, it shows enormous promise, and even has certain capabilities that go far beyond human pilots' abilities, including the ability to remember a much wider variety of charts. The team also hopes to eventually migrate the technology to land vehicles, especially military ones, although we've seen how challenging that can be already.