DIY Rabbit R1 Clone Could Be Neat With More Hardware

The Teenage Engineering badging usually appears on cool gear that almost always costs a great deal of money. One such example is the Rabbit R1, an AI-powered personal assistant that retails for $199. It was later revealed to be basically a small device running a simple Android app. That raises the question: could you build your own dupe for $20? That’s what [Thomas the Maker] did.

Meet Rappit. It’s basically [Thomas]’s take on an AI friend that doesn’t break the bank. It runs on a Raspberry Pi Zero 2W, which has the benefit of integrated wireless connectivity. Power comes from rechargeable AA batteries or a USB power bank to keep things simple, and [Thomas] wrapped it all up in a cute 3D-printed enclosure to give it some charm.

It’s the software that makes the Rappit what it is. Rather than including a screen, microphone, or speakers on the device itself, [Thomas] interacts with the Pi-based device via smartphone. That makes it a less convincing dupe of the self-contained Rabbit R1, but the basic concept is the same. [Thomas] can send queries to the Rappit via a simple Android or iOS app he created called “Comfyspace,” and the Rappit responds with the aid of Google’s Gemini AI.
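We don’t know exactly how [Thomas] wired things up, but the architecture is simple enough to sketch: a tiny HTTP endpoint on the Pi that the phone app calls, relaying each query to Gemini. Here’s a minimal illustration assuming the official google-generativeai Python package; the /ask route, JSON shape, and model name are our inventions, not anything taken from Comfyspace.

```python
# Hypothetical sketch of the Pi side: a tiny HTTP relay between the
# phone app and Gemini. Route names and payload shape are invented.
import os

import google.generativeai as genai
from flask import Flask, jsonify, request

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

app = Flask(__name__)

@app.route("/ask", methods=["POST"])
def ask():
    # The phone app POSTs {"query": "..."} and gets the model's reply back.
    query = request.get_json().get("query", "")
    response = model.generate_content(query)
    return jsonify({"reply": response.text})

if __name__ == "__main__":
    # Listen on the LAN so the phone can reach the Pi Zero 2W.
    app.run(host="0.0.0.0", port=8080)
```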

If you’re really trying to duplicate the current crop of AI assistants, you need standalone hardware. To that end, the Rappit design could benefit from a screen, microphone, speaker, and speech synth. Honestly, though, that would only take a few hours’ extra work compared to what [Thomas] has already done here. As it is, [Thomas] could throw away the Raspberry Pi and just use the smartphone with Gemini directly, right? But he chose to use the smartphone as an interface to keep costs down by minimizing hardware outlay.

If you want a real Rabbit R1, you can order one here. We’ve discussed controversy around the device before, too. Video after the break.

Continue reading “DIY Rabbit R1 Clone Could Be Neat With More Hardware”

Taco Bell To Bring Voice AI Ordering To Hundreds Of US Drive-Throughs

Drive-throughs are a popular feature at fast-food places, where you can get some fast grub without even leaving your car. For the fast-food companies running them, they are also a big focus of automation, with the ideal being a voice assistant that can take orders and pass them on to the (still human) staff. This is probably in lieu of being able to make customers use the touchscreen-equipped order kiosks that are common these days. Now pushing for this drive-through automation is Taco Bell, or more specifically its parent company, Yum Brands.

Interestingly enough, this comes shortly after McDonald’s deemed its own drive-through voice assistant a failure and removed it. Meanwhile, multiple Taco Bell locations across 13 US states and five KFC restaurants in Australia are trialing the system, with results apparently encouraging enough to start expanding it. Company officials are cited as saying it has ‘improved order accuracy’, ‘decreased wait times’, and ‘increased profits’. Considering the McDonald’s experience was pretty much the exact opposite in all of these categories, we will wait with bated breath. Feel free to share your Taco Bell or other voice-AI-enabled drive-through experiences in the comments. Maybe whoever Yum Brands contracted for its voice assistant did a surprisingly decent job, which would be a pleasant change.

Top image: Taco Bell – Vadnais Heights, MN (Credit: Gabriel Vanslette, Wikimedia)

AI Image Generator Twists In Response To MIDI Dials, In Real-time

MIDI isn’t just about music, as [Johannes Stelzer] shows by using dials to adjust AI-generated imagery in real time. The results are wild, with an interactivity to them that we don’t normally see in such things.

[Johannes] uses Stable Diffusion‘s SDXL Turbo to create a baseline image of “photo of a red brick house, blue sky”. The hardware dials act as manual controls for applying different embeddings to this baseline, such as “coral”, “moss”, “fire”, “ice”, “sand”, “rusty steel” and “cookie”.

By adjusting the dials, those embeddings are applied to the base image in varying strengths. The results are generated on the fly and are pretty neat to see, especially since there’s no appreciable processing delay.
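We’re speculating on the exact mechanics, but one way to build this is to encode the base prompt and each dial’s concept separately, then interpolate between the embeddings using the dial position as the blend weight. A rough sketch with mido and diffusers follows, showing a single dial for brevity; the model ID and the lerp-in-embedding-space approach are our assumptions, not [Johannes]’s code.

```python
# Illustrative sketch: blend prompt embeddings by MIDI dial position and
# regenerate with SDXL Turbo each time a dial moves.
import mido
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

def embed(text):
    # For SDXL, encode_prompt returns (prompt_embeds, _, pooled_embeds, _).
    e, _, p, _ = pipe.encode_prompt(
        prompt=text, device="cuda", num_images_per_prompt=1,
        do_classifier_free_guidance=False,
    )
    return e, p

base_e, base_p = embed("photo of a red brick house, blue sky")
style_e, style_p = embed("moss")  # one concept; the real rig has one per knob

with mido.open_input() as port:          # first attached MIDI controller
    for msg in port:
        if msg.type != "control_change":
            continue
        w = msg.value / 127.0            # dial position -> blend weight
        # A real setup would debounce here rather than render every tick.
        image = pipe(
            prompt_embeds=torch.lerp(base_e, style_e, w),
            pooled_prompt_embeds=torch.lerp(base_p, style_p, w),
            num_inference_steps=1,       # Turbo: single step, no CFG
            guidance_scale=0.0,
        ).images[0]
        image.save("out.png")
```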

The MIDI controller is integrated with the help of lunar_tools, a software toolkit on GitHub for creating interactive exhibits. As for the image end of things, we’ve previously covered how AI image generators work.

Analyzing Feature Learning In Artificial Neural Networks And Neural Collapse

Artificial Neural Networks (ANNs) are commonly used for machine vision purposes, where they are tasked with object recognition. This is accomplished by taking a multi-layer network and using a training data set to configure the weights associated with each ‘neuron’. Due to the complexity of these ANNs for non-trivial data sets, it’s often hard to make heads or tails of what the network is actually matching in a given (non-training) input. A March 2024 study in Science (preprint) by [A. Radhakrishnan] and colleagues provides an approach to elucidate and diagnose this mystery somewhat, using what they call the average gradient outer product (AGOP).

Defined as the uncentered covariance matrix of the ANN’s input-output gradients averaged over the training dataset, this property can provide information on which features of the data set are used for predictions. These turn out to be strongly correlated with repetitive information: the presence of eyes when recognizing whether lipstick is being worn, for example, or star patterns in a car and truck data set rather than anything to do with the (highly variable) vehicles themselves. None of this was perhaps too surprising, but a number of the same researchers then used AGOP to elucidate the mechanism behind neural collapse (NC) in ANNs.
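As a concrete illustration of that definition, here’s a minimal PyTorch sketch (ours, not the paper’s code) that accumulates the Jacobian outer products over a toy data set:

```python
# Minimal sketch of the AGOP as defined above: the uncentered second
# moment of the input-output Jacobians, averaged over the training set.
import torch
from torch.autograd.functional import jacobian

def agop(model, dataset):
    """AGOP = (1/n) * sum_i J(x_i)^T J(x_i), where J(x) = d model(x) / dx."""
    total, n = None, 0
    for x in dataset:                    # x: flat input vector, shape (d,)
        J = jacobian(model, x)           # shape (out_dim, d)
        G = J.T @ J                      # uncentered outer product, (d, d)
        total = G if total is None else total + G
        n += 1
    return total / n

# Toy usage: a small MLP on random stand-in "training" inputs.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
)
data = [torch.randn(10) for _ in range(100)]
M = agop(model, data)
# The top eigenvectors of M point along the input directions (features)
# that the network's predictions are most sensitive to.
eigvals, eigvecs = torch.linalg.eigh(M)
```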

NC occurs when an overparametrized ANN is trained past the point of zero training error, i.e. overtrained. In the preprint paper by [D. Beaglehole] et al., the AGOP is used to provide evidence for the mechanism behind NC during feature learning. Perhaps the biggest take-away from these papers is that while ANNs can be useful, they’re also incredibly complex and poorly understood. The more we learn about their properties, the more appropriately we can use them.

Credit: Daniel Baxter

Mechanical Intelligence And Counterfeit Humanity

It would seem fair to say that the second half of the last century up to the present day has been firmly shaped by our relationship with technology, and that of computers in particular. From the hulking behemoths at universities, to microcomputers at home, to today’s smartphones, smart homes, and ever-looming compute cloud, we all have a relationship with computers in some form. One aspect of computers which has become increasingly underappreciated, however, is that the less we see them as physical objects, the more we seem inclined to accept them as humans. This is the point which [Harry R. Lewis] argues in a recent article in Harvard Magazine.

Born in 1947, [Harry R. Lewis] found himself at the forefront of what would become computer science and related disciplines, with some of his students being well-known to the average Hackaday reader, such as [Bill Gates] and [Mark Zuckerberg]. Suffice it to say, he has seen every attempt to ‘humanize’ computers, ranging from ELIZA to today’s ChatGPT. During this time, the line between humans and computers has become blurred, with computer systems growing ever more competent at imitating human interactions even as they vanished into the background of daily life.

These counterfeit ‘humans’ are not capable of learning, feeling, and experiencing the way that humans can, being at most a facsimile of a human in all but that which makes a human, often referred to as ‘the human experience’. More and more of us communicate these days via smartphone and computer screens, with little idea or regard for whether we are talking to a real person. Ironically, it seems that by anthropomorphizing these counterfeit humans, we risk becoming less human in the process, while also opening the floodgates for blaming AI when the blame lies squarely with the humans behind it, as with the recent Air Canada chatbot case. Equally ridiculous, [Lewis] argues, is the notion that we could create a ‘superintelligence’ by training an ‘AI’ on nothing but data scraped off the internet, as there are many things in life which cannot be understood simply by reading about them.

Ultimately, the argument is made that humanistic learning should be the focal point of artificial intelligence, as only in this way could we create AIs that might truly be seen as our equals, and as beneficial for the future of all.

Cloudflare Adds Block For AI Scrapers And Similar Bots

It’s no big secret that a lot of internet traffic today consists of automated requests, ranging from innocent bots like search engine indexers to data-scraping bots run by LLM and similar generative AI companies. With enough customers who are less than amused by this flood of useless traffic, Cloudflare has announced that it’s expanding its blocking feature for the latter category of scrapers. Initially this block was only for ‘poorly behaving’ scrapers, but now it apparently targets all such bots.

The block seems to be based on a range of characteristics, including the user agent string. According to Cloudflare’s data on its network, over 40% of identified AI bot traffic came from ByteDance (Bytespider), followed by GPTBot at over 35% and ClaudeBot at 11%, plus a whole gaggle of smaller bots. Assuming that Imperva’s claim of bots making up over half of today’s internet traffic is somewhat correct, that means that even if these bots follow robots.txt, a lot of bandwidth is still being drained, with the website owner effectively subsidizing the training of some company’s models. Unsurprisingly, Cloudflare notes that many website owners have already taken measures to block these bots in some fashion.
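For reference, asking the three crawlers named above to stay away via robots.txt looks like the following. The user-agent tokens are the publicly documented ones for these bots; note that robots.txt is purely advisory and only deters bots that choose to honor it.

```
# robots.txt at the site root: asks OpenAI's, ByteDance's, and Anthropic's
# crawlers to stay away. Advisory only; misbehaving bots simply ignore it.
User-agent: GPTBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: ClaudeBot
Disallow: /
```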

Naturally, not all of these scraper bots are well-behaved. Spoofing the user agent is an obvious way to dodge blocks, but scraper bot activity has many tell-tale signs, which Cloudflare uses along with statistical data from across its global network to compute a ‘bot score‘ for any request. Although it remains to be seen whether false positives become an issue with Cloudflare’s approach, it’s definitely a sign of the times that more and more website owners are choosing to choke off unwanted, AI-related traffic.

Peering Into The Black Box Of Large Language Models

Large Language Models (LLMs) can produce extremely human-like communication, but their inner workings are something of a mystery. Not a mystery in the sense that we don’t know how an LLM works, but a mystery in the sense that the exact process of turning a particular input into a particular output is something of a black box.

This “black box” trait is common to neural networks in general, and LLMs are very deep neural networks. It is not really possible to explain precisely why a specific input produces a particular output, and not something else.

Why? Because neural networks are neither databases, nor lookup tables. In a neural network, discrete activation of neurons cannot be meaningfully mapped to specific concepts or words. The connections are complex, numerous, and multidimensional to the point that trying to tease out their relationships in any straightforward way simply does not make sense.

Continue reading “Peering Into The Black Box Of Large Language Models”