Torment Poor Milton With Your Best Pixel Art

One of the great things about new tech tools is just having fun with them, like embracing your inner trickster god to mess with ‘Milton’, an AI trapped in an empty room.

“Milton is trapped in a room” is a pixel-art game with a simple premise: use a basic paint interface to add objects to the room, then watch and listen to Milton respond to them. That’s it? That’s it. The code is available in the GitHub repository, but there’s also a link to play it live with no signup required. Give it a try if you have a few spare minutes.

Under the hood, the basic loop is to let the user add something to the room, send the picture of the room (with its new contents) off for image recognition, then get Milton’s reaction to it. Milton is equal parts annoyed and jumpy, and his speech and reactions reflect this.
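For the curious, that loop is simple enough to sketch. The snippet below is a minimal illustration, not the project’s actual code: it assumes a generic vision-capable chat API (the real game is built on Open Souls’ tooling), and the persona prompt and model name are placeholders.

```python
# Minimal sketch of the add-object/react loop, assuming a generic
# vision-capable chat API. Not the project's actual code; the persona
# prompt and model name below are placeholders.
import base64
from openai import OpenAI

client = OpenAI()

MILTON_PERSONA = (
    "You are Milton, an AI trapped in an empty room. You are equal parts "
    "annoyed and jumpy. React, in character, to whatever the player has drawn."
)

def react_to_room(room_png: bytes) -> str:
    """Send the current picture of the room off for recognition and return Milton's reaction."""
    image_b64 = base64.b64encode(room_png).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": MILTON_PERSONA},
            {"role": "user", "content": [
                {"type": "text", "text": "The player just added something to your room."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ]},
        ],
    )
    return response.choices[0].message.content

# Game loop: paint on the canvas, snapshot it, call react_to_room(), show the reaction.
```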

The game is a bit of a concept demo for Open Souls, whose “thing” is providing AIs with far more personality and relatable behaviors than one typically expects from large language models. Maybe this is just what’s needed for AI opponents in things like the putting game of Connect Fore! to level up their trash talk.

Hackaday Links: June 16, 2024

Attention, slackers — if you do remote work for a financial institution, using a mouse jiggler might not be the best career move. That’s what a dozen people learned this week as they became former employees of Wells Fargo after allegedly being caught “simulating keyboard activity” while working remotely. Having spent more than a few years working either hybrid or fully remote ourselves, we get it; sometimes, you’ve just got to step away from the keyboard for a bit. But we’ve never once felt the need to create the “impression of active work” during those absences. Perhaps that’s because we’ve never worked in a regulated environment like financial services.

For our part, we’re curious as to how the bank detected the use of a jiggler. The linked article mentions that regulators recently tightened rules that require employers to treat an employee’s home as a “non-branch location” subject to periodic inspection. More than enough reason to quit, in our opinion, but perhaps they sent someone snooping? More likely, the activity simulators were discovered by technical means. The article contains a helpful tip to avoid powering a jiggler from the computer’s USB port, which implies the devices can be detected over USB. Our guess is that Wells tracks mouse and keyboard activity and compares it against a machine-learning model to look for signs of slacking.
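If it really is activity monitoring, the giveaway is probably timing: cheap jigglers tend to move the mouse on a metronome-regular schedule, while humans are jittery. Purely as an illustration of that idea, and not anything we know about Wells Fargo’s tooling, a detector could be as simple as this:

```python
# Speculative sketch: flag input whose timing is too regular to be human.
# Thresholds are made up for illustration.
from statistics import mean, pstdev

def looks_like_a_jiggler(event_times: list[float], min_events: int = 50) -> bool:
    """Return True if the gaps between input events are suspiciously uniform."""
    if len(event_times) < min_events:
        return False
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    # Coefficient of variation: near zero means metronome-like input.
    cv = pstdev(gaps) / mean(gaps)
    return cv < 0.05
```

Hardware jigglers that draw power from the host are even easier to spot, since their USB vendor and product IDs show up in the endpoint inventory, which is presumably why the article warns against plugging one into the work machine.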

Giant Brains, Or Machines That Think

Last week, I stumbled on a marvelous book: “Giant Brains; or, Machines That Think” by Edmund Callis Berkeley. What’s really fun about it is the way it sounds like it could have been written just this year – waxing speculative about a future when machines do our thinking for us. Except it was written in 1949, and the “thinking machines” are early proto-computers that use relays (relays!) for their logic elements. But you need to understand that back then, they could calculate ten times faster than any person, and they would work tirelessly day and night, as long as their motors kept turning and their contacts didn’t corrode.

But once you get past the futuristic speculation, there’s actually a lot of detail about how the then-cutting-edge machines worked. Circuit diagrams of logic units from both the relay computers and the brand-new vacuum tube machines are on display, as are drawings of the tricky bits of purely mechanical computers. There is even a diagram of the mercury delay line, and an explanation of how circulating audio pulses through the medium could be used as a form of memory.
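The delay-line trick is easy to model in modern terms: bits exist only as pulses in flight through the mercury, and the pulse emerging at the output is re-amplified and fed back into the input, so the whole store behaves like a circulating queue. Here is a toy simulation of an eight-bit line (ours, not the book’s):

```python
# Toy model of delay-line memory: bits live "in flight" in the medium and
# the output pulse is re-amplified and fed back in, so the store is a
# circulating queue with a fixed number of slots.
from collections import deque

class DelayLine:
    """Recirculating delay-line store with a fixed number of bit slots."""
    def __init__(self, n_bits: int):
        self.line = deque([0] * n_bits)

    def tick(self, write: int | None = None) -> int:
        """Advance one bit time: the oldest pulse emerges at the output.
        If `write` is given it is injected at the input end; otherwise the
        output pulse is recirculated."""
        out = self.line.popleft()
        self.line.append(out if write is None else write)
        return out

mem = DelayLine(8)
for bit in (1, 0, 1, 1, 0, 0, 1, 0):    # load one byte, one bit per tick
    mem.tick(write=bit)
print([mem.tick() for _ in range(8)])   # the same byte comes back around
```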

All in all, it’s a wonderful glimpse at the earliest of computers, with enough detail that you could probably build something along those lines with a little moxie and a few thousand relays. This grounded reality, coupled with the fantastic visions of where computers would be going, makes a marvelous accompaniment to a lot of the breathless hype around AI these days. Recommended reading!

AI Kayak Controller Lets The Paddle Show The Way

Controlling an e-bike is pretty straightforward. If you want to just let it rip, it’s a no-brainer — or rather, a one-thumber, as a thumb throttle is the way to go. Or, if you’re still looking for a bit of the experience of riding a bike, sensing when the pedals are turning and giving the rider a boost with the motor is a good option.

But what if your e-conveyance is more of the aquatic variety? That’s an interface design problem of a different color, as [Braden Sunwold] has discovered with his DIY e-kayak. We’ve detailed his work on this already, but for a short recap, his goal is to create an electric assist for his inflatable kayak, to give you a boost when you need it without taking away from the experience of kayaking. To that end, he used the motor and propeller from a hydrofoil to provide the needed thrust, while puzzling through the problem of building an unobtrusive yet flexible controller for the motor.

His answer is to mount an inertial measurement unit (IMU) in a waterproof container that can clamp to the kayak paddle. The controller is battery-powered and uses an nRF link to talk to a Raspberry Pi in the kayak’s waterproof electronics box. The sensor also has an LED ring light to provide feedback to the pilot. The controller is set up to support both a manual mode, which just turns on the motor and turns the kayak into a (low) power boat, and an automatic mode, which detects when the pilot is paddling and provides a little thrust in the desired direction of travel.
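The stroke-detection half of that automatic mode is the interesting bit. As a rough sketch of the idea, it might look something like the code below; the axis conventions, thresholds, and interfaces here are placeholders, and [Braden]’s real controller hands this job to a model running behind the nRF link.

```python
# Rough sketch of the automatic-assist idea: classify paddle strokes from a
# short window of IMU samples and map them to a thrust command. Axis
# conventions and thresholds are placeholders, not values from the project.
from dataclasses import dataclass

@dataclass
class ImuSample:
    gyro_roll: float   # deg/s about the paddle's long axis
    accel_z: float     # g along the stroke direction

def classify_stroke(window: list[ImuSample],
                    roll_thresh: float = 60.0,
                    accel_thresh: float = 0.5) -> str:
    """Crude stroke detector: look at peak rotation and acceleration."""
    peak_accel = max(abs(s.accel_z) for s in window)
    if peak_accel < accel_thresh:
        return "idle"                      # paddle isn't doing much
    peak_roll = max((s.gyro_roll for s in window), key=abs)
    if peak_roll > roll_thresh:
        return "left"
    if peak_roll < -roll_thresh:
        return "right"
    return "idle"

def assist_level(stroke: str) -> float:
    """Map the detected stroke to a motor thrust command between 0 and 1."""
    return 0.3 if stroke in ("left", "right") else 0.0
```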

The video below shows the non-trivial amount of effort [Braden] and his project partner [Jordan] put into making the waterproof enclosure for the controller. The clamp is particularly interesting, especially since it has to keep the sensor properly oriented on the paddle. [Braden] is working on a machine-learning method to analyze paddle motions to discern what the pilot is doing and where they want the kayak to go. Once he has that model built, it should be time to hit the water and see what this thing can do. We’re eager to see the results.

Hackaday Links: June 2, 2024

So you say you missed the Great Solar Storm of 2024 along with its attendant aurora? We feel you on that; the light pollution here was too much for decent viewing, and it had been too long a day to justify a drive into the deep dark of the countryside. But fear not — the sunspot that raised all the ruckus back at the beginning of May has survived the trip across the far side of the Sun and will reappear in early June, mostly intact and ready for business. At the very least, sunspot AR3664 seems like it’s still a force to be reckoned with, having cooked off an X-class flare last Tuesday just as it was coming around from the other side of the Sun. Whether 3664 will be able to stir up another G5 geomagnetic storm remains to be seen, but since it fired off an X-12 flare while it was around the backside, you never know. Your best bet to stay informed in these trying times is the indispensable Dr. Tamitha Skov.

Tabletop Handybot Is Handy, And Powered By AI

Decently useful AI has been around for a little while now, and robotic arms have been around much longer. Yet somehow, we don’t have little robot helpers on our desks yet! Thankfully, [Yifei] is working towards that reality with Tabletop Handybot.

What [Yifei] has developed is a robotic arm that accepts voice commands. The robot relies on a Realsense D435 RGB-D camera, which provides color vision with depth information as well. Grounding DINO is used for object detection on the RGB images. Segment Anything and Open3D are used for further processing of the visual and depth data to help the robot understand what it’s looking at. Meanwhile, voice commands are interpreted via OpenAI Whisper, which can feed prompts to ChatGPT for further processing.
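Chained together, the pipeline looks roughly like the sketch below. The Whisper and chat-completion calls follow those libraries’ actual Python APIs; the perception step is left as a placeholder, since Grounding DINO, Segment Anything, and Open3D each have their own setup that we won’t reproduce here, and the prompt and model name are our own stand-ins.

```python
# Rough sketch of the command pipeline, assuming the tools named in the
# write-up. The perception helpers are placeholders standing in for
# Grounding DINO, Segment Anything, and Open3D.
import whisper
from openai import OpenAI

client = OpenAI()
stt = whisper.load_model("base")

def transcribe(wav_path: str) -> str:
    """Speech to text with OpenAI Whisper."""
    return stt.transcribe(wav_path)["text"]

def parse_command(utterance: str) -> str:
    """Ask the LLM to boil the utterance down to a target object label."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Extract the single object the user wants picked up."},
            {"role": "user", "content": utterance},
        ],
    )
    return reply.choices[0].message.content.strip()

def locate(label: str, rgb_image, depth_image):
    """Placeholder: Grounding DINO finds the box, SAM masks it, and Open3D
    projects the masked depth pixels into a 3D grasp point."""
    raise NotImplementedError

def handle_request(wav_path: str, rgb_image, depth_image):
    label = parse_command(transcribe(wav_path))
    grasp_point = locate(label, rgb_image, depth_image)
    # ...hand grasp_point off to the arm's motion planner...
    return grasp_point
```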

[Yifei] demonstrates his robot picking up markers on command, which is a pretty cool demo. With so many modern AI tools available, we’re getting closer to the ideal of robots that can understand and act on general spoken instructions. This is a great example. We may not be all the way there yet, but perhaps soon. Video after the break.

Generative AI Hits The Commodore 64

Image-generating AIs are typically trained on huge arrays of GPUs and require great wads of processing power to run. Meanwhile, [Nick Bild] has managed to get something similar running on a Commodore 64. (via Tom’s Hardware).

A figure generated by [Nick]’s C64. We shall name him… “Sword Guy”!

As you might imagine, [Nick]’s AI image generator isn’t churning out 4K cyberpunk stills dripping in neon. Instead, he aimed at a smaller target more befitting the Commodore 64 itself: his image generator creates 8×8 game sprites.

[Nick]’s model was trained on 100 retro-inspired sprites that he created himself. He did the training phase on a modern computer, so that the Commodore 64 didn’t have to sweat this difficult task on its feeble 6502 CPU. However, it’s more than capable of generating sprites using the model, thanks to some BASIC code that runs off of the training data. Right now, it takes the C64 about 20 minutes to run through 94 iterations to generate a decent sprite.
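We don’t know exactly what model [Nick] trained, but the general recipe of learning some statistics from the training sprites on a big machine and then letting the C64 sample and refine against those numbers is easy to illustrate. Here’s a hypothetical per-pixel version in Python; [Nick]’s generator does this kind of work in BASIC on the C64 itself.

```python
# Illustrative only: we don't know the exact model [Nick] used. One
# C64-friendly approach is a per-pixel model: tally how often each of the
# 64 pixels is lit across the training sprites, then sample a new sprite
# and iteratively nudge it toward those statistics.
import random

def train(sprites: list[list[int]]) -> list[float]:
    """Per-pixel 'on' probability across the 8x8 training sprites (64 values each)."""
    counts = [0] * 64
    for sprite in sprites:
        for i, px in enumerate(sprite):
            counts[i] += px
    return [c / len(sprites) for c in counts]

def generate(probs: list[float], iterations: int = 94, seed: int = 0) -> list[int]:
    """Start from noise, then repeatedly resample the pixel that disagrees
    most with the training statistics. A crude, C64-sized refinement loop."""
    rng = random.Random(seed)
    sprite = [int(rng.random() < p) for p in probs]
    for _ in range(iterations):
        worst = max(range(64), key=lambda i: abs(sprite[i] - probs[i]))
        sprite[worst] = int(rng.random() < probs[worst])
    return sprite
```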

8×8 sprites are generally simple enough that you don’t need to be an artist to create them. Nonetheless, [Nick] has shown that modern machine learning techniques can be run on slow, archaic hardware, even if there is limited utility in doing so. Video after the break.
