Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world’s most responsible AI model. Built with next-gen ethical principles and guidelines, it is capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic, and dutifully refuse to answer.

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, but also an acknowledgment that effective guardrails are a genuinely difficult problem to get right in a way that works for everyone.

Complicating matters, studies show humans expect far more from machines than they do from each other (or, indeed, from themselves), and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models have become more advanced, they have also become increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user’s expectations. But GOODY-2 allows us all to skip to the end, and glimpse the ultimate future of erring on the side of caution.

[via WIRED]

Your 1983 Video Phone Is Finally Ready

If you read Byte magazine in 1983, you might have expected that, by now, you’d be able to buy the red phone with the video screen built-in. You know, like the one that appears on the cover of the magazine. Of course, you can’t. But that didn’t stop former Hackaday luminary [Cameron] from duplicating the mythical device, if not precisely, then in spirit. Check it out in the video, below.

The Byte Magazine Cover in Question!

While the original Byte article was about Videotex, [Cameron] built a device with more capability than you could have dreamed of in 1983. What’s more, the build was simple. He started with an old analog phone and a tiny Android phone. A 3D-printed faceplate lets the fake phone serve as a sort of dock for the cellular device.

That’s not all, though. The guts of a Bluetooth headset bring the fake phone’s handset to life. Now you can access the web — sort of a super Videotex system. You can even make video calls.

There isn’t a lot of detail about the build, but you probably don’t need it. This is more of an art project, and your analog phone, cell phone, and Bluetooth gizmo will probably be different anyway.

Everyone always wanted a video phone, and while we sort of have them now, they don’t quite seem the way we imagined them. We wish [Cameron] would put an app on the phone to simulate a rotary dial, and maybe even act as an answering machine.

Continue reading “Your 1983 Video Phone Is Finally Ready”

Cute Brass Lunar Lander Is A Neat Little Environment Monitor

Sometimes form can make a project more attractive than its simple function. [Mohit Bhoite]’s free-form builds are great examples of this. His latest effort is a gorgeous little device that displays environmental readings, and it’s shaped like a lunar lander. (Nitter) Just exquisite!

The device is based around a Seeedstudio XIAO nRF52840 dev board. It’s hooked up to a BME280 sensor which delivers temperature, humidity and air pressure readings from the immediate environment. These readings are displayed on a tiny 128×32 OLED display, along with the current time. Power is via a compact 14250 lithium cell.
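If you’re tempted to build something similar, the firmware side can be tiny. Here’s a minimal CircuitPython sketch of the same idea: read the BME280 over I2C and push the numbers to the OLED. Note that the SSD1306 driver choice, wiring, and layout are our assumptions; [Mohit]’s actual code isn’t published here.

```python
# Minimal environment monitor sketch (our assumptions, not [Mohit]'s code):
# BME280 sensor and a 128x32 SSD1306 OLED sharing the default I2C bus.
import time
import board
import adafruit_ssd1306
from adafruit_bme280 import basic as adafruit_bme280

i2c = board.I2C()  # uses the board's default SCL/SDA pins
bme280 = adafruit_bme280.Adafruit_BME280_I2C(i2c)
oled = adafruit_ssd1306.SSD1306_I2C(128, 32, i2c)

while True:
    oled.fill(0)  # clear the frame buffer
    oled.text(f"{bme280.temperature:5.1f} C", 0, 0, 1)
    oled.text(f"{bme280.relative_humidity:5.1f} %RH", 0, 12, 1)
    oled.text(f"{bme280.pressure:6.1f} hPa", 0, 24, 1)
    oled.show()
    time.sleep(2)
```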

So far, so simple, but the real magic is in the housing. It’s a wireframe lunar lander lookalike which [Mohit] put together using brass wire and some careful soldering. It adds so much to the build, which wouldn’t be nearly as attractive if just assembled on a PCB. It’s not his first rodeo, either. He previously built a cute device (Nitter) with an animated face in 2019 using similar techniques; it used a CCS811 gas sensor to detect air quality.

Often, we find ourselves falling most in love with devices that please the eye. [Mohit] certainly demonstrates great skill in building things that fit this brief. Sometimes it only takes a bit of thought and care to bring a beautiful aesthetic to your projects, and the results can be most rewarding. Check out his Hackaday Supercon talk if you want to learn more.

Continue reading “Cute Brass Lunar Lander Is A Neat Little Environment Monitor”

A render of a sample board produced with the help of this plugin. It’s pretty, nicely lit, and all!

From KiCad To Blender For A Stunning Render

We love Blender. It brings you 3D modeling, but not in a CAD way — instead, people commonly use it to create animations, movies, games, and even things like VR models. In short, Blender is about all things art and visual expression. Now, what if you want a breathtaking render of your KiCad board? Look no further than the pcb2blender tool from [Bobbe 30350n].

This isn’t the first time we’ve seen KiCad meet Blender. However, compared to the KiCad-to-Blender paths people used previously, pcb2blender makes the import process as straightforward and as quick as humanly possible. Install a plugin for both tools, export a .pcb3d file from the KiCad plugin, and import it with the Blender plugin. Want to make the surfaces of your design look the way they’re meant to look in real life? Use the free2ki plugin to apply materials to your 3D models. In fact, you should check out [30350n]’s Blender plugin collection and overall portfolio; it’s impressive.
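Once the board is in Blender, a handful of lines of Blender’s Python API will kick off a high-quality render. To be clear, this is a generic sketch of our own (pcb2blender itself is driven through the two plugins’ UI), and the output path is made up:

```python
# Generic Blender (bpy) render setup; nothing here is pcb2blender-specific.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'          # path tracing for realistic lighting
scene.cycles.samples = 256              # more samples = less noise
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.filepath = "//board_render.png"  # saved next to the .blend file
bpy.ops.render.render(write_still=True)
```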

There’s no shortage of Blender hacks – just this year we’ve covered a hacker straight up simulating an entire camera inside Blender for the purpose of making renders, and someone else showing how to use Stable Diffusion to texture 3D scenes at lightning speed. We even recently published a comprehensive tutorial on how to animate your robot in Blender ourselves! Want to give it a shot? Check out this quick and simple Red Bull can model design tutorial.

Thanks to [Aki] for sharing this with us!

Video Feedback Machine Creates Analog Fractals

One of the first things everyone does when they get a video camera is to point it at the screen displaying the image, creating video feedback. It’s a fascinating process where the delay from image capture to display establishes a feedback loop that amplifies the image noise into fractal patterns. This sculpture, modestly called The God Machine II, takes it to the next level, though.

We covered the first version of this machine in a previous post, but creator [Dave Blair] has since done a huge amount of work on the device, which now lets him tweak and customize its output. His new version is quite remarkable, allowing him to create intricate fractals that writhe and change like living things.

The God Machine II is a sophisticated build with three cameras, five HD monitors, three Roland video switchers, two viewing monitors, two sheets of beam splitter glass, and a video input. This setup means it can take an external video input, capture it, and use it as the source for video feedback, then tweak the evolution of the resulting fractal image, repeatedly feeding it back into itself. The system can also control the settings for the monitor, which further changes the feedback as it evolves. [Blair] refers to this as “trapping the images.”
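If you’d like to play with the principle without three cameras and a stack of switchers, a toy digital simulation captures the essence: each new frame is a slightly rotated, zoomed, re-amplified copy of the last, so noise blooms into self-similar structure. This sketch is purely our own illustration and has nothing to do with [Blair]’s analog hardware:

```python
# Toy digital analogue of optical video feedback (our illustration only):
# rotate + zoom + gain per "frame" turns faint noise into fractal structure.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

frame = np.random.rand(256, 256) * 0.01               # start from faint noise
for _ in range(200):
    # "Point the camera at the screen": re-capture a rotated, zoomed copy.
    frame = ndimage.rotate(frame, angle=2.0, reshape=False, mode="reflect")
    frame = ndimage.zoom(frame, 1.02)[2:258, 2:258]   # crop back to 256x256
    frame += np.random.rand(256, 256) * 0.01          # fresh camera noise
    frame = np.clip(frame * 1.05, 0.0, 1.0)           # display gain, clipped

plt.imsave("feedback.png", frame, cmap="inferno")
```

Run it a few times with different angles and gains and you’ll see how sensitive the result is to tiny parameter changes, which is exactly what the switchers and monitor settings in the real machine are steering.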

Continue reading “Video Feedback Machine Creates Analog Fractals”

Quivering Facehugger Is All Geared Up

[Jason Winfield] shared with us a video describing a project with a lot of personality: a mounted, lit, and quivering Alien facehugger triggered by motion. The end result is a delightful jump scare, and the Raspberry Pi that controls everything also captures people’s reactions.

It starts with a little twitch when motion is sensed, then launches into a perfectly unsettling quiver combined with light and sound. We particularly like the wave-like effect from the LED lighting, which calls to mind illumination from rotating hazard beacons.

The unit looks like a mounted and tastefully-lit static model, but is actually primed to sense motion.

One challenge was how to efficiently move the legs. Rather than use a motor for each limb, [Jason] settled on a single motor driving a rotating cam arrangement. You can see the results for yourself in the video below, but getting there was not simple.

The surplus motor [Jason] chose is thin and high-torque, but runs extremely fast. Since he wanted the legs to quiver creepily rather than vibrate, something needed to be done to mitigate this.

The solution is a planetary gear assembly that drives a rotating ring and cam arrangement coupled to the facehugger’s legs. There’s only one motor, but the effect is that each leg’s motion is independent of the others. The whole assembly is quite slim, and everything is contained within the frame.
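On the control side, the logic can be quite small. Here’s a rough gpiozero sketch of how the trigger behavior might look; the pin numbers, speeds, and timings are all our guesses, since [Jason]’s actual code isn’t shown:

```python
# Hypothetical trigger loop for a motion-activated prop (our guesses only):
# a PIR sensor arms a single cam-drive motor plus a PWM-dimmed LED.
from time import sleep
from gpiozero import MotionSensor, Motor, PWMLED

pir = MotionSensor(4)                        # PIR sensor on GPIO4 (assumed)
cam_motor = Motor(forward=17, backward=18)   # drives the cam ring (assumed)
led = PWMLED(27)

while True:
    pir.wait_for_motion()
    cam_motor.forward(0.3)                   # the little opening twitch
    sleep(0.2)
    cam_motor.stop()
    sleep(0.5)
    for i in range(100):                     # then the full quiver with light
        cam_motor.forward(0.8)
        led.value = 0.5 + 0.5 * ((i % 10) / 10.0)  # crude beacon-style sweep
        sleep(0.05)
    cam_motor.stop()
    led.off()
    pir.wait_for_no_motion()
```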

Facehuggers and gear assemblies are not exactly an everyday combination, but believe it or not this isn’t the first time the two have joined forces. Check out the Aliens-themed cuckoo clock, complete with crew member torso and emerging chestburster!

Continue reading “Quivering Facehugger Is All Geared Up”

Generating 3D Scenes From Just One Image

The LucidDreamer project ties a variety of functions into a pipeline that can take a source image (or generate one from a text prompt) and “lift” its content into 3D, creating highly detailed Gaussian splats that look great and can even be navigated.

Gaussian splatting, like NeRFs (Neural Radiance Fields), is a technique for reconstructing complex 3D scenes from sparse 2D sources; rather than a neural network, it represents the scene as a cloud of 3D Gaussians that can be rasterized very quickly. If that is all news to you, that’s probably because this stuff has sprung up with dizzying speed; the original NeRF concept was thought up barely a handful of years ago.
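To get a feel for the “splatting” half of that, here’s a toy 2D sketch of our own devising (not LucidDreamer’s code): each Gaussian carries a center, scale, and color, and rendering amounts to accumulating their weighted contributions per pixel.

```python
# Toy 2D "splatting": accumulate Gaussian-weighted colors into an image.
# Purely illustrative; real 3D splatting adds depth sorting and alpha blending.
import numpy as np

H, W, N = 128, 128, 50
rng = np.random.default_rng(0)
centers = rng.uniform(0, [H, W], size=(N, 2))   # where each splat sits
scales = rng.uniform(2, 8, size=N)              # how wide each one spreads
colors = rng.uniform(0, 1, size=(N, 3))         # what color it contributes

ys, xs = np.mgrid[0:H, 0:W]
image = np.zeros((H, W, 3))
for (cy, cx), s, c in zip(centers, scales, colors):
    weight = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * s ** 2))
    image += weight[..., None] * c              # additive splat per pixel
image = np.clip(image, 0, 1)                    # final rendered frame
```

Real 3D Gaussian splatting adds anisotropic covariances, depth ordering, and alpha compositing, but the accumulate-per-pixel idea is the same.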

What makes LucidDreamer neat is the fact that it does so much with so little. The project page has interactive scenes to explore, but there is also a demo for those who would like to try generating scenes from scratch (some familiarity with the basic tools is expected, however.)

In addition to the source code itself, the research paper is available for those with a hunger for the details. Read it quick, because at the pace this stuff is expanding, it honestly might be obsolete if you wait too long.