[Image: four panels of WiFi-based pose estimation: a group of four people in a cluttered office and a lone individual in a room, each shown with wireframe pose overlays, alongside the extracted wireframes alone on black backgrounds.]

Tracking Humans With WiFi

In case you thought that cameras, LiDAR, infrared sensors, and the like weren’t enough for Big Brother to track you, researchers from Carnegie Mellon University have found a way to track human movements via WiFi. [PDF via VPNoverview]

The process uses the signals from ordinary WiFi routers as an inexpensive way to determine human poses, one that isn’t hampered by poor illumination or object occlusion. The system analyzes signal strength and phase data to generate a 2D feature map, then feeds that map through a modified DensePose-RCNN architecture to produce UV coordinates of the human bodies in the room. The system does struggle with unusual poses that aren’t in the training set, or when there are more than three subjects in the detection area.
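If the pipeline is hard to picture, here is a loose PyTorch sketch of its shape. To be clear, this is not the authors’ network: the CSI tensor dimensions (3×3 antenna pairs, 30 subcarriers, 5 samples) and the layer choices are our own illustrative assumptions, and the DensePose-RCNN head that consumes the feature map is omitted entirely.

```python
import torch
import torch.nn as nn

class WifiToFeatureMap(nn.Module):
    """Toy stand-in for the paper's modality-translation step: raw
    amplitude+phase CSI in, an image-like feature map out."""

    def __init__(self):
        super().__init__()
        # Collapse the CSI tensor and project it to a small spatial grid.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 3 * 3 * 30 * 5, 1024), nn.ReLU(),
            nn.Linear(1024, 24 * 24), nn.ReLU(),
        )
        # Upsample the grid into something a detector head could consume.
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, csi):                    # csi: (B, 2, 3, 3, 30, 5)
        x = self.encoder(csi).view(-1, 1, 24, 24)
        return self.upsample(x)                # (B, 3, 96, 96) "image"

fake_csi = torch.randn(1, 2, 3, 3, 30, 5)      # amplitude + phase channels
print(WifiToFeatureMap()(fake_csi).shape)      # torch.Size([1, 3, 96, 96])
```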

While there are probably applications in Kinect-esque VR Halo games, this will more likely go straight into the toolbox of three-letter agencies and advertising-fueled tech companies. The authors claim it uses “privacy-preserving algorithms for human sensing,” but only time will tell if they’re correct.

If you’re interested in other creepy surveillance tools, check out the Heat-Sensing Crotch Monitor or this Dystopian Peep Show.

Can AI Replace Your DM?

The current hotness is anything to do with artificial intelligence, and along with some interesting experiments comes a lot of mindless hype. The question is, what can it do for us? [Jesse] provides a fun answer by asking ChatGPT to perform as a Dungeons and Dragons dungeon master.
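If you want to run a similar experiment yourself, the setup is little more than a well-chosen system prompt. Here’s a rough sketch using OpenAI’s Python client; the model name and prompt wording are our own stand-ins, since [Jesse] used the regular chat interface:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt is all it takes to put the model in the DM's chair.
messages = [
    {"role": "system", "content":
     "You are a Dungeons and Dragons dungeon master. Describe scenes "
     "vividly, play all the NPCs, and always end by asking the players "
     "what they do next."},
    {"role": "user", "content":
     "We are three level-1 adventurers walking into the village tavern. "
     "Start the campaign."},
]

reply = client.chat.completions.create(model="gpt-3.5-turbo",
                                       messages=messages)
print(reply.choices[0].message.content)
```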

There are many ways to approach a game of D&D, and while some take the whole thing very seriously indeed, we prefer to treat it as an excuse for a lightly inebriated band of intrepid heroes to smack each other and assorted monsters with imaginary swords and war hammers. Would the AI follow the nerdiest clichés to their pedantic conclusions, or would it sense that the point of a game is to have fun?

Continue reading “Can AI Replace Your DM?”

Giving Stable Diffusion Some Depth

You’ve likely heard quite a bit of buzz over the last few months about Stable Diffusion. The new version (v2) is out and, in addition to the standard image-to-image and text-to-image modes, it includes a depth-image-to-image mode that can be incredibly useful. [Andrew] has a write-up that guides you through using this mode.

The basic idea is that you feed both an image and its depth map into the model, which lets you control what gets put where. Stable Diffusion is a bit confusing, but we already have some great resources to wrap your head around it. For the depth input, you can use a depth map from a camera with LiDAR (many recent phones include one) or have another model (like MiDaS) estimate it from a 2D picture. This becomes powerful when you want to preserve a specific composition, such as an iconic scene from a well-known movie: you can keep the characters’ poses on screen but transform the style of the scene into whatever you wish (as seen above).
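If you’d rather follow along in code than through a UI, the Hugging Face diffusers library exposes this mode directly. A minimal sketch, with the prompt and file names as placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

# Stable Diffusion v2 depth-to-image checkpoint. If you don't pass your own
# depth map, the pipeline estimates one from the input image for you.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU

init = Image.open("movie_still.png")
result = pipe(
    prompt="the same scene as an oil painting",
    image=init,
    strength=0.7,  # how far the output may stray from the original pixels
).images[0]
result.save("restyled.png")
```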

We have already covered a technique for generating textures right in Blender, and the new depth support has already been incorporated there to improve the accuracy of the generated textures.

[Justin Alvey] used it to create architectural photos from dollhouse furniture. Using the MiDaS model, he estimated the depth and threw away the RGB aspects by setting the denoising strength to maximum. The simplified dollhouse furniture was easily recognizable to the model, which helped produce great results.

The one downside is that the close-up perspective gives the results a rather dollhouse feel; changing the focal length and shooting from farther away helps. Overall, it’s a clever use of what the new AI model can do. It’s a fast-moving space, so this will likely be out of date in a few months.
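If you want to reproduce the depth-map half of [Justin]’s trick, MiDaS is available straight from torch.hub. A minimal sketch, with file names as placeholders:

```python
import cv2
import torch

# Load the MiDaS DPT model and its matching input transform from torch.hub
# (weights are downloaded on first use).
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = cv2.cvtColor(cv2.imread("dollhouse.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))               # (1, H', W') inverse depth
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic"
    ).squeeze()

# Normalize to an 8-bit grayscale image, usable as a depth input elsewhere.
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).byte()
cv2.imwrite("depth.png", depth.numpy())
```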


Image-Generating AI Can Texture An Entire 3D Scene In Blender

[Carson Katri] has a fantastic solution for easily adding textures to 3D scenes in Blender: have an image-generating AI create the texture on demand, and apply it for you.

It’s not perfect — the odd door or window feature might suffer from a lack of right angles — but it’s pretty amazing.

As shown here, two featureless blocks on a featureless plain become run-down buildings by wrapping the 3D objects in a suitable image. It’s all done with the help of the Dream Textures add-on for Blender.

The solution uses Stable Diffusion to generate a texture for a scene based on a text prompt (e.g. “sci-fi abandoned buildings”), and leverages an understanding of the scene’s depth for best results. The AI-generated results aren’t always perfect, but the process is impressive, not to mention fantastically fast compared to texturing from scratch.
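Dream Textures drives all of this from its own panel inside Blender, but if you’re curious what the final step amounts to, assigning a generated image to an object through Blender’s Python API looks roughly like this. This is a sketch of the general idea with a hypothetical file name, not the add-on’s actual code:

```python
import bpy

# Hypothetical: wrap the active object in a previously generated texture.
img = bpy.data.images.load("//sci_fi_building.png")  # blend-relative path

mat = bpy.data.materials.new(name="AITexture")
mat.use_nodes = True
nodes = mat.node_tree.nodes

# Add an image texture node holding the generated image.
tex = nodes.new("ShaderNodeTexImage")
tex.image = img

# Route the image's color into the Principled BSDF's base color.
bsdf = nodes["Principled BSDF"]
mat.node_tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

bpy.context.active_object.data.materials.append(mat)
```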

AI image generation capabilities are progressing at a breakneck pace, and giving people access to tools they can run locally is what drives interesting and useful applications like this one.

Curious to know more about how systems like Stable Diffusion work? Here’s a pretty good technical primer, and the Washington Post recently published a less-technical (but accurate) interactive article explaining how AI image generators work, as well as the impact they are having.

A VM In An AI

AI knoweth everything, and as each new model breaks upon the world, it attracts a new crowd of experimenters. The new hotness is ChatGPT, and [Jonas Degrave] has turned his attention to it. By asking it to act as a Linux terminal, he discovered that he could gain access to a complete Linux virtual machine within the model’s synthetic imagination.
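Recreating the experiment from code instead of the chat window might look something like this. It’s a sketch: the model name is a stand-in, and the system prompt paraphrases the popular “act as a Linux terminal” trick rather than quoting [Jonas]’s exact wording:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The whole "VM" is just a conversation with a strict system prompt.
history = [{"role": "system", "content":
            "Act as a Linux terminal. I will type commands and you will "
            "reply only with the terminal output, nothing else."}]

while True:
    history.append({"role": "user", "content": input("$ ")})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=history
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(reply)
```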

The AI’s first response was a shell prompt, so of course the first thing to try was listing the files. Up came a list of directories, so the next step was to create a file and put some text in it. All of this resulted in a readable file, so there was some promise in this unexpected computing resource. But can it run code? Continue reading “A VM In An AI”

Love AI, But Don’t Love It Too Much

The up-and-coming Wonder of the World in software and information circles, and particularly among the people who talk about such things, is AI. Give a magic machine a lot of stuff, ask it a question, and it will give you a meaningful and useful answer. It will create art, write books, compose music, and generally Change The World As We Know It. All this is genuinely impressive stuff, as anyone who has played with DALL-E will tell you. But it’s important to think about what’s genuinely new in what the technology can and can’t do, so as not to get caught up in the hype, and in doing that I’m immediately drawn to a previous career of mine. Continue reading “Love AI, But Don’t Love It Too Much”

Here’s A Plain C/C++ Implementation Of AI Speech Recognition, So Get Hackin’

[Georgi Gerganov] recently shared a great resource for running high-quality AI-driven speech recognition in a plain C/C++ implementation on a variety of platforms. The automatic speech recognition (ASR) model is fully implemented using only two source files and requires no dependencies. As a result, the high-quality speech recognition doesn’t involve calling remote APIs, and can run locally on different devices in a fairly straightforward manner. The image above shows it running locally on an iPhone 13, but it can do more than that.

[Georgi]’s work is a port of OpenAI’s Whisper model, a remarkably robust piece of software that does a truly impressive job of turning human speech into text. Whisper is easy to set up and play with, but this port makes it easier to put the system to work in other ways. Having such a lightweight implementation of the model means it can be integrated into a wide variety of platforms and projects with little fuss.

The usual way that OpenAI’s Whisper works is to feed it an audio file, and it spits out a transcription. But [Georgi] shows off something else that might start giving hackers ideas: a simple real-time audio input example.
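For comparison, that file-in, text-out workflow is only a few lines with OpenAI’s reference Python package (the file name here is a placeholder):

```python
import whisper

# Model weights ("tiny" through "large") are downloaded on first use.
model = whisper.load_model("base")
result = model.transcribe("speech.wav")
print(result["text"])
```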

By using a tool to stream audio and feed it to the system every half-second, one can obtain pretty good (sort of) real-time results! This of course isn’t an ideal method, but the robustness and accuracy of Whisper are such that the results look pretty great nevertheless.
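The streaming tool shown is part of the C/C++ project, but the same half-second trick can be sketched in Python with the reference Whisper package and the sounddevice library. This is our own rough equivalent, not [Georgi]’s code:

```python
import numpy as np
import sounddevice as sd
import whisper

model = whisper.load_model("base.en")
RATE = 16000  # Whisper expects 16 kHz mono float32 audio
window = np.zeros(0, dtype=np.float32)

while True:
    # Record half a second, append it, and keep a sliding 5-second window.
    chunk = sd.rec(RATE // 2, samplerate=RATE, channels=1, dtype="float32")
    sd.wait()
    window = np.concatenate([window, chunk.ravel()])[-5 * RATE:]

    # Re-transcribe the whole window each pass; crude, but it works.
    text = model.transcribe(window, fp16=False)["text"]
    print(text.strip(), end="\r", flush=True)
```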

You can watch a quick demo of that in the video just under the page break. If it gives you some ideas, head over to the project’s GitHub repository and get hackin’!

Continue reading “Here’s A Plain C/C++ Implementation Of AI Speech Recognition, So Get Hackin’”