3D Scanning A Room With A Steam Deck And A Kinect

It may not be obvious, but Valve’s Steam Deck is capable of being more than just a games console. Demonstrating this is [Parker Reed]’s experiment in 3D scanning his kitchen with a Kinect and Steam Deck combo, and viewing the resulting mesh on the Steam Deck.

The two pieces of hardware end up needing a lot of adapters and cables.

[Parker] runs the RTAB-Map software package on his Steam Deck, which captures a point cloud and color images while he pans the Kinect around. After that, the Kinect’s job is done and he can convert the data to a mesh textured with the color images. RTAB-Map is typically used in robotic applications, but we’ve seen it power completely self-contained DIY 3D scanners.
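If you want to play with that last step yourself, the point-cloud-to-mesh conversion is easy to reproduce offline. Here's a minimal sketch using the open3d Python library; this isn't [Parker]'s exact toolchain, the filenames are placeholders, and RTAB-Map can also handle the meshing and texturing itself:

import open3d as o3d

# Load a point cloud exported from a scan (placeholder filename)
pcd = o3d.io.read_point_cloud("kitchen_scan.ply")
pcd.estimate_normals()  # Poisson surface reconstruction needs normals

# Reconstruct a triangle mesh from the points; higher depth = more detail
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("kitchen_mesh.ply", mesh)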

While logically straightforward, the process does require some finessing and fiddling to get it up and running. Reliability is a bit iffy thanks to the mess of cables and adapters required to get everything hooked up, but it does work. [Parker] shows off the whole touchy process, but you can skip a little past the five-minute mark if you just want to see the scanning in action.

The Steam Deck has actual computer chops beneath its games console presentation, and we’ve seen a Steam Deck appear as a USB printer that saves received print jobs as PDFs, and one has even made an appearance in radio signal direction finding.


Wolfram Alpha With ChatGPT Looks Like A Killer Combo

Ever looked at Wolfram Alpha and the development of Wolfram Language and thought that perhaps Stephen Wolfram was a bit ahead of his time? Well, maybe the times have finally caught up because Wolfram plus ChatGPT looks like an amazing combo. That link goes to a long blog post from Stephen Wolfram that showcases exactly how and why the two make such a wonderful match, with loads of examples. (If you’d prefer a video discussion, one is embedded below the page break.)

OpenAI’s ChatGPT is a large language model (LLM) neural network, or more conventionally, an AI system capable of conversing in natural language. Thanks to a recently announced plugin system, ChatGPT can now interact with remote APIs and therefore use external resources.

ChatGPT’s natural language processing ability enables some pretty impressive interactions with Wolfram, enabling the kind of exchange you see here (click to enlarge.)

This is meaningful because LLMs are very good at processing natural language and generating plausible-sounding output, but whether or not the output is factually correct can be another matter. It’s not so much that ChatGPT is especially prone to confabulation, it’s more that the nature of an LLM neural network makes it difficult to ask “why exactly did you come up with your answer, and not something else?” In addition, asking ChatGPT to do things like perform nontrivial calculations is a bit of a square peg and round hole situation.

So how does the Wolfram plugin change that? When asked to produce data or perform computations, ChatGPT can now hand the job off to Wolfram Alpha instead of attempting to generate the answer by itself. Each side plays to its strengths: ChatGPT interprets the user’s question and formulates it as a query, Wolfram Alpha performs the computation, and ChatGPT structures its response around the result it gets back. In short, ChatGPT can now ask for help to get data or perform a computation, and it can show the receipts when it does.
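To make that division of labor concrete, here's a minimal sketch of the same pattern using Wolfram Alpha's public Short Answers API. The real plugin speaks a richer interface, the query string below stands in for what ChatGPT would formulate from a user's question, and the appid is a placeholder you'd get from Wolfram's developer portal:

import requests

WOLFRAM_APPID = "YOUR-APPID"  # placeholder credential

def compute_with_wolfram(query: str) -> str:
    # Hand a formulated query off to Wolfram Alpha and return its answer
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
    )
    return resp.text

# The LLM's half of the job is turning a chatty question into a query
# like this one; the actual computation happens on Wolfram's side.
print(compute_with_wolfram("distance from Earth to the Moon in km"))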


3D-Printable Foaming Nozzle Shows How They Work

[Jack]’s design for a 3D-printable foaming nozzle works by mixing air with a fluid like liquid soap or hand sanitizer. This mixture gets forced through what looks like layers of fine-mesh sieve and eventually out the end by squeezing the bottle. The nozzle has no moving parts but does have an interesting structure to make this possible.

The fine meshes are formed by multiple layers of bridged filament.

Creating a foam with liquid soap requires roughly one part soap to nine parts air. The idea is that the resulting foam makes more efficient use of the liquid soap compared to dispensing an un-lathered goop directly onto one’s hands.

The really neat part is that the fine mesh structure inside the nozzle is created by having the printer stretch multiple layers of filament across the open span on the inside of the model. This is a technique similar to that used for creating bristles on 3D-printed brushes.

While this sort of thing may require a bit of expert tweaking to get the best results, it really showcases the fundamentals of how filament printers work. Once one knows the process, it can be exploited to get results that would be otherwise impossible. Here are a few more examples of that: printing only a wall’s infill to allow airflow, manipulating “vase mode” to create volumes with structural ribs, and embedding a fine fabric mesh (like tulle) as either a fan filter or wearable and flexible armor. Everything’s got edge cases, and clever people can do some pretty neat things with them (when access isn’t restricted, that is.)

Need To Pick Objects Out Of Images? Segment Anything Does Exactly That

Segment Anything, recently released by Facebook Research, tackles something that most people who have dabbled in computer vision have found daunting: reliably figuring out which pixels in an image belong to an object. Making that easier is the goal of the Segment Anything Model (SAM), which is available under the Apache 2.0 license.

The online demo has a bank of examples, but also works with uploaded images.

The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. One can pick out objects by pointing and clicking on an image, or images can be automatically segmented. It’s frankly very impressive to see SAM make masking out the different objects in an image look so effortless. What makes this possible is machine learning: the model behind the system was trained on a huge dataset of high-quality images and masks, which makes it very effective at what it does.
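The model is also usable from Python via the segment-anything package in the same repository. A minimal sketch of point-prompted segmentation looks something like this (the image path is a placeholder; the ViT-H checkpoint is the large one distributed with the project):

import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load the model from a downloaded checkpoint file
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image; OpenCV loads BGR, so convert
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click at pixel (x=500, y=375); label 1 means "object"
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
)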


Tree Supports Are Pretty, So Why Not Make Them Part Of The Print?

Here’s an idea that [Nephlonor] shared a couple years ago, but is worth keeping in mind because one never knows when it might come in handy. He 3D printed a marble run track and kept the generated tree supports. As you can see in the image above, the track resembles a roller-coaster and the tree supports function as an automatically-generated scaffolding for the whole thing. Clever!

As mentioned, these results are from a couple of years ago, so this idea should work even better nowadays. Tree supports have come a long way since then, and are available in more slicers than just Cura.

Tree supports without an interface layer are easy mode for “generate me some weird-looking scaffolding”

If you’re going to do this, we suggest reducing or eliminating both the support interface and the support distance, which is the gap the slicer normally leaves between the supports and the rest of the model. The interface makes supports easier to remove, but if one intends to leave them attached, a solid connection makes more sense.
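For the Cura-inclined, that tweak boils down to a few profile settings. These key names are from recent Cura releases and may differ in other slicers or older versions, so treat them as a starting point rather than gospel:

support_structure = tree
support_z_distance = 0
support_interface_enable = False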

And while we’re on the topic of misusing supports, we’d like to leave you with one more trick to keep in mind. [Angus] of Maker’s Muse tucked a great idea into one of his videos: print just the support structure, and use it as a stand for oddly-shaped objects. Just set the object itself to zero walls and zero infill, and the printer will generate (and print) only the support structure. Choose an attractive angle, and presto! A display stand that fits the object like a glove.

You can watch a brief video of the marble run embedded below. Again, tree supports both look better and are available in more slicers nowadays. Have you tried this? If so, we’d love to hear about it, so let us know in the comments!


Wolverine Gives Your Python Scripts The Ability To Self-Heal

[BioBootloader] combined Python and a hefty dose of AI for a fascinating proof of concept: self-healing Python scripts. He shows things working in a video, embedded below the break, but we’ll also describe what happens right here.

The demo Python script is a simple calculator that works from the command line, and [BioBootloader] introduces a few bugs: he misspells a variable used as a return value, and deletes the subtract_numbers(a, b) function entirely. Run by itself, the script simply crashes, but running it with Wolverine leads to a very different outcome.

In a short time, error messages are analyzed, changes proposed, those same changes applied, and the script re-run.

Wolverine is a wrapper that runs the buggy script, captures any error messages, then sends those errors to GPT-4 to ask it what it thinks went wrong with the code. In the demo, GPT-4 correctly identifies the two bugs (even though only one of them directly led to the crash), but that’s not all! Wolverine actually applies the proposed changes to the buggy script, and re-runs it. This time around there is still an error… because GPT-4’s previous changes included an out-of-scope return statement. No problem, because Wolverine once again consults with GPT-4, creates and formats a change, applies it, and re-runs the modified script. This time the script runs successfully and Wolverine’s work is done.
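For a sense of how little glue such a loop needs, here's a simplified sketch of the idea in Python. This is not Wolverine's actual code: Wolverine asks GPT-4 for structured JSON edits, while this version just asks for a whole rewritten file, but the run-crash-patch-rerun loop is the same. It uses the pre-1.0 openai library that was current when Wolverine appeared:

import subprocess
import sys

import openai  # pre-1.0 openai library API

def ask_gpt4_for_fix(source: str, traceback: str) -> str:
    # Simplification: ask for the whole corrected file instead of
    # Wolverine's structured JSON edits
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Fix the bug. Reply with the full corrected file, nothing else."},
            {"role": "user", "content": f"{source}\n\nTraceback:\n{traceback}"},
        ],
    )
    return resp.choices[0].message.content

def run_until_healed(script: str, max_attempts: int = 5) -> None:
    for _ in range(max_attempts):
        result = subprocess.run(
            [sys.executable, script], capture_output=True, text=True
        )
        if result.returncode == 0:
            print(result.stdout)
            return  # script ran clean, we're done
        with open(script) as f:
            source = f.read()
        with open(script, "w") as f:
            f.write(ask_gpt4_for_fix(source, result.stderr))
    raise RuntimeError(f"{script} still crashing after {max_attempts} attempts")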

LLMs (Large Language Models) like GPT-4 are “programmed” in natural language, and these instructions are referred to as prompts. A large chunk of what Wolverine does is thanks to a carefully-written prompt, and you can read it here to gain some insight into the process. Don’t forget to watch the video demonstration just below if you want to see it all in action.

While AI coding capabilities definitely have their limitations, some of the questions they raise are becoming more urgent. Heck, consider that GPT-4 is barely even four weeks old at this writing.


Tired Of Web Scraping? Make The AI Do It

[James Turk] has a novel approach to the problem of scraping web content in a structured way, without needing to write the page-specific code that web scraping usually requires. How? Just enlist the help of a natural language AI. Scrapeghost relies on OpenAI’s GPT API to parse a web page’s content, pull out and classify any salient bits, and format it in a useful way.

What makes Scrapeghost different is how the data gets organized: when instantiating a scraper, one defines the data one wishes to extract. For example:

from scrapeghost import SchemaScraper

scrape_legislators = SchemaScraper(
    schema={
        "name": "string",
        "url": "url",
        "district": "string",
        "party": "string",
        "photo_url": "url",
        "offices": [{"name": "string", "address": "string", "phone": "string"}],
    }
)

The kicker is that this format is entirely up to you! The GPT models are very, very good at processing natural language, and scrapeghost uses GPT to process the scraped data and find (using the example above) whatever looks like a name, district, party, photo, and office address and format it exactly as requested.
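Actually running a scrape is then a one-liner: call the scraper with a URL and read back structured data. Something like this, with a hypothetical URL, and with the caveat that the response interface may shift as the project evolves:

resp = scrape_legislators("https://example.legislature.gov/members/42")
print(resp.data["name"], resp.data["party"])

Each call spends GPT tokens behind the scenes, so it's worth testing against a single page before pointing it at an entire site.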

It’s an experimental tool and you’ll need an API key from OpenAI to use it, but it has useful features and is certainly a novel approach. There’s a tutorial and even a command-line interface, so check it out.