Need To Pick Objects Out Of Images? Segment Anything Does Exactly That

Segment Anything, recently released by Facebook Research, tackles something that most people who have dabbled in computer vision have found daunting: reliably figuring out which pixels in an image belong to which object. Making that easier is the goal of the Segment Anything Model (SAM), released under the Apache 2.0 license.

The online demo has a bank of examples, but also works with uploaded images.

The results look fantastic, and there’s an interactive demo available where you can play with the different ways SAM works. You can pick out objects by pointing and clicking on an image, or let the model segment an image automatically. It’s frankly very impressive to see SAM make masking out the different objects in an image look so effortless. Machine learning makes this possible, and part of that is the fact that the model behind the system was trained on a huge dataset of high-quality images and masks, making it very effective at what it does.
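
For those who would rather skip the demo and go straight to code, point-prompted segmentation with the released segment-anything package looks roughly like this. Consider it a minimal sketch: the checkpoint filename and click coordinates below are placeholders.

import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # runs the heavy image encoder once

# One foreground click (label 1) at pixel (x, y) = (500, 375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # boolean (H, W) mask of the object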

Continue reading “Need To Pick Objects Out Of Images? Segment Anything Does Exactly That”

Tree Supports Are Pretty, So Why Not Make Them Part Of The Print?

Here’s an idea that [Nephlonor] shared a couple years ago, but is worth keeping in mind because one never knows when it might come in handy. He 3D printed a marble run track and kept the generated tree supports. As you can see in the image above, the track resembles a roller-coaster and the tree supports function as an automatically-generated scaffolding for the whole thing. Clever!

As mentioned, these results are from a couple of years ago, so this idea should work even better nowadays. Tree supports have come a long way since then, and they are available in more slicers than just Cura.

Tree supports without an interface layer are easy mode for “generate me some weird-looking scaffolding”

If you’re going to do this, we suggest reducing or eliminating the support interface, as well as the support distance (the spacing between the supports and the rest of the model). The interface makes supports easier to remove, but if one intends to leave the supports attached, a solid connection makes more sense.
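
In PrusaSlicer, for example, the relevant knobs in an exported profile look something like the excerpt below. Treat the key names as a hedged starting point rather than gospel; they vary between slicers and versions.

# Hypothetical PrusaSlicer profile excerpt: fuse supports to the model.
# No gap between supports and the part:
support_material_contact_distance = 0
# Skip the easy-release interface layers entirely:
support_material_interface_layers = 0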

And while we’re on the topic of misusing supports, we’d like to leave you with one more trick to keep in mind. [Angus] of Maker’s Muse tucked a great idea into one of his videos: print just the support structure, and use it as a stand for oddly-shaped objects. Just set the object itself to zero walls and zero infill, and the printer will generate (and print) only the support structure. Choose an attractive angle, and presto! A display stand that fits the object like a glove.
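
In PrusaSlicer terms, that trick amounts to per-object overrides along these lines (again a hedged sketch of the setting names, not a tested profile):

# Hypothetical per-object overrides: print nothing but the supports.
perimeters = 0
fill_density = 0%
top_solid_layers = 0
bottom_solid_layers = 0
# Keep support generation itself switched on:
support_material = 1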

You can watch a brief video of the marble run embedded below. Again, tree supports both look better and are available in more slicers these days. Have you tried this? If so, we’d love to hear about it, so let us know in the comments!

Continue reading “Tree Supports Are Pretty, So Why Not Make Them Part Of The Print?”

Wolverine Gives Your Python Scripts The Ability To Self-Heal

[BioBootloader] combined Python and a hefty dose of AI for a fascinating proof of concept: self-healing Python scripts. He shows things working in a video, embedded below the break, but we’ll also describe what happens right here.

The demo Python script is a simple command-line calculator, and [BioBootloader] introduces a few bugs to it: he misspells a variable used as a return value, and deletes the subtract_numbers(a, b) function entirely. Run by itself, the script simply crashes, but running it under Wolverine has a very different outcome.

In a short time, error messages are analyzed, changes proposed, those same changes applied, and the script re-run.

Wolverine is a wrapper that runs the buggy script, captures any error messages, then sends those errors to GPT-4 to ask it what it thinks went wrong with the code. In the demo, GPT-4 correctly identifies the two bugs (even though only one of them directly led to the crash) but that’s not all! Wolverine actually applies the proposed changes to the buggy script, and re-runs it. This time around there is still an error… because GPT-4’s previous changes included an out-of-scope return statement. No problem: Wolverine once again consults GPT-4, creates and formats a change, applies it, and re-runs the modified script. This time the script runs successfully, and Wolverine’s work is done.
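
The core loop is simple enough to sketch. What follows is our own minimal, hedged take on the idea (not [BioBootloader]’s actual code, which applies structured patches rather than whole-file rewrites), using the pre-1.0 openai package that was current at the time:

import subprocess
import sys
import openai  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Here is a Python script and the traceback it produced. "
          "Explain the bug, then reply with the corrected script only.")

def run_script(path):
    """Run the script; return (succeeded, stderr)."""
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def heal(path, max_attempts=3):
    for _ in range(max_attempts):
        ok, err = run_script(path)
        if ok:
            return True  # script ran cleanly, nothing left to fix
        with open(path) as f:
            source = f.read()
        reply = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"{PROMPT}\n\n{source}\n\n{err}"}],
        )
        fixed = reply["choices"][0]["message"]["content"]
        with open(path, "w") as f:
            f.write(fixed)  # apply the proposed fix and loop to re-run
    return False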

LLMs (Large Language Models) like GPT-4 are “programmed” in natural language, and these instructions are referred to as prompts. A large chunk of what Wolverine does is thanks to a carefully written prompt, and you can read it here to gain some insight into the process. Don’t forget to watch the video demonstration just below if you want to see it all in action.

While AI coding capabilities definitely have their limitations, some of the questions they raise are becoming more urgent. Heck, consider that GPT-4 is barely four weeks old at this writing.

Continue reading “Wolverine Gives Your Python Scripts The Ability To Self-Heal”

Tired Of Web Scraping? Make The AI Do It

[James Turk] has a novel approach to the problem of scraping web content in a structured way without needing to write the kind of page-specific code web scrapers usually require. How? Just enlist the help of a natural language AI. Scrapeghost relies on OpenAI’s GPT API to parse a web page’s content, pull out and classify the salient bits, and format them in a useful way.

What makes Scrapeghost different is how the data gets organized: when instantiating a scraper, one defines the schema of the data one wishes to extract. For example:

from scrapeghost import SchemaScraper

scrape_legislators = SchemaScraper(
    schema={
        "name": "string",
        "url": "url",
        "district": "string",
        "party": "string",
        "photo_url": "url",
        "offices": [{"name": "string", "address": "string", "phone": "string"}],
    }
)

The kicker is that this format is entirely up to you! The GPT models are very, very good at processing natural language, and Scrapeghost uses GPT to process the scraped page, find (using the example above) whatever looks like a name, district, party, photo, and office address, and format it all exactly as requested.
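
Calling the resulting scraper is then a one-liner. The URL below is a placeholder, and we’re assuming the documented pattern of a callable scraper returning a response with a data attribute, so treat this as a sketch:

# Hypothetical usage; the URL is a placeholder.
response = scrape_legislators("https://example.gov/legislators/42")
print(response.data["name"], response.data["party"])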

It’s an experimental tool and you’ll need an API key from OpenAI to use it, but it has useful features and is certainly a novel approach. There’s a tutorial and even a command-line interface, so check it out.

Blinks Are Useful In VR, But Triggering Blinks Is Tricky

In VR, a blink can be a window of opportunity to improve the user’s experience. We’ll explain how in a moment, but blinks are tough to capitalize on because they are unpredictable and don’t last very long. That’s why researchers spent time figuring out how to induce eye blinks on demand in VR (video); the details are available in a full PDF report. It turns out there are some novel, VR-based ways to reliably induce blinks, and if an application can induce them, it becomes much easier to use them to fudge details in helpful ways.

Humans experience a form of change blindness during blinks, and this can be used to sneak small changes into a scene in useful ways. Two examples are hand redirection (HR) and redirected walking (RDW). Both are ways to subtly break the implicit one-to-one mapping of physical and virtual motions. Redirected walking can nudge a user to stay inside a physical boundary without realizing it, leading the user to feel the area is larger than it actually is. Hand redirection can be used to improve haptics and ergonomics. For example, VR experiences that use physical controls (like a steering wheel in a driving simulator, or maybe a starship simulator project like this one) rely on physical and virtual controls overlapping each other perfectly. Hand redirection can improve the process by covering up mismatches in a way that is imperceptible to the user.
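
As a toy illustration of the redirected-walking side (our own sketch, not code from the paper), the trick boils down to consuming a pending camera offset only while the eyes are closed:

def update_yaw(camera_yaw, pending_offset, eyes_closed, max_step=5.0):
    """Apply up to max_step degrees of the pending rotation per blink frame.

    While the eyes are open, the camera tracks the head one-to-one; during
    a blink, a small extra rotation is slipped in unnoticed.
    """
    if eyes_closed and pending_offset != 0.0:
        step = max(-max_step, min(max_step, pending_offset))
        camera_yaw += step
        pending_offset -= step
    return camera_yaw, pending_offset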

There are several known ways to induce a blink reflex, but one novel method is particularly suited to VR: triggering the menace reflex by simulating a fast-approaching object. A small shadow appears in the field of view and rapidly seems to approach one’s eyes. This very brief event is hardly noticeable, yet reliably triggers a blink. There are other approaches as well, such as flashes, sudden noises, or simulating the gradual blurring of vision, but to be useful a method must be both unobtrusive and reliable.

We’ve already seen saccadic movement of the eyes used to implement redirected walking, but leveraging eye blinks allows even larger adjustments and changes to go unnoticed by the user. Who knew blinks could be so useful to exploit?

Continue reading “Blinks Are Useful In VR, But Triggering Blinks Is Tricky”

Arc Overhangs In PrusaSlicer Are A Simple Script Away

Interested in the new hotness of printing previously-impossible overhangs? You can now integrate Arc Overhangs into PrusaSlicer and give it a shot for yourself. Arc overhangs are a method of laying filament into a pattern of blossoming concentric rings instead of stringing filament bridges over empty space (or over supports).

These arcs are remarkably stable, and result in the ability to print overhangs that need to be seen to be believed. We covered this clever technique in the past, and there are now two ways for the curious hacker to try it out with a minimum of hassle: either run the Python script on a G-code file via the command line, or integrate the functionality into PrusaSlicer directly by adding it as an automatic post-processing script. The project’s GitHub repository has directions for both methods.

Here’s how it works: the script looks for layers with a “bridge infill” tag (which PrusaSlicer helpfully creates) and replaces that G-code with arc overhangs. It is still a work in progress, so keep a few things in mind for best results. Arc overhangs generally work best when the extruded plastic cools as fast as possible, so it is recommended to extrude at the lowest reliable temperature, slowly, and with maximum cooling. It’s not fast, but it’s said to be faster than wrestling with supports and their removal.
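
The detection pass amounts to something like the following. This is our own hedged sketch (not the project’s actual code), assuming PrusaSlicer’s usual ;TYPE: comment tags:

def find_bridge_sections(gcode_lines):
    """Yield (start, end) index pairs of ';TYPE:Bridge infill' sections."""
    start = None
    for i, line in enumerate(gcode_lines):
        if line.startswith(";TYPE:"):
            if start is not None:
                yield start, i   # previous bridge section ends here
                start = None
            if "Bridge infill" in line:
                start = i        # a bridge section begins
    if start is not None:
        yield start, len(gcode_lines)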

A few things could use improvement. Currently the biggest issue is warping of the arc overhangs when new layers get printed on top of them. Do you have a solution or suggestion? Don’t keep it to yourself; discuss in the comments, or consider getting involved in the project.

Rubber Bands And O-Rings Give 3D Prints Some Stretch

Sometimes it would be helpful if a 3D printed object could stretch and bend. Flexible filament like TPU is one option, but [NagyBig] designed a simple bracelet to ask: how about embedding rubber bands or o-rings into the print itself?

Embedding objects into prints usually involves hardware like fasteners or magnets, but this is the first case we can think of that uses rubber bands. We have, however, seen rubber bracelets running on printed wheels, and o-rings used to provide tension on a tool holder.

The end result is slightly reminiscent of embedding 3D printed shapes into tulle in order to create fantastic, armor-like flexible creations. But using rubber bands means the result is stretchy and compliant to a degree we haven’t previously seen. Keep it in mind the next time you’re trying to solve a tricky design problem; an embedded o-ring or rubber band might just do the trick.

Continue reading “Rubber Bands And O-Rings Give 3D Prints Some Stretch”