George Washington Gets Cleaned Up With A Laser

Now, we wouldn’t necessarily call ourselves connoisseurs of fine art here at Hackaday. But we do enjoy watching [Julian Baumgartner]’s YouTube channel, where he documents the projects that he takes on as a professional conservator. Folks send in their dirty or damaged paintings, [Julian] works his magic, and the end result often looks like a completely different piece. Spoilers: if you’ve ever looked at an old painting and wondered why the artist made it so dark and dreary — it probably just needs to be cleaned.

Anyway, in his most recent video, [Julian] pulled out a piece of gear that we didn’t expect to see unleashed against a painting of one of America’s Founding Fathers: an Er:YAG laser. Even better, instead of some fancy-pants fine art restoration laser, he apparently picked up a secondhand unit designed for cosmetic applications. The model appears to be a Laserscope Venus from the early 2000s, which goes for about $5K these days.

Now, to explain why he raided an esthetician’s closet to fix up this particular painting, we’ve got to rewind a bit. As we’ve learned from [Julian]’s previous videos, the problem with an old dirty painting is rarely the painting itself; it’s the varnish that has been applied to it. These varnishes, especially older ones, have a tendency to yellow and crack with age. Stack a few decades’ worth of smoke and dirt on top of that, and the original painting underneath is all but completely obscured. But there’s good news: if you know what you’re doing, you can remove the varnish without damaging the painting itself.

In most cases, this can be done with various solvents that [Julian] mixes up after testing them out on some inconspicuous corner of the painting. But in this particular case, the varnish wasn’t reacting well to anything in his inventory. Even his weakest solvents were going right through it and damaging the paint underneath.

Because of this, [Julian] had to break out the big guns. After experimenting with the power level and pulse duration of the 2940 nm laser, he found the settings necessary to break down the varnish while stopping short of cooking the paint it was covering. After hitting it with a few pulses, he could then come in with a cotton swab and wipe the residue away. It was still slow going, but it turns out most things are in the art conservation world.
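Finding those settings is essentially an ablation-threshold problem: a material starts to break down once the laser fluence, the pulse energy spread over the spot area, exceeds that material’s threshold. Here’s a back-of-the-envelope sketch in Python; every number in it is made up for illustration, since the video doesn’t publish exact figures:

```python
import math

def fluence_j_per_cm2(pulse_energy_mj, spot_diameter_mm):
    """Fluence = pulse energy / spot area, in J/cm^2."""
    spot_area_cm2 = math.pi * (spot_diameter_mm / 10 / 2) ** 2
    return (pulse_energy_mj / 1000) / spot_area_cm2

# Hypothetical ablation thresholds -- NOT measured values.
VARNISH_THRESHOLD = 0.8  # J/cm^2: varnish starts breaking down above this
PAINT_THRESHOLD = 2.5    # J/cm^2: paint starts cooking above this

f = fluence_j_per_cm2(pulse_energy_mj=100, spot_diameter_mm=3)
safe_window = VARNISH_THRESHOLD < f < PAINT_THRESHOLD  # the sweet spot
```

With these toy numbers, a 100 mJ pulse in a 3 mm spot lands at about 1.4 J/cm², inside the window; widening the spot or dropping the pulse energy would slide it below the varnish threshold and do nothing at all.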

This isn’t the first time we’ve covered [Julian]’s resourceful conservation methods. Back in 2019, we took a look at the surprisingly in-depth video he created about the design and construction of his custom heat table for flattening out large canvases.


An image of a grey plastic carrying case, approximately the size of an A5 notebook. Inside are darker grey felt lined cubbies with a mirror, piece of glass, a viewfinder, and various small printed parts to assemble a camera lucida.

Camera Lucida – Drawing Better Like It’s 1807

As the debate rages on about the value of AI-generated art, [Chris Borge] printed his own version of another technology that’s been the subject of debate about what constitutes real art. Meet the camera lucida.

Developed in the early part of the nineteenth century by [William Hyde Wollaston], the camera lucida is a seemingly simple device. Using a prism, or a mirror and a piece of glass, it allows a person to see the world overlaid onto their drawing surface. This transfers details like proportions and shading directly to the paper, instead of requiring an intermediary step through the artist’s memory. Of course, nothing is a substitute for practice and skill. [Professor Pablo Garcia] relates a story in the video about how [Henry Fox Talbot] was unsatisfied with his drawings made using the device, and how this experience was instrumental in his later photographic experiments.

[Borge]’s own contribution to the camera lucida is a portable version that you can print yourself and assemble for about $20. He wanted a version that could go in the field and not require a table, so it features a snazzy case that holds all the components nice and snug on laser-cut felt. The case also acts as a stand for the camera lucida to sit at an appropriate height, so he can sketch landscapes in his lap while out and about.

Interested in more drawing-related hacks? How about this sand drawing bot or some Truly Terrible Dimensioned Drawings?


Stepping Inside Art In VR, And The Workflow Behind It

The process of creating something is always chock-full of things to learn, so it’s always a treat when someone takes the time and effort to share it. [Teadrinker] recently published the technique and workflow behind bringing art into VR, which explains exactly how they created a virtual reality art gallery that allows one to step inside paintings, called Art Plunge (free on Steam).

Extending a painting’s content to fill in the environment is best done by using other works by the same artist.

It walks through not just how to obtain high-resolution images of paintings, but also how to address things like adjusting the dynamic range and color grading to better match the intended VR experience. There is little that is objectively correct when it comes to aesthetic presentation details like brightness and lighting, so guidance on what does and doesn’t work well, and how to tailor it to the VR experience, is useful information.
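As a toy illustration of the kind of adjustment being discussed (this is not [Teadrinker]’s actual pipeline, just one plausible grade pass in NumPy): decode the image to linear light, apply an exposure boost and a small black lift so dark paint detail survives the headset, then re-encode for display.

```python
import numpy as np

def grade_for_vr(img, exposure=1.2, gamma=2.2, lift=0.02):
    """Toy color grade: exposure boost plus a slight black lift,
    applied in linear light and re-encoded with display gamma."""
    linear = np.power(np.clip(img, 0.0, 1.0), gamma)      # decode to linear light
    linear = np.clip(exposure * linear + lift, 0.0, 1.0)  # exposure + black lift
    return np.power(linear, 1.0 / gamma)                  # back to display space

# A simple grayscale ramp shows the shadows being lifted the most.
graded = grade_for_vr(np.linspace(0.0, 1.0, 5))
```

Running the ramp through shows pure black comes out visibly gray while the highlights stay pinned at white, which is exactly the trade-off: shadow detail becomes visible in the headset at the cost of absolute contrast.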

One thing that is also intriguing is the attention paid to creating a sense of awe for viewers. The quality, the presentation, and even the choice of sounds all matter for producing something that not only inspires awe, but does so in a way that preserves and cultivates a relationship between the art and the viewer, one that strives to stay true to the original. Giving a viewer a sense of presence, after all, can be more than just presenting stereoscopic 3D images or fancy lightfields.

You can get a brief overview of the process in a video below, but if you have the time, we really do recommend reading the whole breakdown.


Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world’s most responsible AI model. Built with next-gen ethical principles and guidelines, it is capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic, and dutifully refuse to answer.

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, but also acknowledges that effective guardrails are actually a pretty difficult problem to get right in a way that works for everyone.

Complications in this area include the fact that studies show humans expect far more from machines than they do from each other (or, indeed, from themselves) and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models become more advanced, so too have they become increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user’s expectations. But GOODY-2 allows us all to skip to the end, and glimpse the ultimate future of erring on the side of caution.

[via WIRED]

Video Feedback Machine Creates Analog Fractals

One of the first things everyone does when they get a video camera is to point it at the screen displaying the image, creating video feedback. It’s a fascinating process where the delay from image capture to display establishes a feedback loop that amplifies the image noise into fractal patterns. This sculpture, modestly called The God Machine II takes it to the next level, though.
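The core loop is simple enough to simulate numerically: re-capture the displayed frame slightly zoomed in, amplify it, add a little sensor noise, and display the result again. The NumPy sketch below is a toy model of that loop, not a description of [Blair]’s optics:

```python
import numpy as np

def feedback_step(frame, rng, zoom=1.05, gain=1.02, noise=0.01):
    """One camera-pointed-at-its-own-monitor iteration: re-sample a
    zoomed-in central crop, amplify, add sensor noise, clip to display range."""
    h, w = frame.shape
    # index grids for a nearest-neighbor re-sample of the central crop
    ys = (np.linspace(0, h - 1, h) / zoom + (h - h / zoom) / 2).astype(int)
    xs = (np.linspace(0, w - 1, w) / zoom + (w - w / zoom) / 2).astype(int)
    captured = frame[np.ix_(ys, xs)]
    return np.clip(gain * captured + noise * rng.standard_normal((h, w)), 0.0, 1.0)

rng = np.random.default_rng(42)
frame = np.zeros((64, 64))
frame[31:33, 31:33] = 0.5  # a small bright seed for the loop to chew on
for _ in range(100):
    frame = feedback_step(frame, rng)
```

After a hundred iterations the seed and the noise have been amplified into structure spreading across the frame; nudging the zoom, gain, and noise parameters is the software analog of twiddling the real machine’s knobs.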

We covered the first version of this machine in a previous post, but the creator [Dave Blair] has since done a huge amount of work on the device, which now allows him to tweak and customize the output it produces. His new version is quite remarkable, allowing him to create intricate fractals that writhe and change like living things.

The God Machine II is a sophisticated build with three cameras, five HD monitors, three Roland video switchers, two viewing monitors, two sheets of beam splitter glass, and a video input. This setup means it can take an external video input, capture it, and use it as the source for video feedback, then tweak the evolution of the resulting fractal image, repeatedly feeding it back into itself. The system can also control the settings for the monitor, which further changes the feedback as it evolves. [Blair] refers to this as “trapping the images.”


Multi-View Wire Art Meets Generative AI

DreamWire is a system for generating multi-view wire art using machine learning techniques to help generate the patterns required.

The 3-dimensional wire pattern in the center creates images of Einstein, Turing, and Newton depending on viewing angle.

What’s wire art? It’s a three-dimensional twisted mass of lines which, when viewed from a certain perspective, yields an image. Multi-view wire art produces different images from the same mass depending on the viewing angle, and as one can imagine, such things get very complex, very quickly.
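The basic idea is easy to demonstrate: the same 3D polyline, orthographically projected along two different axes, gives two different 2D drawings. A minimal NumPy sketch (the points here are arbitrary, nothing from the paper):

```python
import numpy as np

# A hypothetical 3D polyline: each row is an (x, y, z) point on the wire.
wire = np.array([
    [0.0,  0.0, 0.0],
    [1.0,  1.0, 0.5],
    [2.0,  0.0, 1.0],
    [1.0, -1.0, 1.5],
])

front_view = wire[:, [0, 1]]  # look down the z axis: keep (x, y)
side_view = wire[:, [2, 1]]   # look down the x axis: keep (z, y)
```

The two projections disagree, which is the whole point; the hard inverse problem, finding a single polyline whose projections match several target images at once, is what DreamWire brings the generative model in for.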

A recently released paper explains how the system works, including the role generative AI plays: it is uniquely suited to creating meaningful intersections between multiple inputs. There’s also a video (embedded just under the page break) that showcases many of the results the researchers obtained.

The GitHub repository for the project doesn’t have much in it yet, but it’s a good place to keep an eye on if you’re interested in what comes next.

We’ve seen generative AI applied in a similarly novel way to help create visual anagrams, or 2D patterns that can be interpreted differently based on a variety of orientations and permutations. These sorts of systems still need to be guided by a human, but having machine learning do the heavy lifting allows just about anybody to explore their creativity.


The Trans-Harmonium Is A Strange Kind Of Radio-Musical Instrument

Pianos use little hammers striking taut strings to make tones. The Mellotron used lots of individual tape mechanisms. Meanwhile, the Trans-Harmonium from [Emily Francisco] uses an altogether more curious method of generating sound — each key on this keyboard instrument turns on a functional clock radio.

Electrically, there’s not a whole lot going on. The clock radios have their speaker lines cut, which are then rejoined by pressing their relevant key on the keyboard. As per [Emily]’s instructions for displaying the piece, it’s intended that the radio corresponding to C be tuned in to a local classical station. Keys A, B, D, E, F, and G are then to be tuned to other local stations, while the sharps and flats are to be tuned to the spaces in between, providing a dodgy mix of static and almost-there music and conversation.
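That tuning plan is simple enough to write down. In the sketch below the station frequencies are placeholders (the actual stations depend on wherever the piece is installed): the naturals get real stations, with C on the classical one, and each sharp lands halfway between its neighbors, in the static.

```python
# Hypothetical FM frequencies in MHz; C holds the classical station slot.
naturals = {"C": 89.7, "D": 91.5, "E": 93.3, "F": 95.1,
            "G": 97.9, "A": 101.1, "B": 103.5}

# Only these pairs have a black key between them (there's no E# or B#).
sharp_pairs = [("C", "D"), ("D", "E"), ("F", "G"), ("G", "A"), ("A", "B")]

tuning = dict(naturals)
for lo, hi in sharp_pairs:
    # Sharps/flats get the dead air between the two neighboring stations.
    tuning[lo + "#"] = round((naturals[lo] + naturals[hi]) / 2, 2)
```

Twelve keys, twelve tunings: seven that mostly play radio and five that mostly play the space between stations.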

It’s an interesting art piece that, no matter how well you play it, will probably not net you a Grammy Award. That would be missing the point, though, as it’s more a piece about “Collecting Fragments of Time,” a broader art project of which this piece is a part.

We do love a good art piece, especially those that repurpose old hardware to great aesthetic achievement.
