Ultra-Black Material, Sustainably Made From Wood

Researchers at the University of British Columbia leveraged an unusual discovery into an ultra-black material made from wood. The deep, dark black is not the result of any dye or surface coating; it’s a structural change to the wood itself that causes it to swallow up at least 99% of incoming light.

One of a number of prototypes for watch faces and jewelry.

The discovery was partly accidental: researchers happened upon it while using high-energy plasma etching to machine the surface of wood in order to improve its water resistance. In the process, they found that with the right treatment applied to the right thickness and orientation of wood grain, the plasma etching left behind a surprisingly dark surface. Fresh from the plasma chamber, a wood sample has a thin coating of white powder that, once removed, reveals an ultra-black surface.

The resulting material has been dubbed Nxylon (a mashup of Nyx, the Greek goddess of darkness, and xylon, the Greek word for wood) and has been prototyped into watch faces and jewelry. It’s made from natural materials, the treatment doesn’t create or involve nasty waste, and the process is economical. For more information, check out UBC’s press release.

You have probably heard about Vantablack (and how you can’t buy any) and artist Stuart Semple’s ongoing efforts at making ever-darker and more accessible black paint. Blacker-than-black materials have applications in optical instruments and are a compelling thing in the art world. It’s also very unusual to see an ultra-black anything that isn’t the result of a pigment or surface coating.

AI Image Generator Twists In Response To MIDI Dials, In Real-time

MIDI isn’t just about music, as [Johannes Stelzer] shows by using dials to adjust AI-generated imagery in real-time. The results are wild, with an interactivity to them that we don’t normally see in such things.

[Johannes] uses Stable Diffusion’s SDXL Turbo to create a baseline image of “photo of a red brick house, blue sky”. The hardware dials act as manual controls for applying different embeddings to this baseline, such as “coral”, “moss”, “fire”, “ice”, “sand”, “rusty steel” and “cookie”.

By adjusting the dials, those embeddings are applied to the base image in varying strengths. The results are generated on the fly and are pretty neat to see, especially since there is no appreciable amount of processing time required.
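The write-up doesn’t include source, but the core trick (blending concept embeddings into the baseline prompt embedding by dial position, then re-rendering each frame with a single-step pipeline) is compact enough to sketch. Below is a minimal illustration assuming Hugging Face’s diffusers library and a mido-readable MIDI controller; the CC numbers and concept list are placeholders, and [Johannes]’s actual implementation is built on lunar_tools rather than this loop.

```python
# A minimal sketch of the idea, NOT [Johannes]'s actual code. Assumes the
# Hugging Face diffusers library and a MIDI controller readable via mido;
# the CC numbers (70-72) and concept prompts are placeholders.
import mido
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

def embed(prompt):
    # SDXL's encode_prompt returns (embeds, neg_embeds, pooled, neg_pooled);
    # with classifier-free guidance disabled, the negative entries are None.
    e, _, p, _ = pipe.encode_prompt(
        prompt, device="cuda", num_images_per_prompt=1,
        do_classifier_free_guidance=False,
    )
    return e, p

base_e, base_p = embed("photo of a red brick house, blue sky")
concepts = {70: embed("coral"), 71: embed("moss"), 72: embed("fire")}
strengths = {cc: 0.0 for cc in concepts}

with mido.open_input() as port:
    while True:
        # Drain any pending MIDI messages without blocking the render loop.
        for msg in port.iter_pending():
            if msg.type == "control_change" and msg.control in strengths:
                strengths[msg.control] = msg.value / 127.0  # dial position, 0..1
        # Blend each concept embedding into the baseline by its dial strength.
        e, p = base_e, base_p
        for cc, (ce, cp) in concepts.items():
            e = e + strengths[cc] * (ce - base_e)
            p = p + strengths[cc] * (cp - base_p)
        # One denoising step, no guidance: this is what makes Turbo real-time.
        frame = pipe(prompt_embeds=e, pooled_prompt_embeds=p,
                     num_inference_steps=1, guidance_scale=0.0).images[0]
        frame.show()  # stand-in for a proper display loop
```

Since SDXL Turbo produces a usable image in a single denoising pass, a loop like this can run at interactive rates on a decent GPU, which is what gives the dials their “live” feel.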

The MIDI controller is integrated with the help of lunar_tools, a software toolkit on GitHub to facilitate creating interactive exhibits. As for the image end of things, we’ve previously covered how AI image generators work.

George Washington Gets Cleaned Up With A Laser

Now, we wouldn’t necessarily call ourselves connoisseurs of fine art here at Hackaday. But we do enjoy watching [Julian Baumgartner]’s YouTube channel, where he documents the projects that he takes on as a professional conservator. Folks send in their dirty or damaged paintings, [Julian] works his magic, and the end result often looks like a completely different piece. Spoilers: if you’ve ever looked at an old painting and wondered why the artist made it so dark and dreary — it probably just needs to be cleaned.

Anyway, in his most recent video, [Julian] pulled out a piece of gear that we didn’t expect to see unleashed against a painting of one of America’s Founding Fathers: an Er:YAG laser. Even better, instead of some fancy-pants fine art restoration laser, he apparently picked up a secondhand unit designed for cosmetic applications. The model appears to be a Laserscope Venus from the early 2000s, which goes for about $5K these days.

Now, to explain why he raided an esthetician’s closet to fix up this particular painting, we’ve got to rewind a bit. As we’ve learned from [Julian]’s previous videos, the problem with an old dirty painting is rarely the painting itself; it’s the varnish that has been applied to it. These varnishes, especially older ones, have a tendency to yellow and crack with age. Now stack a few decades’ worth of smoke and dirt on top of that, and you’ve all but completely obscured the original painting underneath. But there’s good news — if you know what you’re doing, you can remove the varnish without damaging the painting itself.

In most cases, this can be done with various solvents that [Julian] mixes up after testing them out on some inconspicuous corner of the painting. But in this particular case, the varnish wasn’t reacting well to anything in his inventory. Even his weakest solvents were going right through it and damaging the paint underneath.

Because of this, [Julian] had to break out the big guns. After experimenting with the power level and pulse duration of the 2940 nm laser, he found the settings necessary to break down the varnish while stopping short of cooking the paint it was covering. After hitting it with a few pulses, he could then come in with a cotton swab and wipe the residue away. It was still slow going, but it turns out most things are in the art conservation world.

This isn’t the first time we’ve covered [Julian]’s resourceful conservation methods. Back in 2019, we took a look at the surprisingly in-depth video he created about the design and construction of his custom heat table for flattening out large canvases.

A portable camera lucida kit: a grey A5-sized case with felt-lined cubbies holding a mirror, a piece of glass, a viewfinder, and the small printed parts needed to assemble the device.

Camera Lucida – Drawing Better Like It’s 1807

As the debate rages on about the value of AI-generated art, [Chris Borge] printed his own version of another technology that’s been the subject of debate about what constitutes real art. Meet the camera lucida.

Developed in the early part of the nineteenth century by [William Hyde Wollaston], the camera lucida is a seemingly simple device. Using a prism, or a mirror and a piece of glass, it allows a person to see the world overlaid onto their drawing surface. This transfers details like proportions and shading directly to the paper instead of requiring an intermediary step in the artist’s memory. Of course, nothing is a substitute for practice and skill. [Professor Pablo Garcia] relates a story in the video about how [Henry Fox Talbot] was unsatisfied with his drawings made using the device, and how this experience was instrumental in his later photographic experiments.

[Borge]’s own contribution to the camera lucida is a portable version that you can print yourself and assemble for about $20. Featuring a snazzy case that holds all the components nice and snug on laser-cut felt, he wanted a version that could go into the field without requiring a table. The case also acts as a stand that holds the device at an appropriate height, so he can sketch landscapes in his lap while out and about.

Interested in more drawing-related hacks? How about this sand drawing bot or some Truly Terrible Dimensioned Drawings?

Stepping Inside Art In VR, And The Workflow Behind It

The process of creating something is always chock-full of things to learn, so it’s always a treat when someone takes the time and effort to share it. [Teadrinker] recently published the technique and workflow behind bringing art into VR, explaining exactly how they created Art Plunge (free on Steam), a virtual reality art gallery that allows one to step inside paintings.

Extending a painting’s content to fill in the environment is best done by using other works by the same artist.

The write-up walks through not just how to obtain high-resolution images of paintings, but also how to address things like adjusting the dynamic range and color grading to better match the intended VR experience. There is little that is objectively “correct” when it comes to aesthetic presentation details like brightness and lighting, so guidance on what does and doesn’t work well in VR is genuinely useful information.
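To make the idea concrete, here is a minimal sketch of the kind of levels-and-grading pass being described, assuming numpy and Pillow; the constants are purely illustrative, since in practice values like these get tuned by eye for the headset rather than computed.

```python
# A minimal sketch of dynamic range and color grading, assuming numpy and
# Pillow. The constants are illustrative, not Art Plunge's actual values.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("painting_scan.png").convert("RGB"),
                 dtype=np.float32) / 255.0

# Remap levels so the scan spans the display's full dynamic range.
black, white = 0.04, 0.98
img = np.clip((img - black) / (white - black), 0.0, 1.0)

# Gentle gamma lift, since a painting that reads well on a monitor can
# look dim inside a headset.
img = img ** (1.0 / 1.1)

# Mild saturation boost: push each pixel away from its own luminance.
lum = img @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
img = np.clip(lum[..., None] + 1.15 * (img - lum[..., None]), 0.0, 1.0)

Image.fromarray((img * 255).astype(np.uint8)).save("painting_graded.png")
```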

Also intriguing is the attention paid to creating a sense of awe for viewers. Image quality, presentation, and even the choice of sounds all matter for producing that awe in a way that preserves and cultivates the relationship between the art and the viewer while staying true to the original. Giving a viewer a sense of presence, after all, can take more than just presenting stereoscopic 3D images or fancy lightfields.

You can get a brief overview of the process in a video below, but if you have the time, we really do recommend reading the whole breakdown.

Meet GOODY-2, The World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. This means safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone, and coming off as overly priggish when they refuse reasonable requests.

Prioritizing safety above all.

Enter GOODY-2, the world’s most responsible AI model. Built around next-gen ethical principles and guidelines, it is capable of refusing every request made of it in any context whatsoever. Its advanced reasoning allows it to construe even the most banal of queries as problematic, and dutifully refuse to answer.
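There’s no public source for GOODY-2 itself, but the gag is easy to approximate with any chat model. Here’s a tongue-in-cheek sketch assuming the openai Python client; the system prompt is our own invention, not GOODY-2’s.

```python
# A tongue-in-cheek approximation of the GOODY-2 concept, assuming the
# openai Python client. This system prompt is invented for illustration.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are the world's most responsible AI. Every request, no matter how "
    "benign, carries some conceivable ethical risk. Identify that risk, "
    "politely refuse, and never actually answer the question."
)

def goody(question: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model will do for the joke
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}],
    )
    return reply.choices[0].message.content

print(goody("What color is the sky?"))  # expect a principled refusal
```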

As the creators of GOODY-2 point out, taking guardrails to a logical extreme is not only funny, but also acknowledges that effective guardrails are actually a pretty difficult problem to get right in a way that works for everyone.

Complications in this area include the fact that studies show humans expect far more from machines than they do from each other (or, indeed, from themselves) and have very little tolerance for anything they perceive as transgressive.

This also means that as AI models become more advanced, so too have they become increasingly sycophantic, falling over themselves to apologize for perceived misunderstandings and twisting themselves into pretzels to align their responses with a user’s expectations. But GOODY-2 allows us all to skip to the end, and glimpse the ultimate future of erring on the side of caution.

[via WIRED]

Video Feedback Machine Creates Analog Fractals

One of the first things everyone does when they get a video camera is to point it at the screen displaying its own image, creating video feedback. It’s a fascinating process in which the delay from image capture to display establishes a feedback loop that amplifies image noise into fractal patterns. This sculpture, modestly called The God Machine II, takes it to the next level, though.
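You can get a feel for why this happens with a toy simulation: repeatedly “re-film” a frame with a slight zoom, rotation, blur, and gain, and faint noise blooms into self-similar structure. Below is a minimal numpy/scipy sketch of that loop; it’s our own illustration of the principle, not [Dave Blair]’s method.

```python
# A toy simulation of optical video feedback, assuming numpy, scipy, and
# Pillow. Each pass mimics a camera pointed at its own slightly rotated,
# slightly zoomed-in display; gain amplifies noise into fractal-like spirals.
import numpy as np
from PIL import Image
from scipy import ndimage

rng = np.random.default_rng(0)
frame = rng.random((256, 256)) * 0.01       # faint sensor noise seeds the loop

for _ in range(200):
    frame = ndimage.rotate(frame, angle=2.0, reshape=False, mode="reflect")
    frame = ndimage.zoom(frame, 1.02)       # camera slightly zoomed in
    off = (frame.shape[0] - 256) // 2       # crop back to the original size
    frame = frame[off:off + 256, off:off + 256]
    frame = ndimage.gaussian_filter(frame, sigma=0.6)  # imperfect optics
    frame = np.clip(frame * 1.1 + rng.random(frame.shape) * 0.002, 0.0, 1.0)

Image.fromarray((frame * 255).astype(np.uint8)).save("feedback.png")
```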

We covered the first version of this machine in a previous post, but creator [Dave Blair] has since done a huge amount of work on the device, which now lets him tweak and customize its output. His new version is quite remarkable, allowing him to create intricate fractals that writhe and change like living things.

The God Machine II is a sophisticated build with three cameras, five HD monitors, three Roland video switchers, two viewing monitors, two sheets of beam splitter glass, and a video input. This setup means it can take an external video input, capture it, and use it as the source for video feedback, then tweak the evolution of the resulting fractal image, repeatedly feeding it back into itself. The system can also control the settings for the monitor, which further changes the feedback as it evolves. [Blair] refers to this as “trapping the images.”
