3D Model Subscriptions Are Coming, But Who’s Buying?

We’ve all been there before — you need some 3D printable design that you figure must be common enough that somebody has already designed it, so you point your browser to Thingiverse or Printables, and in a few minutes you’ve got an STL in hand and are ready to slice and print. If the design worked for you, perhaps you’ll go back and post an image of your print and leave a word of thanks to the designer.

Afterwards, you’ll probably never give that person a second thought for the rest of your life. Within a day or two, there’s a good chance you won’t even remember their username. It’s why most of the model sharing sites will present you with a list of your recently downloaded models when you want to upload a picture of your print; otherwise, there’s a good chance you wouldn’t be able to find the thing again.

Now if you really liked the model, you might go as far as following the designer. But even then, there would likely be some special circumstances involved. After all, even the most expertly designed widget is still just a widget, and the chances of that person creating another one that you’d also happen to need seem exceedingly slim. Most of the interactions on these model sharing sites are like two ships passing in the night; it so happened that you and the creator had similar enough needs that you could both use the same printable object, but there’s no telling if you’ll ever cross paths with them again.

Which is why the recent announcements, dropped just hours apart, that both Thangs and Printables would be rolling out paid subscription services seem so odd. Both sites claim that not only is there a demand for a service that would allow users to pay designers monthly for their designs, but that existing services such as Patreon are unable to meet the unique challenges involved.

Both sites say they have the solution, and can help creators turn their passion for 3D design into a regular revenue stream — as long as they get their piece of the action, that is.

Continue reading “3D Model Subscriptions Are Coming, But Who’s Buying?”

Oscillon by Ben F. Laposky

Early Computer Art From The 1950s And 1960s

Modern-day computer artist [Amy Goodchild] surveys a history of early computer art from the 1950s and 1960s. With so much attention presently focused on AI-generated artwork, we should remember that computers have been used to create art for many decades.

Our story begins in 1950, when Ben Laposky started using long-exposure photography of cathode ray oscilloscopes to record moving signals generated by electronic circuits. In 1953, Gordon Pask developed the electromechanical MusiColor system. MusiColor empowered musicians to control visual elements including lights, patterns, and motorized color wheels using sound from their instruments. The musicians could interact with the system in real-time audio-visual jam sessions.

In the early 1960s, BEFLIX (derived from Bell Flix) was developed by Ken Knowlton at Bell Labs as a programming language for generating video animations. The Graphic 1 computer, featuring a light pen input device, was also developed at Bell Labs. Around the same time, IBM introduced novel visualization technology in the IBM 2250 graphics display for its System/360 computer. The 1967 IBM promotional film Frontiers in Computer Graphics demonstrates the capabilities of the system.

Continue reading “Early Computer Art From The 1950s And 1960s”

Making Music By Probing Magnetite Crystals

Well, noises anyway. [Dmitry Morozov] and [Alexandra Gavrilova] present an interesting electronics-based art installation that probes a large chunk of crystalline magnetite with a pair of servo-mounted probes, ‘measuring’ the surface conductivity and generating sound and visuals from the results.

It appears to have only one degree of freedom per probe, so we’re not so sure how much of the surface gets probed per run, but however it works, it produces some interesting, almost random results. The premise is that the point-to-point surface resistivity is unpredictable because the chaotically formed crystals are all jumbled up. The installation uses these measurements to generate waveshapes vaguely reminiscent of the resistivity profile of the sample, which are then fed into a sound synthesis application and pumped out of a speaker. It certainly looks fun.

From a constructional perspective, the hardware is based around a LattePanda, fed samples by an ADS1115 ADC; the LattePanda is presumably also responsible for driving the LCD monitor and the sound system. An Arduino is also wedged in there, perhaps for servo-driving duty, maybe also as part of the signal chain from the probes, but that is just a guess on our part. The software uses VVVV (a visual live-programming suite) and the Pure Data environment.
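If you wanted to experiment with a similar signal chain yourself, the sampling side is simple enough. Here’s a minimal sketch, assuming a Python-capable host, Adafruit’s CircuitPython ADS1115 driver, and a Pure Data patch listening for OSC messages; the OSC address, port, and sample rate are our own inventions, not details from this build:

```python
# A minimal sketch of the sampling chain: read the probe voltage from an
# ADS1115 over I2C, forward each reading to Pure Data as an OSC message.
import time

import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn
from pythonosc.udp_client import SimpleUDPClient

i2c = busio.I2C(board.SCL, board.SDA)
ads = ADS.ADS1115(i2c)
probe = AnalogIn(ads, ADS.P0)            # voltage across the probe circuit

pd = SimpleUDPClient("127.0.0.1", 9000)  # Pure Data, local OSC listener (assumed)

while True:
    # Forward each reading; the Pd patch maps it onto synthesis parameters.
    pd.send_message("/magnetite/probe", probe.voltage)
    time.sleep(0.05)                     # ~20 samples per second
```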

We haven’t seen magnetite used for this type of application before; we tend to see it as a source of iron for DIY knifemaking, as a medium to help separate DNA, or just to make nanoparticles, for, erm, reasons.

Supercon 2022: Chris Combs Reveals His Art-World Compatibility Layer

[Chris Combs] is a full-time artist who loves using technology to create unique art projects and has been building blinky artwork for about a decade now. In his 2022 Supercon talk “Art-World Compatibility Layer: How to Hang and Sell Your Blinky Goodness as Art” (Slides, PDF), [Chris] takes us behind the scenes and shows us how to turn our blinky doodads into coveted artworks. There is a big difference between a project that just works and a work of art, and it’s the attention to small details that differentiates the two.

Just like engineering and technology, the art world has its own jargon and requires knowledge of essential skills, which can make it intimidating to newcomers. It’s not easy to define what makes an artwork “art” or even “Art”, and sometimes it’s difficult to tell whether you are looking at a child’s scrawls or a master’s brushstrokes. But there are a few distinguishing requirements that a piece of artwork, particularly one revolving around the use of technology, must meet.

Continue reading “Supercon 2022: Chris Combs Reveals His Art-World Compatibility Layer”

Enormous Metal Sculpture Becomes An Antenna

Those who have worked with high voltage know well enough that anything can be a conductor at high enough voltages. Similarly, amateur radio operators will jump at any chance to turn a random object into an antenna. Flag poles, gutters, and even streams of water can be turned into radiating elements for a transmitter, but the members of this amateur radio club were thinking a little bit bigger when they hooked up their transmitter to this giant sculpture.

For those who haven’t been to the Rochester Institute of Technology (RIT) in upstate New York, the metal behemoth is not a subtle piece of artwork, and it sits right at the entrance to the university. It’s over 70 feet tall and made out of bronze and steel, a dream for any amateur radio operator. With the university’s permission, and some help to ensure everyone’s safety during the operation, the group attached a feedline to the sculpture with a magnet, while the shield wire was attached to a ground rod nearby. A Yaesu FT-991 running on only 5 watts and transmitting in the 20-meter band was able to make contacts throughout much of the eastern United States with this setup.

This project actually started as an in-joke within the radio club, as reported by Reddit user [bbbbbthatsfivebees], who is a member. Eventually the joke became reality, as the sculpture is almost a perfect antenna for certain ham bands. Others in the comments noted that they might have better luck with lower-frequency bands, such as the 40-meter band or possibly the 60-meter band, due to the height of the structure. And for those who are still wondering if you really can use a stream of water to transmit radio waves, it is indeed possible.
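As a quick sanity check on that height argument, here’s the standard quarter-wave arithmetic, treating the sculpture as a simple vertical monopole (which, given its sprawling geometry, it certainly isn’t):

```python
# Treat the sculpture as a quarter-wave vertical: resonance falls where
# the height equals one quarter of the wavelength.
C = 299_792_458          # speed of light, m/s
height_m = 70 * 0.3048   # 70 feet is about 21.3 m

# Frequency where a 21.3 m vertical is a quarter wave: ~3.5 MHz,
# right around the 80-meter band, i.e. well below 20 meters.
print(round(C / (4 * height_m) / 1e6, 2), "MHz")

# Quarter-wave heights for the bands mentioned above:
for band, f_mhz in [("20 m", 14.2), ("40 m", 7.1), ("60 m", 5.35)]:
    quarter_wave = C / (f_mhz * 1e6) / 4
    print(band, round(quarter_wave, 1), "m")  # 5.3 m, 10.6 m, 14.0 m
```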

AI And Savvy Marketing Create Dubious Moon Photos

Taking a high-resolution photo of the moon is a surprisingly difficult task. Not only is a long enough lens required, but the camera typically needs to be mounted on a tracking system of some kind, as the moon moves too fast for the long exposure times needed. That’s why plenty of people were skeptical of Samsung’s claims that its latest smartphone cameras could actually photograph this celestial body with any degree of detail. It turns out that this skepticism might be warranted.

Samsung’s marketing department claims that the phone uses artificial intelligence to improve photos, which should quickly raise a red flag for anyone technically minded. [ibreakphotos] wanted to put this to the test rather than speculate, so they took a high-resolution image of the moon and modified it so that most of the fine detail was lost. Displaying this image on a monitor, standing across the room, and photographing it with the smartphone in question reveals details in the image that can’t possibly be there.
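The degradation step is easy to reproduce at home. Here’s a minimal sketch using Python’s Pillow library; the 170×170 size and blur radius are approximations of the values described in [ibreakphotos]’s test, and the filenames are placeholders:

```python
# Degrade a sharp moon photo so no real detail survives, then display
# the result full-screen and photograph it from across the room.
from PIL import Image, ImageFilter

moon = Image.open("moon_hires.jpg")     # placeholder filename
degraded = moon.resize((170, 170))      # throw away resolution
degraded = degraded.filter(ImageFilter.GaussianBlur(radius=2))  # smear the rest
degraded.save("moon_degraded.png")

# Any detail the phone "recovers" beyond this cannot have come from the scene.
```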

The image that accompanies this post shows the two images side-by-side for those skeptical of these claims, but from what we can tell, it looks like this is essentially an AI system copy-pasting the moon into images it thinks are of the moon itself. The AI also seems to need something more moon-like than a ping pong ball to trigger the detail overlay, as other tests appear to debunk a more simplified overlay theory. Using this system, though, seems to do about the same thing that this AI camera does when taking pictures of various common objects.

Sneaky Clock Displays Wrong Time If It Catches You Looking

We have a soft spot for devices that subvert purpose and expectation, and that definitely sums up [Guy Dupont]’s Clock That Is Wrong. It knows the correct time, but whether or not it displays the correct time is another story. That’s because nestled just above the 7-segment display is a person sensor module, and when it detects that a person is looking towards it, the clock will display an incorrect time, defeating both the purpose and the primary use case of a clock in one stroke.

The person sensor is a tiny board with a tiny camera that constantly does its best to determine whether a person is in view, and whether they are looking towards the sensor. It’s a good fit for a project like this, and it means that one can look at the clock from an oblique angle (out of view of the sensor) and see the correct time. But once one moves in front of it, the time changes. You can watch a brief video of it in action in this Twitter thread.

One interesting bit is that [Guy] uses an ESP32-based board to drive everything, but had some reservations about making a clock without an RTC. However, he found that simply syncing time over the network every 10 minutes or so using the board’s built-in WiFi was perfectly serviceable, at least for a device like this.
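The whole scheme fits in a few lines. Below is a hypothetical MicroPython sketch of the sync-over-WiFi approach and the lying-clock logic; the person-sensor query, the display driver, and the size of the “wrong” offset are all stand-ins of our own invention, not [Guy]’s actual code:

```python
# Hypothetical sketch: NTP sync instead of an RTC, plus the one trick.
import time
import random
import ntptime  # MicroPython built-in: sets the system clock from NTP

SYNC_INTERVAL = 600  # re-sync every 10 minutes, as in the build
last_sync = 0

def is_person_looking():
    # Stand-in for the person sensor query (the real module reports
    # face detections over I2C).
    return False

def show_time(hours, minutes):
    # Stand-in for the 7-segment display driver.
    print("{:02d}:{:02d}".format(hours, minutes))

while True:
    if time.time() - last_sync > SYNC_INTERVAL:
        ntptime.settime()  # assumes WiFi is already connected
        last_sync = time.time()

    now = time.localtime()
    if is_person_looking():
        # Someone's watching: lie by a random number of minutes.
        show_time(now[3], (now[4] + random.randint(5, 50)) % 60)
    else:
        show_time(now[3], now[4])  # nobody watching: tell the truth
    time.sleep(1)
```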

This reminds us a little of other clocks with subtly subversive elements, like the Vetinari Clock which keeps overall accurate time despite irregularly drifting in and out of sync. Intrigued by such ideas? You’re not alone, because there are even DIY hobby options for non-standard clock movements. Adding the ability to detect when someone is looking directly at such a device opens up possibilities, so keep it in mind if it’s time for a weekend project.