While sick with the flu a few months ago, [CroMagnon] had a vision. A face with eyes that would follow you – no matter where you walked in the room. He brought this vision to life in the form of Gawkerbot. This is no static piece of art. Gawkerbot’s eyes slowly follow you as you walk through its field of vision. Once the robot has fixed its gaze upon you, the eyes glow blue. It makes one wonder if this is an art piece, or if the rest of the robot is about to pop through the wall and attack.
Gawkerbot’s sensing system is rather simple. A PIR sensor detects motion in the room. If any motion is detected, two ultrasonic sensors which make up the robot’s pupils start taking data. Code running on an ATmega328 determines if a person is detected on the left or right, and moves the eyes appropriately.
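The left/right decision could be sketched like this. To be clear, this is a hypothetical illustration of the idea, not [CroMagnon]’s actual firmware (which runs as compiled code on the ATmega328); the function name, threshold values, and return labels are all our own assumptions.

```python
def gaze_direction(left_cm, right_cm, max_range_cm=200):
    """Decide where to point the eyes from the two pupil-mounted
    ultrasonic range readings (a smaller distance = a closer person)."""
    left_hit = left_cm < max_range_cm
    right_hit = right_cm < max_range_cm
    if left_hit and right_hit:
        # Person roughly centered if the two readings are close together.
        if abs(left_cm - right_cm) < 20:
            return "center"
        return "left" if left_cm < right_cm else "right"
    if left_hit:
        return "left"
    if right_hit:
        return "right"
    return "idle"  # nothing in range; the PIR gate would take over again

print(gaze_direction(80, 150))  # → left
```

The PIR sensor acts as a power-saving gate: the ultrasonic pupils only ping once something warm has moved in the room.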
[CroMagnon] used an old CD-ROM drive optics sled to move Gawkerbot’s eyes. While the motor is small, the worm drive has plenty of power to move the 3D-printed eyes and linkages. Gawkerbot’s main face is a 3D-printed version of a firefighter’s smoke helmet.
The ultrasonic sensors work, but it took quite a bit of software to tame their jittery, noisy data stream. [CroMagnon] is thinking of using PIR sensors on Gawkerbot 2.0. Ultrasonic transducers aren’t just for sensing. Given enough power, you can solder with them. Ultrasonics even work for wireless communications.
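One common way to tame a jittery ultrasonic stream is a sliding median filter, which simply ignores one-off spurious echoes. This is a generic sketch of that technique, not [CroMagnon]’s actual smoothing code:

```python
from collections import deque
from statistics import median

class MedianFilter:
    """Keep the last few readings and report their median, so a single
    wild reading (a bad echo) can never dominate the output."""
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def update(self, reading_cm):
        self.buf.append(reading_cm)
        return median(self.buf)

f = MedianFilter(window=5)
readings = [52, 51, 400, 53, 52]   # 400 cm is a spurious echo
smoothed = [f.update(r) for r in readings]
print(smoothed[-1])  # → 52
```

A median filter adds a little latency (the window has to fill), which is fine for slow-moving eyes but worth keeping in mind for faster sensing loops.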
Check out the video after the break to see Gawkerbot in action.
Continue reading “Gawkerbot is Watching You”
[Robby Cuthbert], an artist and designer based out of Fort Collins, Colorado, is creating stable cable tables that are simultaneously a feat of engineering and a work of art.
[Cuthbert’s] tables are held together by 1/16″ stainless steel cables whose oppositional tensions result in a structurally stable and visually appealing coffee table. In his video, [Cuthbert] leads us through his process for creating his tables, step by step. [Cuthbert] starts by cutting out bamboo legs on his CNC mill. He then drills holes in each leg for cables and mounts each leg on his custom table jig. Then, he attaches the stainless steel cabling, taking care to alternate tension direction. The cables are threaded through holes in the legs and affixed with copper crimps. After many cables, he has a mechanical structure that can support his weight and looks fantastic. All in all, [Cuthbert’s] art is a wonderful example of the intersection of art and engineering.
If we’ve whet your appetite, fear not, we have featured many tension-based art/engineering hacks before. You might be interested in these computer-designed portraits or, if the thought of knitting by hand gives you the heebie-jeebies, the Autograph, a string art printer, might be more your style.
Video after the break.
Continue reading “Making Tension Based Furniture”
If you’re producing documentation for a PCB project, you might as well make the board renders look good. But then, that’s a lot of work and you’re not an artist. Enter [Jan]’s new tool that takes KiCad board files, replaces each footprint with (custom) graphics, and provides a nice SVG representation, ready for labelling. If you like the output of a Fritzing layout, but have higher expectations of the PCB tool, this is just the ticket.
We all love [pighixx]’s pinout diagrams. Here’s his take on the Arduino Uno, for instance. It turns out that he does these largely by hand. That’s art for ya.
Sparkfun has taken a stab at replicating the graphical style for the pin labels, but then they toss in a photo of the real item. [Jan]’s graphic PCB generator fills in the last step toward almost putting [pighixx] out of a job. Get the code for yourself on GitHub.
An oasis in the desert is the quintessential image of salvation for the wearied wayfarer. At Burning Man 2016, Grove — ten biofeedback tree sculptures — provided a similar, interactive respite from the festival. Each tree has over two thousand LEDs, dozens of feet of steel tube, and two Teensy boards that read custom breath sensors to create festival magic.
Grove works like this: at your approach — detected by dual IR sensors — a mechanical flower blooms, meant to prompt investigation. As you lean close, the breath sensors in the daffodil-like flower detect whether you’re inhaling or exhaling, translating the input into a dazzling pulse of LED light that snakes its way down the tree’s trunk and up to the bright, 3W LEDs on the tips of the branches.
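The trunk effect boils down to animating a bright pulse along a strip of LEDs. Here is an illustrative sketch of that one piece; the function, LED count, and pulse shape are our assumptions, not Grove’s actual Teensy firmware, and motion of the pulse would come from advancing `position` each frame:

```python
def pulse_frame(position, n_leds=50, width=3, brightness=255):
    """Return per-LED brightness for one animation frame: full brightness
    at the pulse position, falling off to zero within `width` LEDs."""
    return [max(0, brightness - abs(i - position) * (brightness // width))
            for i in range(n_leds)]

frame = pulse_frame(position=10)
print(frame[10], frame[20])  # → 255 0
```

Triggering the animation on inhale versus exhale (and picking a direction for the pulse to travel) is then just a matter of gating this on the breath sensor’s sign.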
Debugging and last-minute soldering in the desert fixed a few issues before setup — no project is without its hiccups. The entire grove was powered by solar-charged, deep-cycle batteries meant to last from sunset to sunrise — or close enough if somebody forgot to hook the batteries up to charge.
Continue reading “An Interactive Oasis At Burning Man”
Musician [Mari Lesteberg] is making music that paints pictures. Or maybe she’s making pictures that paint music. It’s complicated. Check out the video (embedded below) and you’ll see what we mean. The result is half Chinese scroll painting, and half musical score, and they go great together.
Lots of MIDI recorders/players use the piano roll as a model for input — time scrolls off to the side, and a few illuminated pixels represent a note played. She’s using the pixels to paint pictures as well: waves on a cartoon river make an up-and-down arpeggio. That’s a (musical) hack. And she’s not the only person making MIDI drawings. You’ll find a lot more on reddit.
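The piano-roll-as-canvas idea can be sketched in a few lines: treat each column of a pixel grid as a time step and each row as a pitch, so a lit pixel becomes a note. This is an illustration of the concept only, not [Mari Lesteberg]’s actual tooling, and the base pitch is an arbitrary choice:

```python
def pixels_to_notes(grid, base_pitch=48):
    """grid[row][col] truthy = lit pixel; row 0 is the top of the drawing.
    Returns (time_step, midi_pitch) events, higher rows = higher pitches."""
    n_rows = len(grid)
    events = []
    for col in range(len(grid[0])):
        for row in range(n_rows):
            if grid[row][col]:
                events.append((col, base_pitch + (n_rows - 1 - row)))
    return events

# A tiny "wave": pixels rise then fall, giving an up-and-down arpeggio.
wave = [[0, 0, 1, 0, 0],
        [0, 1, 0, 1, 0],
        [1, 0, 0, 0, 1]]
print(pixels_to_notes(wave))  # → [(0, 48), (1, 49), (2, 50), (3, 49), (4, 48)]
```

Feed those events to any MIDI library and the drawn wave plays back as the arpeggio it depicts.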
Of course, one could do the same thing with silent pixels — just set a note to play with a volume of zero — but that’s cheating and no fun at all. As far as we can tell, you can hear every note that’s part of the scrolling image. The same cannot be said for music of the black MIDI variety, which aims to pack as many notes into a short period of time as possible. To our ears, it’s not as beautiful, but there’s no accounting for taste.
It’s amazing what variations we’re seeing in the last few years on the ancient piano roll technology. Of course, since piano rolls are essentially punch-cards for musical instruments, we shouldn’t be too surprised that this is all possible. Indeed, we’re a little bit surprised that new artistic possibilities are still around. Has anyone seen punch-card drawings that are executable code? Or physical piano rolls with playable images embedded in them?
Continue reading “MIDI Drawings Paint with Piano Keyboards”
Tech artist [Alexander Reben] has shared some work in progress with us. It’s a neural network trained on various famous people’s speech (YouTube, embedded below). [Alexander]’s artistic goal is to capture the “soul” of a person’s voice, in much the same way as death masks of centuries past. Of course, listening to [Alexander]’s Rob Boss is no substitute for actually watching an old Bob Ross tape — indeed it never even manages to say “happy little trees” — but it is certainly recognizable as the man himself, and now we can generate an infinite amount of his patter.
Behind the scenes, he’s using WaveNet to train the networks. Basically, the algorithm splits up an audio stream into chunks and tries to predict the next chunk based on the previous state. Some pre-editing of the training audio data was necessary — removing the laughter and applause from the Colbert track for instance — but it was basically just plugged right in.
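The training setup described above amounts to next-chunk prediction: slice the audio stream into (context, target) pairs where the network must predict each sample from the ones that came before it. Here is a deliberately simplified sketch of that data preparation, not WaveNet itself (the real thing operates on quantized samples through dilated convolutions):

```python
def make_training_pairs(samples, context_len=4):
    """Yield (context window, next sample) pairs from a 1-D audio stream."""
    return [(samples[i:i + context_len], samples[i + context_len])
            for i in range(len(samples) - context_len)]

audio = [0, 1, 4, 9, 16, 25, 36]   # stand-in for real audio samples
pairs = make_training_pairs(audio, context_len=4)
print(pairs[0])  # → ([0, 1, 4, 9], 16)
```

At generation time the process runs in reverse: the network predicts a sample, appends it to the context, and predicts again — which is how an endless stream of almost-Bob-Ross patter falls out.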
The network seems to over-emphasize sibilants; we’ve never heard Barack Obama hiss quite like that in real life. Feeding noise into machines that are set up as pattern-recognizers tends to push them to the limits. But in keeping with the name of this series of projects, the “unreasonable humanity of algorithms”, it does pretty well.
He’s also done the same thing with multiple speakers (also YouTube), in this case 110 people with different genders and accents. The variation across people leads to a smoother, more human sound, but it’s also not clearly anyone in particular. It’s meant to be continuously running out of a speaker inside a sculpture’s mouth. We’re a bit creeped out, in a good way.
We’ve covered some of [Alexander]’s work before, from the wince-inducing “Robot Bites Man” to the intellectual-conceptual “All Prior Art“. Keep it coming, [Alexander]!
Continue reading “Creepy Speaking Neural Networks”
Is [SpongeBob SquarePants] art? Opinions will differ, but there’s little doubt about how cool it is to render a pixel-mapped time-lapse portrait of Bikini Bottom’s most famous native son with a roving light painting robot.
Inspired by the recent trend of long exposure pictures of light-adorned Roombas in darkened rooms, [Hacker House] decided to go one step beyond and make a lighted robot with less random navigational tendencies. A 3D-printed frame and wheels carry a pair of steppers and a Raspberry Pi. An 8×8 Neopixel matrix on top provides the light. The software is capable of rendering both simple vector images and rastering across a large surface to produce full-color images. You’ll notice the careful coordination between movement and light in the video below, as well as the impressive turn-on-a-dime performance of the rover, both of which make the images produced so precise.
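The rastering trick boils down to slicing the source image into columns: as the rover sweeps across the floor, it displays one column of pixels at each x position while the camera’s long exposure accumulates the full picture. A minimal sketch of just the column-slicing step, with motion control and Neopixel driving omitted (and no claim that this matches [Hacker House]’s actual code):

```python
def image_to_columns(image):
    """image[row][col] = (r, g, b); returns one frame (a list of pixels,
    top to bottom) per x position, shown as the robot passes that spot."""
    rows, cols = len(image), len(image[0])
    return [[image[row][col] for row in range(rows)] for col in range(cols)]

tiny = [[(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 0)]]
frames = image_to_columns(tiny)
print(frames[0])  # → [(255, 0, 0), (0, 0, 255)]
```

The precision the video shows comes from keeping this frame advance locked to the wheel odometry, so each column lands exactly where it belongs on the floor.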
We’ve covered a lot of light-painting videos before, including jiggering a 3D-printer and using a hanging plotter to paint. But we haven’t seen a light-painter with an essentially unlimited canvas before. We’d also love to see what two or more of these little fellows could accomplish working together.
Continue reading “Light-Painting Robot Turns any Floor into Art”