Playing The Pixelflut

Every hacker gathering needs as many pixels as its hackers can get their hands on. Get a group together and you’ll be blinded by the amount of light on display. (We propose “a blinkenlights” as the taxonomic name for such a group.) At a large gathering, what better way to show off your elite hacking ability than a “competition” over who can paint an LED canvas the best? Enter Pixelflut, the multiplayer drawing canvas.

Pixelflut has been around since at least 2012, but it came to this author’s attention after editor [Jenny List] noted it in her review of SHA 2017. What was that beguiling display behind the central bar? It turns out it was a display driven by a server running Pixelflut. A Pixelflut server exposes a display which can be drawn on by sending commands over the network in an extremely simple protocol. There are just four ASCII commands supported by every server — essentially get pixel, set pixel, screen size, and help — so implementing either a client or server is a snap, and that’s sort of the point.
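
To give a sense of just how simple, here’s a minimal Python client sketch that floods a rectangle onto the canvas. The host and port are placeholders, and the “PX x y rrggbb” set-pixel form follows the commands described above:

```python
import socket

# Placeholder address; point this at an actual Pixelflut server.
HOST, PORT = "pixelflut.example.net", 1234

def flood_rectangle(x0, y0, width, height, rgb="ff0000"):
    """Paint a solid rectangle by streaming PX commands."""
    with socket.create_connection((HOST, PORT)) as sock:
        # Ask the server how big the canvas is (reply: "SIZE <w> <h>").
        sock.sendall(b"SIZE\n")
        print(sock.recv(64).decode().strip())

        # Batch many PX commands into one send to cut syscall overhead.
        commands = bytearray()
        for y in range(y0, y0 + height):
            for x in range(x0, x0 + width):
                commands += f"PX {x} {y} {rgb}\n".encode()
        sock.sendall(commands)

if __name__ == "__main__":
    flood_rectangle(0, 0, 100, 100)  # a 100x100 red square at the origin
```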

While the original implementations appear to have been written by [defnull] at the link at the top, Pixelflut is really more of a common protocol than a single implementation, and one “plays” it through a variety of minigames. When there is a display in a shared space, the game is who can control the most area by drawing the fastest, either by being clever or by consuming as much bandwidth as possible.

Then there is the game of who can write the fastest, most battle-hardened server in order to handle all that traffic without collapsing. To give a sense of scale, one installation at 36c3 reported that a truly gargantuan 0.5 petabytes of data were pushed at a peak rate of more than 30 gigabits/second, just painting pixels! That’s bound to bog down all but the most lithe server implementations. (“Flut” is German for “flood”.)
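
For a taste of the server side, here’s a bare-bones sketch using Python’s asyncio. A serious contender would be written much closer to the metal, and we’ve left out the HELP command and all error handling, but the parsing loop is the same idea:

```python
import asyncio

WIDTH, HEIGHT = 1280, 720
framebuffer = {}  # (x, y) -> "rrggbb"; a real server would use a flat array

async def handle_client(reader, writer):
    # Each client streams newline-delimited ASCII commands.
    while True:
        line = await reader.readline()
        if not line:  # client disconnected
            break
        parts = line.split()
        if not parts:
            continue
        if parts[0] == b"PX" and len(parts) == 4:
            # Set pixel: PX <x> <y> <rrggbb>
            framebuffer[(int(parts[1]), int(parts[2]))] = parts[3].decode()
        elif parts[0] == b"PX" and len(parts) == 3:
            # Get pixel: reply with its current color (default black).
            x, y = int(parts[1]), int(parts[2])
            color = framebuffer.get((x, y), "000000")
            writer.write(f"PX {x} {y} {color}\n".encode())
            await writer.drain()
        elif parts[0] == b"SIZE":
            writer.write(f"SIZE {WIDTH} {HEIGHT}\n".encode())
            await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 1234)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```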

While hacker camps may be on pause for the foreseeable future, writing a performant Pixelflut client or server seems like an excellent way to sharpen one’s skills while we wait for their return. For a video example check out the embed after the break. Have a favorite implementation? Tell us about it in the comments!


How Many Of You Are There, Really?

We’re now accustomed to hearing, “We’re all special in our own unique ways.” But what if we really aren’t all that unique? Many people think there are no more than two political opinions, maybe a handful of religious beliefs, and certainly no more than one way to characterize a hack. But despite the controversy in other aspects of life, at least we can all rely on the uniqueness of our individual names. Or can we?

Have you ever thought there were too many people named [insert name here]? Well, [Nicole] thought there were too many people who shared her name in her home country of Belgium, and she decided to make an art piece out of it.

She was able to find data on the first names of people in Belgium and wrote a Python script…er…used Excel to find the number of Nicoles in each zip code. She then created a 3D map of Belgium, divided into provinces, with the height of each province proportional to the number of Nicoles living there. It’s a pretty simple print job that just about any standard 3D printer can handle these days.
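
If you’d rather stick with the Python script, the counting step boils down to a few lines of pandas. This is a hypothetical sketch; the file and column names are made up for illustration:

```python
import pandas as pd

# Hypothetical input: one row per resident, with a first name and zip code.
df = pd.read_csv("belgium_first_names.csv")

nicoles = df[df["first_name"] == "Nicole"]
counts = nicoles.groupby("zip_code").size().sort_values(ascending=False)
print(counts.head(10))  # the ten most Nicole-dense zip codes

# Scale counts into print heights, e.g. 10 mm to 50 mm per region.
heights_mm = 10 + 40 * counts / counts.max()
```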

It’s not much of a “do something” hack, but it could make for a cool demotivational ornament that constantly reminds us just how unique we really are.

Happy hacking!


Art Generated From The Dubious Comments Section

[8BitsAndAByte] are back, and this time they’re taking on the comments section with art. They wondered whether they could take something as dubious as the comments section and redeem it as something more appealing: art.

They started with remo.tv, a tool they’ve used in other projects, to read comments from their video live feeds and extract random phrases. Each phrase is then run through text-to-speech and handed to a publicly available artificial intelligence algorithm that generates an image from a text description. They can specify art styles like modern, abstract, cubism, etc. to give the image a unique appeal. Finally, they send the image back to the original commenter, crediting them for their comment and ensuring some level of transparency.
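
A rough Python sketch of the pipeline might look something like this. The comment source and the image generator are stand-ins, since the real project glues remo.tv to a publicly available text-to-image model, but pyttsx3 is a real offline text-to-speech library:

```python
import random
import pyttsx3  # offline text-to-speech (pip install pyttsx3)

def pick_phrase(comments):
    """Grab a random phrase from the feed (stand-in for the remo.tv hookup)."""
    return random.choice(comments)

def generate_image(prompt, style="abstract"):
    """Stub for the public text-to-image model; swap in a real call here."""
    styled_prompt = f"{prompt}, {style} style"
    print(f"would render: {styled_prompt}")
    return "out.png"

comment = pick_phrase(["dog with a funny hat", "nice solder joints"])

engine = pyttsx3.init()  # read the comment aloud on the live stream
engine.say(comment)
engine.runAndWait()

image_path = generate_image(comment, style="cubism")
```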

We were a bit surprised that the phrase “dog with a funny hat” generated an image of a cat, so we think it’s fair to say that their AI engine could use a bit of work. But really, we could probably say that about AI as a whole.


Bringing Back The Fidget Toy Craze With The Magic Microcontroller Cube

[Rickysisodia] had a few dead ATmega128 chips lying around that he didn’t want to just throw away, so he decided to turn them into his own light-up fidget toy. The toy takes the form of a six-sided die small enough to hang on a keychain. He soldered an ATmega128 onto each side of the cube and added a few dots to give the toy the look of a functional die. We were pretty amazed by his level of dexterity; soldering those 0.8 mm-pitch leads together seems pretty tedious if you ask us.

He then wired up a simple, battery-powered tilt-switch LED circuit on perfboard that he was able to sneak inside the cube. He used a mercury switch which, as you may figure, uses a small bead of mercury to short two metal contacts inside the switch, completing the circuit and lighting the LED. We would suggest going with a non-mercury tilt switch just to avoid any possible contamination; you know us, anything that mitigates unnecessary disasters is a good route. But anyway, the die lights up a different color LED based on the orientation of the cube, and it even blinks.

This is a pretty cool hack for wowing your friends at your next PCB art meet-up. We’ll probably put this in the electronics art category, so it doesn’t get lumped in with those other ever-beloved fidget toys.


Filmmaking From Home With Projection Mapping

Stuck at home in self-quarantine, artist and filmmaker [Kira Bursky] had fewer options than normal for her latest film project. While a normal weekend film sprint would have involved collaborating with actors, set designers, and cinematographers in a frenzied attempt to finish in less than 48 hours, she instead chose to indulge in her curiosity for projection mapping, a technique that involves projecting visuals onto three-dimensional or flat surfaces.

In order for the images to map properly onto a surface, the surface itself first has to be mapped, so that the flat image can be transformed to produce the illusion of the light wrapping around the object. The technique is done in layers, in software similar to Photoshop, making it easier for the designer to organize the different interacting components in their animation.
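
That transform step boils down to a perspective warp. Here’s a minimal sketch using OpenCV, with invented corner coordinates standing in for the ones a calibration step would produce:

```python
import cv2
import numpy as np

# Load the flat artwork to be projected (placeholder file name).
art = cv2.imread("animation_frame.png")
h, w = art.shape[:2]

# Four corners of the flat image, and where those corners should land in
# the projector's output so the image appears to wrap the surface.
# (Target coordinates are invented; calibration would supply real ones.)
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[120, 80], [900, 140], [870, 600], [90, 560]])

# Compute the 3x3 homography and warp the image into projector space.
H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(art, H, (1024, 768))

cv2.imwrite("projector_frame.png", warped)
```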

[Kira] used a tool called Lightform to design her projections, which relies on a camera to calibrate the location of the surface and a projector to display the visuals. Her animated figures are drawn with loose lines and characterized by their slow gradients and ethereal movements. In the background of her film, a rhythmic sound plays while she brings the figures closer to view. Their outlines come into greater focus until the figures transform into her physical body, which also dances with the meandering lights.

Check out the short film below.


Elegant Shoji Lamps From Your 3D Printer

The gorgeous Shoji-style lamps you’re seeing here aren’t made of wood or paper. Beyond the LEDs illuminating them from within, the lamps are completely 3D printed. There aren’t any fasteners or glue holding them together either, as creator [Dheera Venkatraman] used authentic Japanese wood joinery techniques to make their components fit together like a puzzle.

While we’re usually more taken with the electronic components of the projects that get sent our way, we have to admit that in this case, the enclosure is really the star of the show. [Dheera] has included a versatile mounting point where you could put anything from a cheap LED candle to a few WS2812B modules, but otherwise leaves the integration of electronic components as an exercise for the reader.

All of the components were designed in OpenSCAD, which means it should be relatively easy to add your own designs to the list of included panel types. Despite the colorful details, you won’t need a multi-material printer to run them off either. Everything you see here was printed on a Prusa i3 MK3S in PETG. Filament swaps and careful design were used to achieve the multiple colors visible on some of the more intricate panels.

If the timeless style of these Japanese lanterns has caught your eye, you’ll love this beautiful sunrise clock we covered last year.

Recreating Paintings By Teaching An AI To Paint

The Timecraft project by [Amy Zhao] and her team uses machine learning to figure out how an existing painting may originally have been painted, stroke by stroke. In their paper, ‘Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings’, they describe how they trained a machine learning algorithm using existing time lapse videos of new paintings being created, allowing it to probabilistically generate the steps needed to recreate an already-finished painting.

The probabilistic model is implemented using a convolutional neural network (CNN) whose output is a time lapse video spanning many minutes. In the paper they reference how they were inspired by artistic style transfer, where neural networks are used to generate works of art in a specific artist’s style, or to create mash-ups of different artists.
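
The actual model is considerably more involved, but the core idea can be sketched in a few lines of PyTorch: a CNN looks at the finished painting plus the canvas so far, and predicts the next intermediate frame. The layer sizes here are arbitrary and the network is untrained, so treat this as the shape of the approach rather than the real thing:

```python
import torch
import torch.nn as nn

class NextFrameCNN(nn.Module):
    """Toy stand-in for Timecraft's synthesis network: given the finished
    painting and the canvas so far (6 input channels), predict the next
    intermediate frame (3 output channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, finished, canvas):
        return self.net(torch.cat([finished, canvas], dim=1))

model = NextFrameCNN()
finished = torch.rand(1, 3, 64, 64)  # the target painting
canvas = torch.zeros(1, 3, 64, 64)   # start from a blank canvas

# Roll the model forward to build up the frames of a time lapse.
frames = []
for _ in range(10):
    canvas = model(finished, canvas).detach()
    frames.append(canvas)
```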

A lot of the complexity comes from the large variety of techniques and materials used in the creation of a painting, such as the exact brush used and the type of paint. Some existing approaches have focused on these fine details, including physics-based simulation of the paints and brush strokes, but those come with significant caveats that Timecraft sought to avoid by taking a more high-level approach.

The time lapse videos generated during the experiment were evaluated through a survey run on Amazon Mechanical Turk, where the 158 participants were asked to compare the realism of the Timecraft videos against that of the real time lapse videos. Participants preferred the real videos, but mistook the Timecraft videos for the real ones about half the time.

Although perhaps not perfect yet, it does show how ML can be used to deduce how a work of art was constructed, and figure out the individual steps with some degree of accuracy.
