Visual cryptography is one of those unusual ideas that looks promising at first, but turns out to be fraught with problems. The concept is straightforward enough: an image to be encrypted is sampled, and a series of sub-pixel patterns is produced and distributed across multiple separate images. When the individual images are printed onto transparent film and all films in the set are brought into alignment, an image emerges from the randomness. Without at least a minimum number of such images, the original cannot be resolved. Well, sort of. [anfractuosity] wanted to play with the concept of visual cryptography in a slightly different medium, that of a set of metal plates, shaped as a set of keyrings.
Metal blanks were laser cut, with the image formed by light transmitted through coincident holes in both plates of a pair when they are correctly aligned. What, we hear you ask, is the problem with this cryptography technique? Well, one issue is that of faked messages. Given either one of the keys in a pair, a malicious third party can construct a matching key that encodes an entirely different message, then substitute it for the second key, duping both original parties. Obviously this requires both parties to be physically compromised, but neither would necessarily notice the substitution if neither knew the originally encrypted message. For those interested in digging a little deeper, do check out this classic paper by Naor and Shamir [pdf] of the Weizmann Institute. Still, despite the issues, as a visual hack it's a pretty fun technique!
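The underlying (2, 2) scheme can be sketched in a few lines. This is a minimal illustration, assuming a simplified two-subpixel expansion (the Naor-Shamir construction in the paper uses larger subpixel blocks): the first share is pure randomness, and the second share either copies or complements it depending on the secret pixel, so stacking the transparencies reveals the image.

```python
import random

# Each secret pixel expands to a pair of subpixels. For a white pixel
# both shares get the same pair; for a black pixel they get
# complementary pairs, so stacking (logical OR of opacity) yields an
# all-black pair for black pixels and a half-black pair for white ones.
PATTERNS = [(0, 1), (1, 0)]  # 1 = opaque subpixel, 0 = clear

def make_shares(secret):
    """secret: 2-D list of 0 (white) / 1 (black) pixels."""
    share_a, share_b = [], []
    for row in secret:
        row_a, row_b = [], []
        for pixel in row:
            pat = random.choice(PATTERNS)      # share A is pure noise
            row_a.append(pat)
            # Share B copies the pattern for white, complements for black.
            row_b.append(pat if pixel == 0 else tuple(1 - s for s in pat))
        share_a.append(row_a)
        share_b.append(row_b)
    return share_a, share_b

def stack(share_a, share_b):
    """Overlay two transparencies: an opaque subpixel in either wins."""
    return [[tuple(a | b for a, b in zip(pa, pb))
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(share_a, share_b)]

secret = [[1, 0], [0, 1]]
a, b = make_shares(secret)
overlay = stack(a, b)
# Black pixels stack to (1, 1); white pixels keep one clear subpixel.
```

Either share alone is uniformly random, which is exactly why the faked-message attack works: nothing ties a single share to one particular secret.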
Eyes are windows into the soul, the old saying goes. They are also pathways into the mind, as much of our brain is involved in processing visual input. This dedication to vision is partly why much of AI research is likewise focused on machine vision. But do artificial neural networks (ANN) actually work like the gray matter that inspired them? A recently published research paper (DOI: 10.1126/science.aav9436) builds a convincing argument for “yes”.
Neural nets were so named because their organization was inspired by biological neurons in the brain. But as we learned more and more about how biological neurons work, we also discovered that artificial neurons aren't very faithful digital copies of the original. This cast doubt on whether machine vision neural nets actually function like their natural inspiration, or whether they work in an entirely different way.
This experiment took a trained machine vision network and analyzed its internals. Armed with this knowledge, images were created and tailored for the purpose of triggering high activity in specific neurons. These responses were far stronger than what occurs when processing normal visual input. These tailored images were then shown to three macaque monkeys fitted with electrodes monitoring their neuron activity, which picked up similarly strong neural responses atypical of normal vision.
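The image-tailoring step is a form of activation maximization: gradient ascent on the input until a chosen unit fires far above its normal range. A toy sketch of the loop, assuming a made-up one-layer "network" with invented sizes and learning rate (the study used a trained deep vision model, which this does not attempt to reproduce):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: one linear layer with a ReLU.
# 8 "neurons" looking at a 16-pixel "image" (all sizes hypothetical).
W = rng.standard_normal((8, 16))

def activation(img, unit):
    """ReLU response of one unit to an input image."""
    return max(0.0, W[unit] @ img)

# Gradient ascent on the input image to drive one unit's activation
# far above what ordinary inputs produce.
img = rng.standard_normal(16) * 0.01
unit = 3
for _ in range(100):
    # For a ReLU unit the input gradient is just the weight row while
    # the unit is active; nudge gently along it otherwise.
    grad = W[unit] if activation(img, unit) > 0 else 0.1 * W[unit]
    img += 0.1 * grad
    img = np.clip(img, -1.0, 1.0)  # keep the "image" in a valid range
```

After the loop, `img` has saturated toward the unit's preferred pattern, and its activation dwarfs the response to any unoptimized input, which is the effect the macaque electrodes picked up.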
Manipulating neural activity beyond its normal operating range via tailored imagery is the Hollywood portrayal of mind control, but we're not at risk of input-injection attacks on our brains. This data point gives machine learning researchers confidence their work still has relevance to its biological source material, and neuroscientists are excited about the possibility of exploring brain functions without invasive surgical implants. Artificial neural networks could end up helping us better understand what happens inside our brains, bringing the process full circle.
Learning assembly is very important if you want a grasp of how a computer truly works under the hood. VisUAL is a very capable ARM emulator for those interested in learning ARM assembly.
In addition to supporting a large subset of ARM instructions, the emulator presents execution through a series of elaborate and instructive animations that help visualise the flow of data to and from registers, any changes made to flags, and any branches taken. It also packs very useful animations to help grasp some of the trickier instructions, such as shifts and stack manipulations.
As it was designed specifically as a teaching tool at Imperial College London, the GUI is very friendly: syntax errors are highlighted, and an example of the correct syntax is shown alongside.
You can also do the usual things you would expect from any emulator, such as single step through execution, set breakpoints, and view data in different bases. It even warns you of any possible infinite loops!
That being said, lugging around such an extravagant GUI comes at a price: programs that consume a few hundred thousand cycles hog far too much RAM and should instead be run in the supported headless mode.
How hot is the water coming out of your tap? Knowing that the water in their apartment gets “crazy hot,” redditor [AEvans28] opted to whip up a visual water temperature display to warn them off when things get a bit spicy.
This neat little device is sequestered away inside an Altoids mint tin, an oft-used, multi-purpose case for makers. Inside sits an ATtiny85 microcontroller, programmed via an Arduino UNO and calibrated to a household temperature scale that runs from dark blue up to flashing red, with additional room for a switch. The 10 kΩ NTC thermistor and RGB LED are functionally strapped to the kitchen faucet using electrical tape. The setup is responsive and clearly shows how quickly [AEvans28]'s water heats up.
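The conversion from thermistor reading to warning colour is a classic beta-equation calculation. Here is a sketch of the idea in Python; the beta value, divider wiring, and colour thresholds below are all assumptions for illustration, not values from [AEvans28]'s build:

```python
import math

# Hypothetical component values: a 10 kΩ NTC thermistor with
# beta = 3950 K (a common part, assumed), in a divider with a
# 10 kΩ fixed resistor, read by the ATtiny85's 10-bit ADC.
BETA = 3950.0
R_NOMINAL = 10_000.0   # thermistor resistance at 25 °C
T_NOMINAL = 298.15     # 25 °C in kelvin
R_SERIES = 10_000.0    # fixed divider resistor
ADC_MAX = 1023

def adc_to_celsius(adc):
    """Beta-equation conversion, thermistor on the low side of the divider."""
    resistance = R_SERIES * adc / (ADC_MAX - adc)
    inv_t = 1.0 / T_NOMINAL + math.log(resistance / R_NOMINAL) / BETA
    return 1.0 / inv_t - 273.15

def temp_to_colour(celsius):
    """Map temperature onto a simple blue-to-red warning scale
    (threshold values are made up for this sketch)."""
    if celsius < 30:
        return "dark blue"
    if celsius < 45:
        return "green"
    if celsius < 60:
        return "red"
    return "flashing red"   # scald territory
```

With matched 10 kΩ resistances, a mid-scale ADC reading lands right around 25 °C, which is a handy sanity check when wiring one of these up.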
A lot of the groundwork for binary representation was laid in the 1800s. Boolean algebra was developed by George Boole, but a less obvious binary invention appeared around the same time: the Braille writing system. Using a system of raised dots (essentially 1s and 0s), visually impaired people have been able to read using their sense of touch. In the modern age of fast-moving information, however, static printed Braille can't keep up. A number of people have been working on refreshable Braille displays, including [Madaeon], who has created a modular refreshable Braille display.
The idea is to recreate the Braille cell with a set of tiny solenoids. The cell is a set of dots, each of which can be raised or lowered in a particular arrangement to represent a letter or other symbol. With a set of solenoids, this can be accomplished rather rapidly. [Madaeon] has already prototyped these minuscule controllable dots using the latest 3D printing and laser cutting methods and is about ready to put together his first full Braille character.
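Driving such a cell from software is pleasantly simple, since a Braille cell really is just six bits. A sketch of the mapping, using the standard dot numbering (1 to 3 down the left column, 4 to 6 down the right); the solenoid interface itself is hypothetical:

```python
# Standard six-dot Braille cell, dots numbered 1-3 down the left
# column and 4-6 down the right. Each letter becomes six booleans
# that could drive six solenoids directly (pin mapping assumed).
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5},
    "i": {2, 4}, "j": {2, 4, 5},
    # ...remaining letters follow the same pattern
}

def cell_state(letter):
    """Return six booleans: True = solenoid raised for that dot."""
    dots = BRAILLE_DOTS[letter]
    return [d in dots for d in range(1, 7)]

# 'd' raises dots 1, 4 and 5:
# cell_state("d") -> [True, False, False, True, True, False]
```

Refreshing a display is then just a matter of latching a new six-bit state into each cell, which is why fast actuators like solenoids are so appealing here.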
While this isn’t quite ready for a full-scale display yet, the fundamentals look like a solid foundation for building one. This all comes hot on the heels of perhaps the most civilized patent disagreement in history, concerning a similar Braille display. Hopefully all the discussion and hacking of Braille displays will bring the cost down enough that anyone who needs one will easily be able to obtain and use one.
As we all go about our day to day activities, it’s easy to get lost in technology and take for granted things that have slowly evolved over long periods of time. Take for instance the mouse on your desk. Whether it’s a standard 2-button mouse with a scroll wheel or a magic mouse with no buttons at all, we’re all a bit spoiled when you think about it.
Dvice recently published a visual history of the computer mouse, which is quite interesting. The first pointing device that relied on hand motions to move a cursor was created by the Royal Canadian Navy in 1952. This trackball device, which predates all other mechanical pointing devices, was crafted using a 5-pin bowling ball and an array of mechanical encoders that tracked the ball’s movement.
As time went on, other mouse-type devices came and went, but it was 30 years ago yesterday that Xerox unveiled the world’s first optical mouse at its PARC facility. The mouse used LEDs and optical sensors along with specialized mouse pads to track the user’s movements. The tech is primitive compared to today’s offerings, but it’s a nice reminder of the humble beginnings of something you use every single day.
Be sure to swing by the Dvice site and take a look at how the mouse has evolved over the years – it’s a great way to kill a few minutes.
Advanced Beauty is a collection of 18 “sound sculptures” pairing artists and programmers to create collaborative works visualizing sound. The styles run a broad range, from fluid simulations to manipulated cel animation. The demos were built using Processing. While all of these were built using human input, we see potential for them to help improve standard music visualizers, hopefully bringing out more information about what’s actually being played. Below is just one of the videos in the series. You can find more on Vimeo. Continue reading “Advanced Beauty Generative Video Art”→