So you just got something like an Arduino or Raspberry Pi kit with a few sensors. Setting up temperature or motion sensors is easy enough. But what are you going to do with all that data? It’s going to need storage, analysis, and summarization before it’s actually useful to anyone. You need a dashboard!
But even before displaying the data, you’re going to need to store it somewhere, and that means a database. You could just send all of your data off into the cloud and hope that the company that provides you the service has a good business model behind it, but frankly the track records of even the companies with the deepest pockets and best intentions don’t look so good. And you won’t learn anything useful by taking the easiest way out anyway.
Instead, let’s take the second-easiest way out. Here’s a short tutorial to get you up and running with a database backend on a Raspberry Pi and a slick dashboard on your laptop or cellphone. We’ll be using scripts and Docker to automate as many things as possible. Even so, along the way you’ll learn a little bit about Python and Docker, but more importantly you’ll have a system of your own for expansion, customization, or simply experimenting with at home. After all, if the “cloud” won’t let you play around with their database, how much fun can it be, really?
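The core of that pipeline fits in a few lines. This isn't the tutorial's actual code, just a minimal sketch of the storage idea using Python's built-in sqlite3 module (the real setup swaps in the Dockerized database, but the shape of the problem is the same):

```python
import sqlite3
import time

# Hypothetical schema: one row per reading (sensor name, value, timestamp).
conn = sqlite3.connect(":memory:")  # a file path on the Pi in practice
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL, ts REAL)"
)

def log_reading(sensor, value):
    """Store one sensor reading with the current time."""
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?)",
        (sensor, value, time.time()),
    )
    conn.commit()

log_reading("temperature", 21.5)
log_reading("temperature", 21.7)

# A dashboard backend would issue queries like this for its summary view:
avg = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'temperature'"
).fetchone()[0]
print(avg)
```

The `readings` table and `log_reading` helper are illustrative names, not part of the tutorial itself.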
Continue reading “Howto: Docker, Databases, and Dashboards to Deal with Your Data”
Good science fiction has sound scientific fact behind it, and when Tony Stark first made his debut on the big screen with design tools that worked at the wave of a hand, makers and hackers were not far behind with DIY solutions. Over the years the ideas have become much more polished, as we can see with this Gesture Recognition with PIR sensors project.
The project uses the TPA81 8-pixel thermopile array, which detects the change in heat levels at 8 adjacent points. An Arduino reads these temperature points over I2C, and then a simple thresholding function is used to detect the movement of the fingers. These movements are then used to do a number of things, including turning the volume up or down as shown in the image alongside.
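The thresholding idea is simple enough to sketch in a few lines. This is a guess at the approach, not the project's actual code: track which of the 8 pixels is warm in successive frames, and read the drift of that warm spot as a swipe (the ambient and threshold values here are made up):

```python
AMBIENT = 22.0    # assumed room temperature in degrees C
THRESHOLD = 4.0   # degrees above ambient that count as "finger present"

def warm_pixel(frame):
    """Index of the hottest of the 8 pixels if it crosses the threshold."""
    hottest = max(range(len(frame)), key=lambda i: frame[i])
    return hottest if frame[hottest] - AMBIENT > THRESHOLD else None

def detect_swipe(frames):
    """Return 'up', 'down', or None from the drift of the warm spot."""
    hits = [warm_pixel(f) for f in frames]
    hits = [h for h in hits if h is not None]
    if len(hits) < 2:
        return None
    if hits[-1] > hits[0]:
        return "up"      # e.g. volume up
    if hits[-1] < hits[0]:
        return "down"    # volume down
    return None

# A finger sweeping across the sensor from pixel 1 toward pixel 6:
frames = [
    [22, 28, 22, 22, 22, 22, 22, 22],
    [22, 22, 22, 28, 22, 22, 22, 22],
    [22, 22, 22, 22, 22, 22, 28, 22],
]
print(detect_swipe(frames))  # up
```

In the real project this logic runs on the Arduino in C, with the frames arriving over I2C rather than hard-coded.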
The brilliant part is that the TPA81 8-pixel sensor has been around for a number of years. It is a bit expensive, but it can detect small thermal variations, such as a candle flame, at up to 2 meters. More recent parts, such as the Panasonic AMG8834 with its 8×8 grid of such sensors, are much more capable for your hacking/making pleasure, but come with an increased price tag.
This technique is not just limited to gestures, and can be used in Heat-Seeking Robots that can very well be trained to follow the cat around just to annoy it.
AI today is like a super-fast kid going through school whose teachers need to be at least as quick, if not smarter. In an astonishing turn of events, a satellite-image-to-map conversion algorithm was found hiding a cheat sheet of sorts while generating maps, making it appear as if it had ‘learned’ to do the reverse conversion effectively [PDF].
The CycleGAN is a network that excels at learning image-to-image transformations, such as converting any old photo into one that looks like a Van Gogh or a Picasso, or taking the image of a horse and adding stripes to make it look like a zebra. Once trained, the CycleGAN can run the mapping in reverse as well, for example taking a map and converting it into a satellite image. There are a number of ways this can be very useful, but it was in this task that an experiment at Google went wrong.
A mapping system started to perform too well, and it was found that the system was not only able to regenerate images from maps but also add details like exhaust vents and skylights that would be impossible to predict from just a map. Upon inspection, it was found that the algorithm had learned to satisfy its learning parameters by hiding the image data in the generated map. This was invisible to the naked eye, since the data took the form of small color changes that would only be detected by a machine. How cool is that?!
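A toy version of the same trick is classic least-significant-bit steganography, which stashes one image's data in another's low-order bits, where the color change is far too small for the eye to notice. This sketch works on single 8-bit grayscale values rather than whole images, and is only an analogy for what the network learned on its own:

```python
def hide(cover_pixel, secret_pixel):
    """Keep the cover's top 4 bits, stash the secret's top 4 bits below."""
    return (cover_pixel & 0xF0) | (secret_pixel >> 4)

def reveal(stego_pixel):
    """Recover an approximation of the secret from the low bits."""
    return (stego_pixel & 0x0F) << 4

cover, secret = 200, 57          # two 8-bit grayscale values
stego = hide(cover, secret)
print(stego - cover)             # tiny: at most 15 brightness levels
print(reveal(stego))             # within 16 levels of the original secret
```

The CycleGAN's encoding was subtler than a fixed bit split, but the principle, payload hidden in imperceptible color changes, is the same.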
This is similar to something called an ‘Adversarial Attack’, where tiny amounts of hidden data in an image or other data set cause an AI to produce erroneous output. Changing a small number of pixels can cause an AI to interpret a panda as a gibbon, or the ocean as an open highway. Fortunately, there are strategies to thwart such attacks, but nothing is perfect.
You can do a lot with AI, such as reliably detecting objects on a Raspberry Pi, but with facial recognition possibly violating privacy, some techniques to fool AI might actually come in handy.
Intel just announced their new Sunny Cove architecture, which comes with a lot of new bells and whistles. The Intel processor line-up has been based on the Skylake architecture since 2015, so the new architecture is a breath of fresh air for the world’s largest chip maker. They’ve been in the limelight this year after the hardware vulnerabilities known as Spectre and Meltdown were exposed. The new designs have, of course, been patched against those weaknesses.
The new architecture (said to be part of the Ice Lake-U CPU) comes with a lot of new promises such as faster cores, five allocation units, and upgrades to the L1 and L2 caches. There is also support for AVX-512, the Advanced Vector Extensions instruction set, which will improve performance for neural networks and other vector arithmetic.
Another significant change is the support for 52 bits of physical address space and 57 bits of linear address space. Today’s x64 CPUs can only use bits 0 through 47, for an address space spanning 256 TB. The additional bits mean a bump to a whopping 4 PB of physical memory and 128 PB of virtual address space.
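The arithmetic behind those figures is just powers of two, and is easy to sanity-check:

```python
TB = 2 ** 40  # one terabyte (binary)
PB = 2 ** 50  # one petabyte (binary)

print(2 ** 48 // TB)  # 256  -> 48 address bits give 256 TB
print(2 ** 52 // PB)  # 4    -> 52 physical bits give 4 PB
print(2 ** 57 // PB)  # 128  -> 57 linear bits give 128 PB
```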
The new offering was demoed on the company’s 10 nm process, which incidentally is the same as the previously launched Cannon Lake. The new processors are due in the second half of 2019 and are being heavily marketed as a boon for the cryptography and artificial intelligence industries. The claim is that, for AI, the memory-to-CPU distance has been reduced for faster access, and that special cryptography-specific instructions have been added.
Never underestimate the ability of makers to overthink and over-engineer the simplest of problems while demonstrating human ingenuity. The RGB LED sign made by [Hans and team] over at the [Hackheim hackerspace] in Trondheim is a testament to this fact.
As you would expect, WS2812 RGB LEDs illuminate the sign. In this particular construction, an individual strip is responsible for each character. Powered by an ESP32 running FreeRTOS, the sign communicates using MQTT, and each letter gets a copy of the 6 × 20 framebuffer that represents the color pattern to be displayed. A task on the ESP32 calculates the color value to be displayed by each LED.
The real question is, how do you calibrate the distributed strings of LEDs such that LEDs on adjacent letters of the sign display an extrapolated value? The answer is to use OpenCV to map the LEDs from their two-dimensional physical layout into a lookup table. A Python script sends a command to illuminate a single LED, captures an image, and uses OpenCV to record the position of the lit LED. This is repeated for every LED to generate a map that is used in the ESP32 firmware. How cool is that?
And if you are wondering about the code, it is up on [GitHub], and we would love to see someone take this up a level. The calibration code, as well as the remote client and ESP32 code, is all there for your hacking pleasure.
It’s been a while since we have seen OpenCV in action, like with the Motion Tracking Turret and Face Recognition projects. The possibilities seem endless. Continue reading “An over-engineered LED Sign Board”
One of the most awesome things about having a 3D printer is that you can create almost anything, including parts for the 3D printer itself. Different materials give power to your imagination and allow you to go beyond the 3D printed vase. So much so that one maker has gone as far as 3D printing the bearings as well as the axis screws and nuts, and it works!
The RepRap project was the first to incorporate 3D printed parts to make a printer self-replicating to a certain extent. The clamps and mounts could be easily printed; however, this project uses a 3D printed frame as well as two linear bearings for the y-axis and z-axis and one for the x-axis. The y-axis is driven by a 3D printed rack-and-pinion, while the z-axis rides on 3D printed screws and nuts. So basically, the servo motors, extruder/hotend, and limit switches with mounting screws are the only parts that need to be bought at the store.
The motors do run hot, causing their mounts to get soft, but heat sinks are expected to resolve the issue. This one is not designed for accuracy, though it can be a great resource for budding engineers and hackers to get their feet wet customizing 3D printers. Check out the video for a demo.
From 3D printed guitars to RC Planes, there is a lot you can do with micro-manufacturing and all we need now is a 3D printed motor to get things rolling. Continue reading “The Most-3D-Printed 3D Printer”
It’s the 21st century, and according to a lot of sci-fi movies we should have perfected AI by now, right? Well, we are getting there, and this project from a group of Cornell University students, titled “FPGA kNN Recognition”, is a graceful attempt at facial recognition.
For the uninitiated, the k-nearest neighbors (kNN) algorithm is a very simple classification algorithm that uses the similarity between known data sets and the data point being examined to predict where that data point belongs. In this project, the authors use a camera to take an image and then save its histogram instead of the entire image. To train the system, the camera takes mug shots of sorts and builds a database of histograms tagged as belonging to the same face. This process is repeated for a number of faces, and it is shown to be relatively quick in the accompanying video.
The classification, or ‘guess who’, process takes an image from the camera and compares it with all the faces already stored. The system selects the one with the highest similarity, and the results claimed are pretty fantastic, though that is not the brilliant part. The implementation is done on an FPGA, which means the whole process has been pipelined to reduce computation time. This makes the project worth a look, especially for people looking into FPGA-based development. There are hardware implementations of a k-distance calculator, a sorter, and a selector. Be sure to read through the text for the sorting algorithm, as we found it quite interesting.
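The classification step is easy to picture in software before thinking about how to pipeline it in hardware. This is a generic kNN sketch over toy 8-bin histograms, not the students' design; the names, bin count, and distance metric are all our choices for illustration:

```python
import numpy as np

# Tiny training "database": brightness histograms tagged per face.
train = {
    "alice": [np.array([9, 1, 0, 0, 0, 0, 0, 0]),
              np.array([8, 2, 0, 0, 0, 0, 0, 0])],
    "bob":   [np.array([0, 0, 0, 0, 0, 0, 2, 8]),
              np.array([0, 0, 0, 0, 0, 0, 1, 9])],
}

def classify(hist, k=3):
    """Vote among the k stored histograms nearest to the query."""
    dists = []
    for name, hists in train.items():
        for h in hists:
            dists.append((np.linalg.norm(hist - h), name))
    dists.sort(key=lambda d: d[0])          # the FPGA pipelines this sort
    votes = [name for _, name in dists[:k]]
    return max(set(votes), key=votes.count)

query = np.array([9, 1, 0, 0, 0, 0, 0, 0])
print(classify(query))  # alice
```

The FPGA version computes the distances, sorting, and selection in dedicated hardware stages rather than a Python loop, which is where the speed comes from.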
Arduino recently released the Arduino MKR Vidor 4000 board, which has an onboard FPGA, and there are many open-source boards out there in the wild that you can easily get started with today. We hope to see some of these in conference badges in the coming years.
Continue reading “Quick Face Recognition With An FPGA”