Easy Direction Finding Thanks To Quad SDRs

Direction finding has long been a pastime of the ham radio community. Fox hunts and other DF events have entertained many, as they swept their antennas hunting for a transmitter. As with rock and roll and flared pants, time changes all things, and [Corrosive] has been experimenting with a very modern way to go about direction finding with SDR.

The work is made possible through the use of Kerberos SDR, a device which is essentially four RTL-SDR radios operating in unison. By fitting these with the appropriate antennas and running the right calibrations, the hardware can be used as a powerful direction finding tool.

[Corrosive] demonstrates this ably by fitting the rig to his car and driving around on the hunt for a transmitter. Going after a P25 control channel, he walks through the hardware configuration needed to find the FM-modulated signal. The software part of the equation is integrated with GPS maps, so one can follow the bearing towards the signal source while data is collected. Over time, the software takes more samples until it builds up an expected location for the transmitter.
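That accumulation idea is simple enough to illustrate with a toy sketch. This is not the KerberosSDR software, and every number below is made up, but it shows how repeated bearings taken from a moving receiver can be piled onto a grid whose hottest cell marks the likely transmitter location:

```python
import numpy as np

CELL = 5.0                                   # metres per grid cell (assumed)
grid = np.zeros((200, 200))                  # covers a 1 km x 1 km search area

def add_bearing(grid, rx_east, rx_north, bearing_deg, sigma_deg=5.0):
    """Add a Gaussian-weighted vote along one bearing line from the receiver."""
    ys, xs = np.mgrid[0:grid.shape[0], 0:grid.shape[1]]
    d_east = xs * CELL - rx_east
    d_north = ys * CELL - rx_north
    cell_bearing = np.degrees(np.arctan2(d_east, d_north))     # compass bearing to cell
    err = (cell_bearing - bearing_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    grid += np.exp(-(err / sigma_deg) ** 2)

# Samples taken as the car drives around (positions in metres, bearings in degrees).
add_bearing(grid, 100, 100, 45)
add_bearing(grid, 400, 120, 15)
add_bearing(grid, 250, 600, 112)

iy, ix = np.unravel_index(grid.argmax(), grid.shape)
print("Estimated transmitter near ({:.0f} m E, {:.0f} m N)".format(ix * CELL, iy * CELL))
```

The three example bearings all point roughly at the same spot, so the votes reinforce each other there; everywhere else they average out, which is exactly why the heat map sharpens up as more samples come in.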

The setup is remarkably effective, and does most of the heavy lifting itself, leaving the user to simply drive the car. The heat mapping feature is also incredibly cool, and would look great in your next spy movie. We’ve featured Kerberos SDR before, and fully expect to see more great work on this platform. Video after the break.

Continue reading “Easy Direction Finding Thanks To Quad SDRs”

Qt Arrives For Small Computers

There was a time when writing embedded systems meant never having to deal with graphical user interfaces, and spending long hours trying to free up a dozen bytes of ROM to add a feature. Nowadays, an embedded system is likely to have a screen and what would have been a huge amount of memory even for a PC a scant decade ago. Qt has long been a popular choice for building software on desktop platforms, and — while not as popular — has even run on phones for a while. Now there’s Qt for MCUs which is clearly targeting the IoT market that everyone is trying to capture. You can see the glitzy video for the new product, below.

We generally like Qt, and the recent move has been towards a declarative, HTML-like markup language called QML instead of directly manipulating widgets. We guess that’s a good thing. However, Qt isn’t just for user interfaces. It provides a wide range of services in a straightforward way.
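Qt for MCUs itself is programmed in C++ with QML on top, but to illustrate that "more than user interfaces" point from the desktop side, here’s a tiny sketch using the Python bindings (PySide6, our assumption here and nothing to do with the MCU product) that leans on Qt’s event loop, timers, and signal/slot plumbing without any GUI in sight:

```python
import sys
from PySide6.QtCore import QCoreApplication, QTimer

app = QCoreApplication(sys.argv)      # headless Qt event loop, no widgets required

heartbeat = QTimer()
heartbeat.timeout.connect(lambda: print("tick"))   # signal/slot connection
heartbeat.start(1000)                              # fire once a second

QTimer.singleShot(5000, app.quit)                  # end the demo after five seconds
sys.exit(app.exec())
```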

Continue reading “Qt Arrives For Small Computers”

Temperature Logging On The Last Frontier

In Alaska, the impact of climate change is easy to see. Already the melting permafrost is shifting foundations and rocking roads. Hotter summers are also turning food caches from refrigerators into ovens.

A permanent food cache. Via Wikipedia

[rabbitcreek]’s friend builds food caches with kids as part of a program to teach them traditional native activities. Food caches are usually inside buried boxes or small cabins raised on poles. Both are designed to keep hangry bears out. As you might expect, monitoring the temperature at these remote sites is crucial, so the food doesn’t spoil. His friend wanted a set-and-forget temperature monitoring system that could collect data for eight months over the winter.

The Alaska Datalogger carries a pretty serious list of requirements. It has to be waterproof, especially as ice and snow turn to water. Ideally, it should sip power so that a single battery can last the whole season. Most importantly, it has to be cheap and relatively easy for kids to build.

This awesome little data spaceship is designed around an O-ring used in domestic water purifiers. The greased up O-ring fits between two 3D printed enclosure halves that are shut tight with nylon bolts. Two waterproof temperature probes extend from the case—one inside the cache and the other outside in the elements. It’s built around an Adafruit Feather Adalogger and powered by an 18650 cell. The data is collected by visiting the site and pulling the SD card to extract the text file. There’s really no other way because the sites are far out of cell coverage. Or is there?
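This isn’t [rabbitcreek]’s actual firmware, but the logging loop is simple enough to sketch in CircuitPython-flavoured Python, assuming the waterproof probes are DS18B20s sharing a 1-Wire bus and that the Adalogger’s SD card is the destination (pin choices and file names are assumptions):

```python
import time
import board
import busio
import digitalio
import storage
import adafruit_sdcard
from adafruit_onewire.bus import OneWireBus
from adafruit_ds18x20 import DS18X20

# Mount the SD card on the SPI bus (the chip-select pin depends on the board/wiring).
spi = busio.SPI(board.SCK, MOSI=board.MOSI, MISO=board.MISO)
cs = digitalio.DigitalInOut(board.D10)
storage.mount(storage.VfsFat(adafruit_sdcard.SDCard(spi, cs)), "/sd")

# Both waterproof probes share a single 1-Wire bus (assumed here on pin D5).
ow_bus = OneWireBus(board.D5)
probes = [DS18X20(ow_bus, device) for device in ow_bus.scan()]

while True:
    temps = ",".join("{:.2f}".format(p.temperature) for p in probes)
    with open("/sd/templog.csv", "a") as log:       # append so a reset never loses data
        log.write("{:.0f},{}\n".format(time.monotonic(), temps))
    time.sleep(15 * 60)                             # one reading every 15 minutes
```

A real eight-month deployment would put the microcontroller into deep sleep between readings rather than idling in time.sleep(), but the bookkeeping is the same.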

Though it probably wouldn’t survive the last frontier, this self-sufficient weather station is a simple solution for sunnier situations.

UbaBOT Mixes Up 50 Cocktails To Quench CCCamp Thirst

[Steffen Pfiffner’s] tent during the Chaos Communication Camp is full of happiness delivered by something greater than alcohol alone. He’s brought a robot bartender that serves up a show while mixing up one of about 50 cocktail recipes.

The project is the work of five friends from Lake Constance (Bodensee) in southern Germany, near the borders with Switzerland and Austria. It started, as many projects do, with some late night drinking. The five were toiling to mix beverages more complex than your most common fare, and decided to turn their labors instead to robot making.

Since 2012, the project has gone through five revisions, the most recent of which the team calls Uba BOT. Delightfully, the cup tray, which moves left and right across the front of the machine, is mounted on a strain gauge. This lets the robot sense the presence of a cup, so it avoids dispensing ingredients all over the bar itself. It also provides a feedback loop that verifies the amount of liquid and ice actually added to the cup. Once everything’s in the cup, a rotary milk frother lowers itself into position to stir things up a bit.

A Raspberry Pi is in control of eighteen pumps that dispense both liquor and mixers. The team is still trying to work out a way to reliably dispense carbonated mixers, which so far have been a challenge due to over-excited foam. The software was originally based on Bartendro, but has since taken on a life of its own, as these things often do. The first time you want a drink, you register an RFID tag and record your height, weight, and age, which lets the robot track your estimated blood alcohol content based on time and your number of visits. The firmware also tracks the state of each ingredient so it can alert a meat-based bar attendant when a bottle needs replacing.
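This isn’t the UbaBOT firmware, just a minimal sketch of the closed loop described above as it might look on a Raspberry Pi: switch a pump on and watch the strain gauge until the requested weight has actually landed in the cup. The pin numbers, the pump map, and the read_weight() helper are all assumptions.

```python
import time
import RPi.GPIO as GPIO

PUMP_PINS = {"rum": 5, "lime": 6, "syrup": 13}      # one GPIO pin per pump (assumed)

def read_weight():
    """Stand-in for the strain-gauge/ADC driver; should return grams on the tray."""
    raise NotImplementedError

def dispense(ingredient, grams, timeout=20):
    GPIO.setmode(GPIO.BCM)
    pin = PUMP_PINS[ingredient]
    GPIO.setup(pin, GPIO.OUT)
    start_weight = read_weight()
    GPIO.output(pin, GPIO.HIGH)                     # pump on
    deadline = time.time() + timeout
    try:
        while read_weight() - start_weight < grams:
            if time.time() > deadline:              # bottle empty? flag it for a refill
                raise RuntimeError(f"{ingredient} did not dispense in time")
            time.sleep(0.05)
    finally:
        GPIO.output(pin, GPIO.LOW)                  # always shut the pump off
```

Weighing the output rather than timing the pump is what lets the machine notice an empty bottle or a clogged line instead of silently serving a short pour.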

Join us after the break to see an explanation of what’s under the hood and to watch Uba BOT mix up a Mai Tai.

Continue reading “UbaBOT Mixes Up 50 Cocktails To Quench CCCamp Thirst”

Circuit Art Brings Out The Lifelike Qualities Of Electricity

Functional circuit sculptures have been gaining popularity with adventurous electronics artists who dare attempt the finicky art form of balancing structure and wire routing. [Kelly Heaton’s] sculptures, however, are on a whole other creative level.

Not only does she use the circuits powering her works as part of their physical structure, but there are also no controllers or firmware to be seen anywhere; everything is discrete and analog. In her own words, she tries to balance the “logical planning” of the engineering side with the “unfettered expression” of artworks. She does this by giving her circuits a lifelike quality, with disorganized circuit structures and trills and chirps that mimic those of wildlife.

One of her works, “Birds at My Feeder”, builds on an earlier piece, the analog “pretty bird”. Each of the birds uses a photoresistor to affect its analog-generated chirps, lending their calls both realistic and synthetic qualities. The full work adds a sizable breadboard-mounted sequencer built only from discrete components, which controls how the connected birds sing together in a pleasing chorus. As a bonus, the tangle of wires gives the impression that the sequencer doubles as the birds’ nest.

There are other works in this project as well, such as the “Moth Electrolier”, whose flexible circuit board she designed with structural integrity firmly in mind. Suffice it to say, her work is nothing short of brilliant engineering and artistic prowess, and you can see one more example of it after the break. However, if you’re looking for something more methodical and clean, you can check out the entries in the circuit sculpture contest we ran last year.

Continue reading “Circuit Art Brings Out The Lifelike Qualities Of Electricity”

Build A Fungus Foraging App With Machine Learning

As the 2019 mushroom foraging season approaches, it’s timely to combine my thirst for knowledge about low-level machine learning (ML) with a popular pastime that we enjoy here where I live. Just for the record, I’m not an expert on ML, and I’m simply inviting readers to follow me back down some rabbit holes that I recently explored.

But mushrooms, I do know a little bit about, so firstly, a bit about health and safety:

  • The app created here should be used with extreme caution, and its results should always be confirmed by a fungus expert.
  • Always test a fungus by initially eating only a very small piece and waiting for several hours to check that there is no ill effect.
  • Always wear gloves – it’s surprisingly easy to absorb toxins through your fingers.

Since this is very much an introduction to ML, there won’t be too much terminology and the emphasis will be on having fun rather than going on a deep dive. The system that I stumbled upon is called XGBoost (XGB). One of the XGB demos is for binary classification, and the data was drawn from The Audubon Society Field Guide to North American Mushrooms. Binary means that the app spits out a probability of ‘yes’ or ‘no’ and in this case it tends to give about 95% probability that a common edible mushroom (Agaricus campestris) is actually edible. 

The app asks the user 22 questions about their specimen and collates the answers as a series of letters separated by commas. At the end of the questionnaire, this data line is written to a file called ‘fungusFile.data’ for further processing.

XGB cannot accept letters as data, so each letter has to be mapped into ‘classic LibSVM format’, which looks like this: ‘3:218’. Next, this XGB-friendly data is split into two parts: one for training a model and one for subsequently testing that model.
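The demo handles this mapping with its own script, but the general idea boils down to something like the following hedged illustration (the file names and the exact index numbering here are assumptions, not the demo’s real code): every (question, answer-letter) pair gets its own feature index, and the features that are present are written out as "index:1".

```python
feature_index = {}

def to_libsvm(label, answers):
    """answers: one single-letter string per questionnaire question."""
    parts = [label]
    for question, letter in enumerate(answers):
        key = (question, letter)
        if key not in feature_index:
            feature_index[key] = len(feature_index) + 1
        parts.append("{}:1".format(feature_index[key]))
    return " ".join(parts)

with open("fungusFile.data") as src, open("fungus.libsvm", "w") as dst:
    for line in src:
        letters = line.strip().split(",")
        dst.write(to_libsvm("0", letters) + "\n")   # label is unknown at prediction time
```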

Installing XGB is relatively easy compared to higher-level deep learning systems, and it runs well on both Ubuntu 16.04 and a Raspberry Pi. I wrote the deployment app in bash, so there should not be any additional software to install. Before getting any deeper into the ML side of things, I highly advise installing XGB, running the app, and having a bit of a play with it.

Training and testing are carried out by running bash runexp.sh in the terminal, and it takes less than one second to process the 8124 lines of fungal data. At the end, bash spits out a set of statistics representing the accuracy of the training and also attempts to ‘draw’ the decision tree that XGB has devised. A quick look in the directory ~/xgboost/demo/binary_classification should now reveal a 0002.model file, ready for deployment with the questionnaire.
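For anyone who prefers to poke at it from Python rather than the shell script, roughly the same experiment looks like this with the Python API (the file names and parameters below mirror the demo’s defaults as I understand them, so treat them as assumptions):

```python
import xgboost as xgb
from sklearn.datasets import load_svmlight_files

# The demo's training and test splits, already in LibSVM format.
X_train, y_train, X_test, y_test = load_svmlight_files(
    ["agaricus.txt.train", "agaricus.txt.test"])
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

params = {"max_depth": 4, "eta": 1, "objective": "binary:logistic"}
booster = xgb.train(params, dtrain, num_boost_round=4,
                    evals=[(dtrain, "train"), (dtest, "test")])

print(booster.predict(dtest)[:5])      # probabilities for the positive class
booster.save_model("0002.model")
```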

I was interested to explore the decision tree a bit further and look at the way XGB weighted the different characteristics of the fungi. I eventually got some rough visualisations working in a Python-based Jupyter Notebook.
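The notebook itself isn’t reproduced here, but XGBoost’s built-in plotting helpers (which assume graphviz and matplotlib are installed) get most of the way to those rough tree and feature-importance plots:

```python
import xgboost as xgb
import matplotlib.pyplot as plt

booster = xgb.Booster()
booster.load_model("0002.model")                       # the model produced by runexp.sh

xgb.plot_tree(booster, num_trees=0)                    # draw the first tree of the ensemble
xgb.plot_importance(booster, importance_type="gain")   # rank the fungal characteristics
plt.show()
```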

Obviously this app is not going to win any Kaggle competitions, since the various parameters within the software need to be carefully tuned with the help of all the different software tools available. A good place to start is to tweak the maximum depth of the tree and the number of trees used. Depth = 4 and number = 4 seem to work well for this data. Other parameters include the feature importance type, for example: gain, weight, cover, total_gain or total_cover. These can be explored using tools such as SHAP.
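As a sketch of how that exploration might look, SHAP’s tree explainer can be pointed straight at the trained booster (the LibSVM file name is the demo’s, the dense conversion is just for convenience, and this is an illustration rather than my actual notebook):

```python
import shap
import xgboost as xgb
from sklearn.datasets import load_svmlight_file

X, _ = load_svmlight_file("agaricus.txt.train")

booster = xgb.Booster()
booster.load_model("0002.model")

explainer = shap.TreeExplainer(booster)
shap_values = explainer.shap_values(X.toarray())   # one contribution per feature per sample
shap.summary_plot(shap_values, X.toarray())        # overview of which features matter most
```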

Finally, this app could easily be adapted to other questionnaire-based systems, such as diagnosing a particular disease or deciding whether to buy a particular stock or share in the marketplace.

An even more basic introduction to ML goes into the baseline theory in a bit more detail – well worth a quick look.

Looking Around Corners With F-K Migration

The concept behind non-line-of-sight (NLOS) imaging seems fairly easy to grasp: a laser bounces photons off a surface, and those photons illuminate objects that are within sight of that surface but not of the imaging equipment. The photons that are then reflected or refracted by the hidden object make their way back to the laser’s location, where they are captured and processed to form an image. Essentially, this allows one to use any surface as a mirror to look around corners.

The main disadvantages of this method have been low resolution and high susceptibility to noise. This led a team at Stanford University to experiment with ways to improve it. As detailed in a Tech Briefs interview with graduate student [David Lindell], a major improvement came from an ultra-fast shutter that blocks out most of the photons returning from the illuminated wall itself, preventing the photons reflected by the hidden object from being drowned out by this noise.

The key to getting the desired imaging quality, including with glossy and otherwise hard-to-image objects, was this f-k migration algorithm. As explained in the video embedded after the break, the team looked at methods used in the field of seismology, where vibrations are used to image the inside of the Earth’s crust, as well as at synthetic aperture radar and similar techniques. The resulting algorithm uses a sequence of Fourier transformation, spectrum resampling and interpolation, and an inverse Fourier transform to process the received data into a usable image.
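The real implementation lives in the Stanford group’s code release; purely as a rough numpy sketch of that transform, resample, inverse-transform flow (the dispersion relation used for the resampling below is a simplified stand-in, not the paper’s exact formulation), it looks something like this:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def fk_migrate(meas, dx, dt, c=3e8):
    """meas: time-resolved wall measurements, shape (nt, ny, nx)."""
    nt, ny, nx = meas.shape

    # 1. Forward FFT over the time axis and both wall axes (shifted so that the
    #    frequency axes are monotonically increasing for the interpolator).
    spec = np.fft.fftshift(np.fft.fftn(meas))
    kt = np.fft.fftshift(np.fft.fftfreq(nt, dt)) * 2.0 / c   # round-trip depth wavenumber
    ky = np.fft.fftshift(np.fft.fftfreq(ny, dx))
    kx = np.fft.fftshift(np.fft.fftfreq(nx, dx))

    # 2. Stolt-style resampling: for each output depth wavenumber, look up the
    #    corresponding point on the measured spectrum (simplified dispersion relation).
    KZ, KY, KX = np.meshgrid(kt, ky, kx, indexing="ij")
    kt_query = np.sign(KZ) * np.sqrt(KZ ** 2 + KY ** 2 + KX ** 2)
    pts = np.stack([kt_query, KY, KX], axis=-1)
    re = RegularGridInterpolator((kt, ky, kx), spec.real,
                                 bounds_error=False, fill_value=0.0)(pts)
    im = RegularGridInterpolator((kt, ky, kx), spec.imag,
                                 bounds_error=False, fill_value=0.0)(pts)

    # 3. Inverse FFT back to the spatial domain gives the reconstructed volume.
    return np.abs(np.fft.ifftn(np.fft.ifftshift(re + 1j * im)))
```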

This is not a new topic; we covered a simple implementation of this all the way back in 2011, as well as a project by UK researchers in 2015. This new research shows obvious improvements, making this kind of technology ever more viable for practical applications.

Continue reading “Looking Around Corners With F-K Migration”