Worried About Bats In Your Belfry? A Tale Of Two Bat Detectors

As somebody who loves technology and wildlife and also needs to develop an old farmhouse, going down the bat detector rabbit hole was a journey hard to resist. Bats are ideal animals for hackers to monitor: they emit ultrasonic calls from their mouths and noses to communicate with each other, detect their prey, and navigate around obstacles such as trees, all in pitch-black darkness. On the downside, many species just love to make their homes in derelict buildings and, since bats are protected here in the EU, developers need to carry out a rigorous survey to ensure, as best as possible, that no bats are roosting on the site.

Perfect habitat for bats.

Obviously, the authorities require a professional independent survey, but there’s still plenty of opportunity for hacker participation by performing a ‘pre-survey’. Finding bat roosts with DIY detectors will tell us immediately if there is a problem, and give us a head start on rethinking our plans.

As can be expected, bat detectors come in all shapes and sizes, using various electrickery techniques to make them cheaper to build or easier to use. The four techniques most popularly used in bat detectors are:

  1. Heterodyne: rather like tuning a radio, pitch is reduced without slowing the call down.
  2. Time expansion: chunks of data are slowed down to human audible frequencies.
  3. Frequency division: uses a digital counter IC to divide the frequency down in real time (sketched in code after this list).
  4. Full spectrum: the full acoustic spectrum is recorded as a WAV file.
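For the curious, frequency division is simple enough to sketch in software. Here’s a minimal, hypothetical Python example (the file name and the divide-by-16 ratio are assumptions, not taken from any particular detector) that counts zero crossings in a full-spectrum WAV recording and toggles an output square wave at 1/16th of the call frequency, which is roughly what the counter IC does in hardware, in real time:

```python
import numpy as np
from scipy.io import wavfile

DIVISOR = 16  # a typical divide ratio for hardware detectors (assumption)

# Load a full-spectrum recording: high sample rate (e.g. 192 kHz), mono.
rate, samples = wavfile.read("bat_call.wav")  # hypothetical file name
samples = samples.astype(np.float32)

# Indices of the rising zero crossings, one per cycle of the ultrasonic call.
crossings = np.flatnonzero((samples[:-1] < 0) & (samples[1:] >= 0))

# Toggle a square wave every DIVISOR/2 crossings, dividing the call
# frequency down into the audible range while keeping its timing intact.
out = np.zeros_like(samples)
state = 1.0
for i, c in enumerate(crossings):
    if i % (DIVISOR // 2) == 0:
        state = -state
        out[c:] = state  # hold this level until the next toggle

wavfile.write("bat_call_divided.wav", rate, (out * 20000).astype(np.int16))
```

Amplitude information is thrown away along the way, which is why frequency division detectors are fine for spotting bats but weaker for identifying species.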

Fortunately, recent advances in technology have now enabled manufacturers to produce relatively cheap full spectrum devices, which give the best resolution and the best chances of identifying the actual bat species.

DIY bat detectors tend to be of the frequency division type and are great for helping spot bats emerging from buildings. An audible noise from a speaker or headphones can prompt us to confirm that the fleeting black shape that we glimpsed was actually a bat and not a moth in the foreground. I used one of these detectors in conjunction with a video recorder to confirm that a bat was indeed NOT exiting from an old chimney pot. Phew!

Continue reading “Worried About Bats In Your Belfry? A Tale Of Two Bat Detectors”

Sensor Filters For Coders

Anybody interested in building their own robot, sending spacecraft to the Moon, or launching intercontinental ballistic missiles should have at least some basic filter options in their toolkit; otherwise the robot will likely wobble about erratically and the missile will miss its target.

What is a filter anyway? In practical terms, a filter should smooth out erratic sensor data with as little time lag, or ‘error lag’, as possible. In the case of the missile, it could travel nice and smoothly through the air but miss its target because the positional data was processed too late. The simplest filter, which many of us will have already used, is to pause our code, take about ten quick readings from our sensor, and then calculate the mean by summing them and dividing by ten. It’s incredibly simple and effective as long as our machine or process is not time sensitive: perfect for a weather station temperature sensor, although wind direction is slightly more complicated. A wind vane is actually a good example of a sensor giving ‘noisy’ readings: not that the sensor itself is noisy, but that wind is inherently gusty and constantly changing direction.
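As a minimal sketch (the sensor function is a stand-in for real hardware, not actual probe code), here is that blocking ten-sample mean, alongside an exponential moving average that smooths continuously without pausing the code and so suits time-sensitive loops:

```python
import random

def read_sensor():
    # Stand-in for a real probe: a steady 20.0 degree signal plus gusty noise.
    return 20.0 + random.gauss(0, 0.5)

# 1. The blocking mean: pause, take ten quick readings, divide the sum by ten.
readings = [read_sensor() for _ in range(10)]
print(f"blocking mean: {sum(readings) / len(readings):.2f}")

# 2. An exponential moving average: one line of maths per sample, no pausing.
#    Smaller alpha means smoother output but more lag behind real changes.
alpha = 0.1
ema = read_sensor()
for _ in range(100):
    ema = alpha * read_sensor() + (1 - alpha) * ema
print(f"exponential moving average: {ema:.2f}")
```

The trade-off between smoothness and lag is the whole game: tune alpha too low and our missile is back to processing its position ‘too late’.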

It’s a really good idea to try to model our data on some kind of computer running software that will print out graphs – I chose the Raspberry Pi and installed Jupyter Notebook running Python 3.

The photo on the left shows my test rig. There’s a PT100 probe with its MAX31865 break-out board, a Dallas DS18B20, and a DHT22. The shield on the Pi is a GPS shield which is currently not used. If you don’t want the hassle of setting up these probes, there’s a Jupyter Notebook file that can also use the internal temperature sensor in the Raspberry Pi. It’s incredibly quick and easy to get up and running.

It’s quite interesting to see the performance of the different sensors, but I quickly ended up completely mangling the data from the DS18B20 by artificially adding randomly generated noise and some very nasty data spikes to punish the filters as much as possible. Getting the temperature data to change rapidly was achieved by putting a small piece of frozen Bockwurst on top of the DS18B20 and then removing it again.
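Anyone wanting to mangle their own data the same way can do it in a few lines; the noise level and spike sizes below are illustrative rather than the exact values I used:

```python
import numpy as np

rng = np.random.default_rng(42)

clean = np.full(500, 21.0)   # pretend DS18B20 readings, in degrees C
clean[200:300] = 5.0         # the frozen Bockwurst arrives, then leaves

noisy = clean + rng.normal(0.0, 0.8, clean.size)      # random noise
spikes = rng.choice(clean.size, size=10, replace=False)
noisy[spikes] += rng.choice([-15.0, 15.0], size=10)   # nasty data spikes
```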

Continue reading “Sensor Filters For Coders”

AI Poised To Turn The Internet Into Gibberish

Last Thursday, two lowly master’s grad students, Aaron Gokaslan and Vanya Cohen, managed to replicate the secretive OpenAI model and cheekily named their version OpenGPT-2. The code can be downloaded from this Google Colab page and apparently no prior experience in language modeling is required to use it. More useful might be the skills required to persuade Google to part with $50,000 worth of free cloud compute time for the training!

Research firm OpenAI released a new, ever more powerful version of their GPT language model, with 1.5 billion parameters, trained on a dataset of 8 million web pages. Although its most entertaining use is to produce gibberish, it will inevitably also be able to produce coherent text sometime very soon. For us mere mortals, there’s a cut-down version of the model hosted in the cloud and a webpage that we can visit, type in a short phrase to prompt the system, and print out a few paragraphs of fake news. We tried it with the following: “You can use an Arduino to” and the results were:

  • You can use an Arduino to remove your appendix. Use an open or closed loop, but do not use them at the same time, i.e. don’t wire your Arduino to the wall like I did. The Arduino’s are very noisy and can damage the circuit board. Be sure to use a good quality and high-quality resistor to connect them to the wall. You can also use your own antenna if you’d like.

The neural network works by taking the text that has been printed so far and using it to predict the next word, one word at a time. Because of its obscurity, our Arduino example is a pretty severe and rather unfair test on the system whilst in its fledgling state and, looking at the proper metrics, it actually performs quite well on certain standard industry test sets such as the Children’s Book Test.
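For anyone who would rather run it locally than queue for the webpage, here’s a minimal sketch using the Hugging Face transformers library rather than the original OpenAI release, and the small public GPT-2 model rather than the 1.5 billion parameter monster. It shows the predict-the-next-word loop in action:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small public model
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("You can use an Arduino to", return_tensors="pt")

# Each step predicts the next token from everything generated so far.
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,  # sample from the distribution, not always the top word
    top_k=40,        # restrict choices to the 40 likeliest tokens per step
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```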

Be sure to paste your own fake news into the comments below and we’ll take a vote on the one that’s most entertaining, but please keep it within the boundaries of good taste!

Whilst this is an emerging technology, somebody did get hold of it a while back and applied it to an old teleprinter!


Build A Fungus Foraging App With Machine Learning

As the 2019 mushroom foraging season approaches, it’s timely to combine my thirst for knowledge about low-level machine learning (ML) with a popular pastime that we enjoy here where I live. Just for the record, I’m not an expert on ML, and I’m simply inviting readers to follow me back down some rabbit holes that I recently explored.

But mushrooms, I do know a little bit about, so firstly, a bit about health and safety:

  • The app created should be used with extreme caution and results always confirmed by a fungus expert.
  • Always test the fungus by initially only eating a very small piece and waiting for several hours to check there is no ill effect.
  • Always wear gloves: it’s surprisingly easy to absorb toxins through fingers.

Since this is very much an introduction to ML, there won’t be too much terminology and the emphasis will be on having fun rather than going on a deep dive. The system that I stumbled upon is called XGBoost (XGB). One of the XGB demos is for binary classification, and the data was drawn from The Audubon Society Field Guide to North American Mushrooms. Binary means that the app spits out a probability of ‘yes’ or ‘no’ and in this case it tends to give about 95% probability that a common edible mushroom (Agaricus campestris) is actually edible. 

The app asks the user 22 questions about their specimen and collates the data inputted as a series of letters separated by commas. At the end of the questionnaire, this data line is written to a file called ‘fungusFile.data’ for further processing.

XGB cannot accept letters as data, so they have to be mapped into ‘classic LibSVM format’, which looks like ‘3:218’ for each letter. Next, this XGB-friendly data is split into two parts: one for training a model and one for subsequently testing that model.
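A hypothetical sketch of that mapping (the demo’s exact feature indices and codes differ, so treat this as illustrative) turns each comma-separated letter into an ‘index:value’ pair:

```python
# Map one questionnaire line, e.g. "x,s,n,t,...", into LibSVM format.
def to_libsvm(label, letters):
    # Feature indices start at 1; 'a' becomes 1, 'b' becomes 2, and so on.
    pairs = [f"{i}:{ord(ch) - ord('a') + 1}" for i, ch in enumerate(letters, 1)]
    return f"{label} " + " ".join(pairs)

answers = "x,s,n,t,p,f,c,n,k".split(",")
print(to_libsvm(1, answers))   # -> "1 1:24 2:19 3:14 ..."
```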

Installing XGB is relatively easy compared to higher level deep learning systems and runs well on both Linux Ubuntu 16.04 and on a Raspberry Pi. I wrote the deployment app in bash so there should not be any additional software to install. Before getting any deeper into the ML side of things, I highly advise installing XGB, running the app, and having a bit of a play with it.

Training and testing are carried out by running bash runexp.sh in the terminal, and it takes less than one second to process the 8124 lines of fungal data. At the end, bash spits out a set of statistics representing the accuracy of the training and also attempts to ‘draw’ the decision tree that XGB has devised. If we have a quick look in the directory ~/xgboost/demo/binary_classification, there should now be a 0002.model file in it, ready for deployment with the questionnaire.
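Under the hood, runexp.sh is doing something close to the following, which can also be driven from Python with the xgboost package. The file names come from the demo directory; the rest is a sketch:

```python
import xgboost as xgb

# The demo ships the 8124 mushroom records already in LibSVM format.
dtrain = xgb.DMatrix("agaricus.txt.train")
dtest = xgb.DMatrix("agaricus.txt.test")

params = {"max_depth": 4, "eta": 1, "objective": "binary:logistic"}
bst = xgb.train(params, dtrain, num_boost_round=4,
                evals=[(dtest, "test"), (dtrain, "train")])

preds = bst.predict(dtest)    # a probability for each test mushroom
bst.save_model("0002.model")  # ready for deployment with the questionnaire
```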

I was interested to explore the decision tree a bit further and look at the way XGB weighted different characteristics of the fungi. I eventually got some rough visualisations working in a Python-based Jupyter Notebook script:
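xgboost has plotting helpers built in, so a rough version of those visualisations needs only a few lines in a notebook cell (matplotlib is required, and plot_tree additionally needs graphviz installed):

```python
import matplotlib.pyplot as plt
import xgboost as xgb

bst = xgb.Booster(model_file="0002.model")  # the model trained earlier

xgb.plot_importance(bst, importance_type="gain")  # which questions matter most
xgb.plot_tree(bst, num_trees=0)                   # the first tree in the ensemble
plt.show()
```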

Obviously this app is not going to win any Kaggle competitions, since the various parameters within the software need to be carefully tuned with the help of all the different software tools available. A good place to start is to tweak the maximum depth of the tree and the number of trees used. Depth = 4 and number = 4 seem to work well for this data. Other parameters include the feature importance type, for example: gain, weight, cover, total_gain or total_cover. These can be explored using tools such as SHAP, as sketched below.
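Assuming the LibSVM test file from the demo is to hand, that SHAP exploration might look like the following sketch:

```python
import shap
import xgboost as xgb
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("agaricus.txt.test")
bst = xgb.Booster(model_file="0002.model")

# TreeExplainer works directly on XGBoost tree ensembles.
explainer = shap.TreeExplainer(bst)
shap_values = explainer.shap_values(X.toarray())
shap.summary_plot(shap_values, X.toarray())  # per-feature contribution plot
```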

Finally, this app could easily be adapted to other questionnaire-based systems, such as diagnosing a particular disease or deciding whether to buy a particular stock or share in the marketplace.

An even more basic introduction to ML goes into the baseline theory in a bit more detail – well worth a quick look.

Designing An Advanced Autonomous Robot: Goose

Robotics is hard, maybe not quite as difficult as astrophysics or understanding human relationships, but designing a competition winning bot from scratch was never going to be easy. Ok, so [Paul Bupe, Jr’s] robot, named ‘Goose’, did not quite win the competition, but we’re very interested to learn what golden eggs it might lay in the aftermath.

The mechanics of the bot are based on a fairly standard dual tracked drive system that makes controlling a turn much easier than if it used wheels. Why make life more difficult than it is already? But what we’re really interested in is the design of the control system and the rationale behind those design choices.

The diagram on the left might look complicated, but essentially the system is based on two ‘brains’, the Teensy microcontroller (MCU) and a Raspberry Pi, though most of the grind is performed by the MCU. Running at 96 MHz, the MCU is fast enough to process data from the encoders and IMU in real time, enabling the bot to respond quickly and smoothly to its sensors. More complicated and ‘heavier’ tasks such as LIDAR and computer vision (CV) are performed on the Pi, which runs the Robot Operating System (ROS), communicating with the MCU by means of a couple of ‘nodes’.
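We don’t have [Paul]’s source code in front of us, but a node bridging an MCU to ROS typically looks something like this minimal rospy sketch, where the port, baud rate, topic name and message format are all guesses:

```python
#!/usr/bin/env python
import rospy
import serial
from std_msgs.msg import String

def mcu_bridge():
    rospy.init_node("mcu_bridge")
    pub = rospy.Publisher("odometry_raw", String, queue_size=10)
    port = serial.Serial("/dev/ttyACM0", 115200)  # the Teensy over USB serial

    while not rospy.is_shutdown():
        line = port.readline().decode().strip()  # e.g. "x,y,theta" from the MCU
        pub.publish(line)  # hand the fast MCU data to the heavier ROS side

if __name__ == "__main__":
    mcu_bridge()
```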

The competition itself dictated that the bot should travel in large circles within the walls of a large box, whilst avoiding particular objects. Obviously, GPS was not going to work in there, and dead reckoning alone would not keep the machine on track, so it relied heavily on ‘LiDAR point cloud data’ to effectively pinpoint the location of the robot at all times. Now we really get to the crux of the design, where all the available sensors are combined and fed into a ‘particle filter algorithm’.
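A particle filter is less scary than it sounds. Here’s a minimal numpy sketch of one update cycle, with a made-up motion command and a single range ‘sensor’ standing in for the LIDAR, to show the move-weigh-resample loop at its heart:

```python
import numpy as np

rng = np.random.default_rng()
N = 1000
particles = rng.uniform(0, 5, size=(N, 2))  # candidate (x, y) robot positions

def update(particles, commanded_move, measured_range, landmark):
    # 1. Motion: nudge every particle by the commanded move, plus noise.
    particles = particles + commanded_move + rng.normal(0, 0.02, particles.shape)

    # 2. Weight: particles whose predicted range to the landmark best
    #    matches the actual sensor reading get the highest weight.
    predicted = np.linalg.norm(particles - landmark, axis=1)
    weights = np.exp(-0.5 * ((predicted - measured_range) / 0.1) ** 2)
    weights /= weights.sum()

    # 3. Resample: clone likely particles, discard unlikely ones.
    return particles[rng.choice(N, size=N, p=weights)]

particles = update(particles, [0.1, 0.0], 2.0, np.array([3.0, 4.0]))
print(particles.mean(axis=0))  # the pose estimate: the cloud's centre of mass
```

With a whole LiDAR scan per update instead of one range, the cloud collapses quickly onto the true pose.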

What we particularly love about this project is how clearly everything is explained, without too many fancy terms or acronyms. [Paul Bupe, Jr] has obviously taken the time to reduce the overall complexity to more manageable concepts that encourage us to explore further. Maybe [Paul] himself might have the time to produce individual tutorials for each system of the robot?

We could well be reading far too much into the name of the robot, ‘Goose’ being Captain Marvel’s bizarre ‘trans-species’ cat that ends up laying a whole load of eggs. But could this robot help reach a de-facto standard for small robots?

We’ve seen other competition robots on Hackaday, and hope to see a whole lot more!

Video after the break: Continue reading “Designing An Advanced Autonomous Robot: Goose”

High Performance Stereo Computer Vision For The Raspberry Pi

Up until now, running any kind of computer vision system on the Raspberry Pi has been rather underwhelming, even with the addition of products such as the Movidius Neural Compute Stick. Looking to improve on the performance situation while still enjoying the benefits of the Raspberry Pi community, [Brandon] and his team have been working on Luxonis DepthAI. The project uses a carrier board to mate a Myriad X VPU and a suite of cameras to the Raspberry Pi Compute Module, and the performance gains so far have been very promising.

So how does it work? Twin grayscale cameras allow the system to perceive depth, or distance, which is used to produce a “heat map”, ideal for tasks such as obstacle avoidance. At the same time, the high-resolution color camera can be used for object detection and tracking. According to [Brandon], bypassing the Pi’s CPU and sending all processed data via USB gives a roughly 5x performance boost, enabling the full potential of the main Intel Myriad X chip to be unleashed.
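The depth half of that pipeline can be sketched with OpenCV’s block matcher. This runs on the host CPU rather than the Myriad X, and the file names are placeholders, but it illustrates the disparity-to-depth idea:

```python
import cv2

# Rectified left and right grayscale frames from the twin cameras.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # nearer objects = larger disparity

# Colour-map it into the warm-near, cool-far heat map seen in the video.
disparity = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
heatmap = cv2.applyColorMap(disparity.astype("uint8"), cv2.COLORMAP_JET)
cv2.imwrite("heatmap.png", heatmap)
```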

For detecting standard objects like people or faces, it will be fairly easy to get up and running with software such as OpenVINO, which is already quite mature on the Raspberry Pi. We’re curious about how the system will handle custom models, but no doubt [Brandon’s] team will help improve this situation in the future.

The project is very much in an active state of development, which is exactly what we’d expect for an entry into the 2019 Hackaday Prize. Right now the cameras aren’t necessarily ideal; for example, the depth sensors are a bit too close together to be very effective, but the team is still fine-tuning their hardware selection. Ultimately the goal is to make a device that helps bikers avoid dangerous collisions, and we’re very interested to watch the project evolve.

The video after the break shows the stereoscopic heat map in action. The hand is displayed as a warm yellow as it’s relatively close compared to the blue background. We’ve covered the combination of the Raspberry Pi and the Movidius USB stick in the past, but the stereo vision performance improvements of Luxonis DepthAI really take it to another level.

Continue reading “High Performance Stereo Computer Vision For The Raspberry Pi”

Pick And Place Robot Built With Fischertechnik

We’d be entirely wrong to think that Fischertechnik is just a toy for kids. It’s also perfect for prototyping the control systems of robots. [davidatfsg]’s recent entry in the Hackaday Prize, Delta Robot, shows how complex robotics can be implemented without the hardship of having to drill, cut, bolt together or weld components. The added bonus is that the machine can be completely disassembled non-destructively and rebuilt with a new and better design with little or no waste.

The project uses inverse kinematics running on an Arduino Mega to pick coloured objects off a moving conveyor belt and drop them in their respective bins. There’s also an optical encoder for regulating the speed of the conveyor and a laser light beam for sensing that an object on the conveyor has reached the correct position to be picked.
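We haven’t seen [davidatfsg]’s actual sketch, but the classic closed-form delta robot inverse kinematics (the widely circulated Trossen Robotics derivation, shown here in Python rather than Arduino C, with made-up dimensions) gives a feel for the maths the Mega is running:

```python
import math

# Illustrative delta geometry, in mm: e = effector triangle side,
# f = base triangle side, re = forearm length, rf = upper arm length.
e, f, re, rf = 60.0, 200.0, 240.0, 100.0

def angle_yz(x, y, z):
    """Servo angle (degrees) for one arm to reach (x, y, z); None if unreachable."""
    y1 = -0.5 * f / math.sqrt(3)   # the arm's base joint in the YZ plane
    y -= 0.5 * e / math.sqrt(3)    # shift the target to the effector joint
    a = (x * x + y * y + z * z + rf * rf - re * re - y1 * y1) / (2 * z)
    b = (y1 - y) / z
    d = -(a + b * y1) ** 2 + rf * (b * b * rf + rf)  # discriminant
    if d < 0:
        return None                # target is outside the work envelope
    yj = (y1 - a * b - math.sqrt(d)) / (b * b + 1)   # choose the outer solution
    zj = a + b * yj
    theta = math.degrees(math.atan(-zj / (y1 - yj)))
    return theta + 180.0 if yj > y1 else theta

def inverse_kinematics(x, y, z):
    """The other two arms are the same problem rotated by plus/minus 120 degrees."""
    c, s = -0.5, math.sqrt(3) / 2
    return (angle_yz(x, y, z),
            angle_yz(x * c + y * s, y * c - x * s, z),
            angle_yz(x * c - y * s, y * c + x * s, z))

print(inverse_kinematics(0, 0, -220))  # a point straight below the base centre
```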

Not every component is ‘off the shelf’. [davidatfsg] 3D printed a simple nozzle for the actual ‘pick’ and the vacuum required was generated by the clever use of a pair of pneumatic cylinders and solenoid operated air valves.

We’re pretty sure that this will not be the last project on Hackaday that uses Fischertechnik components and it’s the second one that [davidatfsg] has concocted. Videos of the machine working after the break! Continue reading “Pick And Place Robot Built With Fischertechnik”