Build A Fungus Foraging App With Machine Learning

As the 2019 mushroom foraging season approaches, it’s timely to combine my thirst for knowledge about low-level machine learning (ML) with a popular pastime that we enjoy here where I live. Just for the record, I’m not an expert on ML; I’m simply inviting readers to follow me back down some rabbit holes that I recently explored.

Mushrooms, however, I do know a little about, so first, a word on health and safety:

  • The app should be used with extreme caution, and its results always confirmed by a fungus expert.
  • Always test a fungus by initially eating only a very small piece and waiting several hours to check for any ill effects.
  • Always wear gloves; it’s surprisingly easy to absorb toxins through your fingers.

Since this is very much an introduction to ML, there won’t be too much terminology, and the emphasis will be on having fun rather than going on a deep dive. The system that I stumbled upon is called XGBoost (XGB). One of the XGB demos covers binary classification, and its data was drawn from The Audubon Society Field Guide to North American Mushrooms. Binary means that the app spits out a probability for one of two outcomes, ‘yes’ or ‘no’, and in this case it tends to give about a 95% probability that a common edible mushroom (Agaricus campestris) is actually edible.

The app asks the user 22 questions about their specimen and collates the answers as a series of letters separated by commas. At the end of the questionnaire, this data line is written to a file called ‘fungusFile.data’ for further processing.

XGB cannot accept letters as data, so each letter has to be mapped into ‘classic LibSVM format’, which looks like this: ‘3:218’. Next, this XGB-friendly data is split into two parts, one for training a model and the other for subsequently testing that model.
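The demo ships with its own conversion scripts, but as a rough sketch of the idea the transformation can be done in a few lines of Python. The file names, the assumption that the first column holds the edible/poisonous letter, and the 80/20 split below are placeholders for illustration, not the demo’s actual code:

```python
# Illustrative only: map comma-separated letters to LibSVM format and split
# the result into training and test files. File names are placeholders.
import random

def to_libsvm(line, feature_ids):
    fields = line.strip().split(',')
    label = '1' if fields[0] == 'p' else '0'         # assume column 0 is p/e
    idxs = []
    for col, letter in enumerate(fields[1:]):
        key = (col, letter)
        if key not in feature_ids:                   # each (column, letter) pair
            feature_ids[key] = len(feature_ids) + 1  # gets its own feature index
        idxs.append(feature_ids[key])
    return label + ' ' + ' '.join('%d:1' % i for i in sorted(idxs))

feature_ids = {}
with open('mushrooms.data') as f:
    rows = [to_libsvm(line, feature_ids) for line in f if line.strip()]

random.shuffle(rows)
split = int(0.8 * len(rows))                         # 80/20 train/test split
with open('fungus.train', 'w') as f:
    f.write('\n'.join(rows[:split]) + '\n')
with open('fungus.test', 'w') as f:
    f.write('\n'.join(rows[split:]) + '\n')
```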

XGB is relatively easy to install compared with higher-level deep learning systems, and it runs well both on Ubuntu 16.04 and on a Raspberry Pi. I wrote the deployment app in bash, so there should not be any additional software to install. Before getting any deeper into the ML side of things, I highly advise installing XGB, running the app, and having a bit of a play with it.

Training and testing are carried out by running bash runexp.sh in the terminal, and it takes less than one second to process the 8124 lines of fungal data. At the end, bash spits out a set of statistics representing the accuracy of the training and also attempts to ‘draw’ the decision tree that XGB has devised. If we have a quick look in the directory ~/xgboost/demo/binary_classification, there should now be a 0002.model file in it, ready for deployment with the questionnaire.
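For anyone who prefers to poke at the result from Python rather than the command line, a quick sanity check of the saved model might look something like this (file names assume the demo’s defaults; adjust the paths to wherever your data actually lives):

```python
import numpy as np
import xgboost as xgb

# Load the model that runexp.sh saved and score the held-out test split.
dtest = xgb.DMatrix('agaricus.txt.test')
bst = xgb.Booster()
bst.load_model('0002.model')

pred = bst.predict(dtest)              # probability of class 1 for each row
labels = dtest.get_label()
error = np.mean((pred > 0.5) != labels)
print('test error: %.4f' % error)
```

The questionnaire’s deployment step is the same idea with a single-row file: map the answers to the same feature indices, load 0002.model, and read off the probability.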

I was interested to explore the decision tree a bit further and look at the way XGB weighted the different characteristics of the fungi. I eventually got some rough visualisations working in a Python-based Jupyter Notebook:
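The notebook itself isn’t reproduced here, but a cell along these lines, using XGBoost’s built-in plotting helpers (which need matplotlib, plus graphviz for the tree drawing), produces similar rough visualisations:

```python
import matplotlib.pyplot as plt
import xgboost as xgb

bst = xgb.Booster()
bst.load_model('0002.model')

# How often each feature is used for a split, as a horizontal bar chart
xgb.plot_importance(bst, importance_type='weight')

# Draw the first tree in the ensemble (requires the graphviz package)
xgb.plot_tree(bst, num_trees=0)
plt.show()
```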

Obviously this app is not going to win any Kaggle competitions, since the various parameters within the software need to be carefully tuned with the help of the different software tools available. A good place to start is to tweak the maximum depth of the tree and the number of trees used. Depth = 4 and number = 4 seem to work well for this data. Other parameters include the feature importance type, for example gain, weight, cover, total_gain or total_cover, which can be explored with tools such as SHAP.
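For anyone tuning from the Python side instead of the demo’s .conf file, the same knobs are exposed directly. The snippet below is a sketch using the hypothetical train/test files from earlier rather than the demo’s own pipeline, and the SHAP-style per-feature contributions come from XGBoost’s built-in pred_contribs option:

```python
import xgboost as xgb

dtrain = xgb.DMatrix('fungus.train')
dtest = xgb.DMatrix('fungus.test')

params = {'objective': 'binary:logistic',
          'max_depth': 4,                        # tree depth that worked well here
          'eta': 1.0}
bst = xgb.train(params, dtrain, num_boost_round=4)   # four trees

# Feature importance under the different criteria mentioned above
for kind in ('gain', 'weight', 'cover', 'total_gain', 'total_cover'):
    print(kind, bst.get_score(importance_type=kind))

# SHAP-style contributions: one column per feature plus a bias term,
# showing how much each answer pushed an individual prediction
contribs = bst.predict(dtest, pred_contribs=True)
print(contribs[0])
```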

Finally, this app could easily be adapted to other questionnaire-based systems, such as diagnosing a particular disease or deciding whether to buy a particular stock or share in the marketplace.

An even more basic introduction to ML goes into the baseline theory in a bit more detail – well worth a quick look.

Designing An Advanced Autonomous Robot: Goose

Robotics is hard, maybe not quite as difficult as astrophysics or understanding human relationships, but designing a competition-winning bot from scratch was never going to be easy. OK, so [Paul Bupe, Jr]’s robot, named ‘Goose’, did not quite win the competition, but we’re very interested to learn what golden eggs it might lay in the aftermath.

The mechanics of the bot are based on a fairly standard dual-tracked drive system that makes controlling a turn much easier than if it used wheels. Why make life more difficult than it already is? But what we’re really interested in is the design of the control system and the rationale behind those design choices.

The diagram on the left might look complicated, but essentially the system is based on two ‘brains’, a Teensy microcontroller (MCU) and a Raspberry Pi, though most of the grind is performed by the MCU. Running at 96 MHz, the MCU is fast enough to process data from the encoders and IMU in real time, enabling the bot to respond quickly and smoothly to its sensors. More complicated and ‘heavier’ tasks such as LIDAR processing and computer vision (CV) are handled by the Pi, which runs the Robot Operating System (ROS) and communicates with the MCU by means of a couple of ‘nodes’.
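The write-up doesn’t include the node code itself, so the following is only a guess at the general shape of the Pi side: a small rospy node that forwards velocity commands to the MCU over serial and republishes whatever odometry comes back. Topic names, baud rate and the serial protocol are made up for illustration, not taken from [Paul]’s code:

```python
#!/usr/bin/env python
# Hypothetical sketch of a Pi-side ROS node bridging to the MCU over serial.
import rospy
import serial
from geometry_msgs.msg import Twist
from std_msgs.msg import String

port = serial.Serial('/dev/ttyACM0', 115200, timeout=0.1)

def cmd_vel_cb(msg):
    # Forward linear/angular velocity to the MCU as a simple text command
    port.write(('V %.3f %.3f\n' % (msg.linear.x, msg.angular.z)).encode())

rospy.init_node('mcu_bridge')
rospy.Subscriber('/cmd_vel', Twist, cmd_vel_cb)
odom_pub = rospy.Publisher('/mcu/raw_odom', String, queue_size=10)

rate = rospy.Rate(100)                # MCU streams encoder/IMU data at ~100 Hz
while not rospy.is_shutdown():
    line = port.readline().decode(errors='ignore').strip()
    if line:
        odom_pub.publish(line)        # hand the raw odometry to downstream nodes
    rate.sleep()
```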

The competition itself dictated that the bot should travel in large circles within the walls of a large box whilst avoiding particular objects. Obviously GPS was not going to work indoors, and dead reckoning alone would not keep the machine on track, so it relied heavily on LiDAR point cloud data to pinpoint the location of the robot at all times. Now we really get to the crux of the design, where all the available sensors are combined and fed into a ‘particle filter algorithm’:
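The implementation details aren’t public, but the core of a particle filter is compact enough to sketch. Everything below (the arena size, the noise figures, the way predicted LIDAR ranges are compared with the real scan) is illustrative rather than [Paul]’s actual code:

```python
import numpy as np

# Toy Monte Carlo localisation loop: x, y, heading for N particles in a box.
N = 500
particles = np.random.uniform([0, 0, -np.pi], [3.0, 3.0, np.pi], size=(N, 3))

def pf_step(particles, odom, scan, predict_scan):
    # 1. Motion update: shift every particle by the wheel/IMU odometry, plus noise
    particles = particles + np.asarray(odom) + np.random.normal(0, 0.02, particles.shape)

    # 2. Measurement update: weight each particle by how well the LIDAR scan it
    #    *would* see (from the known arena geometry) matches the real scan
    errors = np.linalg.norm(predict_scan(particles) - scan, axis=1)
    weights = np.exp(-errors ** 2 / 0.5)
    weights /= weights.sum()

    # 3. Resampling: keep particles in proportion to their weights
    keep = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[keep]

# The pose estimate is then simply the mean of the surviving particles.
```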

What we particularly love about this project is how clearly everything is explained, without too many fancy terms or acronyms. [Paul Bupe, Jr] has obviously taken the time to reduce the overall complexity to more manageable concepts that encourage us to explore further. Maybe [Paul] himself might have the time to produce individual tutorials for each system of the robot?

We could well be reading far too much into the name of the robot, ‘Goose’ being Captain Marvel’s bizarre ‘trans-species’ cat that ends up laying a whole load of eggs. But could this robot help establish a de facto standard for small robots?

We’ve seen other competition robots on Hackaday, and hope to see a whole lot more!

Video after the break: Continue reading “Designing An Advanced Autonomous Robot: Goose”

High Performance Stereo Computer Vision For The Raspberry Pi

Up until now, running any kind of computer vision system on the Raspberry Pi has been rather underwhelming, even with the addition of products such as the Movidius Neural Compute Stick. Looking to improve on the performance situation while still enjoying the benefits of the Raspberry Pi community, [Brandon] and his team have been working on Luxonis DepthAI. The project uses a carrier board to mate a Myriad X VPU and a suite of cameras to the Raspberry Pi Compute Module, and the performance gains so far have been very promising.

So how does it work? Twin grayscale cameras allow the system to perceive depth, or distance, which is used to produce a “heat map” ideal for tasks such as obstacle avoidance. At the same time, the high-resolution color camera can be used for object detection and tracking. According to [Brandon], bypassing the Pi’s CPU and sending the already-processed data over USB gives a roughly 5x performance boost, unleashing the full potential of the Intel Myriad X chip.
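The disparity matching itself runs on the Myriad X rather than the Pi, but the underlying principle is classic block-matching stereo. As a rough illustration of the idea with OpenCV (stand-in image files, not the DepthAI API):

```python
import cv2
import numpy as np

# Left/right images stand in for frames from the twin grayscale cameras.
left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Nearer objects have larger disparity; colour-mapping it gives the "heat map"
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heatmap = cv2.applyColorMap(disp_vis, cv2.COLORMAP_JET)
cv2.imwrite('depth_heatmap.png', heatmap)
```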

For detecting standard objects like people or faces, it should be fairly easy to get up and running with software such as OpenVINO, which is already quite mature on the Raspberry Pi. We’re curious about how the system will handle custom models, but no doubt [Brandon]’s team will help improve this situation in the future.

The project is very much in an active state of development, which is exactly what we’d expect for an entry into the 2019 Hackaday Prize. Right now the cameras aren’t necessarily ideal; for example, the depth sensors are a bit too close together to be very effective, but the team is still fine-tuning their hardware selection. Ultimately the goal is to make a device that helps bikers avoid dangerous collisions, and we’re very interested to watch the project evolve.

The video after the break shows the stereoscopic heat map in action. The hand is displayed as a warm yellow as it’s relatively close compared to the blue background. We’ve covered the combination of the Raspberry Pi and the Movidius USB stick in the past, but the stereo vision performance of Luxonis DepthAI really takes it to another level.

Continue reading “High Performance Stereo Computer Vision For The Raspberry Pi”

Pick And Place Robot Built With Fischertechnik

We’d be entirely wrong to think that Fischertechnik is just a toy for kids. It’s also perfect for prototyping the control system of robots. [davidatfsg]’s recent entry in the Hackaday Prize, Delta Robot, shows how complex robotics can be implemented without the hardship of having to drill, cut, bolt together or weld components. The added bonus is that the machine can be completely disassembled non-destructively and rebuilt to a new and better design with little or no waste.

The project uses inverse kinematics running on an Arduino Mega to pick coloured objects off a moving conveyor belt and drop them into their respective bins. There’s also an optical encoder for regulating the speed of the conveyor and a laser light beam for sensing when an object on the conveyor has reached the correct position to be picked.
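The firmware itself runs on the Arduino Mega, but the inverse kinematics boils down to a well-known closed-form solution for delta robots. Here it is sketched in Python with placeholder geometry (base and effector triangle sides f and e, upper arm rf, forearm re), not [davidatfsg]’s actual dimensions or code:

```python
import math

# Placeholder geometry in metres, not the real machine's dimensions.
f, e, rf, re = 0.20, 0.06, 0.10, 0.24

def arm_angle(x, y, z):
    """Shoulder angle for the arm lying in the YZ plane (standard delta IK)."""
    y1 = -0.5 * 0.57735 * f            # base joint offset (f/2 * tan 30)
    y -= 0.5 * 0.57735 * e             # account for the effector triangle size
    a = (x*x + y*y + z*z + rf*rf - re*re - y1*y1) / (2.0 * z)
    b = (y1 - y) / z
    d = -(a + b*y1)**2 + rf*rf*(b*b + 1)
    if d < 0:
        raise ValueError('point is outside the workspace')
    yj = (y1 - a*b - math.sqrt(d)) / (b*b + 1)
    zj = a + b*yj
    return math.degrees(math.atan2(-zj, y1 - yj))

def inverse_kinematics(x, y, z):
    """Shoulder angles for all three arms, each rotated 120 degrees about Z."""
    angles = []
    for rot in (0.0, 2*math.pi/3, 4*math.pi/3):
        xr = x*math.cos(rot) + y*math.sin(rot)
        yr = -x*math.sin(rot) + y*math.cos(rot)
        angles.append(arm_angle(xr, yr, z))
    return angles

print(inverse_kinematics(0.0, 0.0, -0.20))   # straight-down pose
```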

Not every component is ‘off the shelf’. [davidatfsg] 3D printed a simple nozzle for the actual ‘pick’, and the vacuum required was generated by the clever use of a pair of pneumatic cylinders and solenoid-operated air valves.

We’re pretty sure that this will not be the last project on Hackaday that uses Fischertechnik components and it’s the second one that [davidatfsg] has concocted. Videos of the machine working after the break! Continue reading “Pick And Place Robot Built With Fischertechnik”

Drone On Drone Warfare, With Jammers

After the alleged drone attacks on London Gatwick airport in 2018, we’ve been on the lookout for effective countermeasures against rogue drone operators. An interesting solution has been created by [Ogün Levent] in Turkey and is briefly documented on his Dronesense page on Crowd Supply. There are a few gaps in the write-up due to non-disclosure agreements, but we might well be able to make some good guesses as to the missing content.

Not one, but two LimeSDRs are sent up into the air onboard a custom-made drone to track down other drones and knock them out by jamming their signals, which is generally much safer than trying to fire air-to-air guided missiles at them!

The drone hardware used by [Ogün Levent] and his team is a custom-made S600 frame with T-Motor U3 motors and a 40 A speed controller, with a takeoff weight of 5 kg. An Advantech single board computer is the master controller, with a Pixhawk as a secondary and, most importantly, a honking great big 4 W, 2.4 GHz frequency jammer with a range of 1200 meters.

The big advantage of sending out a hunter drone with countermeasures rather than trying to do it on the ground is that, being closer to the drone, the power of the jammer can be reduced, thus creating less disturbance to other RF devices in the area – the rogue drone is specifically targeted.
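As a back-of-the-envelope illustration of why proximity matters (the numbers are ours, not from the project): free-space path loss grows with the square of distance, so closing from 1200 m to 100 m buys back over 21 dB, letting the jammer run at a small fraction of the power for the same effect at the target.

```python
import math

def fspl_db(distance_m, freq_hz=2.4e9):
    """Free-space path loss in dB, here at the jammer's 2.4 GHz band."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Moving from 1200 m away to 100 m away from the target drone:
print(fspl_db(1200) - fspl_db(100))   # ~21.6 dB less loss, i.e. >100x less power needed
```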

One of the LimeSDRs runs a GNU Radio flowgraph with a specially designed block for detecting the rogue drone’s frequency modulation signature, using what seems to be a machine learning classification script. The other LimeSDR runs another *secret* flowgraph, and a custom script running on the SBC ties the two flowgraphs together.
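We can only guess at what is inside the detection block, but one plausible shape for a modulation-signature classifier is to reduce bursts of IQ samples to coarse spectral features and hand them to an ordinary classifier trained offline on recordings of known drone transmitters. A toy illustration only, nothing from [Ogün Levent]’s NDA-covered code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def spectral_signature(iq, nfft=1024, bins=32):
    """Reduce a burst of complex IQ samples to a coarse 32-bin power profile."""
    spectrum = np.abs(np.fft.fft(iq[:nfft])) ** 2
    return spectrum.reshape(bins, -1).mean(axis=1)

# Trained offline on labelled recordings: clf.fit(signatures, labels)
clf = RandomForestClassifier(n_estimators=100)

def looks_like_a_drone(iq, clf, threshold=0.8):
    features = spectral_signature(iq).reshape(1, -1)
    return clf.predict_proba(features)[0, 1] > threshold
```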

So now for the fun part: what does the second LimeSDR do? The more obvious problems with the overall concept are that the drone will jam itself, and that the rogue drone might already have anti-jamming capabilities installed, in which case it will simply return to home. Maybe the second SDR is there to track the drone as it returns home and thereby catch the human operator? Answers/suggestions in the comments below! Video after the break. Continue reading “Drone On Drone Warfare, With Jammers”

Art Meets Science In The Cold Wastelands Of Iceland

Although Iceland is now a popular destination for the day-tripping, selfie-seeking Instagrammer who rents a 4×4, drives it off-road onto delicate ecosystems and then videos the ensuing rescue when the cops arrive, there are still some genuine photographers prepared to put a huge amount of time and effort into their art. [Dheera Venkatraman] is one of the latter, producing composite photos using a relatively low resolution thermal camera and a DIY pan and tilt rig.

Whilst we don’t have the exact details, we think that, since the Seek Reveal Pro camera used has a resolution of 320 x 240, [Dheera] would have had to take at least 20 photos for each panoramic shot. In post processing, the shots were meticulously recombined into stunning landscape photos which are a real inspiration to anybody interested in photography.
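The exact pipeline is in [Dheera’s] repository linked below, but the recombination step itself can be approximated with OpenCV’s stitcher, which estimates the overlaps and blends the frames automatically. A minimal sketch, assuming the thermal frames have been exported as ordinary image files:

```python
import cv2
import glob

# Rough illustration only; see the linked GitHub repository for the real pipeline.
frames = [cv2.imread(p) for p in sorted(glob.glob('thermal_frames/*.png'))]

stitcher = cv2.Stitcher_create()        # default panorama mode suits a pan/tilt rig
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite('panorama.png', panorama)
else:
    print('Stitching failed with status', status)
```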

If you do go to Iceland you might find the traditional food a little challenging if you weren’t raised on it, nor would you go there for a stag night, as beer is eye-wateringly expensive. But if you enjoy uninhabitable, desolate, dramatic landscapes, there is a huge range of possibilities for the photographer, from rugged, frozen lava flows to extraterrestrial ‘Martian’ crater-scapes, if you know where to find them.

[Dheera’s] blog contains some more information about his Iceland photography, and there’s a GitHub repository too. And if you can’t afford a $699 Seek Reveal Pro, maybe try building one yourself.

3D Printed Snap Gun For Automatic Lock Picking

At a far-flung, wind-blown outpost of Hackaday, we were watching a spy film with a bottle of suitably cheap Russian vodka when suddenly a blonde triple agent presented a fascinating-looking gadget to a lock and proceeded to pick it open automatically. We all know very well that we should not believe everything we see on TV, but this one stuck.

Now, for us at least, fantasy has become reality as [Peterthinks] makes public his 3D printed lock picker – perfect for the budding CIA agent. Of course, the Russians have probably been using these kinds of gadgets for much longer, and their YouTube videos are much better, but building one’s own machine takes it one step to the left of center.

The device works by manually flicking the spring-loaded (rubber band) side switch, which toggles the picking tang up and down whilst another tang simultaneously and gently primes the opening rotator.

The size of the device makes it perfect to carry around in a back pocket, waiting for the chance to become a hero in the local supermarket car park when somebody inevitably locks their keys in their car, or even use it in your day job as a secret agent. Just make sure you have your CIA, MI6 or KGB credentials to hand in case you get searched by the cops or they might think you were just a casual burglar. Diplomatic immunity, or a ‘license to pick’ would also be useful, if you can get one.

As mentioned earlier, [Peter’s] video is not the best one to explain lock picking, but he definitely gets the prize for stealth. His videos are below the break.

In the meantime, all we need now are some 3D printed tangs.

Continue reading “3D Printed Snap Gun For Automatic Lock Picking”