Farming is a challenge under even the best of circumstances. Almost all conventional farmers use some combination of tillers, combines, seeders and plows to help get the difficult job done, but for those like [Taylor] who do not farm large industrial monocultures, more specialized tools are needed. While we’ve featured the Acorn open source farming robot before, it’s back now with new and improved features and a simulation mode to help rapidly improve the platform’s software.
The first of the two new physical features is a fail-safe braking system. Since the robot uses electric geared hub motors for propulsion, the braking system consists of two normally closed relays which short the motor leads in emergency situations. This presents the motors with an extremely high load and stops them from turning. The robot has also been given advanced navigation facilities so that it can follow custom complex routes. And finally, [Taylor] created a simulation mode so that the robot’s entire software stack can be run in Docker and tested without needing the actual robot.
For farmers who are looking to buck unsustainable modern agricultural practices while maintaining profitable farms, a platform like Acorn could be invaluable. With the ability to survey, seed, harvest, and even weed, it could take on many of the tasks currently handled by larger agricultural machinery. Of course, if you want to learn more about it, you can check out our earlier feature on this futuristic farming machine.
So great to see our work on Hackaday! Please advise if this runs afoul of the rules, but I also wanted to share our new monthly crowdfunding campaign on Open Collective. We are trying to make this project sustainably funded and would appreciate your support.
https://opencollective.com/twisted-fields-research-collective
If anyone has any questions, AMA!
In what areas might computer vision play a role? Staying aligned with the furrows? Field navigation (something like ORB-SLAM)? Surveying/yield measurement? Identifying plants/weeds?
Sorry, something is happening with my desktop that is causing comment threading to fail. Apologies to the mods for the extra comments; trying from mobile now. M, if you see this, find my reply to you under the article instead of inline.
Oh and see here for a direct link to instructions on how to run the code in simulation!
https://github.com/Twisted-Fields/acorn-precision-farming-rover/blob/main/SIMULATION.md
Long term, vision will be useful in a few areas. Navigation, obstacle detection, and row following using outward-facing cameras is one area. The other will be to use crop-facing cameras for a few tasks: identifying and localizing unwanted plants (aka weeds) to be destroyed by a mechanical tool, possibly thinning plants after seeds have germinated, and potentially automatically detecting pests or sick plants. However, I am also very interested in some kind of NeRF-like system to create high-resolution 3D views of the plants, which could be used in labeling and human inspection of crop images.
I am particularly interested in Tesla’s HydraNet architecture (see any Andrej Karpathy talk on YouTube), where a multi-headed network splits off into different downstream nets.
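For anyone who hasn’t run into that idea before, here is a very rough PyTorch sketch of the multi-head pattern. The layer sizes, head names, and classes are invented for illustration and are not from Acorn’s actual code:

```python
# Illustrative sketch only: a shared backbone with task-specific heads,
# loosely in the spirit of the multi-head "HydraNet" idea.
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor ("trunk") run once per image
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads branching off the shared features (made-up tasks)
        self.weed_head = nn.Linear(32, 2)     # e.g. crop vs. weed
        self.health_head = nn.Linear(32, 3)   # e.g. healthy / pest / disease

    def forward(self, x):
        features = self.backbone(x)
        return self.weed_head(features), self.health_head(features)

net = MultiHeadNet()
weed_logits, health_logits = net(torch.randn(1, 3, 224, 224))
```

The appeal is that the expensive backbone only runs once per image, while each additional task costs just a small head on top.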
One feature I am interested in is accurately GPS-stamping images coming off of the camera system. I want to use a microcontroller to track shutter activations and correlate them with IMU-fused GPS data to get very accurate stamps. This seems like it would be useful for things like photogrammetry/NeRF, or just building an image browser where a farmer can click through and see images based on location.
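The correlation step itself is simple once the timestamps line up. Here is a hypothetical Python sketch: given a shutter timestamp captured by the microcontroller and a log of IMU-fused GPS fixes, interpolate the position at exposure time. Field names and numbers are made up for the example:

```python
# Hypothetical sketch: interpolate an IMU-fused GPS position at the shutter instant.
from bisect import bisect_left

def position_at(shutter_t, fixes):
    """fixes: list of (t, lat, lon) tuples sorted by t."""
    i = bisect_left([f[0] for f in fixes], shutter_t)
    if i == 0:
        return fixes[0][1:]
    if i == len(fixes):
        return fixes[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = fixes[i - 1], fixes[i]
    a = (shutter_t - t0) / (t1 - t0)  # linear blend between bracketing fixes
    return lat0 + a * (lat1 - lat0), lon0 + a * (lon1 - lon0)

fixes = [(0.00, 37.1000, -122.3000), (0.20, 37.1001, -122.3001)]
print(position_at(0.15, fixes))  # estimated position when the shutter fired
```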
I’ve worked with NeRF a little and found that it’s very important to have a large variety of viewpoints of the object in question. If you have multiple plant-facing cameras, then you can use the rigid transformations between them when feeding NeRF (these can be worked out from camera calibration using a checkerboard). SLAM (or IMU-fused SLAM) might give accurate enough transformations to use multiple sets of images (taken with the cart at different positions) to flesh out the set you give to NeRF.
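If it helps, here is a rough sketch of that calibration step using standard OpenCV calls: detect the checkerboard in paired images from two rigidly mounted cameras, calibrate each camera, then recover the rotation and translation between them. File names, board dimensions, and square size below are placeholders, and in practice you would want a few dozen board views per camera for a stable solution:

```python
# Sketch: recover the rigid transform between two plant-facing cameras
# from shared checkerboard views (placeholder file names and board size).
import cv2
import numpy as np

board = (9, 6)     # inner corners of the checkerboard
square = 0.025     # square size in metres
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, pts_a, pts_b = [], [], []
for name_a, name_b in [("cam_a_01.png", "cam_b_01.png")]:  # placeholder pairs
    img_a = cv2.imread(name_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(name_b, cv2.IMREAD_GRAYSCALE)
    ok_a, corners_a = cv2.findChessboardCorners(img_a, board)
    ok_b, corners_b = cv2.findChessboardCorners(img_b, board)
    if ok_a and ok_b:
        obj_pts.append(objp)
        pts_a.append(corners_a)
        pts_b.append(corners_b)

# Calibrate each camera, then solve for the rotation R and translation T
# mapping points from camera A's frame into camera B's frame.
_, K_a, d_a, _, _ = cv2.calibrateCamera(obj_pts, pts_a, img_a.shape[::-1], None, None)
_, K_b, d_b, _, _ = cv2.calibrateCamera(obj_pts, pts_b, img_b.shape[::-1], None, None)
_, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, pts_a, pts_b, K_a, d_a, K_b, d_b, img_a.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
```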
NeRF is usually very slow, though I recall there was a voxel-based approach recently that didn’t use multilayer perceptrons and which sped things up rather dramatically (minutes rather than hours per scene). If NeRF is too slow (you’re looking at a significant delay per plant/per scene) but you can afford RGBD (or use dense direct-method SLAM, or other structure-from-motion, with a regular camera), then you could instead build a dense RGB-colored point cloud of the plant and work from that. Open3D is handy in that regard. The point cloud is also a much more amenable representation to work with than something embedded in a neural network.
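As a concrete example of the point-cloud route, Open3D can turn one RGB frame plus one depth frame into a colored cloud in a few lines. The file names and the intrinsic preset below are placeholders, not anything from this project:

```python
# Minimal Open3D sketch: fuse an RGB frame and a depth frame into a colored point cloud.
import open3d as o3d

color = o3d.io.read_image("plant_color.png")   # placeholder RGB capture
depth = o3d.io.read_image("plant_depth.png")   # placeholder depth capture
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth)

# Stock intrinsic preset for the sketch; a real rig would use its calibrated values.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

o3d.visualization.draw_geometries([cloud])     # inspect the plant in 3D
```

From there you can crop, downsample, or register multiple clouds together, which is often easier to reason about than a scene baked into network weights.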
SLAM systems that navigate using a camera can fill a useful role between GPS and IMU accuracy. Of course, they can also work without either, or with just one of the two. They do require a bit of processing power to run in real time. Equipping the SLAM system with an IMU will also ensure it has a good estimate of scale (which is otherwise impossible to recover without binocular vision and a known distance between the cameras).
Canopy NDVI
Oops M, I think I made my comment in parallel rather than in direct response. See:
https://hackaday.com/2022/03/17/open-source-farming-robot-now-includes-simulations/#comment-6447813
The structure and the video are more concerned with the electronics than the end applications. I hear about machine learning, vision, etc., but how about something practical, like weed cutting (which doesn’t need an RPi to operate), ditch cutting, and so on? Maybe with attachable tools.
By the way, I would have done the same with two motors instead of four.
We definitely have practical tools in mind! The RPi means the system can be autonomous. And it actually has eight motors, since there are steering motors too. The four-wheel drive is wonderful for traction. Tools are coming, but we are refining the main vehicle so we can ship early-adopter kits of that and develop tools as a community. This project will take a long time, and we are moving slowly but surely ahead.
Modular approach is great. Once you get some tinker-able bots in the hands of some third party real world applications, things will really take off.
Hi Taylor
I have seen your project popping up here and there for a while, and kudos for maintaining a pace.
Having a small farm and living in small-farm country, I am interested and have some questions.
Will you incorporate soil testing: pH, nitrogen, phosphorus?
Will you use infrared cameras for disease diagnosis or soil temperature?
Are you considering insect control? Your physical platform of a big box would be a great spot to contain any extraneous laser energy. Or are you considering mechanical insect control, or other means: fans, vacuums, pheromones, sticky traps?
How about a robot for installing row tunnels? Or has that been done?
If it all gets to be too much, then I can see breaking different tasks into separate units and hitching the units together like a train, or a big bug which would wind its way through a field.
David
I can add predator control to the list.
It seems like a robot could sit idle in the field overnight and watch for predators, and take some action if one is seen.
Or alert the farmer if the cows or sheep get out of their pens.
A reasonable action might be to alert the farmer. I can also imagine an ultrasonic cannon on a turret that can be turned to the direction of the predator to scare it away. (This works for woodpeckers damaging my house.)
Any firearm-related response should probably be done by direct human action, but the robot could certainly be used as a spotter for this.
“but the robot could certainly be used as a spotter for this.”
Oh, is *that* what “field artillery” is referring to?