MicroGPS Sees What You Overlook

the microGPS pipeline

GPS is an incredibly powerful tool that allows devices such as your smartphone to know roughly where they are, with an accuracy of around a meter in some cases. However, that is still too coarse for many use cases, and the accuracy drops considerably indoors, which is why warehouse robots often rely on barcodes on the floor instead. In response, researchers [Linguang Zhang, Adam Finkelstein, and Szymon Rusinkiewicz] at Princeton have developed a system they call MicroGPS that uses pictures of the ground to determine its location with sub-centimeter accuracy.

The system has a downward-facing monochrome camera with a light shield to control exposure. The camera output feeds into an Nvidia Jetson TX1 for processing. The idea is actually quite similar to that of an optical mouse, which is often little more than a downward-facing, low-resolution camera with some clever processing. But rather than capturing relative position like a mouse, the researchers are trying to capture absolute position. Imagine picking up your mouse, dropping it on a different spot on your mousepad, and having the cursor snap to the corresponding spot on the screen. To our eyes, far above the surface, asphalt, tarmac, concrete, and carpet all look quite uniform. But to a macro camera, there are cracks, fibers, and imperfections that are distinct and recognizable.

They sample the surface ahead of time, stitching all of the images together into a globally consistent map. Then, while moving around, they extract features from each new frame and use a voting method to filter out the numerous false-positive matches. The system is robust enough to work on an outdoor road even a month after the initial dataset was captured. They scattered leaves on the ground to try to fool it, but navigation remained remarkably stable.
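
To get a feel for how that works, here is a rough sketch of the matching and voting idea using off-the-shelf OpenCV. To be clear, this is not the authors' code: the file name, the 10 cm voting grid, and the assumption that every map descriptor carries a known global (x, y) are simplifications for illustration, and the paper's actual voting scheme is more involved.

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <algorithm>
#include <iostream>
#include <map>
#include <vector>

int main() {
    // Query frame from the downward-facing camera (file name is a placeholder).
    cv::Mat frame = cv::imread("query.png", cv::IMREAD_GRAYSCALE);

    // Pre-built map: one SIFT descriptor per row plus the global (x, y), in
    // meters, of the feature it came from. Loading these from the stitched
    // survey images is left out of this sketch.
    cv::Mat mapDescriptors;                 // CV_32F, N x 128
    std::vector<cv::Point2f> mapPositions;  // N global positions

    if (frame.empty() || mapDescriptors.empty()) {
        std::cerr << "need a query image and a pre-built map to run\n";
        return 1;
    }

    // Extract SIFT features from the query frame (OpenCV 4.4+ has SIFT built in).
    auto sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    sift->detectAndCompute(frame, cv::noArray(), keypoints, descriptors);

    // Approximate nearest-neighbor matching against the map database (FLANN).
    cv::FlannBasedMatcher matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors, mapDescriptors, matches);

    // Voting: each match votes for the 10 cm map cell its database feature
    // sits in. Bad matches scatter their votes; good ones pile up in one cell.
    const float cell = 0.10f;  // meters
    std::map<std::pair<int, int>, int> votes;
    for (const auto& m : matches) {
        const cv::Point2f& p = mapPositions[m.trainIdx];
        votes[{static_cast<int>(p.x / cell), static_cast<int>(p.y / cell)}]++;
    }

    // The winning cell is the coarse location; a real pipeline would then
    // verify the hypothesis and refine the pose from the inlier matches.
    auto best = std::max_element(votes.begin(), votes.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; });
    std::cout << "best cell (" << best->first.first << ", " << best->first.second
              << ") with " << best->second << " votes\n";
    return 0;
}

The appeal of the voting step is that bad matches scatter across the map while correct ones pile up in a single cell, which is why the system can shrug off a fair amount of surface change like those leaves.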

Their paper, code, and dataset are all available online. We’re looking forward to fusion systems that combine GPS, WiFi triangulation, and MicroGPS to provide a robust and accurate position.

Video after the break.

45 thoughts on “MicroGPS Sees What You Overlook”

    1. So after posting this I had to do some scheduled procrastination, and went reading. I was technically wrong – early cruise missiles used a version of this with radar, not cameras. Optical navigation was done much later.

      1. Tell us more about your scheduled procrastination. Is this something you do or were you being sarcastic and claiming that the procrastination was on purpose?

        I once heard a story of a med school student who worked (studied) in 12-minute intervals with one- or two-minute breaks in between. He had timed his attention span and his ability to learn/remember material, and he had the discipline to maintain that schedule. Made for odd study sessions for people who didn't know beforehand. Apparently it worked for him.

        1. Interesting. I had not heard about the 12-minute study/work loop. I have heard about research on a 20-minute project focus/break schedule. I wonder how he dealt with context shifts (i.e. going between two different projects). I would love to hear more about this.

          1. I’ll see if I can get more information and circle back here in the next few days.

            Odds are this guy has retired by now, since he went to med school in the mid 70s. It would be interesting to follow up with him to see if he maintained this mentality in other portions of his life after leaving school (because I can’t imagine a doctor being able to maintain it in their professional one).

  1. GPS = Global Positioning System.

    Not sure what is global about a system that requires high detailed photos of a local area. It’s not like you can scan in the whole world at high resolution and store that in a system…

      1. My first reaction was that this was a cool project, but it has nothing to do with GPS. If it is not a Global Positioning System, the authors should not have used the acronym GPS in the name without some tie to the commonly recognized acronym. Using the name MicroGPS came across to one of my colleagues, an expert in geodesy and cartography, as naive and misleading. While I understand the uses and benefits of photogrammetry, and have also experimented with setting up databases to quickly search billions of focal points, the name is frankly misleading, unless they use a Global Positioning System (GPS) to get you in the ballpark and the mGPS system to get you down to the mm or better. So, @Chris, I think I do get the point of the exercise of being able to position autonomous systems down to the mm, but I will still contest that it was named badly. There is also a rich literature in photogrammetry, and as I read the paper and inspect the code I will see whether they knew about it, whether the title just needs a tweak, or what is going on (one of the advantages of releasing code and data to replicate a project; kudos to them for that).

        All that said, it is an impressive project nonetheless. I look forward to reviewing it closely.

        1. a couple of updates…

          The project uses SiftGPU (a GPU-accelerated version of the Scale-Invariant Feature Transform algorithm), FLANN (the Fast Library for Approximate Nearest Neighbors), and OpenCV for the general image processing. SIFT (and ORB, which is not used here) are standard tools for this kind of work, so I already have a very good idea of how they are doing everything. I was going to run a couple of tests, but their build system does not work, and it will probably take a non-trivial effort to actually get it building and running.
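
          For anyone who has not used FLANN before, here is a tiny self-contained example (mine, not code from their repository) of what the approximate nearest-neighbor step looks like through OpenCV's wrapper. The random matrices are just stand-ins for real 128-dimensional SIFT descriptors:

          #include <opencv2/core.hpp>
          #include <opencv2/flann.hpp>
          #include <iostream>

          int main() {
              // Fake "database": 10,000 descriptors of dimension 128 (SIFT-sized),
              // filled with random values as a stand-in for real map descriptors.
              cv::Mat database(10000, 128, CV_32F);
              cv::randu(database, cv::Scalar::all(0), cv::Scalar::all(1));

              // Build a randomized kd-tree index; more trees = better recall, slower build.
              cv::flann::Index index(database, cv::flann::KDTreeIndexParams(4));

              // One query descriptor with the same dimensionality.
              cv::Mat query(1, 128, CV_32F);
              cv::randu(query, cv::Scalar::all(0), cv::Scalar::all(1));

              // Ask for the 5 approximate nearest neighbors; "checks" trades speed for accuracy.
              cv::Mat indices, dists;
              index.knnSearch(query, indices, dists, 5, cv::flann::SearchParams(64));

              for (int i = 0; i < indices.cols; ++i)
                  std::cout << "neighbor " << indices.at<int>(0, i)
                            << " squared distance " << dists.at<float>(0, i) << "\n";
              return 0;
          }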

          BTW, when researching this I also found the name MicroGPS used for miniaturized GPS modules… Like I said, the name does not resolve well.

          1. I finally had a moment to poke at this, and here are the changes I needed to make to get it to work:

            * in CMakeLists.txt, bump the cmake_minimum_required (VERSION 2.6) value up to whatever version you have installed (3.20.5 worked for me)

            * symlink config_linux.cmake to config.cmake, or modify CMakeLists.txt to include config_linux.cmake

            * add the following lines to image.h to support OpenCV 4 (I compiled against 4.5.2):
            #include "opencv2/imgcodecs/legacy/constants_c.h"
            #include "opencv2/imgproc/types_c.h"

            I still have to resolve some missing GL libraries in the linker, but if I get it sorted I will reply or edit this post. As a note, the build system was not as bad as I originally feared — mostly just hard coded locations of files.

          2. Hackaday's interface would not allow me to edit my last comment. This is the last of the magic sauce to get it to compile:

            * in execute/CMakeLists.txt, wherever you see 'gflags' in the target_link_libraries, add "OpenGL GLEW glut lz4" to resolve the OpenGL dependencies.

            As a note, this has only been built on Gentoo Linux, and I have not run the program yet, just got it compiled.

            Anyway, I hope this is useful to others interested in the project.

    1. Because the camera system is an absolute position detection method and doesn't accumulate error over time the way a relative positioning system would. Wheels are prone to wear and slippage, which an encoder can't account for, at least not without additional sensors or information. The camera system is also a nice fail-safe, because if the robot crashes it can fairly quickly determine its position again.

      You only need to calibrate once to get these benefits. Afterwards you could probably update the system live.
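
      If it helps to see why the error accumulation matters, here is a toy simulation (my own, with purely illustrative numbers): a wheel-encoder estimate drifts because each step's error compounds, while an absolute fix like the camera sits at roughly the same error no matter how far the robot has driven.

      #include <cstdio>
      #include <cmath>
      #include <random>

      int main() {
          std::mt19937 rng(42);
          std::normal_distribution<double> encoderNoise(0.0, 0.002);  // 2 mm error per step
          std::normal_distribution<double> absoluteNoise(0.0, 0.005); // 5 mm error per fix

          double truePos = 0.0, encoderEstimate = 0.0;
          const double stepLength = 0.10;  // robot moves 10 cm per step

          for (int step = 1; step <= 10000; ++step) {
              truePos += stepLength;
              // Dead reckoning: integrate noisy step measurements, so error compounds.
              encoderEstimate += stepLength + encoderNoise(rng);
              // Absolute fix: re-measured from scratch every step, so error stays bounded.
              double absoluteEstimate = truePos + absoluteNoise(rng);

              if (step % 2500 == 0)
                  std::printf("step %5d  encoder error %.3f m  absolute error %.3f m\n",
                              step, std::fabs(encoderEstimate - truePos),
                              std::fabs(absoluteEstimate - truePos));
          }
          return 0;
      }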

      1. In theory you could also calibrate as you navigate and incorporate a map of things to ignore, further increasing the accuracy. Perhaps even making the ignore list seasonal or periodic, say for day and night.

  2. I’m surprised this hasn’t been done before!

    From what I remember, crabs use a similar system of navigation, but instead of light they use fluctuations in the magnetic field to navigate.

  3. In a warehouse or other well-used area, how do they account for crud, spills, and marks appearing and disappearing on the floor?

    As an alternative you could use the new-fangled Bluetooth positioning system.

    1. I was about to ask exactly the same question. I work in a warehouse as a maintenance engineer, and there's absolutely no way this would work on one of our floors with the amount of dust and general dirt that accumulates.

    2. +1. Some sort of local system using receiver/transmitter devices to track things is much better in places like that. True satellite positioning (which this project is not) doesn't work in caves or heavy concrete buildings.

    3. The video does mention that they experimented with how much of a given image could vary (by putting leaves on it) before it broke the localization, and they found that it worked as long as at least half the image matched. Still, I suspect you'd need some sort of dead reckoning to really make it work in the real world, with this technique only used to keep it calibrated.

    4. At first I thought about using this in farming, but when I saw it's based on visible terrain features I realised it won't work, since tire tracks in the mud change even when the same robot passes over them.

      1. I think it might be possible to get it to work in a farming context, depending on how you filter the images. That said, if you are thinking of just tracking the ground, then yea, you are likely out of luck. I'm thinking more about triangulating on trees, large rocks, fence posts, and things that do not change all *that* often. Anyway, if you are interested in looking into this, PM me and we can discuss testing it. BTW, I own a small farm and might be able to test this.

  4. The Livescribe pen I use to record notes uses an optical system for absolute positioning that works with sub-mm accuracy. The paper has tiny dots placed in specific patterns, and the pen has a camera that sees the dots and whatever I'm writing and records it. IIRC the absolute positioning space available is larger than the surface of the earth.

  5. Usually databases are represented in charts as three stacked cylinders. I’m assuming the original intent was to look like a stack of hard disks to represent storage, but I’ve heard people call it a barrel or (humorously) a trash can.

    But from now on I shall represent databases in charts as a stack of concrete slabs. Thank you.

  6. The missile knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is – whichever is greater – it obtains a difference or deviation. The guidance subsystem uses deviations to generate corrective commands to drive the missile from a position where it is to a position where it isn't, and arriving at a position that it wasn't, it now is. Consequently, the position where it is is now the position that it wasn't, and it follows that the position that it was is now the position that it isn't. In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation. The variation being the difference between where the missile is and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the missile must also know where it was. The missile guidance computer scenario works as follows: Because a variation has modified some of the information that the missile has obtained, it is not sure just where it is. However, it is sure where it isn't, within reason, and it knows where it was. It now subtracts where it should be from where it wasn't, or vice versa. And by differentiating this from the algebraic sum of where it shouldn't be and where it was, it is able to obtain the deviation and its variation, which is called error.

    https://www.youtube.com/watch?v=bZe5J8SVCYQ

        1. Hunt around. I have seen projects where they hack the mouse sensor and process the imagery directly. It is likely that you can find all the pieces off-the-shelf and hook it into the microgps code. Also, if I am not mistaken, most mouse image sensors are only 64×4 or maybe 128×128 pixels, BUT they produce many hundreds, or even thousands, of images per second. If you do get this working, let me know; I would love to see it. It would make a good Hackaday project as well ;-)

  7. “We’re looking forward to fusion systems that combine GPS, WiFi triangulation, and MicroGPS to provide a robust and accurate position.”
    All this gear will be useless if you can't get a robust and accurate point of reference. Ultra-accurate relative measurements will never refine your absolute position without a dependable origin point. However, the proposal here is very interesting for improving position measurements relative to an arbitrary origin point.

    1. For the most part I agree with you, but I can still see how this could be useful in a warehouse, building, or home, as long as you can quickly find the absolute position from the database using FLANN. I have not used FLANN before and am not sure how well it can make that step. That said, if it can, I have a different project in mind and will see if I can get hold of an old database of a million-plus images…
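
      As a very rough sketch of what a fused position could look like, here is plain inverse-variance weighting of a coarse absolute fix and a fine ground-texture fix. The numbers are made up and this is not from the paper or any real GPS stack, just an illustration of how the two could complement each other:

      #include <cstdio>
      #include <cmath>

      struct Fix {
          double x, y;   // position estimate in meters (both fixes in the same frame)
          double sigma;  // 1-sigma uncertainty in meters
      };

      // Combine two independent estimates, weighting each by 1/sigma^2.
      Fix fuse(const Fix& a, const Fix& b) {
          double wa = 1.0 / (a.sigma * a.sigma);
          double wb = 1.0 / (b.sigma * b.sigma);
          Fix out;
          out.x = (wa * a.x + wb * b.x) / (wa + wb);
          out.y = (wa * a.y + wb * b.y) / (wa + wb);
          out.sigma = 1.0 / std::sqrt(wa + wb);
          return out;
      }

      int main() {
          Fix gps{12.4, 7.9, 2.0};        // meter-level satellite fix
          Fix microGps{13.02, 8.11, 0.005}; // centimeter-level ground-texture fix
          Fix fused = fuse(gps, microGps);
          std::printf("fused: (%.3f, %.3f) +/- %.3f m\n", fused.x, fused.y, fused.sigma);
          return 0;
      }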
