GPS is the modern answer to the ancient question about one’s place in the world, yet it has its limitations. It depends on the time of flight of radio signals emitted by satellites twenty thousand kilometers above you. Like any system involving large distances and high velocities, this poses challenges to precise measurement, which in turn limits the achievable accuracy. In other words: the fact that GPS locations tend to be off by a few meters is rooted in the underlying principle of operation.
Today’s level of precision was virtually unattainable just decades ago, and we’re getting that precision with a handheld device in mere seconds. Incredible! Yet the goal posts continue to move and people are working to get rid of the remaining error. The solution is called Differential GPS or ‘DGPS’ and its concept looks surprisingly simple.
What’s fascinating is that you can use one GPS to precisely measure the error of another GPS. This is because the inherent error of a GPS fix is known to be locally constant. Two receivers next to each other pick up signals that have been affected in the same way and thus can be expected to calculate identical wrong positions. This holds true for distances of up to several kilometers between individual receivers. So in order to remove the error, all you need is a GPS receiver in a known location to measure the current deviation and a way to transmit correction information to other units. DGPS does just that, using terrestrial radio in some regions and satellites in others. Mobile solutions exist as well.
So a raspi with a USB GPS dongle in a known location should be able to act as a DGPS over IP base station, right? In theory, yes. In practice… fail.
Setting Up The Base Station
The hardware for a complete DGPS system is pretty minimal and quite affordable. Many of the necessary items you probably already have on hand.
A base station is no more than a computer connected to the Internet and a GPS module. In this case I used a Raspberry Pi from the parts bin and connected a USB GPS receiver dongle using a five meter USB extension. The important thing is to install it in a known location, a task you can hire out, but it’s more fun to roll your own, and I’ll get to that in a minute.
For the receiving end, I had intended to provide accurate initial positioning for smartphones as part of an augmented reality project. I did not quite get there, so I replaced the Android app with a netbook and a handheld GPS.
The first steps were to find a suitable location for the receiver station and to weatherproof everything. The barbecue hut in my backyard provided shelter for the raspi and even power. The GPS dongle was ziptied to a length of bamboo and covered in a plastic bag. The overly long cable had a nice amount of slack, so rain running along it would drip off before it got to the computer.
The Software Side
Installing and reading the output of the dongle is charmingly effortless under Linux. The interface is serial over USB and the protocol is standard NMEA 0183 text. Getting this to work only required me to turn off echoing for the relevant port and then read whatever poured out of the dongle. NMEA 0183 is a well documented standard and sufficiently human readable. To make things even easier, Python libraries to handle NMEA and geospatial data exist and work great.
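For illustration, here is a minimal stdlib-only sketch of pulling a position out of the GGA sentences the dongle emits (field layout per NMEA 0183); in practice a library like pynmea2 does this for you, checksums and all:

```python
# Minimal NMEA 0183 sketch: extract latitude/longitude from a $GPGGA
# sentence. Illustrative only; a real NMEA library handles the full
# standard, including checksum validation.
def parse_gga(sentence):
    """Return (lat, lon) in decimal degrees from a GGA sentence, or None."""
    fields = sentence.strip().split(",")
    if not fields[0].endswith("GGA") or not fields[2]:
        return None

    def dm_to_deg(dm, hemi):
        # NMEA packs degrees and minutes together: ddmm.mmmm / dddmm.mmmm
        dot = dm.index(".")
        degrees = float(dm[:dot - 2])
        minutes = float(dm[dot - 2:])
        value = degrees + minutes / 60.0
        return -value if hemi in ("S", "W") else value

    return dm_to_deg(fields[2], fields[3]), dm_to_deg(fields[4], fields[5])
```

Feeding it the textbook example sentence `$GPGGA,123519,4807.038,N,01131.000,E,...` yields roughly (48.1173, 11.5167).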
As mentioned, knowing the exact location of your base station is crucial to this project. There are several ways to obtain it. You could hire a surveyor to precisely measure the location of your antenna or acquire a sufficiently accurate map of your site. You can also resort to brute force: Assuming that the GPS error is evenly distributed over time, just logging enough positions and averaging them should result in the correct location. So I started to write the GPS fixes into a file and let the installation sit for four days. This provided me with over 70,000 fixes that were fed to a Python script for clean up and averaging. Cross checking with OpenStreetMap put the result right in my backyard and above sea level, so I was confident.
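The averaging step amounts to very little code. The one-fix-per-line “lat,lon,alt” log format below is just an assumption for illustration, not my actual file layout:

```python
# Average a log of GPS fixes to estimate the true antenna position.
# Assumes one "lat,lon,alt" line per fix; malformed lines are skipped
# as part of the clean-up. Plain arithmetic averaging is fine here
# because all fixes cluster within a few meters of each other.
def average_fixes(lines):
    lats, lons, alts = [], [], []
    for line in lines:
        try:
            lat, lon, alt = (float(x) for x in line.split(","))
        except ValueError:
            continue  # clean-up: drop anything that doesn't parse
        lats.append(lat)
        lons.append(lon)
        alts.append(alt)
    n = len(lats)
    return sum(lats) / n, sum(lons) / n, sum(alts) / n
```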
All that remained was to whip up a script that continuously listened to the GPS and subtracted the known position to acquire the current error and a second one to deliver the info to the web. Again, Python makes this easy and now I have what you might call an error beacon.
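A bare-bones sketch of such a beacon might look like the following; the surveyed coordinates, the port, and the current_fix() stub are placeholders rather than my actual scripts:

```python
# Error beacon sketch: compute the current deviation from the surveyed
# base position and serve it as JSON over HTTP.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

KNOWN_LAT, KNOWN_LON = 51.000000, 7.000000  # surveyed position (placeholder)

def current_fix():
    # Stub: in the real setup this would return the latest fix read
    # from the GPS dongle.
    return 51.000012, 6.999985

def correction(fix_lat, fix_lon):
    """Offset between what the base measures and where it really is."""
    return {"dlat": fix_lat - KNOWN_LAT, "dlon": fix_lon - KNOWN_LON}

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        lat, lon = current_fix()
        body = json.dumps(correction(lat, lon)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def run_beacon(port=8080):
    HTTPServer(("", port), BeaconHandler).serve_forever()
```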
Client Hardware and Software
The originally intended target platform for the client was Android. This is not an environment I’m familiar with and after reading the documentation on Android location services I postponed the app and instead decided to first build a test rig. I used a netbook and a handheld GPS for which I had made a USB cable ages ago.
The software did nothing more than fetch the correction calculated by the base station via HTTP and then subtract it from the fixes provided by the GPS. To assess the performance of this solution, the client machine was placed in my backyard and set up to log the corrected fixes for some time. If my implementation worked, the recorded positions should be stationary and precisely indicate the location of my garden chair.
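The client logic boils down to one subtraction per fix; the beacon URL below is a placeholder:

```python
# Client sketch: fetch the base station's offset and subtract it from
# the local raw fix. The URL is a placeholder for wherever the beacon
# publishes its JSON.
import json
from urllib.request import urlopen

def apply_correction(raw_lat, raw_lon, corr):
    """Subtract the beacon's measured error from a raw fix."""
    return raw_lat - corr["dlat"], raw_lon - corr["dlon"]

def corrected_fix(raw_lat, raw_lon, beacon_url="http://example.invalid/correction"):
    with urlopen(beacon_url) as resp:
        return apply_correction(raw_lat, raw_lon, json.load(resp))
```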
Why Did This Fail?
This is a ‘Fail of the Week’ so I’m sure you already guessed that my location data from this test was completely unreliable. The positions showed the same amount of spread as the raw data that I had used earlier to locate the base station!
Something clearly was not working as expected. I checked my code for errors, then tried some random modifications, hoping to infuse correctness by accident and finally took a break to do what should have been the first step of this project: testing my assumptions about the working principle of DGPS.
To achieve this, an hour of positions was logged by both the GPS dongle and the handheld while the two receivers were placed right next to each other. NMEA provides a timestamp for each fix, so it was easy to correlate the measurements of the two devices. As it turned out, even if the GPS error is locally constant, the way it affects individual receivers may differ.
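The correlation step can be sketched as pairing fixes by their NMEA timestamps; the {timestamp: (lat, lon)} log layout is assumed for illustration:

```python
# Pair up fixes from two receivers by NMEA timestamp and compute the
# per-epoch offset between them. If the DGPS assumption held for these
# units, the offsets would be nearly constant; in my test they were not.
def per_epoch_offsets(log_a, log_b):
    offsets = {}
    for t, (lat_a, lon_a) in log_a.items():
        if t in log_b:
            lat_b, lon_b = log_b[t]
            offsets[t] = (lat_a - lat_b, lon_a - lon_b)
    return offsets
```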
What is the Path to Success?
One could say that you should always test your basic assumptions. “Never assume anything” is an often repeated mantra in all flavours of development. But sometimes this is not feasible and often assumptions are not discernible from common sense.
For example, when building an autonomous underwater robot, do you really want to test the wetness of water, the buoyancy of lead, and the effects of moisture on electronics before you begin? The German language provides the term ‘gefühltes Wissen’ — perceived knowledge — for such situations. Perceived knowledge is sometimes dangerous but often hard to avoid and always hard to identify.
So far this project has let me familiarize myself with the intricacies of GPS, seriously brush up my bash-fu, and finally build my first internet-connected thing. It doesn’t work as intended, and that is where I want to tap into the power of the Hackaday comments section. Do you think the basics of this DGPS system are in place? How can a simple and inexpensive system like this be put back on track and ultimately achieve accuracy greater than a single, commercially available GPS receiver?
I hope that together we can turn this article about learning from failure into one entitled: ‘How To Build Your Own DGPS Base Station’.
- DGPS over IP has been done successfully by a team of Italian researchers to position autonomous robots with decimeter precision in mountainous terrain. I didn’t copy their approach because I somehow can’t find their paper again.
- Local DGPS is used in agriculture and I have the strong suspicion that these devices employ WiFi or Bluetooth as a communication layer, because it’s easy to use and saves you a lot of headache regarding the legal aspects of radio.
- Sanitizing your exact location out of software that is designed to provide just that is a major piece of work. Also my scripts have not been written to hold up to public scrutiny so I’ll keep them to myself for now.
79 thoughts on “Fail Of The Week: How Not To Build Your Own DGPS Base Station”
To make this work your base station and mobile station need to use exactly the same satellites, otherwise the error is not the same. If you watch your animation you can almost see the movement of the satellites, the resulting error drift, and the points in time when a switch to other satellites happens.
I’m amazed how simple this sounds. I’d be very interested to know if this is in fact the issue.
The error introduced by each satellite is different, so the overall error is different for different receivers using different satellites. This is the stumbling block that most DIY DGPS systems encounter.
I would expect the error relative to each satellite to be locally constant, but the error in the combined result data to be variable due to different receivers seeing different subsets of the constellation? It just means you need to work at a lower level.
Without a tally of active satellite connections it’s gonna be hard to troubleshoot. Cheap units might be terrible at reception in addition to noise from their own penny pinching designs.
It might take multiple units on both the base and mobile station to get usable numbers.
(Typing from ignorance here) Does the NMEA data list the satellite(s)?
If so, maybe filter out the data of any satellite that does not show up on both records before computing position?
It does provide information on which satellites were used and also on the strength of each signal. The problem is, this happens after the fix is already calculated, so the contribution of individual satellites can not be discerned.
So once the GPS unit decides “good enough” it stops trying to get a fix from all satellites?
Would requesting a new fix and using that be better than polling the existing satellites?
That’s along the lines of what I was thinking. You can download apps for your phone that show how many GPS satellites your phone can see at any one time.
You may be comparing 4-sat fixes to those with 9. It may be worthwhile to ignore fixes with fewer satellites or those with more than a certain number. That won’t make it inherently more accurate (in theory 4 satellites will provide just as good a fix as 9) but it should at least make it more consistent.
Multipath can be another huge issue with some of the antennas used on GPS units, as they are normally used in varying orientations, but yours will be stationary, so I would make a better antenna. I have not tried the design below, but it was the first DIY one I found in a minute, and I have seen better designs on some ham sites.
Choke ring GPS antennas are used on Trimble and other stationary installations.
Why do you think the error will be in the same direction for both receivers?
DGPS doesn’t shift the lat-lon coordinate, it shifts the time signal of each satellite. The simple GPS dongles have a chip that reads the satellite signals, then uses firmware to calculate the position and output it via NMEA ASCII. You have to use a GPS dongle that supports raw mode (only a few do), then use external software like RTKLIB to compute the position using DGPS adjustments on each channel.
I already had a hunch that I would have to dissect the individual channels and their contribution to the resulting error. Until now I feared I’d have to modify an existing SDR GPS implementation or build my own to get them, so this is good news :)
This sounds right. When we did DGPS for aerial work we had huge raw files and software that would match up which satellites were used. It was a pain and took hours to match up and correct.
We achieved cm accuracy for photos.
RTK integrates IMU style data with GPS in a fusion algorithm. RTK and DGPS are not necessarily the same thing!
I think you need to read up on RTKLIB, it uses nothing more than the raw satellite data, not IMU type feeds in addition. It gains precision by measuring phase differences in addition to the timing signals, but requires a GPS receiver that provides the extra information that isn’t part of the standard NMEA sentences.
RTK is a type of DGPS. Not only does it correct for variable ionospheric delay (what the original DGPS did) but also uses the phase of the received signals to give higher precision.
Some in the used market might be cheap. Also I bet that list is rather incomplete, for obvious reasons.
carlhage3 is barking up the right tree: The error in GPS measurements is more a result of timing uncertainty. Correcting the time uncertainty is key to improving the accuracy. Timing accuracy is one of the key reasons why the encrypted military system is more accurate; more accurate timing can be derived from the signal. There are GPS modules that can lock to the phase of the encrypted signal and make timing corrections to get better accuracy. These modules can get less than a meter accuracy in good conditions. It usually requires two receivers though: one stationary one to measure the local timing error and the other unit consuming the error measurements, kinda like a local DGPS system. The NEO-P series modules are one example of a module designed to work in this type of setup. They are not cheap though as far as modules are concerned, running over $100 a pop!
There’s good news on the module front. U-blox now sells the NEO-M8T, they’re 80$ in singles from digikey, and steeply discount down to 35$ @100. ~150$ for a complete system at quantity is miles better than when I first started working with this technology just a few years ago.
The Neo-6m can be configured to output raw mode. It’s $3.50 on AliExpress.
Either will give atmospheric corrections, which gives you ~3m. Most of the time, but not always, when people say DGPS, they mean atmospheric corrections. Other times, they mean carrier phase differential.
Doing carrier phase differential requires a high precision local clock, the m6 uses just a crystal, while the m8 uses a TCXO. I’m not sure how much impact that has on the real-world performance, but I suspect it will be significant. Do you know if anyone has attempted to do carrier phase with the 6m?
The cheap “NEO-6M”s on Aliexpress often aren’t actually NEO-6M based. The one I got was actually a ublox 5 chip (the previous generation to the 6M), which can be made to output raw mode but isn’t so widely supported. There’s no guarantee that what you get will be the same.
Yup. The moment I saw “NMEA”, I knew exactly why it failed.
rtklib is pretty much your best bet for DGPS, it’s a well documented approach, but does require specific receiver hardware.
Hah. I tried doing more-or-less this the Proper Way, by using raw pseudorange measurements from my GPS receiver and post-processing them against a local public base station. My results weren’t really any better though. I wonder if this is just hard to do correctly.
There’s a reason the commercial units are thousands of dollars.
Navspark sells 50$ raw GPS modules. You still need an intermediary running rtklib but definitely less than thousands of dollars total cost.
So… why do you think this isn’t (kinda sorta) working?
I mean, except for the short burst of 100+m differences, if you look at the 2D positions, they’re typically around ~20-50 m apart, even though the absolute position is wandering over a much larger area. I think it’d be more obvious if you took a longer-term average for one of the two positions, as well.
To do it right you really need the raw pseudoranges and then you want to figure out how to filter them best, but I don’t see from just that plot that it’s totally nuts.
My working theory is that the two receivers used different satellites for their fixes. The built in antennas can’t be that great, although a lot has happened in this sector since the Garmin came out 15 years ago. So I assume that the dongle picked up satellites that the older receiver didn’t get. The urban environment doesn’t help here either.
There’s a good way to test this, I think; get a setup identical to your base station, and try it with that. If you’re using equivalent hardware, I would expect them to behave similarly to any given configuration of the gps constellation. If you get a better result there, then you know for certain (alright, not for certain.. but definitely with more confidence) that the satellite selection is causing the problem.
GPS is more complicated than you think. Each receiver gets ephemeris data (description of the orbit and how to know exactly where the sat is) and time data from each satellite and constructs the coefficients of a big Kalman Filter that uses error estimates and past statistics to predict new readings. It is really complicated and embedded in the silicon these days. Receiver quality and antenna don’t make much difference as you either get the data or you don’t. You never get “bad” data. It has always seemed reasonable that putting two of them together and syncing one should give you differential position to a few centimeters. With your base sending out an error based on the readings, versus some average that indicates its “true” position, it should be near perfect. But it doesn’t work that way. The one you “synced” has a different past and a different set of coefficients for predicting error, and other problems beyond my ken.
There are papers out there in the interwebs that give all the hairy details and I think starting with a couple SDR GPS receivers and building from there is a great problem and one a lot of us would like to see tackled.
You might try using two identical receivers. Different receivers do the maths and signal processing in different ways. To achieve really good results it may be required (I can’t remember the details now) to calculate and inject the error information much earlier than NMEA. Regular receivers don’t allow for such operation. It might be possible with sufficiently enough SDR receivers.
* sufficiently powerful SDR
I looked into GPS via SDR and found that it is possible but requires specialized equipment and much better understanding of signal processing than I can bring to bear right now.
Who needs SDR? Just build your own GPS/GLONASS receiver from scratch. http://lea.hamradio.si/~s53mv/navsats/analog.html
…holy sh1t. The light, the brilliance… it burns!
Have you looked at using a WAAS enabled GPS module? If it’s good enough for aircraft to use… https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System
Also using more than just the American GPS.
WAAS capability is pretty common for receivers, the hand held I used certainly has it, and IIRC the dongle as well. Unfortunately WAAS is limited to northern America and the European system that would be relevant for me is not as widely supported. In fact, I didn’t even know EGNOS existed until a minute ago when I double checked the WAAS coverage.
There is also the QZS system which covers Japan and Australia. But I don’t know if it works the same way.
I suspect that the tall building nearby creates problems for you, with echoes and fewer signals to capture from that direction. You should try to place your base in a more open space.
:) But then it would be outside the range of my WiFi.
Seriously though, I’m thinking along the same lines, bad visibility, low number of satellites and possibly multipath headaches are my main suspects for this fail.
You used the wrong technique; you should have used WAAS (Wide Area Augmentation System) to do what you wanted. WAAS uses an additional set of signals from fixed reference points on the earth, which corrects and improves the accuracy of the satellites’ signals. WAAS is an extra receiver that outputs NMEA 0183 text into any GPS receiver that can read it.
You can now get GPS modules with WAAS built-in.
I have WAAS capable receivers, at least one (about the dongle I’m not sure) but I’m also outside the area covered by WAAS, so I had to try something else.
What about EGNOS or MSAS? They are all effectively the same system and receivers that do WAAS should do all three.
Just a SWAG,
any possibility of oscillators in the units interfering with each other over the air?
Just in case, ever hear of NTRIP?
On a similar/different subject,
I recall reading somewhere, some time ago, about stationary GPS units being used in weather research/forecasting, as minute (as in tiny, not 60-second) delays occur in received signals due to atmospheric disturbances (high pressure areas, jet stream, water vapor).
From your first link:
“In what is widely accepted as a proof-of-concept mission for GPS-RO, the University Corporation for Atmospheric Research’s (UCAR) GPS/Meteorology mission flew a GPS receiver aboard a microsatellite from 1995 to 1997.”
I was working there at the time, that’s where I probably first heard of it.
I’m surprised that there’s no mention of Selective Availability (https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System#Selective_availability).
DGPS was conceived to defeat SA. It was disabled in 2000(ish) but DGPS still exists for the reasons mentioned in this article.
I seem to remember SA being disabled during the 1st gulf invasion due to there being insufficient “military” GPS receivers available for the ground forces.
I thought DGPS was around in the 1980’s…
Old Dinosaur here …
Differential GPS was an extension of a Differential LORAN-C project in 1986-1987. Both Differential GPS and Differential LORAN-C used survey grade receivers and post-processing to determine the usefulness of differential correction.
Selective Availability did NOT disrupt Differential GPS. I highly recommend reading:
… the information in that article is quite useful and accurate. Time to go watch for that shiny rock coming this way.
SA was turned off by presidential decree during Bill Clinton’s term in 2000. New GPS satellites haven’t even got the capability. Any wobble in position is down to the accuracy of the clocks in the satellites, plus atmospheric effects.
Ionospheric delay is one of the major contributors here – and probably 90% of the benefits of DGPS approaches are from iono correction.
This is also why military GPS units are dual-frequency – ionospheric delay happens to be a linear function of frequency, so by measuring the time delta between the two frequencies you can determine the absolute delay.
There’s a civilian alternate frequency entering service sometime in the next few years – I believe a few sats have it as a test signal now, but not enough sats have it for it to actually be useful.
So I have an interesting question: could DGPS be done with only one receiver?
Basically, if you can eliminate the error by taking measurements over a long period of time, and you have a GPS receiver with an accelerometer, could you figure out when you’ve been stationary and use the measurements taken during that time to calculate your exact position?
If this is true, then once you have your exact position, you could go over the data collected up to that point (and until the accelerometer is bumped) to build a sort of database of the error vs. time and other factors (like which satellites you’re using)?
If you can do that, then you might be able to use this database as a sort of local “phantom error beacon” that can be used to correct any measurements taken nearby via the error recorded in the database.
Obviously, this only works if the error reliably follows a pattern related to factors which can be known… which makes one wonder why databases of error, or ways to calculate them, don’t already exist. (Unless there’s a component that’s specific to the receiver that’s hard to calibrate out?)
I admit, this article is the first I’ve heard of DGPS, so I’m not familiar with what actually causes the error.
Nice thinking, but one of the major factors that determine the time of flight error that manifests itself in wrong fixes is actually caused by local changes in thickness and composition of the ionosphere. These are chaotic and thus unpredictable. The other factors are interesting as well, for example small variations in the satellite’s orbits caused by solar wind or irregularities in the earth’s magnetic field.
There are already streams of DGPS (DGNSS) information available, called NTRIP; they provide their corrections via RTCM (I would guess RTCM 3). And as of Android 7 or 8, raw GPS/GNSS measurements are available in Android. I hope that helps a bit.
See http://www.navspark.com.tw for inexpensive actual raw carrier phase measurement devices.
You need a base and mobile (or two).
These look great, thank you a lot!
I second that, the NavSpark-GL is nice.
I toyed around with one, now sitting unused somewhere in my pot of electronic gold…
Just a thought. I looked into this before but have not had the time to play.
There was a mob in Christchurch, NZ providing this service when I lived there about 15 years ago. I used a basic yellow Garmin eTrex connected to a laptop via a serial cable, which was connected to my mobile phone using IRDa, then to the Internet over GPRS (115kbps!) I tried it at a survey point and was within 50cm quite consistently. I’ve not seen the service offered anywhere else, other than the expensive systems using dedicated radio channels.
There’s also various services that will take a file of NMEA data amassed over at least a day and process it against their database to give you a pretty accurate fix, especially the elevation.
As said before, raw measurements are needed and the use of RTKLIB is a good choice.
But as I see in your picture, take care with the environment. You are in the middle of buildings and the signal is disturbed. Even with professional devices, a land surveyor is not able to get good precision in cities, or you need to have your base station up on a building.
We used to use DGPS on a product we sold. I have no personal experience, but the data sheets that we have lying around list a DGPS output mode for the receiver, transmitting on the secondary serial port. I do believe that this feature is still in modern GPS receivers. You might want to look up the AT codes.
Sorry that I do not have specifics, but this is a project that has been on the far back burner, so I cannot find my notes.
So, what percentage of readers here really want this to work so they can build their own robotic lawn mower and have it actually know where it is? :-) ????
Ummm, that is exactly the reason I am reading the comments! Though, truth be told, I am just trying to determine if I could possibly use this to make a local coordinate system using a Google map image and a base point… I am not so worried about correcting the accuracy as I am in determining if the error is mostly constant once a fix has been obtained.
Yea, precision is what is needed. Not so much accuracy. I don’t need to know the gps coordinates out to the 10th digit. I just need to know if anything I might build will be mowing the daisies or snowblowing the lawn. 1 cm precision would be sweet, allowing for precision fertilizing, watering, trimming, and mowing. (maybe even weed removal based on detecting plants that don’t look like grass – differences in reflective spectrum, florescence, who knows, but only if the machine can navigate back and forth for recharging and switching out tools. navigation is, I think, the biggest roadblock to effective home yard automated gardening) :-)
Having a system for local coordinates with high accuracy is a huge need for DIY tinkering that nobody has made affordably. I don’t need a high accuracy GPS system (because I don’t need to know where I am on earth), I need to know where something is in a relatively small area (like my house). Being able to ask a voice assistant “Where are my keys” and have it be able to tell me something sensible would make me (and probably the rest of humanity) very happy.
The problem may come from the lack of precision using microcomputers. Floating point numbers are generally good only to about six digits. The GPS is sending nine place numbers but the computer holds only the first six and fills the rest with random numbers.
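For what it’s worth, Python’s float is an IEEE double (15-16 significant digits), so nine-place coordinates survive intact; the six-digit limit only bites if something in the chain truncates to single precision, which is easy to demonstrate:

```python
# Round-trip a nine-digit coordinate through single precision to show
# where a six-digit limit would come from.
import struct

def to_float32(x):
    """Round-trip a value through a 4-byte IEEE single."""
    return struct.unpack("f", struct.pack("f", x))[0]

lat = 51.1234567                                 # nine significant digits
error_m = abs(to_float32(lat) - lat) * 111_000   # 1 deg latitude ~ 111 km
# The single-precision round-trip loses a fraction of a meter, while a
# double (Python's native float) keeps far more digits than GPS needs.
```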
This has been a great read and very interesting. This Fail is a Success! Thanks!
This is how you do it:
This is GPS, DGPS, Galileo, GLONASS multi-system global positioning using GNSS-SDR and GNU Radio.
As hardware, you can even use ultra-cheap RTL-SDR USB sticks, if these are retrofitted with a temperature compensated crystal oscillator (TCXO) timebase:
I suggest reading all of the above site’s documentation.
This is how you do software and system design and this is how all publicly-funded research should be documented.
The agricultural base stations generally use RTK (https://en.wikipedia.org/wiki/Real_Time_Kinematic) typically over UHF in public bands (450/900 I believe depending on location)
Check out the latest ublox zed-f9p. Not cheap, ~$US200, but seeing 30 plus satellites, from four different systems (BDS/Compass, Glonass, GPS and Galileo), (with an in attic antenna 35ft AMSL), most with dual band transmissions is amazing.
I have the ardusimple board but there are less expensive models.
I believe that the dual band transmissions will really help with the ionospheric latency calculations.
I can see that my f9p can see dual band transmissions from more than half of all of the satellites. The GPS and Galileo systems should be transmitting dual band on all satellites by the end of 2020 or 2021 https://www.gps.gov/systems/gps/modernization/civilsignals/
Even the earlier Ublox M8Q connected to an RPI (https://store.uputronics.com/index.php?route=product/product&product_id=81) (and other versions) with an in attic antenna gives me 20-30 satellites (GPS, Galileo and Glonass)
One of the things missed in Jon’s post above is that the new ZED-F9P module (https://www.digikey.com/product-detail/en/u-blox-america-inc/ZED-F9P-00B-02/672-1212-1-ND/9990023 – $200 each) is preconfigured for RTCM usage.
They have an app note (https://portal.u-blox.com/s/question/0D52p00008a9BnrCAE/enabling-moving-baseline-on-zedf9p) for piping the RTCM messages between the receivers – basically you can have a fixed station transmit over an RF UART (low-latency with high-FEC/ECC) link to the rover, then you can do two receivers and chain the RTCM bus between master and slave, to get both corrected differential and heading (add a third “rover” to the differential chain to get a full 3D orientation solution: position, altitude, roll, pitch, yaw, speed/vector).
For the timing module (what lead me to this project page) ZED-F9T, uBlox advertises better timing jitter performance if operated in a differential GPS mode.
What I’m guessing is that the more receivers one has in each mode (fixed base, roving base, rover) the better the total position data – I’d imagine that the RTK correction done on a fixed base station should improve a survey-in and subsequent solution correction dataset, then when broadcast over the RF link to a roving base and slave rover doing their own correction – would give local and regional corrections to the solution errors. Theoretical of course…
I found a Czech Master’s thesis that covers the challenges of implementing DGPS in a lot more technical detail. Look at https://dspace.cvut.cz/bitstream/handle/10467/68266/F6-DP-2016-Svaton-Martin-Thesis_signed.pdf
DGPS is “navigation quality”. Each GNSS satellite and receiver generate the same pseudo-random code simultaneously. When you get an autonomous position on the earth (autonomous = a single GNSS receiver, e.g. in a mobile phone or Garmin handheld receiver) the receiver compares its version of the pseudo-random code with what it receives from the satellites. Using the satellites’ orbit data (ephemeris) and comparing the time lag between satellite and receiver code, the receiver is able to calculate the range to each satellite and work out a position on the earth’s surface.
DGPS allows some refinements to be made to autonomous positioning. It has a receiver sitting on a known point on the earth’s surface. It measures the ranges as above but, because it knows exactly where it is, it knows what the ranges should be. It then compares the two and calculates a time correction for each satellite. It broadcasts these corrections to the roving receiver, which applies them. This is good enough to get sub-metre precision relative to the base station, depending on how far away the rover is.
RTK is survey quality and compares the phases of the carrier wave of the signal that carries the pseudo-random code. The carrier wave has a wavelength of around 60cm. It’s easy for it to “wind back” the carrier wave and work out the last part of the range, i.e. get the last cm or so. What it has a hard time working out is the number of wavelengths between the satellites and the receiver, what’s known as the integer ambiguity. It needs two receivers, base and rover, with the base transmitting its satellite info to the rover in real time (RTK = real time kinematic). The rover does some pretty flash computing, works out the integer ambiguity, and gets an accurate vector between the base and rover. This allows cm-type precision relative to the base.
Post processed static allows both receivers to log raw data for say an hour then process it back in the office. This gives an even more accurate vector still – down to approx 1mm plus 1ppm. It solves the integer ambiguity but uses a ton of data on a single baseline to get a really accurate position.
You can do post processed kinematic also but because the rover is moving the accuracy is similar to RTK.
Don’t know why your ranges were up the spud. Good on you for trying.
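The pseudorange-domain correction described in the comment above can be sketched with made-up numbers; the satellite IDs and ranges are purely illustrative:

```python
# Toy sketch of range-domain DGPS: the base knows its true position,
# so for each satellite it can compare the measured pseudorange against
# the geometric range and broadcast the difference for rovers to apply.
def range_corrections(measured, true_ranges):
    """Per-satellite correction = measured pseudorange - true range (meters)."""
    return {sv: measured[sv] - true_ranges[sv] for sv in measured}

def apply_corrections(rover_measured, corrections):
    """Rover subtracts the base's per-satellite corrections before solving."""
    return {sv: r - corrections.get(sv, 0.0) for sv, r in rover_measured.items()}
```

This also illustrates why NMEA output comes too late: the corrections act per satellite, before the position solution, which is exactly the data a fixes-only protocol no longer carries.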