Displaying HTML Interfaces And Managing Network Nodes… In Space!

The touchscreen interface aboard SpaceX’s Crew Dragon is just one of its many differences from past space vehicles, but those big screens make an outsized visual impact. Gone are the panels filled with needle gauges and endless rows of toggle switches. The interface looks much like web interaction on an everyday tablet for good reason: what we see is HTML and JavaScript rendered by the same software core underlying Google’s Chrome browser. This and many other details were covered in a Reddit Ask Me Anything with members of the SpaceX software team.

Various outlets have mentioned Chromium in this context, but without answering the obvious follow-up question: how deep does Chromium go? In this AMA we learn that it does not go very deep at all. Chromium serves only as the UI rendering engine; the fault-tolerant flight software lives elsewhere. Components such as Chromium are isolated to help keep system behavior predictable, so a frozen tab won’t crash the capsule. Somewhat surprisingly, they don’t use a specialized real-time operating system, but instead a lightly customized Linux built with the PREEMPT_RT patch set for better real-time behavior.
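The AMA doesn’t go into implementation specifics, but on a PREEMPT_RT kernel the standard trick for keeping a critical loop timely is to place it in the SCHED_FIFO real-time scheduling class. Here is a minimal sketch in Python, with the priority and loop rate being arbitrary picks of ours rather than anything SpaceX described (the scheduler call needs root or CAP_SYS_NICE):

```python
import os
import time

# A minimal sketch, not SpaceX's code: put this process in the SCHED_FIFO
# real-time class, which a PREEMPT_RT kernel can preempt almost anything
# to service. Needs root or CAP_SYS_NICE; priority 50 is an arbitrary pick.
def go_realtime(priority=50):
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

def control_loop(period_s=0.001):
    """Hypothetical 1 kHz loop; real flight software would read sensors
    and command actuators here."""
    next_tick = time.monotonic()
    while True:
        # ... sense, compute control law, actuate ...
        next_tick += period_s
        time.sleep(max(0.0, next_tick - time.monotonic()))

if __name__ == "__main__":
    go_realtime()
    control_loop()
```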

In addition to the Falcon rocket and Dragon capsule, the AMA also covered software work for Starlink, which offered interesting contrasts in design tradeoffs. Because there are so many satellites (with even more being launched), the loss of an individual spacecraft is not a mission failure. This gives them elbow room for rapid iteration, treating the constellation more like racks of servers in a datacenter than like typical satellite operations. Where the Crew Dragon code has been frozen for several months, Starlink code is updated so rapidly that by the time newly launched Starlink satellites reach orbit, their code has usually already fallen behind the rest of the constellation.

Finally, there are a few scattered answers outside of space-bound code. Their ground support displays (visible in the Hawthorne mission control room) are built with LabVIEW. They also confirmed that, contrary to some claims, the SpaceX ISS docking simulator isn’t actually running the same code as Crew Dragon. Ah well.

Anyone interested in what it takes to write software for space would enjoy reading through these and other details in the AMA. And since the AMA had the convenient side effect of serving as a recruiting event, there are plenty of invitations to apply for anyone with ambitions to join the team. We certainly can’t deny the attraction of helping to write the next chapter in human spaceflight.

[Photo credit: SpaceX]

44 thoughts on “Displaying HTML Interfaces And Managing Network Nodes… In Space!”

  1. * INNN SPAAAAACCCE

    Last time I encountered LabVIEW it was running on Windows 3.1, or is this a different LabVIEW? As I recall it was kinda tied to some expensive data acquisition boards, so as PCs got less mystical and squirting analog into them got cheaper, there were other options than LabVIEW.

    1. I’m sure it is a many-generations successor to the LabVIEW you encountered. National Instruments has continually updated the software, up to and including the latest LabVIEW 2020. Also, in the AMA they said they plan to migrate away from LabVIEW towards this now-proven HTML-based approach for Starship ground operations.

    2. The data acquisition hardware is fairly tied to LabVIEW, but not the other way around: you can write LabVIEW for just about anything that has an interface. I’ve written stuff to talk to old digital scales via RS232, old oscilloscopes via GPIB, Arduinos via USB, and Windows and Linux system calls via WinAPI and C.
      It kinda sucks for serious programming, but it’s good for rapid deployment, and it does well in situations where you need asynchronous parallel operation of asymmetric hardware, like holding bidirectional communication with a scope, multimeters, and serial hardware at once.
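      For comparison, the text-based version of that kind of instrument babysitting is pretty compact too. A rough sketch using the third-party pyserial library; the port name and the scale’s “print” command here are made up, since a real scale’s manual would specify its own protocol:

      ```python
      import serial  # third-party: pip install pyserial

      # Hypothetical sketch: poll an RS232 digital scale for one reading.
      # Port name, baud rate, and the b"P\r\n" print command are made up;
      # a real scale's manual specifies its own protocol.
      with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0) as port:
          port.write(b"P\r\n")     # ask the scale to print its current weight
          reply = port.readline()  # e.g. b"+0012.45 g\r\n"
          print(reply.decode("ascii", errors="replace").strip())
      ```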

      1. For people who are curious about how to “write labview”, do you have any resources to recommend? Code examples to follow? I haven’t found anything freely available. Everything I’ve found so far is advertisements for various expensive training sessions.

        1. You mostly don’t “write” in LabVIEW. LabVIEW is mostly a graphical programming environment: you draw a flowchart-looking kind of thing, and that’s your program. There are code blocks where you can actually write functions, but most things are done in graphical blocks. You can pack your own drawings into blocks as well.

          In the free software world, the closest things are GNU Radio (https://www.gnuradio.org/) and Pure Data (https://puredata.info/downloads/purr-data) – Pure Data is almost entirely graphical, and I think GNU Radio also has ways to write program code.

          1. Sorry I was not precise enough… I understand what you say to be true within a LabVIEW program, but I was asking about interfacing arbitrary hardware into LabVIEW. The “written stuff to talk to old digital scales via RS232, old oscilloscopes via GPIB, arduinos via USB, and windows and linux system calls via winapi and C” portion of the comment I replied to.

            I understand each piece of hardware is exposed to the LabVIEW graphical environment as a “Virtual Instrument” driver, and someone has to write that driver in C++ (?). National Instruments has a huge library to interface with professional hardware like Tektronix oscilloscopes, etc. But if I want to interface LabVIEW with an Arduino running my own code, I’ll have to write that virtual instrument driver myself, and I have yet to find the resources to get started.

          2. “and I think GNU Radio also has ways to write program code.”

            That’s quite an understatement.

            GNU Radio Companion (grc) generates Python from the graphical representation and then runs it. The generated code can be freely modified.
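            Flowgraphs can also be written directly against the Python API without touching the Companion at all. A minimal sketch, assuming the GNU Radio Python bindings are installed, that streams a sine wave to a file:

            ```python
            from gnuradio import analog, blocks, gr

            class SineToFile(gr.top_block):
                """Tiny hand-written flowgraph: 1 kHz sine -> throttle -> file."""
                def __init__(self):
                    gr.top_block.__init__(self, "sine_to_file")
                    samp_rate = 32000
                    src = analog.sig_source_f(samp_rate, analog.GR_SIN_WAVE, 1000, 0.5)
                    throttle = blocks.throttle(gr.sizeof_float, samp_rate)
                    sink = blocks.file_sink(gr.sizeof_float, "sine.f32")
                    self.connect(src, throttle, sink)

            if __name__ == "__main__":
                tb = SineToFile()
                tb.start()
                input("Flowgraph running; press Enter to stop... ")
                tb.stop()
                tb.wait()
            ```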

          3. There are standard blocks for RS232, GPIB, etc. that you can use to communicate with external hardware.

            As for writing your own drivers to access custom hardware: LabVIEW is a proprietary system. You’re going to need to purchase the documentation (and probably training) to get that information.
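            That said, the standard serial and GPIB blocks are commonly built on NI’s VISA layer, and the same layer is reachable from ordinary code. A brief sketch with the third-party PyVISA package; the GPIB address is made up:

            ```python
            import pyvisa  # third-party: pip install pyvisa (plus a VISA backend)

            # Hedged sketch: query an instrument over GPIB through the VISA layer.
            # The GPIB address is made up; "*IDN?" is the standard SCPI identify query.
            rm = pyvisa.ResourceManager()
            scope = rm.open_resource("GPIB0::7::INSTR")
            print(scope.query("*IDN?"))
            scope.close()
            ```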

        2. There have been some recent developments on that front. NI has a free community edition now. https://www.ni.com/en-us/shop/labview/select-edition/labview-community-edition.html

          I used to teach LabVIEW classes after I got my certifications. The classes are expensive, but if your job requires you to learn quickly, they may be worth it to your company. I never took one; I learned on the job. NI’s website has an enormous number of examples and whitepapers, plus free and purchasable libraries. There are many engineers and programmers active in the NI forums as well. If you download a copy of LabVIEW, it includes numerous examples too.

          If you do not need to learn the language very rapidly, you probably don’t need a training session.

  2. Read that, then go look at how they did things for Apollo. There is so much stuff that will have you shaking your head and maybe, just maybe, wondering if the conspiracists might be right, as there just couldn’t be any way that worked, lol. Seriously, compare the two and you will see just how far things have come, and a lot of what we have now is a direct result of Apollo.

  3. The last time I saw LabVIEW was perhaps three years ago. I attempted to work with it to build a moderately comprehensive data collection module for reading serial ports. Suffice to say, it did not end well. I’ve had plenty of problems with LabVIEW in that its help screens are not helpful. By contrast, VEE from Keysight currently does a better job on that front.

  4. The control panels are an interesting design concept, and it seems Mr. Musk has a love affair with touch panels, starting with the Tesla, which also did away with a lot of physical controls in favor of a touchscreen.

    While I am sure these have been tested ad nauseam and signed off by those who know, I do wonder whether the cost/space/weight benefits are worth it. Physical dials have a certain amount of built-in redundancy; with 3 screens, that is only 3 things to break or fail before you lose all your information. (I remember one Apollo astronaut saying that on takeoff he loosened his straps to be more comfortable. When the 2nd stage kicked in, he was thrown forward, just missing the control panel by inches. I wonder how much pressure a touch panel can withstand. Hopefully more than my mobile.)

    It would also be interesting to understand the UI design methodology. I often mistype things on my phone, but I am not wearing a spacesuit, I am not doing it in a 3G/0G environment, and it is not like I am doing anything safety critical. How have they designed things so that commands are safely entered?

    Still, it has to be said, they look pretty cool.

    1. Yeah. Using touchscreen interfaces like this is a dumbass idea for spacecraft. There’s a reason aerospace interfaces are usually kind of old-school to this day. Even in civilian life, touchscreens exist mainly as a cost-cutting strategy, with some marketing to make it seem like it’s really for the user’s convenience… and I think that’s probably the primary factor here, too. You’ll want something a lot more physical in a spacecraft.

    2. How likely is it that 3 independent screens all fail at the same time?

      Also, the craft works fine without the user interface. The astronauts are glorified passengers, since everything can be remotely operated, or done automatically.
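      Back-of-the-envelope: if the screens really were independent, a triple failure is the single-screen failure probability cubed. A toy calculation with made-up numbers:

      ```python
      # Toy numbers, not real reliability data: suppose each screen has a
      # 1-in-1000 chance of failing during a mission, independently.
      p_single = 1e-3
      print(f"P(all three fail) = {p_single ** 3:.0e}")  # 1e-09

      # The catch is common-mode failures (power, software, physical damage),
      # which the independence assumption conveniently ignores.
      ```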

      1. It’s not so much that there are 3 screens, it is the fact that they contain all the information. If you have 20 dials and 3 fail, it is possible you can extrapolate the information from other dials. Imagine an Apollo 13 situation: your power system is failing and you need to cut power consumption. With touchscreens you cannot segment the system.

        As another example: I once visited a UK submarine. It had a visual display indicating heading, depth and dive slope. It also had physical dials indicating similar information as backup. As a final backup, on the wall was a simple bubble gauge that would indicate whether the boat was pointing up, level or down.

        Touchscreens to me seem a great idea when all things are going fine. When things hit the fan, you sometimes need to revert to things that are tried and tested.

        But hey, I’m sure better minds than mine have gone through this and considered the failure modes.

        1. I would bet that if all 3 screens fail, they hit the physical buttons underneath to trigger a launch abort. He wants to launch thousands of rockets a year rather than one or two. Adding the complexity, weight, cost and failure modes of all those dials and their wiring is just not worth it anymore.

          Once they got rid of inverter-based backlights and went to capacitive touch, both of which can be designed with multiple redundancy per display (assuming a transflective mode wasn’t good enough), there aren’t many common failure modes left. Really, electrostatic discharge and physical damage are the only options I can think of, and I believe both can reasonably be mitigated, especially if the displays are thin and flexible, since helmets and gloved fingers aren’t very pointy.

          Also, since the helmets are very stiff, there is limited room to put the display and controls. Fighter pilots use digital displays for mission-critical functions; if ever there were a life-or-death situation, facing an active adversary would be it.

          1. Hope they are not using off-the-shelf LCD panels. (Pun intended, as LCD is also short for Lowest Common Denominator.) I don’t want to see Star Trek exploding bridge equipment IRL.

            One of the places I worked for ages ago made rugged display panels. They laminated the raw LCDs and developed their own drivers to meet the mechanical specs for military applications. Laminated glass can be made tougher, and it won’t break into large sharp blades floating in zero-g.

        2. A faulty cap-sense controller can knock out a whole screen at a time, versus individual switches failing one by one. On the other hand, if the other 2 screens can duplicate the same GUI, that could be a workaround. It would still be bad if you had to jump through a whole pile of GUI BS to get there (e.g. the bad GUI choice of tabbed editor windows, i.e. not being able to show more than 1 file at a time).

    3. As far as UI design, this has the potential to be much safer than manual switches and buttons because the software can check everything the user is trying to do, and require different levels of confirmation depending on the potential danger.

      Another aspect is that the software can present a clean interface only showing the important notifications at that time, reducing the chance for operator error.
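      As a toy illustration of that gating idea (the action names and confirmation tiers are entirely hypothetical):

      ```python
      # Hypothetical sketch: gate touchscreen commands by criticality tier.
      # Action names and tier assignments are invented for illustration.
      CONFIRMATIONS = {
          "toggle_cabin_lights": 0,  # benign: act immediately
          "reorient_capsule": 1,     # ask once
          "abort_to_entry": 2,       # ask twice
      }

      def execute(action, confirm):
          """confirm(msg) is the UI prompt; returns True if the crew agrees."""
          for i in range(CONFIRMATIONS.get(action, 2)):  # unknown = most careful
              if not confirm(f"Confirm {action} (step {i + 1})?"):
                  return False
          print(f"executing {action}")
          return True

      # Demo: auto-approve every prompt.
      execute("abort_to_entry", confirm=lambda msg: (print(msg), True)[1])
      ```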

    4. One point you have to remember is that there’s really not a lot of point in having humans be in control, and if there were damage to the vessel, the best course of action is to stop what you’re doing and get out of there. Regardless, there IS redundancy in the screens: they can each show all the information from the mission, even if they usually don’t.

    1. Technically, low Earth orbit (LEO) isn’t deep space. The bandwidth and latency are still good, so they can afford to be wasteful. Eventually, when Elon is out on Mars, he might still find himself on a link worse than ISDN (pre-DSL). I hope he would rethink the tradeoffs then. HTML over Ouija board per “The Martian” is going to be painful.

      https://mars.nasa.gov/msl/mission/communicationwithearth/data/ ”Mars Curiosity Rover – Communications with Earth”
      >The data rate direct-to-Earth varies from about 500 bits per second to 32,000 bits per second (roughly half as fast as a standard home modem). The data rate to the Mars Reconnaissance Orbiter is selected automatically and continuously during communications and can be as high as 2 million bits per second. The data rate to the Odyssey orbiter is a selectable 128,000 or 256,000 bits per second (4-8 times faster than a home modem).
      >An orbiter passes over the rover and is in the vicinity of the sky to communicate with the rover for about eight minutes at a time, per sol. In that time, between 100 and 250 megabits of data can be transmitted to an orbiter. That same 250 megabits would take up to 20 hours to transmit direct to Earth! The rover can only transmit direct-to-Earth for a few hours a day due to power limitations or conflicts with other planned activities, even though Earth may be in view much longer.

      They are talking about 250 megabits (31.25 MB) in 20 hours… That’s like a single web page with the usual autoplay popup video, before you manage to close it. :P
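      Reconciling those numbers is a one-screen exercise (a sketch using only the figures quoted above):

      ```python
      payload_bits = 250e6  # one orbiter pass's worth of rover data

      # Direct to Earth at the best quoted rate (32 kbps):
      print(f"{payload_bits / 32_000 / 3600:.1f} h at 32 kbps")      # ~2.2 h

      # The quoted 'up to 20 hours' implies an average effective rate of:
      print(f"{payload_bits / (20 * 3600):,.0f} bps on average")     # ~3,472 bps

      # Via the Mars Reconnaissance Orbiter relay at 2 Mbps:
      print(f"{payload_bits / 2e6 / 60:.1f} min at 2 Mbps")          # ~2.1 min
      ```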

        1. I believe the data rate and link latency are of great importance in a ‘remote control’ scenario. The comment is probably pointing out that in LEO, communication with Earth can be sufficient to take over control of the ship in the (apparently highly probable) case that 3 out of 3 modern screens cleared for use in spacecraft fail.

  5. The biggest surprise was when a popup appeared on the touchscreen asking if the crew were interested in any singles in their area. Which was quite impressive, given that their area was around 408 km above the ground.

    I’d imagine what they’re using is something similar to libcef to render content in a frame.
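    Something in that spirit is doable even from Python, via the third-party cefpython bindings to the Chromium Embedded Framework. A minimal sketch with a placeholder URL:

    ```python
    from cefpython3 import cefpython as cef  # third-party CEF bindings
    import sys

    # Minimal CEF embed: one Chromium-rendered window showing one page.
    # The URL is a placeholder; a cockpit UI would load a local HTML bundle.
    sys.excepthook = cef.ExceptHook  # shut CEF down cleanly on error
    cef.Initialize()
    cef.CreateBrowserSync(url="https://example.com", window_title="Panel")
    cef.MessageLoop()  # blocks until the window closes
    cef.Shutdown()
    ```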

  6. Crew Dragon isn’t like Apollo …

    … in a way, it’s more like Mercury:

    The crew is back to being “spam in a can”, but on a much higher technological level.

    There is not much need for interaction between crew and spacecraft; everything is highly automated.

    I bet the Crew Dragon needs neither crew nor touchscreens to do its job perfectly ;-)

      1. The main use of the touchscreens is to keep the astronauts calm. Give them something to do so that they have a sense of control. Give them up-to-date information so they can see everything is going as planned.

        1. and to give them a survivor moment in case of a link failure… One of the unlucky crew has to stay behind to manually touch the control (vs. stringing a long rope or a cable to the actual switch) to self-destruct or something.

  7. Everyone seems to love Chrome, but I’ve had nothing but trouble with it. It goes into some sort of failure mode where the last tab opened, plus any subsequent tabs I open, act like there is no network connection. Previous tabs continue working, but the only way to get a new tab that functions is to close and re-open Chrome. I have seen this on several different computers, running both Windows and Linux, both logged into my Google account and not, and even in fresh installs with no extensions.

    So I use Firefox.

    It scares me to think of astronauts stuck in a rocket and unable to access the UI to control the ship! Maybe the problems I have seen are in the Google extensions that separate Chrome from Chromium. I certainly hope so!

    Still, having previously been a web developer, I would never want to trust my life to any web browser. I’ll take a wall full of toggle switches any day over that!

    1. If everything goes according to plan, there’s not much to do. The Dragon capsule has a fixed destination: dock with the ISS. From the moment they launch until the moment the capsule docks, it runs one fully automatic program. The astronauts can just sit back and enjoy the blinkenlights.

      After the ISS visit, it returns to Earth for landing, another fully automatic procedure.

      But if everything fails, there is a small physical panel below the touchscreens with a handful of knobs and buttons. In case of emergency, they can use these to abort the mission and return to Earth.

      Biggest risk is not the UI failing, but rather some mechanical problem in systems related to the propulsion, because these are engineered with small safety margins.

    2. Maybe this is what the scifi guys meant about a space elevator, just push the button for the floor you want and the control systems take care of the rest. :-D

    3. I have not had the best of luck with Chrom(e)(ium) regarding stability for web apps. I did once attempt to strip Chromium to the bare essentials, but it did not make a lot of difference on various platforms, and the job of maintaining it was not worth it.

      For persistent web interfaces, I use Firefox exclusively, even on projects with Raspberry Pis where we are somewhat stuck with FF-ESR (although with the new 64-bit Raspberry Pi OS, this should soon be a thing of the past). And while FF on the RPi does not support hardware video decoding, it can display HD 30fps quite well, and I feel it renders HTML, graphics and text much nicer out of the box.

      1. Firefox has its own special failure modes though. Like when system load goes high, all its scripts crash one by one, not just the on-page ones but the “internal” ones.
