A Rebel Alliance For Internet Of Things Standards

Back when the original Internet, the digital one, was being brought together, there was a vicious standards war. The fallout from that war fundamentally underpins how we use the Internet today, and what's surprising is that things didn't work out the way everyone expected. The rebel alliance won, and when it comes to standards, it turns out that's a lot more common than you might think.

Looking back, the history of the Internet could have been very different. In the mid-eighties the OSI standards were the obvious choice. In 1988 the Department of Commerce issued a mandate that all computers purchased by government agencies should be OSI-compatible starting from the middle of 1990. Yet two years later the battle was over, and the OSI standards had lost.

In fact, by the early nineties the dominance of TCP/IP was almost complete. In January of 1991 the British academic backbone network, called JANET (which was based around X.25 and the Coloured Book protocols), established a pilot project to host IP traffic on the network. Within ten months the IP traffic had exceeded the levels of X.25 traffic, and IP support became official in November.

“Twenty five years ago a much smaller crowd was fighting about open versus proprietary, and Internet versus OSI. In the end, ‘rough consensus and running code’ decided the matter: open won and Internet won,”

–Marshall Rose, chair of several IETF Working Groups during the period

This of course wasn't the first standards battle; history is littered with innumerable standards that have won or lost. It also wasn't the last the Internet was to see. By the mid-noughties SOAP and XML were seen as the obvious way to build out the distributed services we all, at that point, already saw coming. Yet by the end of the decade SOAP and XML were in heavy retreat. RESTful services and JSON, far more lightweight and developer-friendly than their heavyweight counterparts, had won.

“JSON appeared at a time when developers felt drowned by misguided overcomplicated XML-based web services, and JSON let them just get the job done,”

“Because it came from JavaScript, and pretty much anybody could do it, JSON was free of XML’s fondness for design by committee. It also looked more familiar to programmers.”

–Simon St. Laurent, content manager at LinkedIn and O'Reilly author

Yet, depending on which standards body you want to listen to, ECMA or the IETF, JSON only became a standard in 2013 or 2014 respectively, and while the IETF RFC talks about semantics and security, the ECMA standard covers only the syntax. Despite that, it's unlikely many people have actually read either standard, including the developers using JSON and even those implementing the libraries those developers depend on.
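
The gap between the two documents is easy to illustrate. The snippet below is my own example rather than anything taken from either standard: the text is syntactically valid JSON under ECMA-404, while RFC 7159 only says that names within an object "SHOULD be unique" and leaves the behaviour with duplicates to each parser.

import json

# Valid JSON per ECMA-404, which defines only the grammar; RFC 7159 leaves
# the handling of the duplicate "unit" name up to each parser.
payload = '{"unit": "celsius", "unit": "fahrenheit", "value": 21.5}'

print(json.loads(payload))
# Python's parser happens to keep the last value seen:
# {'unit': 'fahrenheit', 'value': 21.5}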

We have reached the point where standardization bodies no longer create standards, they formalize them, and the way we build the Internet of Things is going to be fundamentally influenced by that new reality.

The Standardization of IoT

Right now there's a new standards body or alliance pushing their own standard, or group of standards, practically every month or so. And of course there are companies, Samsung for instance, that belong to more than one of these alliances. I think it's unlikely that these bodies will create a single standard to rule them all, not least because many Internet of Things devices are incapable of speaking TCP/IP. The demise of Moore's Law may well mean that the entire bottom layer, the cheap throwaway sensors, will never speak TCP/IP at all. It will not, as they say, be turtles all the way down.

These bodies also move slowly. Despite the fact that the member companies live on Internet time, no standards body does. The “rough consensus and running code” of the IETF era will not be replicated by today’s standards bodies. Made up of companies, not people, they’re not capable. Instead that consensus will be built outside of the existing standards bodies, not inside them.

“Today, the industry is looking at a much harder set of problems. My guess is that we’re going to end-up throwing a lot of stuff — products, code, and architecture — away, again, and again, and again. The pressure to deploy is much higher now than it was then,”

–Marshall Rose

We’re Stuck in the Unknown

No one really knows how this is going to shake out right now, and obviously the outcome of that standards battle, which I think is going to take at least a decade, will have a fundamental influence on the path our technology takes. But I wouldn't guarantee that any of the current players will emerge victorious. In fact, I think there will be another rebellion much like we saw with the original network standards. Despite the rhetoric from the standards bodies, I actually think most of the current architectures don't stand much of a chance of mass adoption.

I think any architecture that stands a chance is going to have to be a lot flatter than most of the current ones, with things actually talking to other things rather than to people. Significantly absent from most, if not all, of the current architectures is a degree of negotiation and micro-transaction amongst the things themselves. As the number of things grows, the limits of human attention, and of how much interest you have in micro-managing your things, mean that we simply won't.

Beyond that, architectures that stand a chance of making the next generation of Internet of Things devices work need to deal with selective sharing of data: both sharing subsets of data from individual things and sharing a superset from multiple things. Right now we're seeing those proto-standards emerge in interesting ways. For a brief period of time it looked like Twitter was going to become a protocol. It could, in fact, have been the protocol.

Twitter Could Have Been the Standard

Back in 2010, Twitter proposed something called 'annotations', an experimental project that would have let you attach 1 KB of JSON to each tweet. Annotations could be strings of text, a URL, a location tag, or arbitrary bits of data. It would have fundamentally changed the way Twitter operated.

Twitter Annotations example [via SitePoint]
It could, in other words, have become the backbone network — a message bus. Not just for moving data, but for moving apps. With an appropriately custom client, you could have attached small applications to a tweet. Moving code to data, rather than data to code.
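
A sketch of what such an annotation payload might have looked like follows; the structure here is hypothetical, pieced together from the descriptions Twitter gave at the time rather than from any published schema.

import json

# Hypothetical annotation attached to a tweet. The namespace/key/value shape
# follows Twitter's public descriptions of the experiment; the draft format
# was never finalised, so treat this purely as a sketch.
annotation = {
    "place": {"url": "https://example.com/venues/42", "label": "Coffee shop"},
    "review": {"rating": 4, "body": "Good espresso, slow wifi"},
}

encoded = json.dumps(annotation).encode("utf-8")
assert len(encoded) <= 1024  # the proposal capped annotations at roughly 1 KB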

Building something like this is really hard: a classic social-network chicken-and-egg proposition. But Twitter already had the users and, at least at the time, an army of third-party developers. It was not to be; by the end of 2011 annotations were alternative history.

“Annotations is still more concept than reality. Maybe some day we’ll have more to say about them”

–Taylor Singletary, then a developer advocate at Twitter.

Perhaps they dropped the idea not because they could see it failing, but because they could see it being too successful, with the accompanying calls for openness and the invasion of clones that would duplicate Twitter at the API level, if not at the backend.

As with Everything: IoT as a Service

Right now, perhaps the easiest way to get one Internet of Things device to talk to another isn't a standard; it's a service. The de facto Internet of Things message bus belongs to one company, and that company is IFTTT. "If This Then That" is currently one of the few ways that consumers can get the incompatible things in their life to talk to one another. For someone building a device, that doesn't come cheaply.

In the long term, however, it's unlikely we're going to let one company become the backhaul for consumer Internet of Things traffic. It's unlikely that there will be one platform to rule them all. I don't think it's going to be long until IFTTT starts to see some complaints about that, and, inevitably, clones.

In the end I think the standard (or, realistically, the multiple standards) that will become the Internet of Things as we know it, or will know it, currently sit as "slideware" being pitched to venture capitalists. The standards exist as throwaway slides where the founders wave their hands and say, "We'll be doing this, so we can do this other thing that makes money."

The standards for the Internet of Things will be a rebellion against the standards bodies. It will be developers deciding that what they're doing is good enough for now, and that they should do it that way until people make up their minds about what we all really should be doing. Whatever that is will end up being good enough for everybody, and it will win this particular standards war.

41 thoughts on "A Rebel Alliance For Internet Of Things Standards"

  1. OSI was always stupid. A layer model that never actually mapped onto how people were doing things, even though it was inevitably described in every networking book of the time, and for all I know is still being parroted in current networking books.

    At some point I bought into XML, what a pain in the a$$. We have eradicated most of our SOAP software in our observatory projects now.

    The lesson? What works in the trenches beats the ivory tower almost every time.

  2. As we already see the internet "closing", I wonder how long open data or standard protocols/formats will stay relevant.

    I mean the use of closed ecosystems like Facebook (25% of traffic? Dodgy stat, let's assume a minimum of 12.5%) with proprietary protocols and clients. So many systems and communities seem to be built on top of closed, proprietary systems with a terrifying business model (Facebook) or no clear business model at all.

    [Sub thought – government intervention in facebook at some point? Thinking Standard Oil? Also I wonder if you could unseat facebook as they would be using AI based strategies to make themselves look good and seem essential. And they really do control the news people see, the whole “we are not a news outlet” schtick, just bury every story pointing out that facebook might be a bad thing. Of course if they abuse that power too much people will figure it out and if we could be effectively controlled by media then I would point to Brexit and Trump, more Brexit though.]

    Muses: Is this where opensource hits a bit of a wall – email protocols are not the most exciting thing, nor clients. Thunderbird would be a good example of that kind of problem.

    1. Yup. My fears exactly. Email, as boring and old school as it is, is probably one of the only "open" communication protocols left. I gave up chasing my kids, peers and friends as they jumped onto whatever the cool app du jour was (Twitter, FB, Skype, iMessage, Snapchat, ….); too many apps to communicate with different people. Too easy to compromise by either corporate greed or gov't intervention. I'm slowly dialing back, so if you don't do email and only use the latest hip chat app on your iFruit, well, sorry. Hope we meet IRL someday to catch up…

      Now get off my lawn :-)

        1. Agreed, but my worry is the relevance. If a person sends an IRC message and no one listens, is there any point? It is certainly possible to create open systems, but without the permission of the internet big boys no one will know about or use your technology.

          1. Relevance? With IRC I can understand your concern.
            But with Jabber (XMPP) you have user login and security, with encryption.
            You have friends that you allow to read from or write to you. There is an XMPP-IoT approach where you set a Jabber user ID (JID) as owner of the device. So when someone else asks to be able to read from or write to it, the device asks its owner for permission to grant the request. That grant is remembered until you withdraw it.
            So if your heater wants to talk to your thermometer, the thermometer will ask you whether the heater is allowed to talk to it.

            Then, if some device wants to talk to your device, you get a message from the device asking for permission to talk to the other.

            OK, maybe you worry about the small devices you have now, or devices talking some special protocol, like a short-range radio protocol. Then you can have a proxy that handles the XMPP side and translates between the proprietary protocol and XMPP.

            And XMPP already has push and pull, user control, and security, with many well-debugged servers you can use, and the same for client software.
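
            To make that flow concrete, here is a toy sketch in plain Python; it models only the grant-and-remember logic described above, not real XMPP stanzas or any particular XMPP-IoT library.

# Toy model of the XMPP-IoT ownership flow described above. Plain Python,
# no real XMPP library; Owner and Thing are simple stand-ins.

class Owner:
    def approve(self, requester, thing_name):
        # In real XMPP-IoT this would arrive as a message at the owner's client.
        answer = input(f"Allow {requester} to talk to {thing_name}? [y/n] ")
        return answer.strip().lower() == "y"

class Thing:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner      # whoever gets asked for permission
        self.granted = set()    # grants remembered until the owner withdraws them

    def request_access(self, requester):
        if requester in self.granted:
            return True
        if self.owner.approve(requester, self.name):
            self.granted.add(requester)
            return True
        return False

thermometer = Thing("thermometer@home.example", Owner())
print(thermometer.request_access("heater@home.example"))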

  3. Manufacturers must produce more secure IoTs…
    If I’m ever going to buy a ‘smart TV’ or an Internet-connected refrigerator, it better not have the same default password. :/

    1. I don't think you would have a problem with such a device, as you know you have to change it, or what to look for when you decide which one to buy. The problem is all the unaware masses who are adopting this technology at the cheapest price they can find.

  4. Manufacturers don’t want standards they want proprietary and lock in.
    API’s are the closest you’re going to get and that’s at the mercy of their whim to disable to cancel them.
    If you don’t like it, roll your own.

    That’s when the IoT hacker community comes in and we see delightful projects that replace the firmware on standard chips to speak protocols like MQTT.

    As long as device manufacturers keep producing things that run on Linux, there are ample possibilities to hack them.
    As a "hacker" I'm more than happy for a third party to lower the cost of the hardware so I (or the community) can then roll my own software onto it.

      1. Well, the DMCA is a 'Merican thing, and global markets can't be dictated to by one country.

        In any case, no manufacturer is locking any chips down under the DMCA. Sure, there are some proprietary IP modules that you *can* use, but you don't have to.

        Most of what is being DMCA-locked is complete products, or the 'black box'.

        That is never going to work for customers, so they will turn to different products or after-market modules that are supported by software from the open-source community.

  5. The first thing that should be standardized is security.

    1. Don't have hard-coded passwords.

    2. If hard-coded passwords can't be avoided, then require a password change as the very first step, and do a comparison to ensure someone does not reuse the default password as the new one (a minimal sketch of that check follows below).

    3. Make it very difficult to revert the settings: require physical access, poking a paper clip into some hole and then powering the device to reset it, so that simply pulling someone's power meter to cause an outage at the house will not reset the device.
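
    A minimal sketch of the check in point 2, in Python; DEFAULT_PASSWORD here is just a stand-in for whatever the factory image happens to ship with.

import hashlib
import secrets

DEFAULT_PASSWORD = "admin"   # stand-in for whatever the factory image ships with

def set_first_boot_password(new_password: str) -> bytes:
    # Refuse the factory default and trivially short choices before accepting.
    if new_password == DEFAULT_PASSWORD:
        raise ValueError("new password must differ from the factory default")
    if len(new_password) < 12:
        raise ValueError("password too short")
    salt = secrets.token_bytes(16)
    # Store only a salted hash of the password, never the password itself.
    return salt + hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 100_000)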

    1. Nope. I mean, on a minimal level, yes, of course. But nope… if IoT devices become as pervasive as people think they'll be, then we can't require actual remembered passwords on fifty small devices scattered through your home. Compromised? You'll have to take a whole day to go and change passwords on your light bulbs and thermometers; you'll wear out your paperclip.

      What we need instead is a pre-shared key implementation that doesn’t require people to create passwords, but does bind the devices securely into the group. The key creation needs to be automated, fast, and near-field. If you don’t have to remember your password, you can skip thinking up something like “hackaday” and instead use e60d19f30377c130d7c845594f673f9da4f86ac77eeb726be331ea581930a4f8. Using a phone to bind devices definitely seems like a good option. The problem of course is that these implementations need to be secure too, as the Hue lightbulb hacks demonstrate.
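
      Something along these lines, perhaps; a rough sketch only, with the phone acting as the binder and a plain HMAC derivation standing in for whatever a real implementation would use.

import hashlib
import hmac
import secrets

# Rough sketch of phone-assisted binding: the phone (or hub) generates one
# group key, then derives a per-device key over a short-range channel such
# as NFC or BLE, so nobody ever types or remembers a password.
group_key = secrets.token_bytes(32)          # lives only in the phone/hub

def derive_device_key(device_id: str) -> bytes:
    # One key per device; leaking a bulb's key doesn't reveal the group key.
    return hmac.new(group_key, device_id.encode(), hashlib.sha256).digest()

print(derive_device_key("lightbulb-kitchen-01").hex())  # 64 hex chars, as above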

    2. Well, IoT should use XMPP for handling and managing IoT devices. Whether they use MQTT or whatnot underneath is not important; they should be managed by XMPP. It has all the infrastructure needed to handle IoT devices, and also to allow or block others from using data from your device.
      But I guess that companies don't want that.

    3. Look at XMPP. It has security, and doesn't need passwords. It can use certificates, and the owner's JID is all that needs to be known. There is no password exchange between device and owner, only between the XMPP (Jabber) server and client. Access control is handled with friend lists, which only the owner has control over.

  6. There needs to be some sort of push for local APIs (that are always on, not tied to subscriptions, etc.) so that critical functions aren't lost when the cloud service is down. Just look at Ecobee's holiday disaster a few years back, when their cloud service was down all through the X-mas season.

    Radiothermostat has a nice one https://radiothermostat.desk.com/customer/portal/articles/1268461-where-do-i-find-information-about-the-wifi-api-
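
    For illustration, reading a value over a local API can be as simple as the sketch below; the address, path, and field names are made up, not Radiothermostat's actual API.

import requests  # third-party: pip install requests

# Hypothetical local thermostat endpoint: no cloud round trip, so the call
# keeps working even when the vendor's service is down.
THERMOSTAT_URL = "http://192.168.1.50/api/temperature"   # made-up address/path

response = requests.get(THERMOSTAT_URL, timeout=2)
response.raise_for_status()
print(response.json())   # e.g. {"temp_c": 20.5, "setpoint_c": 21.0} (illustrative)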

  7. I have really started loving Node-RED.
    It comes pre-installed on the Raspberry Pi now. This seems like the tool that gives users the ability to control their own server instead of relying on some other computer somewhere. Personally, I could never trust Comcast enough to trust my IoT house to run on the internet instead of my LAN.

    Props to the Hackaday crew for making the IoT MQTT tutorials that got me started with my own DIY IoT ESP setup that I really love. This should be the standard; I threw away the other $30 smart plugs I bought.

  8. Well I am completely lost!

    There is a suggestion here that something other than TCP/IP will be used for IoT, and that small micros aren't enough for a TCP/IP stack. Sure, your cheapo wireless modulator module can open a garage door, but that's not intelligent data transfer. Open, close: it's just not rocket science. It's not in the same ballpark as IoT and is definitely not a turtle.

    After all, the 'I' in IoT does in fact stand for Internet, and the base protocol is already a standard. Just about any IoT module like the ESP8266, or the newer 32-bit version, has more than enough room for a TCP/IP stack – hell, there's even enough room for an HTTP server if you want.

    And as for JSON – when was it a standard??? It may be only recently that JSON parsing has been incorporated into the JavaScript engine of browsers, but that means nothing, because JSON isn't just used in a browser, and even in a browser you can easily write a JSON parser in JavaScript, because JSON is a native object expression in JavaScript. I seem to remember there being a JSON parser in PHP versions as old as 3.x.x, and it was well used because Microsoft's XML is total *bloatware* by comparison. Perhaps it wasn't well used on M$ platforms – wonder why!

    So on the note of JSON, here it is, a one-page reference that definitively defines JSON:
    http://json.org/

    1. Although JSON has indeed been around practically forever, it’s only as of 2013 that it has been officially accepted by a standards organization.

      If there’s one thing that I’ve learned, though, it’s that ‘official’ standards are complete bullshit. Especially when it comes to Microsoft software. Remember when they made their Office document format a standard? And how even their own office suite wasn’t completely compatible with that standard?

    2. You can trace XML back to SGML, which was used for the first HTML specs. SGML was designed at IBM, I think.
      So if HTML is good enough for the Internet, so is XML. XML is even easier to parse than HTML.

      For small devices we do have 6LoWPAN. And we should look at IPv6 for IoT, not IPv4; converting to IPv4 is a job for a gateway.

      And yes, it is bull to say small devices can’t handle IPv4 or IPv6. It has been done, but those stacks are not made for fast communication. They are slow.

      Some info
      https://news.ycombinator.com/item?id=5289730
      Contiki OS fits an operating system and an IP stack on a device with 50K of memory. It runs on a 6502 (the Apple II's 8-bit CPU) and the Atmel ATmega128RFA1, with a radio!
      http://www.contiki-os.org/support.html

      Yes, Linux and MS Windows 10 don't fit into those. But there are OSes that do.

  9. There are many areas of IOT which need standards for true interobject communications. The standards organizations are not doing a good job because they are often driven by businesses with a desire for closed proprietary systems to make money. What is needed are true grass roots open standards and luckily there are some quite good starting points. Not all of these are truly open, but open varieties can be created. I have set up a non-profit to help encourage such open standards, openiotfoundation.org, but it is early days so far.

    So what standards are there to start with? Clearly, the core of the Internet is based on the defacto Ethernet and Wifi standards. There are other wired and, more importantly, wireless protocols which will become useful at “the edge” of the IOT but it is pretty clear that existing wifi and wired standards will be at the core of IOT. That is one reason I like to use ESP8266 modules for my “edge” components, such as wireless sensors. Wifi is limited in terms of power consumption and range but it is good enough for 99% of my uses. I generally don’t use Bluetooth as its security is poor compared to Wifi. When I do, I have to add application layer security.

    At the next level comes the data transmission protocols. TCP/IP and UDP are clearly the ones to use over Ethernet and Wifi. Many current messaging protocols, such as MQTT and CoAP are designed for these specific protocols, with MQTT usually using TCP/IP and CoAP, UDP. I personally like to use MQTT where connection time and power usage is not an issue and CoAP where they are. Some people like to use HTTP REST calls but I find that, on one hand, it is too heavyweight and on the other hand in many ways too limited, but that is a much longer conversation.

    On top of that are the message syntax standards. Generally speaking, JSON has become the defacto standard and I use JSON structures for all my messages.

    Next are the security standards, and there are many levels of security, many of which have quite well-defined standards. For example, WPA2, when well implemented, provides pretty good security for communication with the edge endpoints, such as sensors which only talk to local objects that relay the information. The next level is TLS which, also when well implemented, offers pretty good protection for data being sent over the Internet. I, for example, use it with MQTT for communication with apps. It is the next level of security that needs a lot of work, and that is "user level" authentication and authorization. OAuth is used by many businesses, primarily to lock you into their cloud-based systems, but the protocol can also be used by individuals. There are other, complementary, security protocol standards starting to appear, such as Opacity, but it is not clear whether these will be truly open. This is an area I am actively working on.

    Finally, there are semantic standards. That is how data is represented. In practical terms that means providing some standard JSON data structures. This is a very hard nut to crack. There is a general tendency just to define structures that meet the immediate needs or to have people trying to set up standards in abstraction, like GATT for BLE. The only way that semantic standards can be truly universal is if there is a system that allows standards to evolve through community action, somewhat like Wikipedia. I am starting to work on this area but for my current applications I am just setting up carefully designed pragmatic data structures such that they can be changed later as standards emerge.
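
    To make that layering concrete, here is roughly what publishing one of my JSON messages looks like with paho-mqtt over TLS; the broker address, topic, credentials, and field names below are illustrative, not from a real deployment.

import json
import ssl

import paho.mqtt.client as mqtt   # third-party: pip install paho-mqtt

# Payload: a small JSON structure (the message-syntax layer).
message = json.dumps({"sensor": "greenhouse-01", "temp_c": 21.4, "rh_pct": 63})

# Transport: MQTT over TLS (the messaging and security layers).
client = mqtt.Client()                        # paho-mqtt 1.x style constructor
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)   # verify the broker's certificate
client.username_pw_set("sensor-gw", "not-a-real-password")
client.connect("broker.example.net", 8883)
client.publish("home/greenhouse/telemetry", message, qos=1)
client.disconnect()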

    1. Nice summary of the different standards challenges throughout the IoT layers. I have come to a similar conclusion for choice of protocol/standards although Wi-Fi power/range is IMO a limiting restriction for a lot of remote sensors (but for home use just add more Wi-Fi access points & design sensors to compensate for the power use – Thread is an interesting upcoming alternative).

      A significant piece of the IoT standards puzzle not mentioned in this article or the comments is the discovery protocols/services like AllJoyn and OIC – and now that both competing standards have merged into OCF, there is a decent chance of OCF becoming the interop standard that allows dissimilar IoT devices to discover and communicate with each other.

  10. I really don’t see it not being TCP/IP all the way down, unless I woke up in some alternate universe where the ESP8266 does not exist. As with USB, which is too complicated to bit-bang for most small micros yet micros exist with complete implementations in silicon, the hardware necessary to do both wifi and TCP/IP is only going to get cheaper, smaller, and lower power demanding. IPv4 is totally adequate for a local network whose nodes are separated from the big bad Internet by a router and more flexible guardians.

    It is also hard to see why a "meshier" IoT with things talking to other things is necessary or even useful. In just about any case where multiple things need to intercommunicate, there is going to be some central logic that needs to be applied, which implies a common controller. Mesh networking is not of much advantage if your basic comm scheme is wifi, since range is good and range extenders are a cheap standard product.

    Of course there is the matter of sensors that need long life on battery power, but that is a bit of a specialty niche; for the most part when we think IoT we think things like appliances, lights, door openers, and HVAC controls all of which have line power available. When you do need a sensor in some inaccessible place without power it will almost certainly be within low-power RF range of a place where power is available and you can locate a portal.

  11. Tcp/ip all the way down?

    No.

    If IoT means the 8266 to you, then I see why you would think so. But IoT also means border routers and other radio technologies that are not necessarily 6LoWPAN.

    Sleepy end nodes… Etc.

    Think larger scale, larger than 255 nodes in a subnet.

    1. Yes! All the way. There are IPv6 and IPv4 stacks running on the Atmel ATmega128 and the old 8-bit 6502 with Contiki OS.
      And there are proxies for devices that don't handle Bluetooth (which can carry IP now) or 6LoWPAN.
      And for security, we do have XMPP-IoT, which can work as a proxy for devices that can't handle XMPP themselves, or that run some obscure, insecure protocol like MQTT or anything else a computer can use to talk to a device.
      http://www.contiki-os.org/index.html#why
      http://www.contiki-os.org/hardware.html
      http://www.contiki-os.org/support.html

      Yes, IPv6 is easier to work with. You only need to manage a /64 net on the LAN.

  12. A bit late, maybe interesting re Twitter as a protocol / message bus:

    “why Twitter is such a big deal. The reason is that it’s a new messaging protocol, where you don’t specify the recipients. […] new protocols that take off are […] But Twitter is a protocol owned by a private company.” http://paulgraham.com/twitter.html (2009)

    “some of us will never be happy with only the centralized service a corporation like Facebook or Twitter provides. Eventually we don’t need the training wheels — it’s time to take off down the street in our own car […] implement a back-end service that does what Twitter does, with a web service API” http://scripting.com/stories/2009/02/17/whereIsTwittersWordpress.html (2009)

    “Twitter should not be a company alone. It should be an open protocol much like HTTP or email protocols (IMAP/POP). There should be an adopted industry standard that Twitter, the company, should and could (and still can) champion and work through with the guidance of other industry members.” https://technosailor.com/2012/03/01/twitter-as-a-protocol/ (2012, Cache: https://webcache.googleusercontent.com/search?q=cache:r587dAsrRqIJ:https://technosailor.com/2012/03/01/twitter-as-a-protocol/+&cd=1&hl=en&ct=clnk&gl=us )
