Hack The Cloud!

The obvious rants against software or services “in the cloud” are that you don’t own it, your data isn’t on your own hard drive, or that, when the interwebs are down, you just can’t get your work done. But the one that really grinds my gears is that, at least for many cloud services, you just can’t play around with them. Why does that matter? Well, as a hacker type, of course, I like to fool around, but more deeply, I feel that this invitation to play around is what’s going to grow up the next generation of hackers. Openness matters not just for now, but also for the future.

Of course, it’s unfair to pin all of this on the cloud. There are plenty of services with nice open APIs that let you play around with their systems as much as you want — witness the abundance of amusing things you can do with Twitter or Twitch. Still, every day seems to bring another formerly-open API that gets bought up by some wealthy company and shut down. I built a nice “is it going to rain today” display out of a meter-long WS2812 strip and an ESP8266, but the Dark Sky API got bought up by Apple and is going dark soon (tee-hee!), leaving me wondering how I’m going to get easy weather data in the next few months.

Or take my e-mail annunciator. I wrote a little script that, when new mail arrives that’s work-related or from my wife (read: important), displays the subject line on a VFD that I have perched on my monitor. This works with Gmail, which I have to use for work, because it supports IMAP, so at least I can do cool things with the mail once it reaches my server. But can I do anything with Google Groups, which we use for the Hackaday Tip Line? Fat chance!
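For the curious, the guts of such a script fit in a screenful. Here’s a minimal sketch, not my actual code: the account, password, and “important” sender addresses are placeholders, and it prints to stdout where the real thing would write to the VFD over serial.

    # Minimal e-mail annunciator sketch: poll Gmail over IMAP for unread
    # mail from "important" senders and print the subject line.
    # Account, password, and sender addresses are placeholders; note that
    # Gmail wants an app password here, not your real one.
    import email
    import imaplib
    from email.header import decode_header

    IMPORTANT = ["boss@example.com", "wife@example.com"]  # hypothetical

    def check_mail():
        imap = imaplib.IMAP4_SSL("imap.gmail.com")
        imap.login("me@example.com", "app-password")
        imap.select("INBOX", readonly=True)  # don't mark anything as read
        for sender in IMPORTANT:
            # Message numbers of unread mail from this sender
            _, data = imap.search(None, f'(UNSEEN FROM "{sender}")')
            for num in data[0].split():
                _, parts = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (SUBJECT)])")
                msg = email.message_from_bytes(parts[0][1])
                subject, enc = decode_header(msg.get("Subject", ""))[0]
                if isinstance(subject, bytes):
                    subject = subject.decode(enc or "utf-8")
                print(subject)  # the real script sends this to the VFD
        imap.logout()

    if __name__ == "__main__":
        check_mail()

Run that from cron every minute or so and you’ve got the core of an annunciator.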

So there’s good “cloud” and there’s bad “cloud”. Good cloud is open cloud. Good cloud invites you to play, to innovate, and to come up with the right solutions for yourself. Good cloud gives you access to your data. Good cloud is hackable cloud. Let’s see more of that.

33 thoughts on “Hack The Cloud!”

    1. … I’ve used it extensively for years. Certainly some groups are using groups.io, and until recently there was Yahoo Groups. Then there are various Mailman-based mailing lists, and more or less closely USENET-related things, but, uh, what exactly do you think people are using instead? IRC and Discord aren’t really mailing-list replacements.

  1. The constant move towards hiding complexity behind simple APIs is IMHO the path to doom. I’m working with a lot of EE master’s students, and some of them are starting to use Arduino C++ or even MicroPython for embedded programming. They have no idea what the system is doing. For desktop applications or mobile apps it’s even worse. I wonder if there will be enough people left at some point who know enough low-level programming to improve the stuff we have on a system level.

    1. Maybe that is the goal. Not to go down a rabbit hole, but I think tech in general doesn’t really want innovators, unless the innovators are working for them. Everything is about money, especially in tech. The ‘at some point’ you mention is what I believe to be the end goal for big business. You really just have to look at the mainstream news to see it, granted a story at a time, but this is how these things work.

    2. Abstracting lots of stuff can lead to some issues, but if we had *none* and everyone was required to know all the low-level details of everything they’re dealing with, there would be much less progress in actual practical user-facing applications, and programming would be a far harder field to enter.

    3. Y2K should have taught us a lesson. I remember retired programmers had to be hired to fix old systems. At the university I used to work at, they relied too heavily on students to write code for experience. Unfortunately, lots of it wasn’t well documented or was done haphazardly. When it came time to make changes to systems, the students had long since graduated, and professional programmers had to be hired to write all-new systems.

  2. I’ve been mulling over the weather API problem for a while to finish my ‘is it raining’ hardware design. The simplest approach seems to be building a separate cloud-hosted ‘proxy’ on Heroku / Lambda / other free-tier platforms that queries the APIs on your gadgets’ behalf, and pointing your gadgets at that middleware instead of having them query and parse the third-party API directly.

    In addition to relieving the JSON-parsing nightmare on limited-memory hardware, the middleware can cache only the specific weather data that the hardware needs, relieving the load on the third-party API and preventing rate-limiting of your API key. If the current weather source is going to sunset (Dark Sky pun…), it’s much simpler to update the cloud-based middleware to query and massage the data your gadget already expects from another API than to attempt to update the gadget.
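    Something like this minimal Flask sketch is what I mean; the upstream URL and JSON field names are made-up placeholders, and a real version would add an API key and error handling:

        # Middleware sketch: fetch from a third-party weather API, cache the
        # result, and serve only the one boolean the gadget actually needs.
        # The upstream URL and JSON field names here are hypothetical.
        import time
        import requests
        from flask import Flask, jsonify

        app = Flask(__name__)

        UPSTREAM = "https://api.example-weather.com/v1/forecast?lat=52.5&lon=13.4"
        CACHE_SECONDS = 600  # one upstream call per ten minutes, total
        _cache = {"stamp": 0.0, "payload": None}

        @app.route("/will-it-rain")
        def will_it_rain():
            now = time.time()
            if now - _cache["stamp"] > CACHE_SECONDS:
                full = requests.get(UPSTREAM, timeout=10).json()
                # Boil the big upstream JSON down to what the ESP8266 wants.
                _cache["payload"] = {"rain": full["hourly"][0]["precip_prob"] > 0.5}
                _cache["stamp"] = now
            return jsonify(_cache["payload"])

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)

    When the weather source changes, only UPSTREAM and the one parsing line move; the gadget keeps hitting /will-it-rain forever.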

      1. Arrrgggh, I hate when people reference xkcd (Is there an xkcd for that feeling?), because then I spend the next 30 minutes reading ones I’ve not seen before… and OMG, I just learned about hovering over them for additional insight.

  3. Well, that’s the usual behavior of Apple in particular. As soon as they buy some service, everything outside the Apple ecosystem will immediately be discontinued, with no transition or contingency plan.
    There’s no point in having an “open” cloud (what’s that supposed to mean anyway?) when your services can be shut down just willy-nilly. I also wouldn’t argue that this service wasn’t open: They had an API, which is probably as open as you can get while still running the software-as-a-service model.

    What we’d really need is standardised interfaces and protocols, so you’re just choosing the data/computation/resource vendor/supplier. This concept works amazingly well for other industries: for example, I don’t have to buy different tools, adapt my assembly procedure and modify my product if I change the supplier of my screws. Still, there seems to be such a demand for screws that screw-producing companies survive in spite of their product being easily replaceable by a product of some other company.
    However, this is not what we’re seeing in IT, with ever more dependence on very specific systems and infrastructures. On the surface, using “standards” for communication (REST APIs, MQTT, JSON, what-have-you) might make it look as if these components are loosely coupled to your application, but there’s not a lot of agreement on payload format. So you’re looking at either anticipating frequent change and building converters from the supplier’s data to your internal data representation and vice versa (sketched below), or, especially if you don’t think that’s worth it, just modeling your internal data representation in a way that closely matches the structure of the data you get from your cloud data provider. Both approaches suck. The first one adds needless complexity and bloat for pretty much everything exchanging data; the second one guarantees your application will break if the data representation changes, be it due to breaking updates or a change in your cloud-based resource supplier.
    There were some interesting, but far too inflexible ideas in the 90s and early 00s for providing data format descriptors, but as far as I can tell, they seem to have gone nowhere.
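    For what it’s worth, here’s a sketch of that converter approach in Python; the suppliers and their payload shapes are invented for illustration:

        # One internal representation, plus one small adapter per supplier.
        # The supplier payload shapes here are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Observation:
            """The internal representation: the only shape the app sees."""
            temperature_c: float
            rain_probability: float  # 0.0 .. 1.0

        def from_supplier_a(payload: dict) -> Observation:
            # Supplier A reports Fahrenheit and percentages.
            return Observation(
                temperature_c=(payload["temp_f"] - 32) * 5 / 9,
                rain_probability=payload["rain_pct"] / 100,
            )

        def from_supplier_b(payload: dict) -> Observation:
            # Supplier B nests everything under "current" and uses fractions.
            cur = payload["current"]
            return Observation(
                temperature_c=cur["temp_c"],
                rain_probability=cur["precip"],
            )

    Switching suppliers is then one new adapter, but every supplier needs one, and the adapter breaks the moment a payload shape silently changes, which is exactly the complexity and fragility I’m complaining about.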

  4. The bad part about the cloud is indeed that it may be “here today, gone tomorrow”. I keep casting a wary eye on GitHub, but if that goes bad, I certainly won’t lose data: with a local repository on my machine, I can just push it elsewhere.

    But in general, the rule is simple — cloud services suck big time.

  5. I have a stupid question – VFD obviously doesn’t mean “Variable Frequency Drive” (motor drive) in this context (mounted on top of a monitor), as it has for 30 odd years of my experience. What is it?

  6. “The obvious rants against software or services “in the cloud” are that you don’t own it, your data isn’t on your own hard drive, or that, when the interwebs are down, you just can’t get your work done.”

    Private clouds. Play away.

  7. As soon as my oldest child is ready, I am training her to be a systems administrator so that I have a 2IC to back me up; that way I can more confidently commit to the process of migrating my entire family’s “digital everything” across to locally hosted FOSS services. I can’t see why this couldn’t also be achieved by other configurations of human groups, i.e. micro-communities. The 10% of functions we may not immediately have during the switchover are not going to be worth the continual dependence on external service providers. And when I say everything, I mean everything that is possible, including Wikipedia mirrors. I may look at extending access to some of that to my local community via a Wi-Fi mesh if I am comfortable with the security. Imagine the impact of large numbers of people doing that, and sharing their knowledge as to how best to go about it.

  8. What I’d love to see: a tool that connects to Office365 and exposes a CalDAV interface and a fully fledged IMAP interface.

    My workplace has just migrated to this turd of an unproductivity suite. We came from Postfix+Dovecot and SOGo, where IMAP worked beautifully (with public folders) with any email client you threw at it, and calendaring worked with anything that spoke CalDAV.

    I now have the situation where public mail folders are invisible, sharing a mail folder is impossible because the server “doesn’t support this feature”, and calendars are completely invisible because Microsoft still lives in the 90s and refuses to use an agreed-upon protocol for calendaring.

    There’s an API, it seems, for managing calendars, and possibly public folders too, so there may be a way, but it certainly isn’t “out-of-the-box” like it ought to be in 2021.
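    The API in question is presumably Microsoft Graph, and reading a calendar from it is straightforward enough once you’ve fought your way to an OAuth2 token. A rough sketch, with token acquisition (app registration, consent, MSAL and friends) deliberately not shown:

        # Sketch: list upcoming calendar events via Microsoft Graph.
        # Assumes you already hold a valid OAuth2 access token; getting
        # one is the hard part and is omitted here.
        import requests

        TOKEN = "eyJ..."  # placeholder access token

        resp = requests.get(
            "https://graph.microsoft.com/v1.0/me/events"
            "?$select=subject,start,end&$top=10&$orderby=start/dateTime",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()

        for event in resp.json()["value"]:
            print(event["start"]["dateTime"], event["subject"])

    The bridge tool I’m wishing for would essentially sit in a loop translating CalDAV and IMAP requests into calls like that one.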

  9. I don’t like the cloud in some instances because it creates a reliance on an outside service over which I have no control. That’s why, when I buy an e-book, I use a rooted phone so I can save the file from it and convert it to PDF. It’s why I still use POP mail to this day in an old hacked version of Outlook Express (running everything through Postini before downloading it). If it’s mine, I want it right here on my PC or on my device, where I have control and ownership. Making devices inoperative because you don’t have a connection to the internet is something I’m simply not willing to tolerate. Of course I’m not talking about routers and network devices, which exist for the sole purpose of internet connectivity; I’m talking end-user products. Remote file access is via my website/FTP site. I have knowingly put myself in a position where I don’t rely on the cloud, and overall I’m happy with my approach. There are some really good reasons for cloud services, mind you, but the more I purposefully choose my own way of using the internet, the more satisfied I am with it.

  10. We see a lot of talk around here about why cloud solutions are bad. And we see a lot of projects using them.

    It seems like there isn’t much talk about how to host something oneself, or even much mention of port forwarding.

    Is it because there are no interesting projects being done this way? Is it just assumed that everyone knows about that? Given how many people seem to be going straight to the cloud for their projects without even considering self-hosting, I don’t think people do know that they can. Or are many of the world’s ISPs putting customers behind NAT right at the modem? I haven’t seen this where I live, but maybe I’m lucky.

    I don’t get it. What gives? Why don’t we see projects here where one is running their own services?
