
Hackaday Links: October 4, 2020

In case you hadn’t noticed, it was a bad week for system admins. Pennsylvania-based Universal Health Services, a company that owns and operates hospitals across the US and UK, was hit by a ransomware attack early in the week. The attack, which appears to be the Ryuk ransomware, shut down systems used by hospitals and health care providers to schedule patient visits, report lab results, and do the important job of charting. It’s not clear how much the ransomers want, but given that UHS is a Fortune 500 company, it’s likely a tidy sum.

And as if an entire hospital corporation’s IT infrastructure being taken down isn’t bad enough, how about the multi-state 911 outage that occurred around the same time? Most news reports seemed to blame the outage on an Office 365 outage happening at the same time, but Krebs on Security dug a little deeper and traced the issue back to two companies that provide 911 call routing services. Each of the companies is blaming the other, so nobody is talking about the root cause of the issue. There’s no indication that it was malware or ransomware, though, and the outage was mercifully brief. But it just goes to show how vulnerable our systems have become.

Our final “really bad day at work” story comes from Japan, where a single piece of failed hardware shut down a $6-trillion stock market. The Tokyo Stock Exchange, the third-largest bourse in the world, had to be completely shut down early in the trading day Thursday when a shared disk array failed. The device was supposed to fail over automatically to a backup unit, but apparently the handoff process didn’t work. This led to cascading failures and blank terminals on the desks of thousands of traders. Exchange officials made the call to shut everything down for the day and bring everything back up carefully. We imagine there are some systems people sweating it out this weekend to figure out what went wrong and how to keep it from happening again.

With our systems apparently becoming increasingly brittle, it might be a good time to take a look at what goes into space-rated operating systems. Ars Technica has a fascinating overview of the real-time OSes used for space probes, where failure is not an option and an error of a few milliseconds can destroy billions of dollars of hardware. The article focuses on the RTOS VxWorks and goes into detail on the mysterious rebooting error that affected the Mars Pathfinder mission in 1997. Space travel isn’t the same as running a hospital or stock exchange, of course, but there are probably lessons to be learned here.

As if 2020 hasn’t dealt enough previews of various apocalyptic scenarios, here’s what surely must be a sign that the end is nigh: AI-generated PowerPoint slides. For anyone who has ever had to sit through an endless slide deck and wondered who the hell came up with such drivel, the answer may soon be: no one. DeckRobot, a startup company, is building an AI-powered extension to Microsoft Office to automate the production of “company compliant and visually appealing” slide decks. The extension will apparently be trained using “thousands and thousands of real PowerPoint slides”. So, great — AI no longer has to have the keys to the nukes to do us in. It’ll just bore us all to death.

And finally, if you need a bit of a palate-cleanser after all that, please do check out robotic curling. Yes, the sport that everyone loves to make fun of is actually way more complicated than it seems, and getting a robot to launch the stones on the icy playing field is a really complex and interesting problem. The robot — dubbed “Curly”, of course — looks like a souped-up Roomba. After sizing up the playing field with a camera on an extendable boom, it pushes the stone while giving it a gentle spin to ease it into exactly the right spot. Sadly, the wickedly energetic work of the sweepers and their trajectory-altering brooms has not yet been automated, but it’s still pretty cool to watch. But fair warning: you might soon find yourself with a curling habit to support.

Twitter: It’s Not The Algorithm’s Fault. It’s Much Worse.

Maybe you heard about the anger surrounding Twitter’s automatic cropping of images. When users submit pictures that are too tall or too wide for the layout, Twitter automatically crops them to roughly a square. Instead of just picking, say, the largest square that’s closest to the center of the image, they use some “algorithm”, likely a neural network, trained to find people’s faces and make sure they’re cropped in.
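For contrast, the “dumb” crop mentioned above takes only a few lines of code. Here’s a minimal sketch in Python using Pillow — the function name is our own invention, and this obviously isn’t Twitter’s actual pipeline:

```python
from PIL import Image

def center_square_crop(path: str) -> Image.Image:
    """Take the largest square centered on the middle of the image --
    the naive alternative to a saliency-based crop."""
    img = Image.open(path)
    side = min(img.width, img.height)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    return img.crop((left, top, left + side, top + side))
```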

The problem is that when a too-tall or too-wide image includes two or more people, and they’ve got different colored skin, the crop picks the lighter face. That’s really offensive, and something’s clearly wrong, but what?

A neural network is really just a mathematical equation, with the input variables in this case being convolutions over the pixels in the image, and training it essentially consists of picking the values for all the coefficients. You do this by applying inputs, seeing how wrong the outputs are, and updating the coefficients to make the answer a little more right. Do this a bazillion times, with a big enough model and dataset, and you can make a machine recognize different breeds of cat.
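To make that concrete, here’s a toy version of the “apply inputs, see how wrong you are, nudge the coefficients” loop — a bare linear classifier in Python/NumPy rather than a real convolutional network, with every name and number invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # 100 samples, 8 input features
y = rng.integers(0, 3, size=100)          # 3 classes, e.g. "cat", "lion", "airplane"
W = np.zeros((8, 3))                      # the coefficients we will fit

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(1000):
    probs = softmax(X @ W)                # apply inputs
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0     # how wrong was each output?
    W -= 0.01 * (X.T @ grad) / len(y)     # nudge the coefficients a little more right
```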

What went wrong at Twitter? Right now it’s speculation, but my money says it lies with either the training dataset or the coefficient-update step. The need to include people of all races in the training dataset is so blatantly obvious that we hope that’s not the problem; although getting a representative dataset is hard, it’s known to be hard, and they should be on top of that.

Which means that the issue might be coefficient fitting, and this is where math and culture collide. Imagine that your algorithm just misclassified a cat as an “airplane” or as a “lion”. You need to modify the coefficients so that they move the answer away from this result a bit, and more toward “cat”. Do you move them equally far from “airplane” and “lion”, or is “airplane” somehow more wrong? To capture this notion of different wrongnesses, you use a loss function that numerically encapsulates exactly what it is you want the network to learn, and then you take bigger or smaller steps in the right direction depending on how bad the result was.
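One way to encode “airplane is more wrong than lion” is an expected-cost loss: a table of penalties for every (true, predicted) pair, weighted by how much probability the network puts on each answer. A minimal sketch, with a made-up label set and cost table:

```python
import numpy as np

CLASSES = ["cat", "lion", "airplane"]          # hypothetical label set
# COST[true][predicted]: how bad each mistake is; correct answers cost nothing.
COST = np.array([[0.0, 1.0, 5.0],              # true cat: lion is wrong, airplane is more wrong
                 [1.0, 0.0, 5.0],              # true lion
                 [5.0, 5.0, 0.0]])             # true airplane

def expected_cost_loss(probs, true_idx):
    """Penalize the probability mass placed on each wrong answer,
    scaled by how wrong that particular answer is."""
    return float(COST[true_idx] @ probs)

probs = np.array([0.2, 0.5, 0.3])              # network leans "lion"; truth is "cat"
print(expected_cost_loss(probs, true_idx=0))   # 0.5*1.0 + 0.3*5.0 = 2.0
```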

Let that sink in for a second. You need a mathematical equation that summarizes what you want the network to learn. (But not how you want it to learn it. That’s the revolutionary quality of applied neural networks.)

Now imagine, as happened to Google, your algorithm fits “gorilla” to the image of a black person. That’s wrong, but it’s wrong in a categorically different way from simply fitting “airplane” to the same person. How do you write the loss function that incorporates some penalty for racially offensive results? Ideally, you would want them to never happen, so you could imagine trying to identify all possible insults and assigning those outcomes an infinitely large loss. Which is essentially what Google did — their “workaround” was to stop classifying “gorilla” entirely because the loss incurred by misclassifying a person as a gorilla was so large.
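Google’s blunt fix amounts to taking the label out of play entirely. In code, that’s as simple as masking the blocked class before the argmax — a sketch with an invented label list, not anything from Google’s actual system:

```python
import numpy as np

LABELS = ["cat", "dog", "gorilla", "person"]   # hypothetical label set
BLOCKED = {"gorilla"}                          # outcomes assigned "infinite" loss

def safe_top_label(logits):
    """Blocked labels are forced to -inf, so they can never be predicted."""
    scores = np.array(logits, dtype=float)
    for i, name in enumerate(LABELS):
        if name in BLOCKED:
            scores[i] = -np.inf
    return LABELS[int(np.argmax(scores))]

print(safe_top_label([0.1, 0.3, 2.5, 1.9]))    # "gorilla" scores highest, but "person" wins
```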

This is a fundamental problem with neural networks — they’re only as good as the data and the loss function. These days, the data has become less of a problem, but getting the loss right is a multi-level game, as these neural network trainwrecks demonstrate. And it’s not as easy as writing an equation that isn’t “racist”, whatever that would mean. The loss function is being asked to quantify human sensitivities, navigate around them, and ultimately weigh the slight risk of making a particularly offensive misclassification against not recognizing certain animals at all.

I’m not sure this problem is solvable, even with tremendously large datasets. (There are mathematical proofs that with infinitely large datasets the model will classify everything correctly, so you needn’t worry. But how close are we to infinity? Are asymptotic proofs relevant?)

Anyway, this problem is bigger than algorithms, or even their writers, being “racist”. It may be a fundamental problem of machine learning, and we’re definitely going to see further permutations of the Twitter fiasco in the future as machine classification is increasingly asked to respect human dignity.

Community Testing Suggests Bias In Twitter’s Cropping Algorithm

Social media and online services are now such a huge part of daily life that our entire world is being shaped by algorithms. Arcane in their workings, they are responsible for the content we see and the adverts we’re shown. Just as importantly, they decide what is hidden from view as well.

Important: Much of this post discusses the performance of a live website algorithm. Some of the links in this post may not perform as reported if viewed at a later date. 

The initial Zoom problem that brought Twitter’s issues to light.

Recently, [Colin Madland] posted some screenshots of a Zoom meeting to Twitter, pointing out how Zoom’s background detection algorithm had improperly erased the head of a colleague with darker skin. In doing so, [Colin] noticed a strange effect — although the screenshot he submitted shows both of their faces, Twitter would always crop the image to show just his light-skinned face, no matter the image orientation. The Twitter community raced to explore the problem, and the fallout was swift.


Ideas To Prototypes Hack Chat With Nick Bild

Join us on Wednesday, July 29 at noon Pacific for the Ideas to Prototypes Hack Chat with Nick Bild!

For most of us, ideas are easy to come by. Taking a shower can generate half a dozen of them, the bulk of which will be gone before your hair is dry. But a few ideas will stick, and eventually make it onto paper or its electronic equivalent, to be played with and tweaked until they coalesce into a plan. And a plan, if we’re lucky, is what’s needed to put that original idea into action, to bring it to fruition and see just what it can do.

No matter what you’re building, the ability to turn ideas into prototypes is what moves projects forward, and it’s what most of us live for. Seeing something on the bench or the shop floor that was once just a couple of back-of-the-napkin sketches, and before that only an abstract concept in your head, is immensely satisfying.

The path from idea to prototype, however, is not always a smooth one, as Nick Bild can attest. We’ve been covering Nick’s work for a while now, starting with his “nearly practical” breadboard 6502 computer, the Vectron, up to his recent forays into machine learning with ShAIdes, his home-automation-controlling AI sunglasses. On the way we’ve seen his machine-learning pitch predictor, dazzle-proof glasses, and even a wardrobe-malfunction preventer.

All of Nick’s stuff is cool, to be sure, but there’s a method to his productivity, and we’ll talk about that and more in this Hack Chat. Join us as we dive into Nick’s projects and find out what he does to turn his ideas into prototypes.

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, July 29 at 12:00 PM Pacific time. If time zones have you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.


Hackaday Links: June 14, 2020

You say you want to go to Mars, but the vanishingly thin atmosphere, the toxic and corrosive soil, the bitter cold, the deadly radiation that sleets down constantly, and the long, perilous journey that you probably won’t return from have turned you off a little. Fear not, because there’s still a way to get at least part of you to Mars: your intelligence. Curiosity, the Mars rover that’s in the eighth year of its original two-year mission, is completely remote-controlled, and NASA would like to add some self-driving capabilities to it. Which is why they’re asking for human help in classifying thousands of images of the Martian surface. By annotating images and pointing out what looks like soil and what looks like rock, you’ll be training an algorithm that might one day be sent up to the rover. If you’ve got the time, give it a shot — it seems a better use of time than training our eventual AI overlords.

We got a tip this week that ASTM, the international standards organization, has made its collection of standards for testing PPE available to the public. With titles like “Standard Test Method for Resistance of Medical Face Masks to Penetration by Synthetic Blood (Horizontal Projection of Fixed Volume at a Known Velocity)”, it seems like the standards body wants to make sure that all that homebrew PPE gets tested properly before being put into service. The timing of this release is fortuitous since this week’s Hack Chat features Hiram Gay and Lex Kravitz, colleagues from the Washington University School of Medicine who will talk about what they did to test a respirator made from a full-face snorkel mask.

There’s little doubt that Lego played a huge part in the development of many engineers, and many of us never really put them away for good. We still pull them out occasionally, for fun or even for work, especially the Technic parts, which make a great prototyping system. But what if you need a Technic piece that you don’t have, or one that never existed in the first place? Easy — design and print your own custom Technic pieces. Lego Part Designer is a web app that breaks Technic parts down into five possible blocks, and lets you combine them as you see fit. We doubt that most FDM printers can deal with the fine tolerances needed for that satisfying Lego fit, but good enough might be all you need to get a design working.

Chances are pretty good that you’ve participated in more than a few video conferencing sessions lately, and if you’re anything like us you’ve found the experience somewhat lacking. The standard UI, with everyone in the conference organized in orderly rows and columns, reminds us of either a police line-up or the opening of The Brady Bunch, neither of which is particularly appealing. The paradigm could use a little rethinking, which is what Laptops in Space aims to do. By putting each participant’s video feed in a virtual laptop and letting them float in space, you’re supposed to have a more organic meeting experience. There’s a tweet with a short clip, or you can try it yourself. We’re not sure how we feel about it yet, but we’re glad someone is at least trying something new in this space.

And finally, if you’re in need of a primer on charlieplexing, or perhaps just need to brush up on the topic, [pileofstuff] has just released a video that might be just what you need. He explains the tri-state logic LED multiplexing method in detail, and even goes into some alternate uses, like using optocouplers to drive higher loads. We like his style — informal, but with a good level of detail that serves as a jumping-off point for further exploration.
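If you’d rather poke at the idea than watch the whole video, the core trick fits in a few lines. Here’s a minimal charlieplexing sketch in Python for a Raspberry Pi using the RPi.GPIO library — the pin numbers and wiring are our own assumptions, not anything from [pileofstuff]’s video:

```python
import RPi.GPIO as GPIO

# Three pins can address 3 * (3 - 1) = 6 LEDs, each wired between a unique
# ordered pair of pins (anode on the first, cathode on the second).
PINS = [17, 27, 22]                 # hypothetical BCM pin numbers
GPIO.setmode(GPIO.BCM)

def light(anode, cathode):
    """Drive exactly one LED; every other pin is left floating (high-Z)."""
    for pin in PINS:
        GPIO.setup(pin, GPIO.IN)    # tri-state: neither sourcing nor sinking current
    GPIO.setup(anode, GPIO.OUT)
    GPIO.output(anode, GPIO.HIGH)
    GPIO.setup(cathode, GPIO.OUT)
    GPIO.output(cathode, GPIO.LOW)

light(PINS[0], PINS[1])             # lights the LED wired from pin 17 to pin 27
```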

Autonomous Sentry Gun Packs A Punch And A Ton Of Build Tips

What has dual compressed-air cannons, 500 roll-on deodorant balls, and a machine-learning brain with a bad attitude? We didn’t know either, until [Leo Fernekes] dropped this video on his autonomous robot sentry gun and saw it in action for ourselves.

Now, we’ve seen tons of sentry guns on these pages before, shooting everything from water to various forms of Nerf. And plenty of those builds have used some form of machine vision to aim the gun onto the target. So while it might appear that [Leo]’s plowing old ground here, this build is chock full of interesting tips and tricks.

It started when [Leo] saw a video on TensorFlow basics from our friend [Edje Electronics], which gave him the boost needed to jump into an AI project. The controller he ended up with looks for humans in the scene and slews the turret onto target, where the air cannons can do their thing. The hefty ammo is propelled by compressed air, which is dumped into the chamber using a solenoid valve with an interesting driver that maximizes the speed at which it opens. Style points go to the bacteriophage T4-inspired design, and to the sequence starting at 1:34 which reminded us of the factory scene from RoboCop.
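We don’t have [Leo]’s source, but the aiming math for this kind of build is straightforward: take the detector’s bounding box, find how far its center sits from the center of the frame, and convert that pixel offset into pan/tilt corrections using the camera’s field of view. A rough sketch of that step, with the resolution and FOV numbers assumed:

```python
# Rough reconstruction of the aiming step, not [Leo]'s actual code.
FRAME_W, FRAME_H = 1280, 720          # assumed camera resolution
HFOV_DEG, VFOV_DEG = 62.2, 48.8       # assumed camera field of view

def aim_correction(box):
    """box = (xmin, ymin, xmax, ymax) in pixels; returns (pan, tilt) in degrees
    to slew the turret so the box center lands in the middle of the frame."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    pan = (cx - FRAME_W / 2) / FRAME_W * HFOV_DEG
    tilt = (FRAME_H / 2 - cy) / FRAME_H * VFOV_DEG
    return pan, tilt

print(aim_correction((900, 200, 1100, 600)))   # target right of center: pan right, tilt up a touch
```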

[Leo] really put a ton of work into this project, and the results show. He is hoping to get an art gallery or museum to show it as an interactive piece to comment on one possible robot-human future, presumably after getting guests to sign a release. Whatever happens to it, the robot looks great and [Leo] learned a lot from it, as did we.


Machine Vision Keeps Track Of Grubby Hands

Can you remember everything you’ve touched in a given day? If you’re being honest, the answer is, “Probably not.” We humans are a tactile species, with an outsized proportion of both our motor and sensory nerves sent directly to our hands. We interact with the world through our hands, and unfortunately that may mean inadvertently spreading disease.

[Nick Bild] has a potential solution: a machine-vision system called Deep Clean, which monitors a scene and records anything in it that has been touched. [Nick]’s system uses a Jetson Xavier and a stereo camera to detect depth in a scene; he built his camera from a pair of Raspberry Pi cams and a Pi 3B+, but other depth cameras like a Kinect could probably do the job. The idea is to watch the scene for human hands — OpenPose is the tool he chose for that job — and correlate their depth in the scene with the depth of objects. Touch a doorknob or a light switch, and a marker is left on the scene. A cleaning crew could then look at the marked-up scene to determine which areas need extra attention. We can think of plenty of applications that extend beyond the current crisis, as the ability to map areas that have been touched seems to be generally useful.
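The core correlation can be sketched in a few lines: if a hand keypoint from the pose estimator sits at roughly the same depth as the surface behind it, flag that spot as touched. This is our own illustration of the idea, not [Nick]’s code, and the tolerance value is invented:

```python
# Assumed tolerance: count a touch when a hand keypoint is within 3 cm of the surface.
TOUCH_TOLERANCE_M = 0.03

def update_touch_map(touch_map, scene_depth, live_depth, hand_points):
    """Mark surfaces that a hand has come into contact with.

    touch_map:   boolean array, True where something has been touched
    scene_depth: depth map (meters) of the empty scene
    live_depth:  current depth map from the stereo camera
    hand_points: (row, col) hand keypoints from a pose estimator such as OpenPose
    """
    for r, c in hand_points:
        if abs(live_depth[r, c] - scene_depth[r, c]) < TOUCH_TOLERANCE_M:
            touch_map[r, c] = True   # hand is essentially at the surface: mark it
    return touch_map
```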

[Nick] has been getting some mileage out of that Xavier lately — he’s used it to build an AI umpire and shades that help you find lost stuff. Who knows what else he’ll find to do with it during this time of confinement?
