There are many viable options for home security systems, but where is the fun in watching a static camera feed from inside your place? The freedom to really look around might have been what compelled [Varun Kumar] to build a security car robot to drive around his place and make sure all is in order.
With cost-effectiveness and WiFi or internet accessibility in mind, an Android smartphone provides the foundation of this build, skipping the need for a separate Bluetooth or WiFi module. It is backed up by an Arduino Uno, an L298 motor controller, and two geared DC motors powering the wheels.
Further taking advantage of the phone’s functionality, the robot is controlled by DTMF tones. Commands are generated with the DTMF Tone Generator app and played out through the phone’s 3.5mm jack, where they are interpreted by an MT8870DE DTMF decoder module. While this control method carries some risks, as with many IoT-like devices, [Kumar] has circumvented one of DTMF’s vulnerabilities by requiring a PIN before the security car will accept any commands.
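For a sense of how the decoder side might fit together, here is a minimal Arduino sketch that reads the MT8870DE’s four data outputs (Q1 through Q4) when its StD pin signals a valid tone, and maps a few digits to L298 direction pins. The pin assignments, the digit-to-motion mapping, and the omission of the PIN check are all assumptions for illustration, not [Kumar]’s actual firmware.

// Illustrative Arduino sketch: read an MT8870 DTMF decoder and drive an L298.
// Pin assignments and the digit-to-motion mapping are assumptions, not the
// project's actual firmware.

const int STD_PIN = 2;                 // StD: goes high when a valid tone is latched
const int Q_PINS[4] = {3, 4, 5, 6};    // Q1..Q4: 4-bit code of the received digit
const int IN1 = 8, IN2 = 9, IN3 = 10, IN4 = 11;  // L298 direction inputs

void setMotors(bool l1, bool l2, bool r1, bool r2) {
  digitalWrite(IN1, l1); digitalWrite(IN2, l2);
  digitalWrite(IN3, r1); digitalWrite(IN4, r2);
}

void setup() {
  pinMode(STD_PIN, INPUT);
  for (int i = 0; i < 4; i++) pinMode(Q_PINS[i], INPUT);
  for (int p = IN1; p <= IN4; p++) pinMode(p, OUTPUT);
  setMotors(LOW, LOW, LOW, LOW);       // start stopped
}

void loop() {
  if (digitalRead(STD_PIN) == HIGH) {  // a new DTMF digit has been decoded
    int digit = 0;
    for (int i = 0; i < 4; i++)
      digit |= digitalRead(Q_PINS[i]) << i;      // assemble Q1..Q4 into 0-15

    switch (digit) {                   // example mapping: 2=fwd, 8=rev, 4/6=turn, 5=stop
      case 2: setMotors(HIGH, LOW, HIGH, LOW); break;
      case 8: setMotors(LOW, HIGH, LOW, HIGH); break;
      case 4: setMotors(LOW, HIGH, HIGH, LOW); break;
      case 6: setMotors(HIGH, LOW, LOW, HIGH); break;
      case 5: setMotors(LOW, LOW, LOW, LOW);   break;
    }
    while (digitalRead(STD_PIN) == HIGH) {}      // wait for the tone to end
  }
}

In the real build, of course, a PIN sequence would have to be matched before any of these motion commands were honored.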
He obtains a live video feed from the phone using AirDroid in concert with a VNC server, and a servo motor lets the phone sweep left and right for a better look. A VNC client on [Kumar]’s laptop can then view the feed and issue commands. Check it out in action after the break!
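The pan mechanism itself needs very little code. Something like the sketch below, using the standard Arduino Servo library, would sweep the phone mount back and forth; the signal pin and sweep limits are guesses for illustration, and in the actual robot the sweep would presumably be triggered by a command rather than running continuously.

// Illustrative servo sweep for the phone mount, using the standard Servo library.
// The control pin and sweep limits are assumptions for demonstration.
#include <Servo.h>

Servo phoneMount;

void setup() {
  phoneMount.attach(7);        // assumed signal pin for the pan servo
}

void loop() {
  for (int angle = 45; angle <= 135; angle += 5) {   // pan right
    phoneMount.write(angle);
    delay(50);
  }
  for (int angle = 135; angle >= 45; angle -= 5) {   // pan back left
    phoneMount.write(angle);
    delay(50);
  }
}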
Twitter is kind of a crazy place. World leaders doing verbal battle, hashtags that rise and fall along with the social climate, and a never-ending barrage of cat pictures all make for a tumultuous stream of consciousness that runs 24/7. What exactly we’re supposed to do with this information is still up for debate, as Twitter has yet to turn it into a profitable service after over a decade of operation. Still, it’s a grand experiment that offers a rare glimpse into the human hive-mind for anyone brave enough to dive in.
One such explorer is a security researcher who goes by the handle [x0rz]. He’s recently unveiled an experimental new piece of software that grabs Tweets and uses them as “noise” to mix into the Linux urandom entropy pool. The end result is a relatively unpredictable and difficult-to-influence source of random data. While he cautions that his software is merely a proof of concept and not meant for high-security applications, it’s certainly an interesting approach to introducing humanity-derived chaos into the normally orderly world of your computer’s operating system.
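Conceptually, the mixing step is the easy part. On Linux, any bytes written to /dev/urandom get stirred into the pool (without being credited as entropy), so a stand-in for the idea, not [x0rz]’s actual code, might look something like this:

// Illustrative sketch of mixing fetched bytes (e.g. sampled Tweets) into the
// Linux entropy pool. Writing to /dev/urandom stirs the data in without
// crediting any entropy. This is a conceptual stand-in for the proof of
// concept, not the researcher's code.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <string>

bool mix_into_pool(const std::string& noise) {
  int fd = open("/dev/urandom", O_WRONLY);   // world-writable on typical systems
  if (fd < 0) { perror("open /dev/urandom"); return false; }
  ssize_t n = write(fd, noise.data(), noise.size());
  close(fd);
  return n == (ssize_t)noise.size();
}

int main() {
  // In the real tool the buffer would come from Twitter's streaming "sample"
  // endpoint; here a placeholder string stands in for the fetched Tweets.
  std::string tweets = "placeholder bytes pulled from the sample stream";
  return mix_into_pool(tweets) ? 0 : 1;
}

A privileged process could instead use the RNDADDENTROPY ioctl to mix the bytes in and also credit them as entropy, though given the source, that credit would be hard to justify.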
Noise sampling before and after being merged with urandom
This hack is made possible by the fact that Twitter offers a “sample” function in their API, which effectively throws a randomized collection of Tweets at anyone who requests it. There are some caveats here, such as the fact that if multiple clients request a sample at the same time they will all receive the same Tweets. It’s also worth mentioning that some characters are unusually likely to make an appearance due to the nature of Twitter (emoticons, octothorpes, etc.), but generally speaking it’s not a terrible way to get some chaotic data on demand.
[x0rz] found this data to be a good but not great source of entropy on its own. After pulling a 500 KB sample, he measured an entropy of 6.5519 bits per byte (truly random data would be 8). While the Tweets weren’t great on their own, combining the data with the kernel’s entropy pool at /dev/urandom produced something that looked a lot less predictable.
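The figure quoted above is the classic per-byte Shannon estimate reported by tools such as ent: count how often each byte value appears and sum -p·log2(p). A small stand-alone version, reading from a hypothetical sample.bin capture, might look like this:

// Illustrative calculation of the "bits of entropy per byte" figure quoted
// above. The input filename is an assumption; point it at any captured data.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
  std::ifstream in("sample.bin", std::ios::binary);
  std::vector<uint64_t> counts(256, 0);
  uint64_t total = 0;
  char c;
  while (in.get(c)) { counts[(uint8_t)c]++; total++; }
  if (total == 0) { std::puts("no data"); return 1; }

  double bits = 0.0;
  for (uint64_t n : counts) {
    if (n == 0) continue;
    double p = (double)n / total;
    bits -= p * std::log2(p);          // Shannon entropy, maxes out at 8.0 per byte
  }
  std::printf("%.4f bits per byte over %llu bytes\n",
              bits, (unsigned long long)total);
  return 0;
}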
The greatest weakness of using Twitter as a source of entropy is, of course, the nature of Twitter itself. A sufficiently popular hashtag on the rise might be just enough to sink your entropy. It’s even possible (though admittedly unlikely) that enough Twitter spam bots could ruin the sample. But if you’re at the point where you think hinging your entropy pool on a digital fire hose of memes and cat pictures is sufficient, you’re probably not securing any national secrets anyway.
(Editor’s note: The way the Linux entropy pool mixes sources together, additional input can only help, assuming the source can’t see the current state of your entropy pool, which Twitter cats most certainly can’t. See article below. Also, this is hilarious.)
Fully aware that this is one of those “just because you can doesn’t mean you should” projects, [MG] takes pains to point out that his danger dongle is just for dramatic effect, like a prop for a movie or the stage. In fact, he purposely withholds details on the pyrotechnics and concentrates on the keystroke injection aspect, potentially nasty enough by itself, as well as the dongle’s universal payload launching features. We’re a little bummed, because the confetti explosion (spoiler!) was pretty neat.
The device is just an ATtiny85 and a few passives stuffed into an old USB drive shell, along with a MOSFET to trigger the payload. If you eschew the explosives, the payload could be anything that will fit in the case. [MG] suggests that if you want to prank someone, an obnoxious siren might be a better way to teach your mark a lesson about plugging in strange USB drives.
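To give a flavor of how little firmware such a dongle needs, here is a minimal Digispark-style ATtiny85 sketch that types a harmless message and then pulses a MOSFET gate to fire whatever payload is wired up (ideally an obnoxious buzzer, per [MG]’s suggestion). The use of the Digispark DigiKeyboard library, the gate pin, and the timing are all assumptions; this is a sketch of the idea, not [MG]’s code.

// Illustrative Digispark/ATtiny85 sketch: a harmless keystroke "payload"
// followed by a MOSFET trigger on one pin. The gate pin, timing, and the use
// of the Digispark DigiKeyboard library are assumptions for demonstration;
// wire the gate to a buzzer, not to pyrotechnics.
#include "DigiKeyboard.h"

const int GATE_PIN = 1;                 // assumed MOSFET gate pin (P1 on a Digispark)

void setup() {
  pinMode(GATE_PIN, OUTPUT);
  digitalWrite(GATE_PIN, LOW);

  DigiKeyboard.delay(3000);             // give the host time to enumerate the "keyboard"
  DigiKeyboard.sendKeyStroke(0);        // sync the keyboard report with the host
  DigiKeyboard.println("Never plug in a USB stick you found in the parking lot.");

  digitalWrite(GATE_PIN, HIGH);         // fire the siren (or confetti) payload
  DigiKeyboard.delay(500);
  digitalWrite(GATE_PIN, LOW);
}

void loop() {}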
While this isn’t the most dangerous thing you can do with a USB port, it could be right up there with that rash of USB killer dongles from a year or so ago. All of these devices are fun “what ifs”, but using them on anything but your own computers is not cool and possibly dangerous. Watching the smoke pour out of a USB socket definitely drives home the point that you shouldn’t plug in that thumbdrive that you found in the bathroom at work, though.
Researchers have recently announced a vulnerability in PC hardware enabling attackers to wipe the disk of a victim’s computer. This vulnerability, going by the name Joykill, stems from the lack of proper validation when enabling manufacturing system tests.
Joykill affects the IBM PCjr and allows local and remote attackers to destroy the contents of the floppy diskette using minimal interaction. The attack is performed by plugging two joysticks into the PCjr, booting the computer, entering the PCjr’s diagnostic mode, and immediately pressing button ‘B’ on joystick one, and buttons ‘A’ and ‘B’ on joystick two. This will enable the manufacturing system test mode, where all internal tests are performed without user interaction. The first of these tests is the diskette test, which destroys all user data on any inserted diskette. There is no visual indication of what is happening, and the data is destroyed when the test is run.
A local exploit destroying user data is scary enough, but after much work, the researchers behind Joykill have also managed to craft a remote exploit based on Joykill. To accomplish this, the researchers built two IBM PCjr joysticks with 50-meter long cables.
Researchers believe this exploit is due to undocumented code in the PCjr’s ROM, which contains diagnostics for manufacturing burn-in, system tests, and service tests. This code is not meant to be run by the end user, but it is still exploitable by an attacker. Researchers have disassembled the code and made their work available to anyone.
As of the time of this writing, we were not able to contact anyone at the IBM PCjr Information Center for comment. We did, however, receive an exciting offer for a Caribbean cruise.
When news of Meltdown and Spectre broke, Intel’s public relations department applied maximum power to their damage-control press release generators. The initial message was one of defiance, downplaying the impact and implying people were overreacting. This did not go over well. Since then, we’ve started seeing a trickle of information from engineering and even direct microcode updates for people who dare to live on the bleeding edge.
All the technical work to put out the immediate fire is great, but for the sake of Intel’s future they need to figure out how to avoid future fires. The leadership needs to change the company culture away from an attitude where speed is valued over all else. Will the new security group have the necessary impact? We won’t know for quite some time. For now, it is encouraging to see work underway. Fundamental problems in corporate culture require a methodical fix and not a hack.
Editor’s note: We’ve changed the title of this article to better reflect its content: that Intel is making changes to its corporate structure to allow a larger voice for security in the inevitable security versus velocity tradeoff.
From the Macintosh’s release some thirty-odd years ago to Steve Jobs’ triumphant return in the late 90s, there was one phrase to describe the simplicity of using a Mac: ‘It Just Works’. Whether this was a reference to the complete lack of games on the Mac (Marathon shoutout, tho) or a statement about the user-friendliness of the Mac, one thing is now apparent. Apple has improved macOS to such a degree that all passwords just work. That is to say, security on the latest versions of macOS is abysmal, and every few weeks a new bug is reported.
The first such security vulnerability in macOS High Sierra was reported by [Lemi Ergin] on Twitter. Simply put, anyone could log in as root with an empty password after clicking the login button several times. The steps to reproduce were as simple as opening System Preferences, clicking the lock to make changes, typing ‘root’ in the username field, and clicking the Unlock button. It should go without saying that this is incredibly insecure, and although it is only a local exploit, it’s a mind-numbingly idiotic one. The issue was quickly fixed by Apple in Security Update 2017-001.
The most recent password flaw involves the App Store preferences pane, which can be unlocked with any password. The steps to reproduce on macOS High Sierra are simply:
Click on System Preferences
Click on App Store
Click the padlock icon
Enter your username and any password
Click unlock
This issue has been fixed in the beta of macOS 10.13.3, which should be released within a month. The bug does not exist in macOS Sierra version 10.12.6 or earlier.
This is the second bug in macOS in as many months where passwords just work. Or don’t work, depending on how cheeky you want to be. While these bugs have been overshadowed by recent exploits of Intel’s ME and a million blog posts on Meltdown, these are very, very serious bugs that shouldn’t have happened in the first place. And where there are two, there are probably more.
We don’t know what’s up with the latest version of the macOS and the password problems, but we are eagerly awaiting the Medium post from a member of the macOS team going over these issues. We hope to see that in a decade or two.
While the whole industry is scrambling on Spectre, Meltdown focused most of the spotlight on Intel, and there is no shortage of outrage in Internet comments. Like many great discoveries, this one is obvious with the power of hindsight. So much so that the spectrum of reactions has spanned an extreme range: from “It’s so obvious, Intel engineers must be idiots” to “It’s so obvious, Intel engineers must have known! They kept it from us in a conspiracy with the NSA!”
We won’t try to sway those who choose to believe in a conspiracy that’s simultaneously secret and obvious to everyone. However, as evidence of non-obviousness, some very smart people got remarkably close to the Meltdown effect last summer without getting it all the way. [Trammel Hudson] did some digging and found a paper from the early 1990s (PDF) that warns of the dangers of fetching data into the cache across privilege boundaries, but the idea wasn’t weaponized until recently. In short, these are old vulnerabilities, but exploiting them was hard enough that it took twenty years to do it.
Building a new CPU is the work of a large team over several years. But they weren’t all working on the same thing for all that time. Any single feature would have been the work of a small team of engineers over a period of months. During development they fixed many problems we’ll never see. But at the end of the day, they are only human. They can be 99.9% perfect and that won’t be good enough, because once hardware is released into the world, it is open season on the 0.1% the team missed.
The odds are stacked in the attacker’s favor. The team on defense has a handful of people working a few months to protect against all known and yet-to-be discovered attacks. It is a tough match against the attackers coming afterwards: there are a lot more of them, they’re continually refining the state of the art, they have twenty years to work on a problem if they need to, and they only need to find a single flaw to win. In that light, exploits like Spectre and Meltdown will probably always be with us.
Let’s look at some factors that paved the way to Intel’s current embarrassing situation.