A Phone That Old Shouldn’t Be Running Android

Cars and smartphones have something curious in common: just as most everyday saloon cars from different manufacturers have tended towards similarity, so have smartphones. Whether your smartphone is the latest and greatest or only cost you $50 from a supermarket matters little at a glance, because both phones will be superficially near-identical black slabs.

It wasn’t always this way though. In decades past, phones from different manufacturers each had their own flavour, and there was a variety of form factors to suit all tastes. There’s a ray of hope for fans of those days though, in the form of [befinitiv]’s 2000-era Sony flip phone. It runs Android. Yes, you read that right: there on the tiny screen is Android 9.

Of course, whatever processor and electronics the phone came with are long gone; instead, the phone sports the internals of a modern Chinese watch-smartphone grafted in place of the originals. The whole electronics package fits in the screen opening, and though it required some wiring for the USB-C socket and a few other parts, from the outside it looks for all the world as though it was meant to run Android. You can take a look in the video below the break.

He cheerfully admits that there’s still a way to go, for example in getting the original keyboard working, but even with only a tiny touchscreen it’s good enough to be a daily driver. It may be a little on the small side, but for those of us who miss our old phones, maybe there’s hope in it for something new.

Meanwhile this isn’t the first re-use of an old phone we’ve seen recently.

Continue reading “A Phone That Old Shouldn’t Be Running Android”

Google And Apple Reveal Their Coronavirus Contact Tracing Plans: We Kick The Tires

Google and Apple have joined forces to issue a common API that will run on their mobile phone operating systems, enabling applications to track the people you come “into contact” with, in order to slow the spread of the COVID-19 pandemic. It’s an extremely tall order to do so in a way that is voluntary, respects personal privacy as much as possible, doesn’t rely on potentially vulnerable centralized services, and doesn’t produce so many false positives that the results are either ignored or create a mass panic. And perhaps most importantly, it’s got to work.

Slowing the Spread

As I write this, the COVID-19 pandemic seems to be just turning the corner from uncontrolled exponential growth to something that’s potentially more manageable, but an end is not yet in sight. So far, this has required hundreds of millions of people to go into essentially voluntary quarantine. But that’s a blunt tool. In an ideal world, you could stop the disease globally in a couple of weeks if you could somehow test everyone and isolate those who have been exposed to the virus. In the real world, truly comprehensive testing is impossible, and figuring out whom to isolate is extraordinarily difficult due to two factors: COVID-19 has a long incubation period during which it is nonetheless transmissible, and some or even most people who have it don’t know it. How can you stop what you can’t see, when even once you can detect it, it’s a week too late?

One promising approach is to isolate those people who’ve been in contact with known cases during the stealth contagion period. Doing this means essentially keeping a diary of everyone you’ve been in contact with for the last week or two, and then, if you eventually test positive for COVID-19, alerting them all so that they can keep from infecting others even before they themselves test positive: track and trace. Doctors can do this by interviewing patients who test positive (this is the “contact tracing” we’ve been hearing so much about), but memory is imperfect. Enter a technological solution. Continue reading “Google And Apple Reveal Their Coronavirus Contact Tracing Plans: We Kick The Tires”
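The privacy-preserving trick is that phones never broadcast who they are, only short-lived random identifiers, and all the matching happens on the handset rather than on a server. Here’s a toy Python sketch of that decentralized idea; the real Exposure Notification spec derives its rolling IDs with HKDF and AES and rotates keys on a stricter schedule, so treat the key derivation below as an illustrative stand-in.

```python
import hmac, hashlib, os

INTERVALS_PER_DAY = 96  # one rolling ID per 15-minute slot (assumed schedule)

def daily_key() -> bytes:
    """A fresh random key each day; it stays on the phone unless you test positive."""
    return os.urandom(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    """Short-lived broadcast ID derived from the day key and the time slot.
    (Illustrative HMAC stand-in for the spec's HKDF/AES derivation.)"""
    return hmac.new(day_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Phone A broadcasts rolling IDs over BLE; phone B just logs whatever it hears.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, i) for i in (10, 11, 12)}  # three nearby time slots

# Later, A tests positive and publishes key_a. B re-derives every possible ID
# from the published key and checks its own log: no central server required.
exposed = any(rolling_id(key_a, i) in heard_by_b for i in range(INTERVALS_PER_DAY))
print("Possible exposure, consider isolating." if exposed else "No recorded contact.")
```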

How Many Smartphones Does It Take To Make A Traffic Jam?

Online mapping services pack in a lot of functionality that their paper-based forebears could never have imagined. Adding in metadata for local landmarks, businesses and their reviews, and even live traffic data, they can deliver more information than ever before, and correspondingly, shape human behaviour. [Simon Weckert] decided to explore this concept with a cheeky little hack.

Pictured: All it takes to create a traffic jam on Google Maps!

The hack targets the manner in which Google collects live traffic data for display on Google Maps. When users load the app, Google takes location data from individual phones, tracking them as they travel along roadways. Large numbers of users travelling slowly down a road indicate there’s heavy traffic, and thus Google will display corresponding warnings on their maps and redirect users to take alternative paths.
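Google doesn’t publish its pipeline, but the aggregation this implies is easy to sketch: estimate each phone’s speed from its position reports along a segment, average them, and colour the road when the average drops. The threshold and units below are assumptions, purely for illustration.

```python
from collections import defaultdict

# Hypothetical probe reports: (phone_id, seconds, metres along the road segment)
reports = [
    ("a", 0, 0), ("a", 60, 50),    # phone "a" covers 50 m in 60 s: walking pace
    ("b", 0, 10), ("b", 60, 55),
]

def segment_speed_kmh(reports):
    """Average speed of all phones observed on one road segment."""
    tracks = defaultdict(list)
    for phone, t, x in reports:
        tracks[phone].append((t, x))
    speeds = []
    for points in tracks.values():
        points.sort()
        (t0, x0), (t1, x1) = points[0], points[-1]
        if t1 > t0:
            speeds.append((x1 - x0) / (t1 - t0) * 3.6)  # m/s -> km/h
    return sum(speeds) / len(speeds)

if segment_speed_kmh(reports) < 20:  # congestion threshold is an assumption
    print("Paint the road red: traffic jam")
```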

To pull off the hack, [Simon] placed 99 smartphones in a handcart, tugging it behind him as he walked slowly down a series of streets. In the video, this is overlaid with Google’s map data captured at the time. The app updates the map with orange and red lines along the roads which [Simon] travelled with his 99 pretend drivers, indicating a traffic jam.

We’d love to know whether [Simon] ran 99 individual SIM cards with data access, or whether the hack was perpetrated with the use of a WiFi hotspot for cheaper internet access. Reddit comments note that Google will likely work swiftly on methods to prevent such tomfoolery in future; 99 individual users reporting the exact same location and speed at the same time would be trivial to filter out of traffic monitoring.
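A minimal version of that filter is easy to imagine: collapse all devices reporting essentially the same position and speed into a single probe vehicle. This is only a guess at a countermeasure, with a made-up grid size:

```python
def deduplicate(reports, grid_m=5.0):
    """Collapse co-located devices moving at the same speed into one probe.
    reports: list of (phone_id, x_m, y_m, speed_kmh); grid size is an assumption."""
    seen = {}
    for phone, x, y, v in reports:
        cell = (round(x / grid_m), round(y / grid_m), round(v))
        seen.setdefault(cell, phone)  # keep only one device per cell-and-speed bucket
    return list(seen.values())

# 99 phones in one handcart collapse to a single probe vehicle:
cart = [(f"phone{i}", 100.0, 200.0, 4.0) for i in range(99)]
print(len(deduplicate(cart)))  # -> 1
```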

It’s both a commentary on the power we give these apps in our lives and a great demonstration of how easily such systems can be trifled with. We first reported on Google’s traffic monitoring back in 2009, when it was a technology in its infancy. Video after the break.

Continue reading “How Many Smartphones Does It Take To Make A Traffic Jam?”

36C3: All Wireless Stacks Are Broken

Your cellphone is the least secure computer that you own, and worse than that, it’s got a radio. [Jiska Classen] and her lab have been hacking on cellphones’ wireless systems for a while now, and in this talk she gives an overview of the wireless vulnerabilities and attack surfaces that they bring along. While the talk provides some basic background on wireless (in)security, it also presents two new areas of research that she and her colleagues have been working on over the last year.

One of the new hacks is based on the fact that a phone that wants to support both Bluetooth and WiFi needs to figure out a way to share the radio, because both protocols use the same 2.4 GHz band. So the Bluetooth hardware has to talk to the WiFi hardware, and it shouldn’t entirely surprise you that when [Jiska] gets into the Bluetooth stack, she’s able to DoS the WiFi. What this does to the operating system depends on the phone, but many of them just fall over and reboot.

Lately [Jiska] has been doing a lot of fuzzing on the cellphone stack, enabled by work on emulation by one of her students, [Jan Ruge], codenamed “Frankenstein”. The coolest thing here is that the emulation runs in real time and can be threaded into the operating system, enabling full-stack fuzzing. More complexity means more bugs, so we expect to see a lot more coming out of this line of research in the next year.
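Frankenstein itself emulates the real firmware, which is well beyond a blog snippet, but the fuzzing loop it feeds is conceptually tiny: mutate an input, throw it at the parser, and log anything that crashes. Here’s that bare shape in Python, with a deliberately planted bug in a hypothetical packet parser standing in for a firmware flaw:

```python
import random

def parse_packet(data: bytes):
    """Stand-in for a wireless-stack entry point (hypothetical target)."""
    if len(data) > 2 and data[0] == 0x04 and data[1] > len(data):
        raise IndexError("length field larger than packet")  # the planted bug

def mutate(seed: bytes) -> bytes:
    """Flip a handful of random bytes in the seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = bytes.fromhex("040a00112233")
for i in range(10_000):
    sample = mutate(seed)
    try:
        parse_packet(sample)
    except Exception as exc:  # a crash is a finding worth triaging
        print(f"crash after {i} cases: {exc} on {sample.hex()}")
        break
```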

[Jiska] gives the presentation in a tinfoil hat, but that’s just a metaphor. In the end, when asked about how to properly secure your phone, she gives out the best advice ever: toss it in the blender.

Why Is Your Cellphone Not A More Useful Computer?

Sometimes when you are browsing randomly through the tech feeds, up pops an article that crystallizes a nascent thought that has been simmering below the surface for a long time, and is enough to make you sit up and say “Yes! I agree completely with that!”. Such a moment came with [Cheapscatesguide]’s post: “My Fantasy: A Cellphone I Can Use as a Desktop Computer”, which asks the pertinent question: if smartphones are so powerful, why are they not much better at being more than, well, smartphones?

Readers with long memories may recall that the cellphone-as-computer idea is one that has been tried at least once before. The Motorola Atrix appeared in the early years of this decade: a high-end smartphone that could be slotted into both desktop-replacement and netbook-style base stations and used as a Linux-based personal computer. Unfortunately it was both eye-wateringly expensive and disappointingly slow due to a hobbled operating system, so it failed to set the market alight. There was a brief moment when unsold Atrix netbook docks were available on the surplus market and became popular as Raspberry Pi desktop interfaces, but this experiment seems to have put paid to the idea of one device to truly rule them all.

If we had to hazard a guess as to why this has failed to happen, we’d finger both the manufacturers’ desire not to undermine their lucrative sales in other sectors, and their and the carriers’ desire to lock down the devices as much as possible. A manufacturer such as Apple, for example, will never produce an iPhone that can replace a desktop, because it would affect their MacBook sales. Oddly, in another form we’re nearly there: this piece is being worked on with a Chromebook, a device that has a useful browser, a functional Android layer, and (because it’s a 64-bit model) an officially supported and useful Debian layer. We don’t expect this to translate into a phone any time soon though.

From another angle, we’ve asked in the past why we aren’t hacking old cellphones.

Moto Atrix lapdock picture: ETC@USC [CC BY-SA 2.0].

Via Hacker News.

5G Is For Robots

Ecclesiastes 1:9 reads “What has been will be again, what has been done will be done again; there is nothing new under the sun.” Or in other words, 5G is mostly marketing nonsense, just as 4G, 3G, and 2G were before it. Let’s not forget LTE, 4G LTE, LTE Advanced, and EDGE.

Just a normal everyday antenna array in a Seattle parking garage.

Technically, 5G means that providers could, if they wanted to, install some EHF antennas: the same kind we’ve been using forever to do point-to-point microwave internet in cities. These frequencies are too lazy to pass through a wall, so we’d have to install these antennas in a grid at ground level. The promised result is that we’ll all get slightly lower-latency tiered internet connections that won’t live up to the hype at all. From a customer perspective, about the only thing it will do is let us hit the 8 GB ceiling twice as fast on our “unlimited” plans before they throttle us. It might be nice on a laptop, but it would be a historically ridiculous assumption that Verizon is going to let us tether devices to their shiny new network without charging us a million yen for the privilege.

So, what’s the deal? From a practical standpoint we’ve already maxed out what a phone needs. For example, here’s a dirty secret of the phone world: you can’t tell the difference between 1080p and 720p video on a tiny screen. I know of more than one company where “1080p” in their app really means 640 or 720 displayed on the device, with the 1080p version recorded to the cloud somewhere for download. Not a single user has noticed or complained. Oh, maybe if you’re looking hard you can feel that one picture is sharper than the other, but past that, what are you doing? Likewise, what’s the point of 60 fps 8K video on a phone? Or even on a laptop, for that matter?
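The tiny-screen claim is easy to sanity-check with back-of-the-envelope angular resolution math. Assuming a 6-inch 16:9 phone (about 133 mm wide in landscape) held at 30 cm, and taking roughly 60 pixels per degree as the limit of 20/20 vision (all assumed figures):

```python
import math

def pixels_per_degree(h_pixels: int, width_m: float, distance_m: float) -> float:
    """Angular pixel density: how many pixels land within one degree of vision."""
    width_deg = 2 * math.degrees(math.atan(width_m / (2 * distance_m)))
    return h_pixels / width_deg

# Assumed: a 6" 16:9 phone (~133 mm wide in landscape) held at 30 cm.
for label, h_px in (("720p", 1280), ("1080p", 1920)):
    ppd = pixels_per_degree(h_px, 0.133, 0.30)
    print(f"{label}: {ppd:.0f} px/deg (20/20 vision resolves ~60 px/deg)")
```

Under those assumptions, 720p comes out around 51 px/deg and 1080p around 77 px/deg, straddling the acuity limit, which is why the difference is so hard to spot.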

Are we really going to max out a mobile webpage? Since our devices’ ability to present information exceeds our ability to process it, is there a theoretical maximum to the size of an app? Even if we had gigabit internet to every phone in the world, from a user standpoint it would be a marginal improvement at best. Unless you’re a professional mobile game player (is that a thing yet?), latency is meaningless to you. The buffer buffs the experience until it shines.

So why should we care about billion-dollar corporations racing to have the best network for sending low-resolution advertising GIFs to our distracto-cubes? Because 5G is for robots.

Continue reading “5G Is For Robots”

Ask Hackaday: Why Aren’t We Hacking Cellphones?

When a project has outgrown using a small microcontroller, almost everyone reaches for a single-board computer — with the Raspberry Pi being the poster child. But doing so leaves you stuck with essentially a headless Linux server: a brain in a jar when what you want is a Swiss Army knife.

It would be a lot more fun if it had a screen attached, and of course the market is filled with options on that front. Then there’s the issue of designing a human interface: touch screens are all the rage these days, so why not buy a screen with a touch interface too? Audio in and out would be great, as would other random peripherals like accelerometers, WiFi, and maybe even a cellular radio when out of WiFi range. Maybe Bluetooth? Oh heck, let’s throw in a video camera and high-powered LED just for fun. Sounds like a Raspberry Pi killer!

And this development platform should be cheap, or better yet, free. Free like any one of the old cell phones that sit piled up in my “hack me” box in the closet, instead of getting put to work in projects. While I cobble together projects out of Pi Zeros and lame TFT LCD screens, the advanced functionality of these phones sits gathering dust. And I’m not alone.

Why is this? Why don’t we see a lot more projects based around the use of old cellphones? They’re abundant, cheap, feature-rich, and powerful. For me, there are two giant hurdles to overcome: the hardware and the software. I’m going to run down what I see as the problems with using cellphones as hacker tools, but I’d love to be proven wrong. Hence the “Ask Hackaday”: why don’t we see more projects that re-use smartphones?

Continue reading “Ask Hackaday: Why Aren’t We Hacking Cellphones?”