Falsified Photos: Fooling Adobe’s Cryptographically-Signed Metadata

Last week, we wrote about the Leica M11-P, the world’s first camera with Adobe’s Content Authenticity Initiative (CAI) credentials baked into every shot. Essentially, each file is signed with Leica’s encryption key such that any changes to the image, whether edits to the photo itself or to the metadata, are tracked. The goal is not only to prove ownership, but also to prove that photos are real: not tampered with or AI-generated. At least, that’s the main selling point.

Although the CAI has been around since 2019, its adoption is far from widespread. Only a handful of programs support it (Photoshop among them), and it’s unlikely anybody outside the professional photography space was aware of it until recently. This isn’t too surprising, as it really isn’t relevant to the casual shooter — when I take a shot to upload to Instagram, I’m rarely thinking about whether or not I’ll need cryptographic proof that the photo wasn’t edited — usually adding #nofilter to the description is enough. Where the CAI is supposed to shine, however, is in the world of photojournalism. The idea is that a photographer can capture an image that is signed at the time of creation and maintains a tamper-proof log of any edits made. When the final image is sold to a news publisher or viewed by a reader online, that provenance data comes along with it for anyone to inspect.

At this point, there are two thoughts you might have (or, at least, there are two thoughts I had upon learning about the CAI):

  1. Do I care that a photo is cryptographically signed?
  2. This sounds easy to break.

Well, after some messing around with the CAI tools, I have some answers for you.

  1. No, you don’t.
  2. Yes, it is.

Continue reading “Falsified Photos: Fooling Adobe’s Cryptographically-Signed Metadata”

The UK Online Safety Bill Becomes Law, What Does It Mean?

We’ve previously reported from the UK about the Online Safety Bill, a piece of internet safety legislation that contains several concerning provisions relating to online privacy and encryption. UK laws enter the statutes by royal assent after being approved by Parliament, so with the signature of the King, it has now become the law of the land as the Online Safety Act 2023. Now that it’s beyond amendment, it’s time to take stock for a minute: what does it mean for internet users, both in the UK and beyond its shores? Continue reading “The UK Online Safety Bill Becomes Law, What Does It Mean?”

The British Government Is Coming For Your Privacy

The list of bad legislation relating to the topic of encryption and privacy is long and inglorious. Usually, these legislative stinkers only affect those unfortunate enough to live in the country that passed them. Still, one upcoming law from the British government should have us all concerned. The Online Safety Bill started as the usual think-of-the-children stuff, but as the EFF notes, some of its proposed powers have the potential to undermine encryption worldwide.

At issue is the proposal that services with strong encryption incorporate government-sanctioned backdoors to give the spooks free rein to snoop on communications. We imagine that this will be of significant interest to some of the world’s less savoury regimes, a club we can’t honestly say the current UK government doesn’t seem hell-bent on joining. The Bill has had a tumultuous passage through the Lords, the UK upper house, but PM Rishi Sunak’s administration has proved unbending.

If there’s a silver lining to this legislative train wreck, it’s that many of the global tech companies are likely to pull their products from the UK market rather than comply. We understand that UK lawmakers are partial to encrypted online messaging platforms. Thus, there will be poetic justice in their voting once more for a disastrous bill with the unintended consequence of taking away something they rely on.

Header image: DaniKauf, CC BY-SA 3.0.

SUPERCON 2022: Kuba Tyszko Cracks Encrypted Software

[Kuba Tyszko], like many of us, has been hacking things from a young age. An early attempt at hacking around with grandpa’s tractor might have been swiftly quashed by his father, but it likely wasn’t the last such incident. With a more recent interest in cracking encrypted applications, [Kuba] gives us some insight into the tools at your disposal for reading out the encrypted secrets of applications that have something worth hiding. (Slides here, PDF.)

There may be all sorts of reasons for such applications to have an encrypted portion, and that’s not really the focus. One such application that [Kuba] describes was a pre-trained machine-learning model written in the R scripting language. If you’re not familiar with R, it is commonly used for ‘data science’ type tasks and has a big fan base. It’s worth checking out. Anyway, the application binary took two command-line arguments: the encrypted blob of the model, and the path to the test data set for model verification.

The first thing [Kuba] suggests is to disable network access, just in case the application wants to ‘dial home.’ We don’t want that. The application was intended for Linux, so the first port of call was to see what libraries it was linked against using the ldd command. This indicated that it was linked against OpenSSL, so that was a likely candidate for the encryption support. Next up, running objdump gave some clues as to the various components of the binary, and it was determined that it was doing something with 256-bit AES encryption. Now, after applying a little experience (or educated guesswork, if you prefer), the likely scenario is that the binary yanks the AES key from somewhere within itself, reads the encrypted blob file, and passes it over to libssl. Then the plaintext R script is passed off to the R runtime, the model executes against the test data, and the results are collated.
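To make that deduced flow concrete, here’s a purely illustrative sketch in C using OpenSSL’s EVP interface. It assumes AES-256 in CBC mode with placeholder key and IV values, and an output file name we made up; the real binary’s key, cipher mode, and file handling are unknown.

/* decrypt_blob.c - an illustrative reconstruction of the deduced pipeline:
 * a key embedded in the binary decrypts the model blob via OpenSSL, and the
 * recovered R script would then be handed to the R runtime. The key, IV and
 * cipher mode below are placeholders, not the real application's values. */
#include <openssl/evp.h>
#include <stdio.h>

static const unsigned char key[32] = { 0 };  /* baked into the real binary */
static const unsigned char iv[16]  = { 0 };

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <encrypted-model> <test-data>\n", argv[0]);
        return 1;
    }

    FILE *blob   = fopen(argv[1], "rb");
    FILE *script = fopen("model.R", "wb");   /* decrypted R script lands here */
    if (!blob || !script) { perror("fopen"); return 1; }

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);

    unsigned char in[4096], out[4096 + EVP_MAX_BLOCK_LENGTH];
    int n, outl;
    while ((n = fread(in, 1, sizeof in, blob)) > 0) {
        EVP_DecryptUpdate(ctx, out, &outl, in, n);
        fwrite(out, 1, outl, script);
    }
    EVP_DecryptFinal_ex(ctx, out, &outl);
    fwrite(out, 1, outl, script);

    EVP_CIPHER_CTX_free(ctx);
    fclose(blob);
    fclose(script);

    /* ...at which point the real binary would run model.R against argv[2]. */
    return 0;
}

Of course, without the real key this only produces garbage, which is exactly why the actual attack goes after the library instead.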

[Kuba]’s first attack method was to grab the OpenSSL source code and drop some strategic printf() calls into the target functions. Next, using the LD_PRELOAD ‘trick’, the standard system OpenSSL library was swapped out for the ‘fake’ version with the trojan printf()s. The result was the decryption function gleefully sending the plaintext R script straight to the terminal. No need to even locate the key!
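A closely related variation on the same idea, for when you’d rather not rebuild all of OpenSSL, is to interpose just the one function of interest with LD_PRELOAD. The sketch below is ours, not [Kuba]’s, and it assumes the target dynamically links libcrypto and funnels its decryption through EVP_DecryptUpdate(), which may not hold for every binary.

/* shim.c - a minimal LD_PRELOAD interposer in the spirit of the talk's
 * trojaned OpenSSL build: wrap EVP_DecryptUpdate() and dump whatever
 * plaintext comes out of it. Assumes the target links libcrypto dynamically. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <openssl/evp.h>

typedef int (*decrypt_update_fn)(EVP_CIPHER_CTX *, unsigned char *, int *,
                                 const unsigned char *, int);

int EVP_DecryptUpdate(EVP_CIPHER_CTX *ctx, unsigned char *out, int *outl,
                      const unsigned char *in, int inl)
{
    /* Find the genuine EVP_DecryptUpdate further down the library chain. */
    static decrypt_update_fn real;
    if (!real)
        real = (decrypt_update_fn)dlsym(RTLD_NEXT, "EVP_DecryptUpdate");
    if (!real) {
        fprintf(stderr, "shim: couldn't find the real EVP_DecryptUpdate\n");
        return 0;
    }

    int ret = real(ctx, out, outl, in, inl);

    /* Leak the freshly decrypted bytes to stderr as they go past. */
    if (ret == 1 && *outl > 0)
        fwrite(out, 1, (size_t)*outl, stderr);

    return ret;
}

Build it with gcc -shared -fPIC shim.c -o shim.so -ldl, run the target under LD_PRELOAD=./shim.so, and with any luck the plaintext spills out to stderr with no key hunting required.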

Continue reading “SUPERCON 2022: Kuba Tyszko Cracks Encrypted Software”

Why Can’t We Have Pretty Things?

I was reading [Al Williams]’ great rant on why sometimes the public adoption of tech moves so slowly, as exemplified by the Japanese Minister of Tech requesting the end of submissions to the government on floppy diskettes. In 2022!

Along the way, [Al] points out that we still trust ballpoint-pen-on-paper signatures more than digital ones. Imagine going to a bank and being able to open an account with your authentication token! It would be tons more secure, verifiable, and easier to store. It makes sense in every way. Except, unless you’ve needed one for work, you probably don’t have a FIDO2 (or whatever) token, do you?

Same goes for signed, or encrypted, e-mail. If you’re a big cryptography geek, you probably have a GPG key. You might even have a mail reader that supports it. But try requesting an encrypted message from a normal person. Or ask them to verify a signature.

Honestly, signing and encrypting have both been essentially solved problems, from a technical standpoint, for a long time. But somehow, from a societal point of view, we’re not even close yet. Public key encryption dates back to the late 1970s, and 3.5” diskettes are at least a decade younger. Diskettes are now obsolete, but I still can’t sign a legal document with my GPG key. What gives?

Back Up Encrypted ZFS Data Without Decrypting It, Even If TrueNAS Doesn’t Approve

[Michael Lynch] recently replaced his Synology NAS with a self-built system running ZFS, a filesystem with a neat feature: the ability to back up encrypted data without having to decrypt it first. The only glitch is that [Michael] is using TrueNAS, and TrueNAS only wants to back up unencrypted ZFS data to another TrueNAS system. Fortunately, there’s a way around this that isn’t particularly complicated, but it definitely requires leveraging the right tools, and the write-up doubles as an educational walkthrough of how ZFS handles these things.

The solution is a small handful of shell scripts to manage full and incremental backups and restores of encrypted datasets, without having to decrypt the data first. As mentioned, this is something TrueNAS will handle by default, but only if the destination is also a TrueNAS system. Now, [Michael] can send that backup to off-site cloud storage with only a little extra work.

There’s one additional trick [Michael] uses to monitor his backups: he leverages a paid service (with a free tier) called Cronitor. It’s not very obvious from the site’s feature list, but there is a way to implement cron job monitoring that doesn’t require adding any software whatsoever. Here’s how that part works: Cronitor provides a custom, unique URL, and if that URL isn’t visited regularly (for example, because the cron job fails), then the user is notified. Tacking a visit to that URL onto the end of an existing cron job is all it takes, and such an integration would look like this:

0 0 3 * * monthly-job && curl --silent https://cronitor.link/p/<API-KEY>/monthly-job?state=complete

In short, if the cron job runs successfully, curl checks in by visiting the custom URL. If that doesn’t happen, the user gets a notification. No added software, just a simple leveraging of a free service for some added peace of mind.

Backups are easy to neglect, so maybe it’s time to take a few moments to consider what you do for data storage, including how you’d recover from disaster.

A Secure Phone Fit For A Prime Minister

The curtain of state secrecy which surrounds the type of government agency known primarily by initialisms is all-encompassing and long-lived, meaning that tech that is otherwise in the public domain remains top secret for many decades. Thus it’s fascinating when from time to time the skirts are lifted to reveal a glimpse of ankle, as has evidently been the case for a BBC piece dealing with the encrypted phones produced by GCHQ and used by Margaret Thatcher in the early 1980s. Sadly, it’s long on human interest and short on in-depth technology, but there is nevertheless enough in it to work out how the phone most likely worked.

We’re told that it worked over a standard phone line and transmitted at 2.4 kilobytes per second, a digital data stream encoded using a paper tape key that was changed daily. If we were presented with this design spec to implement in a briefcase using 1980s components, we’d probably make an ADPCM (Adaptive Differential Pulse Code Modulation) system with an XOR encryption against the key, something we think would be well within the capabilities of early 1980s digital logic and microprocessors. We’re wondering whether the BBC have made a typo and the figure should be kilobits rather than kilobytes, which would make more sense over a standard phone line.
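For the curious, a scheme like that boils down to very little code. Here’s a minimal sketch of what we’re speculating about, assuming a digitised (say, ADPCM) voice stream XORed byte-for-byte against a key stream read from the day’s tape; the file names are purely illustrative and nothing here reflects the actual GCHQ design.

/* xor_scramble.c - illustrative only: XOR a digitised voice stream against
 * a one-time key stream, as speculated above. The same loop both encrypts
 * and decrypts, provided each end steps through the identical key tape. */
#include <stdio.h>

int main(void)
{
    FILE *voice = fopen("adpcm_stream.bin", "rb"); /* digitised speech */
    FILE *tape  = fopen("key_tape.bin", "rb");     /* the day's key tape */
    if (!voice || !tape) { perror("fopen"); return 1; }

    int v, k;
    while ((v = fgetc(voice)) != EOF && (k = fgetc(tape)) != EOF)
        putchar(v ^ k);

    fclose(voice);
    fclose(tape);
    return 0;
}

At 2.4 kilobits per second, even a modest early-1980s microprocessor could keep up with that XOR with cycles to spare; the hard part is generating and distributing enough key tape.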

No doubt there are people in the comments who could tell us if they were willing to break the Official Secrets Act, but we’d suggest they don’t risk their liberty by doing so. It’s worth noting, though, that GCHQ have been known to show off some of their past glories, as in this 2019 exhibition at London’s Science Museum.