Your Fuji Digital Camera Is Hackable

There was a time when a digital camera was a surprisingly simple affair, with an on-board processor that didn’t have much in the way of smarts beyond what was needed to grab an image from the sensor and compress it onto some storage. But as they gained features, cameras acquired all the trappings of a fully-fledged computer in their own right, including full-fat operating systems and the hackability opportunities that come with them.

Prominent among camera manufacturers is Fujifilm, whose cameras turn out to have plenty of hacking possibilities. There’s something of a community around them, with all their work collected in a GitHub repository, plus a cracking April Fools’ prank in which a Fujifilm camera appeared to be coaxed into running DOOM.

Correction: We’ve since heard from creator [Daniel], who assures us that not only was the DOOM hack very much real, but that he’s released the instructions on how to run the classic shooter on your own Fujifilm X-A2.

Fujifilm cameras from about 2017 onwards run the ThreadX real-time operating system on a variety of ARM SoCs, with an SQLite data store for camera settings and some custom software controlling the camera hardware. The hackability comes through patching firmware updates, and beyond manipulating the built-in scripting language and poking around in the SQLite database, it can extend to code execution.
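That SQLite settings store is an inviting place to start poking around. Below is a minimal sketch, very much not the camera’s own code, of how one might inspect a settings database pulled from a firmware dump or the camera’s storage; the file name and the settings table are hypothetical stand-ins for whatever the real schema turns out to be.

```python
import sqlite3

# Hypothetical file name; a real dump extracted from the camera would have
# its own name and schema.
db = sqlite3.connect("camera_settings.db")

# List the tables first, so we know what the schema actually looks like.
tables = db.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print("Tables:", [t[0] for t in tables])

# Dump every row of a hypothetical key/value settings table for inspection.
for key, value in db.execute("SELECT key, value FROM settings"):
    print(f"{key} = {value}")

db.close()
```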

Don’t have a Fujifilm? They’re not the only hackable camera to be found.

Classic Film Camera Goes Digital With Game Boy Tech

Despite having been technologically obsolete for a decade or two, analog photography is still practiced by hobbyists and artists to achieve a particular aesthetic. One might imagine a similar thing happening with early digital cameras, and indeed it has: the Game Boy Camera has seen use in dozens of projects. [Michael Fitzmayer] however decided to combine the worlds of analog and early digital photography by equipping a Holga with the image sensor from a Game Boy Camera.

The Holga, if you’re not familiar, is a cheap film camera from the 1980s that has achieved something of a cult following among retro-photography enthusiasts. By equipping it with the sensor from what was one of the first mass-market digital cameras, [Michael] has created a rather unusual digital point-and-shoot. The user interface is as simple as can be: a single button to take a photo, and nothing else. There’s no screen to check your work — just as with film, you’ll have to wait for the pictures to come back from the lab.

The sensor used in the Game Boy Camera is a Mitsubishi M64282FP, which is a 128 x 128 pixel monochrome CMOS unit. [Michael] hooked it up to an STM32F401 microcontroller, which reads out the sensor data and stores it on an SD card in the form of a bitmap image.
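As a rough illustration of that last step, here’s a minimal sketch, not [Michael]’s actual firmware, of turning a raw 128 × 128 one-byte-per-pixel dump from the sensor into a standard bitmap file on a PC; the file names are assumptions, and Pillow does the heavy lifting.

```python
from PIL import Image

# Assumed raw dump: 128 x 128 pixels, one byte per pixel, roughly what the
# STM32 might write straight from its ADC readings of the M64282FP output.
WIDTH, HEIGHT = 128, 128

with open("frame.raw", "rb") as f:   # file name is an assumption
    raw = f.read(WIDTH * HEIGHT)

# "L" = 8-bit greyscale; the sensor itself is monochrome.
img = Image.frombytes("L", (WIDTH, HEIGHT), raw)
img.save("frame.bmp")                # a plain Windows bitmap, as in the build
```

On the microcontroller itself the same job means writing the BMP header and pixel rows to the SD card by hand, but the file layout is the same.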

With no film roll present, the Holga has plenty of space for all the electronics and a battery. The original lens turned out to be a poor fit for the image sensor, but with a bit of tweaking the Game Boy optics fit in its place without significantly altering the camera’s appearance.

[Michael] helpfully documented the design process and shared all source code on his GitHub page. Holgas shouldn’t be hard to find, but if none are available in your area you can just roll your own. The Game Boy Camera is actually one of the most versatile cameras out there, having been used for everything from video conferencing to astrophotography.

A Compact Camera Running Linux? What’s Not To Like!

One of the devices swallowed up by the smartphone for the average person is the handheld camera, to the extent that youngsters are now reportedly rediscovering 20-year-old digital cameras for their retro cool factor. Cameras aren’t completely dead though, as a mirrorless compact or a DSLR should still blow the socks off a phone in competent hands. They’ve been around long enough to be plentiful secondhand, which makes [Georg Lukas]’ examination of a ten-year-old range of models from Samsung worth a second look. Why has a deep dive into old cameras caught our eye? These cameras run Linux, in the form of Samsung’s Tizen distribution.

His interest in the range comes from owning one since 2014, and it’s in his earlier series of posts on hacking that camera that we find some of the potential it offers. Aside from the amusement that it runs an unprotected X server, getting to a root shell is fairly straightforward as we covered at the time, and it turns out to be a very hackable device.

Cameras follow a Gartner hype cycle-like curve in the popularity stakes, so for example the must-have bridge cameras and compact cameras of the late-2000s are now second-hand-store bargains. Given that mirrorless cameras such as the Samsung are now fairly long in the tooth, it’s likely that they too will fall into a pit of affordability before too long. One to look out for, perhaps.

Inside Digital Image Chips

Have you ever thought how amazing it is that every bit of DRAM in your computer requires a teeny tiny capacitor? A 16 GB DRAM has 128 billion little capacitors, one for each bit. However, that’s not the only densely-packed IC you probably use daily. The other one is the image sensor in your camera, which is probably in your phone. The ICs have a tremendous number of tiny silicon photosensors, and [Asianometry] explains how they work in the video you can see below.

The story starts way back in the 1800s when Hertz noticed that light could knock electrons out of their normal orbits. He couldn’t explain exactly what was happening, especially since the light intensity didn’t correlate to the energy of the electrons, only the number of them. It took Einstein to figure out what was going on, and early devices that used the principle were photomultiplier tubes, which are extremely sensitive. However, they were bulky, and an array of even dozens of them would be gigantic.

Semiconductor devices use silicon, and Bell Labs was working on bubble memory, a way of creating memory that never became very popular. As a byproduct, though, the researchers realized that the same trick of moving charges around for memory could also move the charges collected by photosensitive diodes. The key insight was that connecting up many photodiodes was harder than creating the photodiodes in the first place. Using the charge-coupled device, or CCD, method, the chip could shuffle the charges along internally and so reduce the number of connections to the chip.

CCDs opened up the digital imaging market, but they have some problems. The next stage was CMOS chips. They’d been around for a while, ever since IBM produced the scanistor, but the sensitivity of early CMOS image chips was poor. Since most people were happy with CCDs, there wasn’t as much research on CMOS. However, CMOS sensors would eventually become more capable, and the video explains how that happened.

We’ve looked at image sensors before, too. The way you read them can make a big difference in your images.


Dobsonian Telescope Adds Plate Solver

The amateur astronomy world got a tremendous boost during the 1960s when John Dobson invented what is now called the Dobsonian telescope. Made from commonly-sourced materials and mechanically much simpler than what was otherwise available at the time, the telescope dramatically reduced the barrier to entry for larger telescopes and also made them much more portable and inexpensive.

For all their perks, though, a major downside is the increased complexity of building an automatic tracking system. [brickbots] went a different way when solving this problem: a plate solver.

Plate solving is a method by which the telescope’s field of view is compared against known star charts to determine what it’s currently pointed at. A Raspberry Pi sits at the center of the build: its camera module looks at the sky, and a GPS receiver adds precise location data for a quick plate-solving solution. A red-tinted screen finishes out the build, letting [brickbots] see exactly where the telescope is aimed at all times.
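For a sense of how little glue code the solving step needs, here’s a minimal sketch, not [brickbots]’ actual software, that hands a freshly captured frame to astrometry.net’s solve-field tool; it assumes the solver and its index files are installed locally, and the image name is made up.

```python
import subprocess

# Hand a captured frame to astrometry.net for plate solving.
result = subprocess.run(
    [
        "solve-field",
        "--overwrite",   # reuse output file names between runs
        "--no-plots",    # skip the diagnostic PNGs to save time on a Pi
        "capture.jpg",   # assumed name for the Pi camera's latest frame
    ],
    capture_output=True,
    text=True,
)

# On success solve-field prints the solved field centre (RA/Dec), which is
# the information that ends up on the red-tinted screen.
print(result.stdout)
```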

While this doesn’t fully automate or control the telescope like a tracking system would, it’s much simpler to build a plate solver in this situation. That doesn’t mean it’s impossible to star hop with a telescope like this, though; alt-azimuth mounted telescopes like Dobsonians just need some extra equipment to get the job done. Here’s an example which controls a similar alt-azimuth telescope using an ESP32 and a few rotary encoders.

AI And Savvy Marketing Create Dubious Moon Photos

Taking a high-resolution photo of the moon is a surprisingly difficult task. Not only is a long enough lens required, but the camera typically needs to be mounted on a tracking system of some kind, as the moon moves too fast for the long exposure times needed. That’s why plenty were skeptical of Samsung’s claims that their latest smartphone cameras could actually photograph this celestial body with any degree of detail. It turns out that this skepticism might be warranted.

Samsung’s marketing department claims that the phone uses artificial intelligence to improve photos, which should quickly raise a red flag for anyone technically minded. [ibreakphotos] wanted to put this to the test rather than speculate, so a high-resolution image of the moon was degraded until most of its fine detail was lost. Displaying this image on a monitor, standing across the room, and photographing it with the smartphone in question reveals details that can’t possibly be there.

The image that accompanies this post shows the two side-by-side for those skeptical of these claims, but from what we can tell it looks like the AI system is essentially copy-pasting a stock moon into images it thinks are of the moon itself. The AI also seems to need something more moon-like than a ping pong ball to trigger the detail overlay, as other tests appear to debunk a more simplified overlay theory. Using this system, though, seems to do about the same thing that this AI camera does to take pictures of various common objects.
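If you want to repeat the experiment, preparing the degraded test image is only a few lines. This is a rough sketch of the idea rather than [ibreakphotos]’ exact procedure, and the file names and blur settings are assumptions.

```python
from PIL import Image, ImageFilter

# Start from any reasonably sharp moon photo (file name is an assumption).
moon = Image.open("moon.jpg").convert("L")

# Throw away fine detail: shrink aggressively, then blur what is left.
small = moon.resize((170, 170), Image.LANCZOS)
blurred = small.filter(ImageFilter.GaussianBlur(radius=2))

# Show this full-screen on a monitor and photograph it from across the room;
# any crater detail in the phone's output cannot have come from the scene.
blurred.save("degraded_moon.png")
```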

Measuring A Millisecond Mechanically

If you are manufacturing something, you have to test it. It wouldn’t do, for example, for your car to say it was going 60 MPH when it was really going 90 MPH. But if you were making a classic Leica camera back in the early 20th century, how would you measure a shutter that operates at 1/1000 of a second — a millisecond — without modern electronics? The answer is a special stroboscope that would look at home in any cyberpunk novel. [SmarterEveryDay] visited a camera restoration operation in Finland, and you can see the machine in action in the video below.

The machine has a wheel that rotates at a fixed speed. By imaging a pattern on the wheel through the camera, you can determine the shutter speed. The video includes high-speed footage of the shutter in operation, which is worth watching, and it also explains exactly how the rotating disk combined with the shutter’s motion allows the measurement.
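The arithmetic behind the disk is simple enough to sketch with made-up numbers: if the pattern smears across a known angle of a wheel spinning at a known rate, the exposure time falls straight out. The figures below are assumptions for illustration, not the speeds used by the actual tester.

```python
# Assumed rotation speed of the pattern wheel.
disk_rpm = 3000
revolutions_per_second = disk_rpm / 60.0   # 50 rev/s

# Suppose the pattern smears across 18 degrees of the disk during the exposure.
smear_degrees = 18.0
fraction_of_revolution = smear_degrees / 360.0

exposure_s = fraction_of_revolution / revolutions_per_second
print(f"Measured exposure: {exposure_s * 1000:.2f} ms")   # -> 1.00 ms, i.e. 1/1000 s
```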