Your Phone Is A 200X Microscope — Sort Of

[A. Cemal Ekin] over on PetaPixel reviewed the Apexel 200X LED Microscope Lens. The relatively inexpensive accessory promises to transform your cell phone camera into a microscope. Of course, lenses that strap over your phone’s camera lens aren’t exactly a new idea, but this one looks a little more substantial than the usual piece of plastic in a spring-loaded clip. Does it work? You should read [Cemal’s] post for the details, but the answer — as you might have expected — is yes and no.

On the yes side, you can get some pretty neat photomicrographs from the adapter. On the no side, your phone isn’t made to accommodate microscope samples, nor is it made to stay stable at 200X.

Continue reading “Your Phone Is A 200X Microscope — Sort Of”

Collection Of Old Films Rescued For Preservation

Periscope Film owners [Doug] and [Nick] just released a mini-documentary about the rescue of a large collection of old 35 and 16 mm celluloid films from the landfill. The video shows the process of the films being collected from the donor and then being sorted and organized in a temporary storage warehouse. There is a dizzying variety of films in this haul, from different countries, in both color and black and white.

We can see in the video that their rented 8 meter (26 foot) cargo truck wasn’t enough to contain the trove, so they dragged along a 1.8 x 3.6 m (6 x 12 ft) double-axle trailer as well. That makes a grand total of 49 cubic meters of space. Our back-of-the-envelope calculation says that, filled to the brim, it would hold over 30,000 canisters of 600 m (2,000 ft) 35 mm movie reels.

When it comes to preserving these old films, one big problem is physical deterioration of the film stock itself. You’ll know something is wrong if you’re greeted by a strong acetic, vinegary odor on opening the can. [Nick] shows some examples where the film has even solidified, taking on a hexagonal shape. It will take months just to assess and catalog the contents of this collection, with damaged films that are still salvageable jumping to the head of the queue to be digitized.

Film Scanning Artist [Esteban] Performing Color Correction
Films are digitized at 4K resolution using a Lasergraphics ScanStation archival-quality film scanning system, and then the restoration fun begins. One issue demonstrated in this video is color deterioration. In the Eastmancolor film technology introduced in the 1950s, the blue dyes deteriorate over time. This issue, and a plethora of others, are corrected in the restoration process.
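We don't know the specifics of Periscope's grading workflow, but to get a feel for what color correction does, here's a deliberately crude sketch of our own: a "gray world" per-channel gain that pulls a faded channel back into line with the others. Real restoration tools are vastly more sophisticated; this is only an illustration of the principle.

```python
# Illustrative only: a crude per-channel gain correction, the simplest
# possible stand-in for real film color grading. Pixels are (R, G, B)
# tuples in the 0-255 range.

def channel_means(pixels):
    """Average each channel across the frame."""
    n = len(pixels)
    sums = [0, 0, 0]
    for r, g, b in pixels:
        sums[0] += r
        sums[1] += g
        sums[2] += b
    return [s / n for s in sums]

def rebalance(pixels):
    """Scale each channel so its mean matches the overall gray mean
    (a 'gray world' correction), clamping to 0-255."""
    means = channel_means(pixels)
    gray = sum(means) / 3
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255, round(c * k)) for c, k in zip(px, gains))
            for px in pixels]

# A faded frame: blue has dropped well below red and green.
faded = [(200, 180, 90), (180, 160, 80), (220, 200, 100)]
restored = rebalance(faded)
```

After rebalancing, the three channel means sit at roughly the same level, lifting the weak blue channel and taming the overcooked red.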

We’re particularly jealous of film scanning artist [Esteban]’s triple-headed trackball. We learned from a quick Google search that this beast is merely the entry-level control panel from UK company Tangent — they make even larger flavors.

If you’re interested in doing this with 8 mm home movies, we covered a DIY home movie scanning project way back in 2011. We also covered one of Periscope Film’s restored training films about NASA soldering techniques from 1958. Kudos to the organizations that focus on keeping these types of interesting and historical films from being dumped in the landfill and lost forever.

Continue reading “Collection Of Old Films Rescued For Preservation”

This Camera Produces A Picture, Using The Scene Before It

It’s the most basic of functions for a camera: point it at a scene, and it produces a photograph of what it sees. [Jasper van Loenen] has created a camera that does just that, but not perhaps in the way we might expect. Instead of committing pixels to memory, it takes a picture, uses AI to generate a text description of what is in the picture, and then uses another AI to generate an image from that description. It’s a curiously beautiful artwork as well as an ultimate expression of the current obsession with the technology, and we rather like it.

The camera itself is a black box with a simple twin-lens reflex viewfinder. Inside are a Raspberry Pi that takes the photo and sends it through the various AI services, and a Fuji Instax Mini printer. Of particular note is the connection to the printer, which we think may interest quite a few others: he’s reverse engineered the Bluetooth protocol it uses and created Python code allowing easy printing. The images it produces are like so many such AI-generated pieces of content: pretty to look at but otherworldly, weird parallels of the scenes they represent.
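For the curious, the logic of the camera boils down to a short loop. The sketch below is ours, not [Jasper]'s code, and the describe() and generate() helpers are stand-ins for whatever captioning and image-generation services you'd wire in:

```python
# A minimal sketch of the photograph -> caption -> new image loop.
# describe() and generate() are placeholders, NOT real APIs; swap in
# actual vision-captioning and image-generation clients to reproduce
# the effect, then hand the result to the printer.

def describe(photo: bytes) -> str:
    """Placeholder: a captioning model would return a text description."""
    return "a person standing in a sunlit kitchen"

def generate(prompt: str) -> bytes:
    """Placeholder: an image-generation model would return image data."""
    return f"<image rendered from: {prompt}>".encode()

def shutter_pressed(photo: bytes) -> bytes:
    """The whole trick: the 'photograph' that gets printed is an AI
    re-imagining of a caption, not the captured pixels themselves."""
    caption = describe(photo)
    return generate(caption)

print(shutter_pressed(b"raw camera frame"))
```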

It’s inevitable that consumer cameras will before long offer AI augmentation features for less-competent photographers; meanwhile, we’re pleased to see [Jasper] getting there first.

How To Roll Your Own Custom Object Detection Neural Network

Real-time object detection, which uses neural networks and deep learning to rapidly identify and tag objects of interest in a video feed, is a handy feature with great hacker potential. Happily, it’s also possible to make customized CNNs (convolutional neural networks) tailored for one’s own needs, and that process just got easier thanks to some new documentation for the Vizy “AI camera” by Charmed Labs.

Raspberry Pi-based Vizy camera

Charmed Labs has been making hacker-friendly machine vision devices for a long time, and the Vizy camera impressed us mightily when we checked it out last year. Out of the box, Vizy has a perfectly functional object detector application that runs locally on the device, and can detect and tag many common everyday objects in real time. But what if that default application doesn’t quite meet one’s project needs? Good news, because it’s possible to create a custom-trained CNN, and that process got a lot more accessible thanks to step-by-step examples of training a model to recognize hands doing rock-paper-scissors.

Person and cat with machine-generated tags identifying them
Default object detection works well, but sometimes one needs custom results.

The basic process is this: start with a variety of images that show the item of interest, then identify and label the item in each photo. These photos (a “training set”) are then sent to Google Colab, which is used to train the neural network. The resulting CNN model can then be downloaded and tested to see how well it performs.
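To make the labeling step concrete, here's a hypothetical example of what a labeled training set might look like. This isn't Vizy's actual annotation format, just an illustration of the kind of records the process produces, plus a sanity check worth running before uploading anything:

```python
# Hypothetical annotation records for a rock-paper-scissors training
# set. Each entry pairs an image with a class label and a pixel
# bounding box (x1, y1, x2, y2) for every object of interest.

training_set = [
    {"image": "hand_001.jpg",
     "objects": [{"label": "rock", "box": (120, 80, 260, 220)}]},
    {"image": "hand_002.jpg",
     "objects": [{"label": "scissors", "box": (90, 60, 240, 230)}]},
]

def validate(dataset, classes):
    """Sanity-check labels and box geometry before training."""
    for entry in dataset:
        for obj in entry["objects"]:
            assert obj["label"] in classes, f"unknown label {obj['label']}"
            x1, y1, x2, y2 = obj["box"]
            assert x1 < x2 and y1 < y2, "box corners out of order"
    return len(dataset)

print(validate(training_set, {"rock", "paper", "scissors"}))  # 2
```

Catching a typo'd label or a flipped box corner here is far cheaper than discovering it after a training run.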

Of course, things rarely work perfectly the first time around, so at this point it’s pretty common for some refinement to be needed to increase accuracy. Luckily, there are a number of tools to help do this without creating a new model from scratch, so it’s just a matter of tweaking until things perform acceptably.
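How do you judge "acceptably"? A standard yardstick (not specific to the Vizy docs) is intersection over union, which scores how well a predicted bounding box lines up with the hand-labeled one:

```python
# Intersection over union (IoU): the overlap between a predicted box
# and the ground-truth box, divided by the area they jointly cover.
# Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.

def iou(a, b):
    """0.0 = complete miss, 1.0 = perfect match."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 9)))    # 0.9: nearly aligned
print(iou((0, 0, 10, 10), (20, 20, 30, 30))) # 0.0: no overlap at all
```

A common rule of thumb treats an IoU above roughly 0.5 as a correct detection when tallying accuracy over a test set.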

Google Colab is free and the resulting CNNs are implemented in the TensorFlow Lite framework, meaning it’s possible to use them elsewhere. So if custom object detection has been holding up a project idea of yours, this might be what gets you over that hump.

An Instant Camera Using E-Paper As Film

The original Polaroid cameras were a huge hit not just for their instant delivery, but for the convenient size of the permanent images they delivered. It’s something that digital cameras haven’t been able to replicate, which inspired [Cameron] to produce a modern alternative. In place of the chemical film of the original, it uses a removable e-paper display in a frame. The image persists in the pixels of the e-paper, which can be kept as a digital version of the photograph until it is reattached and replaced with another freshly taken picture.

At its heart is an ESP32 with a camera, and the “film” is a Waveshare NFC e-paper module. The device is 3D printed, and manages a very creditable early-1970s aesthetic redolent of the more upmarket Polaroids of the day. Using it is as simple as pressing the button and deciding whether you like what’s on the screen. You can see it in action in the video below the break.
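The write-up doesn't say how frames are mapped to the panel, but monochrome e-paper can only show black or white pixels, so some form of quantization has to happen between the camera and the "film". A classic approach is Floyd-Steinberg error diffusion; here's a minimal sketch of the idea, assumed purely for illustration:

```python
# Floyd-Steinberg dithering: quantize each grayscale pixel to black or
# white, then spread the rounding error onto not-yet-processed
# neighbors so that local average brightness is preserved.

def dither(gray, width, height):
    """gray: row-major list of 0-255 values; returns 0/255 pixels."""
    px = [float(v) for v in gray]
    out = [0] * len(px)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            new = 255 if px[i] >= 128 else 0
            out[i] = new
            err = px[i] - new
            # Standard 7/16, 3/16, 5/16, 1/16 error distribution.
            if x + 1 < width:
                px[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    px[i + width - 1] += err * 3 / 16
                px[i + width] += err * 5 / 16
                if x + 1 < width:
                    px[i + width + 1] += err * 1 / 16
    return out

# A flat mid-gray patch dithers to a roughly half-and-half speckle.
result = dither([128] * 16, 4, 4)
```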

We like his project for its aesthetics, as well as for the very idea of using e-paper as a medium. There’s also something to be said for not having to put a Polaroid print in a clip under your armpit while it develops. Meanwhile if you do hanker for the real thing, it’s a subject we’ve looked at in the past.

Continue reading “An Instant Camera Using E-Paper As Film”

Do You Need The Raspberry Pi Camera Module V3?

This month came the announcement of some new camera modules from Raspberry Pi. All eyes were on version 3 of their standard camera module, but they also sneaked out a new version of their high quality camera with an M12 lens mount. The version 3 module is definitely worth a look, so I jumped on a train to Cambridge, headed for the Raspberry Pi Store, and bought myself one for review.

There’s nothing new about a Pi camera module, as they’ve been available for years in both official and third-party forms, so to be noteworthy the new one has to offer something a bit special. It uses a 12 megapixel sensor and is available in autofocus and wide-angle versions, each in standard and NoIR variants. Wide-angle and autofocus modules may be new to the official cameras, but both have been on the third-party market for years.

So if an autofocus camera module for your Pi isn’t that new, what can we bring to a review that isn’t simply exclaiming over the small things? Perhaps it’s better instead to view the new camera in the context of the state of the Pi camera ecosystem, and what better way to do that than to turn a Pi and some modules into a usable camera! Continue reading “Do You Need The Raspberry Pi Camera Module V3?”

Better Macro Images With Arduino Focus Stacking

If you’ve ever played around with macro photography, you’ve likely noticed that the higher the lens magnification, the shallower the depth of field. One way around this issue is to take several slices at different focus points, and then stitch the photos together digitally. As [Curious Scientist] demonstrates, this is a relatively simple motion control project and well within the reach of a garden-variety Arduino.
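To put numbers on that shrinking depth of field, a standard macro approximation is DOF ≈ 2Nc(m+1)/m², with f-number N, circle of confusion c, and magnification m. The figures below are our own illustrative picks, not taken from the video:

```python
import math

# Why stacking is needed: at high magnification the usable depth of
# field collapses to a fraction of a millimeter, so covering a whole
# subject takes many focus slices. Illustrative numbers only.

def dof_mm(f_number, coc_mm, magnification):
    """Total depth of field: 2*N*c*(m+1)/m^2 (standard macro approx.)."""
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / (m * m)

def slices_needed(subject_depth_mm, dof, overlap=0.3):
    """Focus slices to cover the subject, with 30% overlap per slice."""
    step = dof * (1 - overlap)
    return max(1, math.ceil(subject_depth_mm / step))

dof = dof_mm(f_number=5.6, coc_mm=0.019, magnification=3)
print(dof)                                   # under a tenth of a mm
print(slices_needed(subject_depth_mm=5, dof=dof))
```

At 3x magnification a 5 mm deep subject already demands dozens of slices, which is exactly why automating the carriage motion pays off.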

You can move the camera or move the subject. Either way, you really only need one axis of motion, which makes it quite simple. This build relies on a solid-looking lead screw to move a carriage up or down. An Arduino Nano acts as the brains, a stepper motor drives the lead screw, and a small display shows stats such as current progress and total distance to move.

The stepper motor uses a conventional stepper driver “stick” as you’d find in many 3D printers. In fact, we wondered if you couldn’t just grab a 3D printer board and modify it for this service without spinning a custom PCB. Fittingly, the example subject is another Arduino Nano. Skip ahead to 32:22 in the video below to see the final result.
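It's worth a quick back-of-the-envelope on why a lead screw plus stepper is such a good fit here. Assuming some illustrative numbers (a 2 mm pitch screw, a 200 step/rev motor, and 16x microstepping; the video's actual hardware may differ), the per-microstep travel is tiny compared with the focus steps needed:

```python
# Positioning resolution of a lead-screw carriage. All figures are
# assumptions for illustration, not taken from the video.

STEPS_PER_REV = 200   # typical 1.8-degree stepper
MICROSTEPS = 16       # common driver-stick setting
LEAD_MM = 2.0         # carriage travel per screw revolution

mm_per_microstep = LEAD_MM / (STEPS_PER_REV * MICROSTEPS)

def microsteps_for(move_mm):
    """Microsteps to advance the carriage between focus slices."""
    return round(move_mm / mm_per_microstep)

print(mm_per_microstep)       # 0.000625 mm, i.e. 0.625 um per microstep
print(microsteps_for(0.05))   # 80 microsteps for a 50 um focus step
```

Sub-micron resolution from commodity parts is why the same driver sticks that position 3D printer axes handle focus stacking with room to spare.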

We’ve seen similar projects, of course. You can build a rig for tiny subjects. You can also adapt an existing motion control device, like a CNC machine.

Continue reading “Better Macro Images With Arduino Focus Stacking”