Raspberry Pi Offers Soulless Work Oversight

If you’re like us, you spend more time than you care to admit staring at a computer screen. Whether it’s trying to find the right words for a blog post or troubleshooting some code, the end result is the same: an otherwise normally functioning human being is reduced to a slack-jawed zombie. Wouldn’t it be nice to be able to quantify just how much of your life is being wasted basking in the flickering glow of your monitor? Surely that wouldn’t be a crushingly depressing piece of information to have at the end of the week.

With the magic of modern technology, you need wonder no longer. Prolific hacker [dekuNukem] has created the aptly named “facepunch”, which allows you to “punch in” with nothing more than your face. Just sit down in front of your Raspberry Pi’s camera, and the numbers start ticking away. It’s like the little clock in the front of a taxi, except at the end you don’t have to pay anyone; you just have to come to terms with what your life has become. So that’s cool.

It doesn’t take much hardware to play along at home: all you need is a Raspberry Pi and the official camera accessory. For the full effect, though, you should add one of the displays supported by the Luma.OLED driver so you can watch the minutes and hours tick away in real time.

To get the facial recognition going, all you need to do is take a well-lit picture of your face and save it as a 400×400 JPEG. The Python 3 script takes care of the rest, checking the camera every few seconds to see if your beautiful mug is in view and incrementing the counters accordingly.
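If you want to experiment with the same idea yourself, a minimal sketch of the loop might look something like this. It assumes the third-party face_recognition and OpenCV libraries, which may or may not be what [dekuNukem]’s script actually uses:

```python
# Minimal sketch: compare camera frames against a reference photo and
# count time while a matching face is present. The library choice
# (face_recognition + OpenCV) is an assumption, not [dekuNukem]'s code.
import time

import cv2                  # pip install opencv-python
import face_recognition     # pip install face_recognition

# The well-lit 400x400 JPEG of your face
known = face_recognition.load_image_file("my_face.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

camera = cv2.VideoCapture(0)    # the Pi camera, exposed via V4L2
seconds_at_desk = 0
INTERVAL = 5                    # check every few seconds

while True:
    ok, frame = camera.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        faces = face_recognition.face_encodings(rgb)
        # If any face in the frame matches the reference, count the interval
        if any(face_recognition.compare_faces([known_encoding], f)[0]
               for f in faces):
            seconds_at_desk += INTERVAL
            print(f"Punched in: {seconds_at_desk // 60} min so far")
    time.sleep(INTERVAL)
```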

Even if you’re not in the market for an Orwellian electronic supervisor, this project is a great example to get you started in the world of facial recognition. With a little luck, you’ll be weaponizing it in no time.

Turn A Car Into A Game Controller

The CAN bus has been a staple of automotive engineering since its introduction in the late ’80s, and as electronics have spread through cars, almost every piece of equipment inside them has ended up on the bus. Opinions differ on whether that’s a good thing, but the reality is that enough data flows over this bus to turn an unmodified modern car into a video game controller with just a little bit of code.

The core of [Scott]’s project is a laptop running a Python program that scrapes information from the car’s CAN bus, including the positions of the pedals and the steering wheel. This data can be accessed by plugging an adapter into the OBD-II port, which has been mandatory on cars sold in the US since 1996. From there, the laptop parses the CAN data into keyboard and mouse commands for your video game of choice.
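To give a flavor of what that looks like in practice, here’s a hedged sketch using the python-can and pynput libraries. The arbitration IDs and byte offsets below are invented; every make and model maps its pedals and steering wheel differently, so you’d have to sniff your own bus first:

```python
# Rough sketch of the idea (not [Scott]'s actual code): read raw CAN
# frames from an OBD-II adapter and translate them into key presses.
import can                                     # pip install python-can
from pynput.keyboard import Controller, Key    # pip install pynput

STEERING_ID = 0x086   # hypothetical: steering angle frame
THROTTLE_ID = 0x201   # hypothetical: accelerator pedal frame

keyboard = Controller()
bus = can.interface.Bus(channel="can0", interface="socketcan")

for msg in bus:
    if msg.arbitration_id == STEERING_ID:
        # Hypothetical encoding: signed angle in the first two bytes
        angle = int.from_bytes(msg.data[0:2], "big", signed=True)
        key = Key.left if angle < -50 else Key.right if angle > 50 else None
        if key:
            keyboard.press(key)
            keyboard.release(key)
    elif msg.arbitration_id == THROTTLE_ID and msg.data[0] > 0x20:
        keyboard.press("w")        # accelerate in-game
        keyboard.release("w")
```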

This is an interesting investigation into the nitty-gritty of the CAN bus, and a far less dangerous demonstration of the data available from a car than some other cases we’ve seen. At least [Scott]’s Mazda (presumably) lacks any wireless attack vectors!

Continue reading “Turn A Car Into A Game Controller”

Statistics And Hacking: A Stout Little Distribution

Previously, we discussed how to apply the most basic hypothesis test: the z-test. It requires a relatively large sample size, and might be appreciated less by hackers searching for truth on a tight budget of time and money.

As an alternative, we briefly mentioned the t-test. The basic procedure still applies: form hypotheses, sample data, check your assumptions, and perform the test. This time though, we’ll run the test with real data from IoT sensors, and programmatically rather than by hand.
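To give a taste of the programmatic version, here’s a minimal one-sample t-test in Python using SciPy (our library choice for illustration), run on a handful of invented sensor readings:

```python
# A minimal sketch of a one-sample t-test, of the kind you might run on
# a handful of IoT sensor readings. The numbers are invented.
from scipy import stats

# Five temperature readings from a sensor (a small sample, so a z-test
# would be inappropriate here)
readings = [21.7, 22.1, 21.9, 22.4, 21.8]

# Null hypothesis: the true mean temperature is 21.0 degrees
t_stat, p_value = stats.ttest_1samp(readings, popmean=21.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# With the usual 0.05 significance level, reject the null if p < 0.05
if p_value < 0.05:
    print("Reject the null hypothesis")
```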

The most important difference between the z-test and the t-test is that the t-test uses a different probability distribution. It is called the ‘t-distribution’, and is similar in principle to the normal distribution used by the z-test, but was developed by studying the properties of small sample sizes. The precise shape of the distribution depends on your sample size (strictly speaking, on the degrees of freedom).

Continue reading “Statistics And Hacking: A Stout Little Distribution”

Meet The Modern Meat Man’s Modified Meat-Safe

Charcuterie is delicious, but is it hackable? When talking about salty preserved meats, one might be more inclined to indulge in the concept of bacon than to ponder a way to integrate an electrical monitoring system into the process. However, [Danzetto] decided to do both when he found himself with nowhere to cure his meats. He made his own fully automatic meat curing chamber, lovingly called the curebOS, with the aid of a Raspberry Pi. It is basically a beefed-up mini fridge with all of the bells and whistles.

This baby has everything. Sitting on top is a control system containing the Pi. Five relays switch the lights, circulating fan, ventilation fans, refrigerator, and humidifier, with everything except the fridge running from a 5 A supply. Below that is a 3D-printed cover with a damper for one of the many ventilation fans that regulate the internal temperature. To the right is a touchscreen for viewing and, if necessary, controlling the system; the control program, written in Python, also plots the different trends. And below that, of course, is a viewing window. Inside are temperature and humidity probes that can be monitored from the front screen, and their readings determine when to activate the compressor, the fans, or the humidifier to hold the optimal settings. For a final touch, some LEDs placed above the hanging meat cast a glow upon the prized possessions.
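For flavor, here’s a rough sketch of the sort of bang-bang control loop a chamber like this runs. The pin assignments, setpoints, and DHT22 sensor are our assumptions, not [Danzetto]’s actual code:

```python
# Hedged sketch of a curing-chamber control loop. Pins, setpoints, and
# the DHT22 sensor are assumptions for illustration only.
import time

import Adafruit_DHT            # pip install Adafruit_DHT
import RPi.GPIO as GPIO

FRIDGE_PIN, HUMIDIFIER_PIN, FAN_PIN = 17, 27, 22   # hypothetical BCM pins
SENSOR, SENSOR_PIN = Adafruit_DHT.DHT22, 4
TARGET_TEMP_C, TARGET_HUMIDITY = 13.0, 75.0        # typical curing values

GPIO.setmode(GPIO.BCM)
for pin in (FRIDGE_PIN, HUMIDIFIER_PIN, FAN_PIN):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

while True:
    humidity, temp = Adafruit_DHT.read_retry(SENSOR, SENSOR_PIN)
    if humidity is not None and temp is not None:
        # Run the compressor when too warm, the humidifier when too dry
        GPIO.output(FRIDGE_PIN, temp > TARGET_TEMP_C + 0.5)
        GPIO.output(HUMIDIFIER_PIN, humidity < TARGET_HUMIDITY - 2.0)
        GPIO.output(FAN_PIN, GPIO.HIGH)    # keep the air circulating
    time.sleep(30)
```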

Continue reading “Meet The Modern Meat Man’s Modified Meat-Safe”

Purely Functional Selfies: Thermal Printer Speaks Haskell

[Dan] recently got a cheap POS thermal printer to chooch remotely via an ESP32. Having conquered that project, he decided to see what else he could get the printer to do. Why not use it to print pictures? Sure, it’s been done, but not with Haskell. And yeah, the pictures will be grainy and weird-ish and limited to black and white, but hey, we love black and white around here as much as the idea of doing something simply because you can.

In the first project, [Dan] had to figure out how to talk to the printer, since the RS422 cable it came with didn’t seem to work. He bought a TTL-to-RS485 adapter, but then realized he could use TTL directly and wired up an ESP32/OLED dev board to it. During the course of turning it into a photo booth, he had to switch to a bigger screen with a better refresh rate.

Unfortunately, [Dan] was unable to use Haskell by itself. He blames this on the cobwebs in the Haskell ecosystem, something that isn’t a problem for languages like Python, which enjoy wide usage and support. [Dan] wrote a Python script that handles image capture, display, and listening for touch activity on the screen, but Haskell ultimately controls the printer. Check out [Dan]’s demo after the break.
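The Python side of such a split might look something like the sketch below; the pygame capture code and the print-photo Haskell binary (and its interface) are hypothetical stand-ins for [Dan]’s actual setup:

```python
# Hypothetical sketch of the Python/Haskell split: Python grabs the
# photo and hands it off to a Haskell binary that drives the printer.
# The "print-photo" executable and its arguments are assumptions.
import subprocess

import pygame
import pygame.camera

pygame.camera.init()
cam = pygame.camera.Camera(pygame.camera.list_cameras()[0], (640, 480))
cam.start()
snapshot = cam.get_image()
pygame.image.save(snapshot, "selfie.png")
cam.stop()

# Hand the captured frame to the Haskell side for dithering and printing
subprocess.run(["./print-photo", "selfie.png"], check=True)
```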

This project may have been trying at times, but at least [Dan] didn’t have to give it a brain transplant to get it to do what he wanted.

Continue reading “Purely Functional Selfies: Thermal Printer Speaks Haskell”

Rage Against The Dying Of The Light With A Raspi Night Vision Camera

One of the most interesting things about hacking is the difference between the vision we have at the beginning and the reality of what we’ve built at the end. What began as a simple plan to build a night vision VR headset turned into a five-month adventure for [facelessloser] that culminated in this great-looking camera. He thought it would be easy, but almost every aspect presented some kind of challenge. The important thing is that he kept at it.

One of the major issues [facelessloser] encountered was power. He found that the Pi (a Zero W), the screen, and the IR LEDs draw between 1.5 and 2 A altogether. He was able to solve this one by using the charging board from a 2 A power bank paired with a 1200 mAh Li-Po built for the high draw required by vaping. If not for space issues, he might have used an 18650 or two.

Another challenge he faced was storing the video and images. He’d considered setting up the Pi as an access point to view them from a phone browser, but ultimately extended a USB port with an OTG cable to use flash drives. With a bit of Python he can watch for the drive to mount and then write to it. If the flash drive suddenly disappears, the Pi starts saving to the SD card.
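The fallback logic is simple enough to sketch in a few lines; the /media/pi mount point is an assumption based on Raspbian’s automounter:

```python
# Sketch of the fallback: save to a flash drive when one is mounted,
# otherwise to the SD card. Paths are assumptions, not the actual code.
import glob
import os

def recording_dir():
    drives = glob.glob("/media/pi/*")
    if drives and os.path.ismount(drives[0]):
        return drives[0]             # a flash drive is present
    return "/home/pi/recordings"     # fall back to the SD card

path = os.path.join(recording_dir(), "capture_001.h264")
print(f"Saving to {path}")
```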

There are two videos after the break, a walkthrough and a night vision demo. You’ll see a bit of lag in the demo video; that’s because [facelessloser] is running the feed through PyGame first. No matter what nightlife you want to peep, it might be nice to add automated zoom with a rangefinder or get a closer look with some PiNoculars.

Continue reading “Rage Against The Dying Of The Light With A Raspi Night Vision Camera”

MeatBagPnP Makes You The Automatic Pick And Place

It’s amazing how hackers nowadays are building increasingly complex hardware with SMD parts as small as grains of sand. Getting multilayer PCBs and solder stencils in small quantities for prototyping is easier than ever before. But pick-and-place, the process of taking parts and stuffing them on the PCB in preparation for soldering, remains elusive for several reasons. For one, it makes sense only if you plan to do volume production, since the cost and time of just setting up the PnP machine for a small run are prohibitive. And a desktop PnP machine isn’t yet as ubiquitous as a 3D printer, so placing parts on the board is one process that still needs to be done manually. Just make sure you don’t sneeze when you’re doing it.

Of course, the human is the slow part of this process. [Colin O’Flynn] wrote a Python script he calls MeatBagPnP to ease this bottleneck. It’s designed to look at a row in a parts position file generated by your EDA program and highlight on a render of the board where that part needs to be placed. The human then does what a robotic PnP machine would have done.

A barcode scanner is not necessary, but using one does make the process a bit quicker. When you scan a code on the part bag, the script highlights the row on the spreadsheet and puts a marker on the first instance of that part on the board. After you’ve placed the part, pressing the space bar moves the marker to the next instance of the same value. Once all parts of that value are populated, the script shows you’re done, and you can move on to the next part. If you don’t have a barcode scanner handy, you can highlight a row manually and it’ll tell you where to put that part. Check it out in the video below.

Of course, before you use this tool you need some prior preparation. You need a good PNG image of the board (both sides if it is double-sided), scaled to the same proportions as the target board. The parts position file generated by your EDA tool must use the lower-left corner of the board as the origin. You then tell the tool the board dimensions, and it scales everything so that it can put the red markers at the designated XY positions. The script works for single- and double-sided boards. For a board with just a few parts it may not be worth the trouble, but if you are trying to manually populate a complex board with a lot of parts, a script like this could make the process a lot less painful.
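The coordinate math underneath is straightforward; here’s a hedged sketch (not [Colin]’s actual code) of mapping a part’s position in millimeters, origin at the lower-left, onto pixel coordinates in the board render:

```python
# Sketch of the coordinate conversion: board XY in mm (origin at the
# lower-left corner) to pixel coordinates in the scaled board PNG.
def part_to_pixels(x_mm, y_mm, board_w_mm, board_h_mm, img_w_px, img_h_px):
    """Map board coordinates onto the board render."""
    px = x_mm / board_w_mm * img_w_px
    # Image origin is top-left, board origin is bottom-left, so flip Y
    py = img_h_px - (y_mm / board_h_mm * img_h_px)
    return round(px), round(py)

# Example: a 100 x 80 mm board rendered as a 1000 x 800 pixel PNG
print(part_to_pixels(25.4, 10.0, 100, 80, 1000, 800))   # -> (254, 700)
```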

The project is still fresh and rough around the edges, so if you have comments or feedback to offer, [Colin] is listening.

[Colin]’s name ought to ring a bell: he’s the hacker who built the ChipWhisperer, which took 2nd Prize in The Hackaday Prize in 2014. The MeatBagPnP project is the result of building increasingly complex boards manually and trying to make the process easier. In addition to the walk-through of how the script works after the break, we’ve embedded his other video from three years back, when he was stuffing parts (including BGAs) the hard way and then reflowing them in a Chinese oven with hacked firmware.

Continue reading “MeatBagPnP Makes You The Automatic Pick And Place”