Building A Simple Python API for Internet of Things Gadgets

It’s no secret that I rather enjoy connecting things to the Internet for fun and profit. One of the tricks I’ve learned along the way is to spin up simple APIs that can be used when prototyping a project. It’s easy to do and simple to understand, so I’m happy to share what has worked for me, using Web2Py as the example (with guest appearances from ESP8266 and NodeMCU).

Barring the times I’m just being silly, there are two reasons I might do this. Most commonly I’ll need to collect data from a device, typically to be stored for later analysis but occasionally to trigger some action on a server in the cloud. Less commonly, I’ll need a device to change its behavior based on instructions received via the Internet.

Etherscan is an example of an API that saves me a lot of work, letting me pull data from Ethereum using a variety of devices.

In the former case, my first option has always been to use IoT frameworks like Thingsboard or Ubidots to receive and display data. They have the advantage of being easy to use and feature-rich, and they can even react to data and send instructions back to devices. In the latter case, I usually find myself using an application programming interface (API) – some service open on the Internet that my device can easily request data from, for example the weather, blockchain transactions, or new email notifications.

Occasionally, I end up with a type of data that requires processing or is not well structured for storage on these services, or else I need a device to request data that is private or that no one is presently offering. Most commonly, I need to change some parameter in a few connected devices without the trouble of finding them, opening all the cases, and reprogramming them all.

At these times it’s useful to be able to build simple, short-lived services that fill in these gaps during prototyping. This is far from a secure or consumer-ready product; we just need something we can try out to see if an idea is worth developing further. There are many valid ways to do this, but my first choice is Web2Py, a relatively easy-to-use open-source framework for developing web applications in Python. It supports both Python 2.7 and Python 3, although we’ll be using Python 3 today.
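
To give a sense of how little code this takes, here’s a minimal sketch of the kind of Web2Py app I mean: one table to hold readings and a controller that accepts them over plain HTTP and serves them back as JSON. The table, field, and function names are placeholders made up for illustration, not part of any particular project.

```python
# models/db.py -- Web2Py executes model files before every request and
# provides DAL and Field in the environment, so no imports are needed here.
db = DAL('sqlite://storage.sqlite')
db.define_table('reading',
                Field('device', 'string'),
                Field('value', 'double'),
                Field('created', 'datetime'))
```

```python
# controllers/default.py -- every function here becomes a URL, e.g.
# http://yourserver/yourapp/default/log?device=esp01&value=23.5
def log():
    # Store one sensor reading sent by a device.
    device = request.vars.device
    value = request.vars.value
    if device is None or value is None:
        raise HTTP(400, 'device and value are required')
    db.reading.insert(device=device, value=float(value), created=request.now)
    return response.json(dict(status='ok'))

def latest():
    # Hand back the ten most recent readings so another gadget
    # (or a browser) can poll for them.
    rows = db(db.reading.id > 0).select(orderby=~db.reading.created,
                                        limitby=(0, 10))
    return response.json(rows.as_list())
```

An ESP8266 running NodeMCU only needs to issue an ordinary HTTP GET against that log URL, so the device-side code stays just as small.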

Continue reading “Building A Simple Python API for Internet of Things Gadgets”

How To Make Bisected Pine Cones Look Great, Step-by-Step

[Black Beard Projects] sealed some pine cones in colored resin, then cut them in half and polished them up. The results look great, but what’s really good about this project is that it clearly demonstrates the necessary steps and techniques from beginning to end. He even employs some homemade equipment, to boot.

Briefly, the process is to first bake the pine cones to remove any moisture. Then they get coated in a heat-activated stabilizing resin, a process that infuses and pre-seals the pine cones for better casting results. The prepped pine cones go into molds, and clear resin mixed with coloring is poured in. The resin cures inside a pressure chamber, which helps ensure that it gets into every nook and cranny while also shrinking any small air bubbles introduced during mixing and pouring until they can’t really be seen. After that comes cutting, then sanding and polishing. It’s an excellent overview of the entire process.

The video (which is embedded below) also has an outstanding depth of information in the details section. Not only is there an overview of the process and links to related information, but there’s a complete time-coded index to every action taken in the entire video. Now that’s some attention to detail.

Continue reading “How To Make Bisected Pine Cones Look Great, Step-by-Step”

Spectrometer Is Inexpensive And Capable

We know the effect of passing white light through a prism and seeing the color spectrum that comes out the other side. It may not be noticeable to the naked eye, but that rainbow does not fully span the range of [Roy G. Biv]. There are narrow absent bands that blur together, and those missing portions are a fingerprint of the matter the white light is passing through or bouncing off. Those with a keen eye will recognize that we are talking about spectrophotometry, which is identifying those fingerprints and determining both what is being observed and how much of it is there. The device that does this is called a spectrometer, and [Justin Atkin] invites us along for his build. The video can also be seen below.
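
To make the “how much” part concrete: quantitative spectrophotometry usually boils down to comparing, wavelength by wavelength, the light intensity that made it through a sample against a reference scan, and converting the ratio to absorbance (the Beer-Lambert relationship). Here’s a rough sketch in Python, with synthetic arrays standing in for real sensor data:

```python
import numpy as np

# Hypothetical spectra: one scan with the sample in the light path and one
# reference scan without it. These numbers are synthetic stand-ins for real
# sensor data, with an artificial absorption dip placed near 530 nm.
wavelengths_nm = np.linspace(400, 700, 300)
reference = np.random.uniform(900, 1000, size=300)
transmission = np.exp(-2.0 * np.exp(-((wavelengths_nm - 530) ** 2) / 200.0))
sample = reference * transmission

# Absorbance A = -log10(I_sample / I_reference). Peaks in A are the "missing"
# colors that fingerprint the material, and (per Beer-Lambert) their height
# scales with concentration and path length.
absorbance = -np.log10(sample / reference)
peak = wavelengths_nm[np.argmax(absorbance)]
print(f"Strongest absorption near {peak:.0f} nm")
```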

Along with the build, we learn how spectrophotometry works, starting with how photons are generated and why gaps appear in the color spectrum. It is all about electrons, as some of our seasoned spectrometer users already know. The build uses a wooden NanoDrop-style case cut on a laser engraver. It needs some improvements, which are mentioned and shown in the video, so you will want to have some aluminum tape on hand. The rest of the bill of materials is covered, including “Black 2.0”, which claims to be the “mattest, flattest, black acrylic paint.” Maybe that will come in handy for other optical projects. It might be wise to buy first-surface mirrors cut to size, but you can always make bespoke mirrors with carefully chosen tools.

Continue reading “Spectrometer Is Inexpensive And Capable”

AI on Raspberry Pi with the Intel Neural Compute Stick

I’ve always been fascinated by AI and machine learning. Google TensorFlow offers tutorials and has been on my ‘to-learn’ list since it was first released, although I always seem to neglect it in favor of the shiniest new embedded platform.

Last July, I took note when Intel released the Neural Compute Stick. It looked like an oversized USB stick, and acted as an accelerator for local AI applications, especially machine vision. I thought it was a pretty neat idea: it allowed me to test out AI applications on embedded systems at a power cost of about 1W. It requires pre-trained models, but there are enough of them available now to do some interesting things.

You can add a few of them to a hub for parallel tasks. Image credit: Intel Corporation.

I wasn’t convinced I would get great performance out of it, and forgot about it until last November when they released an improved version. Unambiguously named the ‘Neural Compute Stick 2’ (NCS2), it was reasonably priced and promised a 6-8x performance increase over the last model, so I decided to give it a try to see how well it worked.

I took a few days off work around Christmas to set up Intel’s OpenVino Toolkit on my laptop. The installation script provided by Intel wasn’t particularly user-friendly, but it worked well enough and included several example applications I could use to test performance. I found that face detection was possible with my webcam in near real-time (something like 19 FPS), and pose detection at about 3 FPS. So in accordance with the holiday spirit, it knows when I am sleeping, and knows when I’m awake.

That was promising, but the NCS2 was marketed as allowing AI processing on edge computing devices. I set about installing it on the Raspberry Pi 3 Model B+ and compiling the application samples to see if it worked better than previous methods. This turned out to be more difficult than I expected, and the main goal of this article is to share the process I followed and save some of you a little frustration.
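
For a sense of what targeting the stick looks like from Python once OpenVINO is installed, here’s a rough sketch based on the pattern the 2019-era sample applications use. Class and method names have shifted between OpenVINO releases, and the model file names below are placeholders, so treat this as an outline rather than a recipe.

```python
import cv2
from openvino.inference_engine import IENetwork, IECore

# Load a model in OpenVINO's Intermediate Representation format; the
# .xml/.bin names are placeholders for whatever model you converted.
net = IENetwork(model='face-detection.xml', weights='face-detection.bin')

# 'MYRIAD' selects the Neural Compute Stick; 'CPU' would run the same
# model on the host instead.
ie = IECore()
exec_net = ie.load_network(network=net, device_name='MYRIAD')

# Reshape a frame into the NCHW layout the network expects, then infer.
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape
frame = cv2.imread('test.jpg')
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1).reshape(n, c, h, w)
result = exec_net.infer(inputs={input_blob: blob})
```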

Continue reading “AI on Raspberry Pi with the Intel Neural Compute Stick”

How To Make Your Own Springs for Extruded Rail T-Nuts

Open-source extruded profile systems are a mature breed these days. With Openbuilds, Makerslide, and Openbeam, we’ve got plenty of systems to choose from, and Amazon and Alibaba are coming in strong with lots of generic interchangeable parts. These open-source framing systems have borrowed tricks from some decades-old industry players like Rexroth and 80/20. But from all they’ve gleaned, there’s still one trick they haven’t snagged yet: affordable spring-loaded T-nuts.

I’ve discussed a few tricks for working with these systems before, and Roger Cheng came up with a 3D-printed technique for working with T-nuts. But today I’ll take another step and show you how to make your own springs for V-slot rail nuts.

Continue reading “How To Make Your Own Springs for Extruded Rail T-Nuts”

Plastics: Acrylic

If anything ends up on the beds of hobbyist-grade laser cutters more often than birch plywood, it’s probably sheets of acrylic. There’s something strangely satisfying about watching a laser beam trace over a sheet of the crystal-clear stuff, vaporizing a hair’s-breadth line as it goes, and (hopefully) leaving a flame-polished cut in its wake.

Acrylic, more properly known as poly(methyl methacrylate) or PMMA, is a wonder material that helped win a war before being developed for peacetime use. It has some interesting chemistry and properties that position it well for use in the home shop as everything from simple enclosures to laser-cut parts like gears and sprockets.

Continue reading “Plastics: Acrylic”

Project Shows How To Use Machine Learning to Detect Pedestrians

Most people are familiar with the idea that machine learning can be used to detect things like objects or people, but anyone who’s not clear on how that process actually works should check out [Kurokesu]’s example project for detecting pedestrians. It goes into detail on exactly what software is used, how it is configured, and how to train with a dataset.

The application uses a USB camera, and the back-end work is done with Darknet, an open-source framework for neural networks. Running on that framework is the YOLO (You Only Look Once) real-time object detection system. To get useful results, the system must be trained on large amounts of sample data. [Kurokesu] explains that while pre-trained networks can be used, it is still necessary to fine-tune the system by adding a dataset that more closely models the intended application. Training is itself a bit of a balancing act. A system that has been overly trained on a model dataset (or trained on too small a dataset) will suffer from overfitting, a condition in which the system ends up being too picky and unable to usefully generalize. In terms of pedestrian detection, this results in false negatives — pedestrians that don’t get flagged because the system has too strict an idea of what a pedestrian should look like.
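
[Kurokesu]’s write-up drives Darknet directly, but if you just want to poke at a trained YOLO model from Python, recent versions of OpenCV can load the same .cfg and .weights files through the DNN module. Here’s a minimal sketch of that alternative route, with arbitrary file names and threshold, looking only for the “person” class:

```python
import cv2
import numpy as np

# Load a Darknet-trained YOLO network; the .cfg/.weights paths are placeholders.
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
out_layers = net.getUnconnectedOutLayersNames()

frame = cv2.imread('street.jpg')
h, w = frame.shape[:2]

# YOLO expects a square, normalized input; 416x416 matches the usual config.
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(out_layers)

# Each detection row is [cx, cy, bw, bh, objectness, class scores...];
# in the COCO class ordering, index 0 is 'person'.
for output in outputs:
    for det in output:
        scores = det[5:]
        if np.argmax(scores) == 0 and scores[0] > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            print(f'pedestrian near ({cx:.0f}, {cy:.0f}), box {bw:.0f}x{bh:.0f}')
```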

[Kurokesu]’s walkthrough on pedestrian detection is great, but for those interested in taking a step further back and rolling their own projects, this fork of Darknet contains YOLO for Linux and Windows and includes practical notes and guides on installing, using, and training from a more general perspective. Interested in learning more about machine learning basics? Don’t forget Google has a free online crash course to get you up to speed.