Searchable KiCad Component Database Makes Finding Parts A Breeze

KiCad, the open source EDA software, is popular with Hackaday readers and the hardware community as a whole. But it is not immune to the most common bane of EDA tools: managing your library of symbols and footprints, and finding new ones for the components in your latest design, is rarely a pleasant experience. Swooping in to help alleviate your pain, [twitchyliquid64] has created KiCad Database (KCDB), a beautifully simple web-app for searching component footprints.

The database lets you easily search by footprint name, with optional parameters like the number of pins. Of course it can also search by tag for a bit of flexibility (searching Neopixel returned the footprint shown above). There’s also an indicator for KiCad-official parts, which is a nice touch. One of our favourite features is the part viewer, which renders the footprint in your browser, making it easy to instantly see if the part is suitable. AngularJS and material design are at work here, and the main app is written in Go — very trendy.

The database is kindly hosted publicly by [twitchyliquid64], but it can easily be run locally on your own machine, where you can add your own libraries. It takes only one command to add a GitHub repo as a component source, which then gets regularly “ingested”. It’s great to be able to add a neat library of footprints you found once, then forget about it, safe in the knowledge that it can easily be found again later in the same place as everything else.

If you can’t find the schematic symbols for the part you’re using, we recently covered a service which uses OCR and computer vision to automatically generate symbols from a datasheet; pretty cool stuff.

Stars Looking A Bit Dim? Throw Some Math At Them.

As the cost of high-resolution image sensors gets lower, and the availability of small and cheap single board computers skyrockets, we are starting to see more astrophotography projects than ever before. When you can put a $5 Raspberry Pi Zero and a decent webcam outside in a box to take autonomous pictures of the sky all night, why not give it a shot? But in doing so, many hackers are recognizing a fact well-known to traditional telescope jockeys: seeing a few stars is easy; seeing a lot of stars is another story entirely.

The problem is that stars are fairly dim, which is only compounded by the light pollution you get unless you’re out in a rural area. You can’t just brighten up the images either, as that only increases the noise in the image. A programmer always in search of a challenge, [Benedikt Bitterli] decided to take a shot at using software to improve astrophotography images. He documented the entire process, failures and all, on his blog for anyone else who might be curious about what it really takes to create the incredible images of the night sky we see in textbooks.

In principle it’s simple: just take a lot of pictures of the sky, stack them on top of each other, and identify which points of light are stars and which are noise artifacts. But of course the execution is considerably more difficult. For one thing, unless the camera was on a mount that automatically tracked the sky, the stars will have moved slightly in each image. To help with this process, [Benedikt] used a navigational trick that humanity has relied on for millennia: mapping constellations. By comparing groupings of stars across images, his software is able to accurately overlay the frames.
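
To make the stacking idea concrete, here is a rough Python sketch (our own, not [Benedikt]’s code) that aligns frames and averages them. It estimates only a whole-pixel translation between frames via FFT phase correlation, which is a much simpler stand-in for the constellation matching described above; handling the sky’s rotation is exactly why the star-pattern approach is needed.

```python
# A simplified stand-in for the alignment step: estimate the integer-pixel
# translation between frames with phase correlation, roll each frame into
# place, and average. Translation-only, so it ignores field rotation.
import numpy as np

def alignment_shift(reference, frame):
    """Return the (dy, dx) roll that lines `frame` back up with `reference`."""
    cross_power = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12           # keep only the phase
    correlation = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    if dy > reference.shape[0] // 2:                      # wrap to signed offsets
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx

def stack_frames(frames):
    """Align every frame to the first one and return the averaged image."""
    reference = frames[0].astype(float)
    total = reference.copy()
    for frame in frames[1:]:
        dy, dx = alignment_shift(reference, frame.astype(float))
        total += np.roll(frame.astype(float), (dy, dx), axis=(0, 1))
    return total / len(frames)
```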

But that’s only one part of the equation. In his post, [Benedikt] goes over the incredible amount of math that goes into identifying individual stars in the sea of noise you get when a digital image sensor looks into the black. You certainly don’t need to understand all the math to appreciate the final results, but it’s a fascinating read for those with an interest in computer vision concepts.
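
For a flavor of what separating stars from noise can look like, here is a minimal sigma-clipping sketch. This is a generic technique offered as an illustration, not necessarily the method [Benedikt] settles on: estimate the background statistics while ignoring bright outliers, then flag anything far above that estimate as a star candidate.

```python
# A generic sigma-clipped threshold for pulling star candidates out of noise;
# a minimal sketch, not the blog post's actual algorithm.
import numpy as np

def detect_stars(image, n_sigma=5.0, iterations=3):
    """Return (row, col) coordinates of pixels well above the noise floor."""
    data = image.astype(float)
    mask = np.ones(data.shape, dtype=bool)
    # Iteratively estimate the background mean/std while rejecting bright
    # outliers (the stars themselves) from the estimate
    for _ in range(iterations):
        mean = data[mask].mean()
        std = data[mask].std()
        mask = data < mean + 3.0 * std
    # Anything far above the cleaned-up background estimate is a candidate
    return np.argwhere(data > mean + n_sigma * std)
```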

This kind of software is precisely what you want to pair with your 3D printed star tracker, or even better a Raspberry Pi sky monitoring station.

[Thanks to Helio Machado for the tip.]

Someone Set Us Up The Compiler Bomb

Despite the general public’s hijacking of the word “hacker,” we don’t advocate doing disruptive things. However, studying code exploits can often be useful both as an academic exercise and to understand what kind of things your systems might experience in the wild. [Code Explainer] takes apart a compiler bomb in a recent blog post.

If you haven’t heard of a compiler bomb, perhaps you’ve heard of a zip bomb. This is a small zip file that “explodes” into a very large file. A compiler bomb is a small piece of C code that will blow up a compiler — in this case, specifically, gcc. [Code Explainer] didn’t create the bomb, though; that credit goes to [Digital Trauma].

Continue reading “Someone Set Us Up The Compiler Bomb”

Stock Market Prediction With Natural Language Machine Learning

Machines – is there anything they can’t learn? Twenty years ago, the answer to that question would have been very different. However, with modern processing power and deep learning tools, it seems that computers are getting quite nifty in the brainpower department. In that vein, a research group attempted to use machine learning tools to predict stock market performance based on publicly available earnings documents.

The team used the Azure Machine Learning Workbench to build their model, one of many tools now on the market for such work. To train their model, earnings releases were combined with stock price data from before and after the announcements were made. Natural language processing was used to interpret the earnings releases, with steps taken to clean up the input by removing stop words, punctuation, and other ephemera. The model then attempted to find a relationship between the language content of the releases and the subsequent impact on the stock price.
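
As a rough sketch of the general shape of such a pipeline (scikit-learn standing in for the team’s Azure ML Workbench setup, with made-up placeholder data), the text of each release becomes TF-IDF features feeding a classifier trained against labels derived from the post-announcement price move:

```python
# A minimal, hypothetical stand-in for the pipeline described above:
# TF-IDF features over cleaned-up release text, plus a simple classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder earnings-release snippets and performance labels; in practice
# you'd have thousands of real releases paired with price data.
releases = [
    "Revenue grew twelve percent on strong demand for the new product line.",
    "Guidance lowered amid supply constraints and weakening orders.",
]
labels = ["high", "low"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),   # drops stop words; the default
                                             # tokenizer already ignores punctuation
    LogisticRegression(max_iter=1000),
)
model.fit(releases, labels)
print(model.predict(["Margins expanded and the order backlog hit a record."]))
```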

Particularly interesting were the vocabulary issues the team faced throughout the development process. In many industries, there is a significant amount of jargon – that is, vocabulary that is highly specific to the topic in question. The team decided to work around this by comparing stocks on an industry-by-industry basis. There’s little reason to be looking at phrases like “blood pressure medication” and “kidney stones” when you’re comparing stocks in the defence electronics industry, after all.

With a model built, the team put it to the test. Stocks were sorted into three bins — low performing, middle performing, and high performing. Their most successful result was a 62% success rate at predicting a low performing stock, well above what you’d expect from chance alone in a three-way split. This suggests that there’s plenty of scope for further improvement. As with anything in the stock market space, expect development in this area to continue at a furious pace.
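
The write-up doesn’t spell out exactly how those bins were drawn; one plausible way, purely our assumption, is to rank each stock’s post-announcement move and cut the ranking into three equal groups, for example with pandas:

```python
# Hypothetical binning of post-announcement returns into three equal groups.
import pandas as pd

post_announcement_returns = pd.Series(
    {"AAA": 0.08, "BBB": -0.05, "CCC": 0.01,
     "DDD": 0.12, "EEE": -0.02, "FFF": 0.03})   # made-up tickers and returns
bins = pd.qcut(post_announcement_returns, q=3,
               labels=["low performing", "middle performing", "high performing"])
print(bins)
```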

We’ve seen machine learning do great things before, too – even creative tasks, like naming tomatoes. 

VCF East 2018: The Mail Order App Store

Today we take the concept of a centralized software repository for granted. Whether it’s apt or the App Store, pretty much every device we use today has a way to pull in applications without the user having to manually search for them in the wilds of the Internet. Not only is this more convenient for the end user, it’s also, at least in theory, more secure, since you won’t be pulling binaries off of some random website.

But centralized software distribution doesn’t just benefit the user; it can help developers as well. As platforms like Steam have shown, once you lower the bar to the point that all you need to get your software on the marketplace is a good idea, smaller developers get a chance to shine. You don’t need to find a publisher or pay out of pocket to have a bunch of discs pressed; just put your game or program out there and see what happens. Markus “Notch” Persson saw his hobby project Minecraft turn into one of the biggest entertainment franchises in decades, but one has to wonder whether it would ever have been released commercially if he’d first had to convince a publisher that somebody would want to play a game about digging holes.

In the days before digital distribution was practical, things were even worse. If you wanted to sell your game or program, it needed to be advertised somewhere, put on physical media, and shipped out to the customer. All of this took capital that was easily beyond many independent developers, to say nothing of single individuals.

But at the recent Vintage Computer Festival East, [Allan Bushman] showed off relics from a little-known chapter of early home computing: the Atari Program Exchange (APX). In an approach to software distribution that was wholly unique at the time, individuals were given a platform through which their software would be advertised and sold to owners of 8-bit machines such as the Atari 400/800 and later XL series computers. In the early days, when the line between computer user and computer programmer was especially blurry, the APX let anyone with the skill turn their ideas into profit. Continue reading “VCF East 2018: The Mail Order App Store”


Using TensorFlow To Recognize Your Own Objects

When the time comes to add an object recognizer to your hack, all you need to do is choose one of the many available models and retrain it for your particular objects of interest. To help with that, [Edje Electronics] has put together a step-by-step guide to using TensorFlow to retrain Google’s Inception object recognizer. He does it for Windows 10, since there’s already plenty of documentation out there for Linux OSes.

You’re not limited to just Inception, though. Inception is one of a few models which are very accurate, but it can take a few seconds to process each image, so it’s more suited to a fast laptop or desktop machine. MobileNet is an example of one which is less accurate but recognizes faster, and so is a better fit for a Raspberry Pi or mobile phone.

You’ll need a few hundred images of your objects. These can either be scraped from an online source like Google Images, or you can take your own photos. If you use the latter approach, make sure to shoot from various angles and rotations, and with different lighting conditions. Fill your background with various other things, and even have some things partially obscuring your objects. This may sound like a long, tedious task, but it can be done efficiently. [Edje Electronics] is working on recognizing playing cards, so he first sprinkled them around his living room, added some clutter, and walked around taking pictures with his phone. Once uploaded, some easy-to-use software helped him label them all in around an hour. Note that he trained on 24 different objects, which is the number of distinct cards in a pinochle deck.

You’ll need to install a lot of software and do some configuration, but he walks you through that too. Ideally you’d use a computer with a GPU, but that’s optional; the difference is roughly three hours of training versus twenty-four. Be sure to both watch his video below and follow the steps on his GitHub page. The GitHub page is kept the most up-to-date, but his video does a more thorough job of walking you through using the software, such as how to use the image labeling program.
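
If you just want a feel for the underlying transfer-learning idea before committing to the full guide, here is a heavily simplified Keras sketch. Note the assumptions: it does plain image classification with MobileNetV2 rather than the TensorFlow Object Detection API retraining of Inception that the guide actually covers, and the “cards/” directory of per-class image folders is hypothetical.

```python
# A minimal transfer-learning sketch: reuse pre-trained features and train
# only a new classification head on your own images. Not the guide's
# object-detection pipeline, which also localizes each card in the frame.
import tensorflow as tf

# Hypothetical dataset layout: cards/<class_name>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cards/", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False            # freeze the pre-trained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(24, activation="softmax"),       # 24 pinochle card classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```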

Why is he training an object recognizer on playing cards? This is just one more step toward making a blackjack-playing robot. Previously he’d done an impressive job using OpenCV, though that algorithm only handled non-overlapping cards. Google’s Inception, however, recognizes partially obscured cards. This is a very interesting project, and one we’ll be keeping an eye on. If you have any ideas for him, leave them in the comments below.

Continue reading “Using TensorFlow To Recognize Your Own Objects”

Machine Learning Crash Course From Google

We’ve been talking a lot about machine learning lately. People are using it for speech generation and recognition, computer vision, and even classifying radio signals. If you’ve yet to climb the learning curve, you might be interested in a new free class from Google using TensorFlow.

Of course, we’ve covered TensorFlow tutorials before, but this one is structured as a 15-hour class with 25 lessons and 40 exercises. It also comes straight from the horse’s mouth, so to speak. Google says the class will answer questions like the following (two of them, loss and gradient descent, get a tiny worked sketch after the list):

  • How does machine learning differ from traditional programming?
  • What is loss, and how do I measure it?
  • How does gradient descent work?
  • How do I determine whether my model is effective?
  • How do I represent my data so that a program can learn from it?
  • How do I build a deep neural network?
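
To give a taste of two of those questions, here is a tiny NumPy sketch of gradient descent on a one-feature linear model: the mean squared error is the “loss”, and each step nudges the parameters a little way down the gradient of that loss. (This toy example is ours, not part of Google’s course material.)

```python
# Toy gradient descent: fit y = w*x + b to noisy data by repeatedly stepping
# the parameters against the gradient of the mean-squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, 200)   # synthetic data: true w=3, b=0.5

w, b = 0.0, 0.0
learning_rate = 0.1
for step in range(500):
    error = (w * x + b) - y
    loss = np.mean(error ** 2)         # the "loss" we want to drive down
    grad_w = 2.0 * np.mean(error * x)  # d(loss)/dw
    grad_b = 2.0 * np.mean(error)      # d(loss)/db
    w -= learning_rate * grad_w        # step downhill along the gradient
    b -= learning_rate * grad_b

print(f"w={w:.2f}, b={b:.2f}, loss={loss:.4f}")  # should land near w=3.0, b=0.5
```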

Continue reading “Machine Learning Crash Course From Google”