After seeing how Google’s Duplex AI was able to book a table at a restaurant by fooling a human maître d’ into thinking it was human, I wondered if it might be possible for us mere hackers to pull off the same feat. What could you or I do without Google’s legions of ace AI programmers and racks of neural network training hardware? Let’s look at the ways we can make a natural language bot of our own. As you’ll see, it’s entirely doable.
Based on [Ben Jojo’s] title — x86 Assembly Doesn’t have to be Scary — we assume that normal programmers fear assembly. Most hackers don’t mind it, but we also don’t often have an excuse to program assembly for desktop computers.
In fact, the post is really well suited for the typical hacker because it focuses on the real mode of an x86 processor just after it boots. What makes this tutorial a little more interesting than the usual lecture is that it has interactive areas, where a VM runs your code in the browser after assembling it with NASM.
KiCad, the open source EDA software, is popular with Hackaday readers and the hardware community as a whole. But it is not immune to the most common bane of EDA tools: managing your library of symbols and footprints, and finding new ones for components you’re using in your latest design, is rarely a pleasant experience. Swooping in to help alleviate your pain, [twitchyliquid64] has created KiCad Database (KCDB), a beautifully simple web-app for searching component footprints.
The database lets you easily search by footprint name with optional parameters like number of pins. Of course it can also search by tag for a bit of flexibility (searching Neopixel returned the footprint shown above). There’s also an indicator for KiCad-official parts, which is a nice touch. One of our favourite features is the part viewer, which renders the footprint in your browser, making it easy to instantly see if the part is suitable. AngularJS and Material Design are at work here, and the main app is written in Go — very trendy.
The database is kindly hosted publicly by [twitchyliquid64] but can easily be run locally on your machine, where you can add your own libraries. It takes only one command to add a GitHub repo as a component source, which then gets regularly “ingested”. It’s great how easy it is to add a neat library of footprints you found once, then forget about them, safe in the knowledge that they can easily be found in the future in the same place as everything else.
If you can’t find the schematic symbols for the part you’re using, we recently covered a service which uses OCR and computer vision to automatically generate symbols from a datasheet; pretty cool stuff.
As the cost of high-resolution image sensors gets lower, and the availability of small and cheap single board computers skyrockets, we are starting to see more astrophotography projects than ever before. When you can put a $5 Raspberry Pi Zero and a decent webcam outside in a box to take autonomous pictures of the sky all night, why not give it a shot? But in doing so, many hackers are recognizing a fact well-known to traditional telescope jockeys: seeing a few stars is easy, seeing a lot of stars is another story entirely.
The problem is that stars are fairly dim, a problem compounded by the light pollution you get unless you’re out in a rural area. You can’t just brighten up the images either, as that only increases the noise in the image. A programmer always in search of a challenge, [Benedikt Bitterli] decided to take a shot at using software to improve astrophotography images. He documented the entire process, failures and all, on his blog for anyone else who might be curious about what it really takes to create the incredible images of the night sky we see in textbooks.
In principle it’s simple: just take a lot of pictures of the sky, stack them on top of each other, and identify which points of light are stars and which ones are noise artifacts. But of course the execution is considerably more difficult. For one thing, unless the camera was on a mount that was automatically tracking the sky, the stars will have moved slightly between images. To help with this process, [Benedikt] used a navigational trick that humanity has relied on for millennia: mapping constellations. By comparing groupings of stars in each image, his software is able to accurately overlay each image.
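The stacking step itself can be sketched in a few lines. The sketch below is not [Benedikt]’s actual code — it simulates frames that are already aligned (the hard constellation-matching part is skipped) with a single hypothetical star at a fixed pixel. Averaging many frames boosts signal-to-noise because the star’s brightness adds coherently while the random sensor noise averages toward its mean:

```python
import random

random.seed(42)

WIDTH, HEIGHT = 32, 32
N_FRAMES = 50
STAR = (10, 20)  # hypothetical star position, identical in every frame

def capture_frame():
    """Simulate one already-aligned exposure: uniform sensor noise
    plus a single bright star."""
    frame = [[random.uniform(0.0, 0.2) for _ in range(WIDTH)]
             for _ in range(HEIGHT)]
    y, x = STAR
    frame[y][x] += 1.0
    return frame

# Stack by averaging: the star's signal stays put while the
# random noise flattens out toward its mean value.
stacked = [[0.0] * WIDTH for _ in range(HEIGHT)]
for _ in range(N_FRAMES):
    frame = capture_frame()
    for y in range(HEIGHT):
        for x in range(WIDTH):
            stacked[y][x] += frame[y][x] / N_FRAMES

star_value = stacked[STAR[0]][STAR[1]]
background = stacked[0][0]
print(f"star: {star_value:.2f}, background: {background:.2f}")
```

After 50 frames the star pixel sits well above the noise floor; with a single frame the two would be much harder to tell apart, which is exactly why stacking matters.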
But that’s only one part of the equation. In his post, [Benedikt] goes over the incredible amount of math that goes into identifying individual stars in the sea of noise you get when a digital image sensor looks into the black. You certainly don’t need to understand all the math to appreciate the final results, but it’s a fascinating read for those with an interest in computer vision concepts.
[Thanks to Helio Machado for the tip.]
Despite the general public’s hijacking of the word “hacker,” we don’t advocate doing disruptive things. However, studying code exploits can often be useful both as an academic exercise and to understand what kind of things your systems might experience in the wild. [Code Explainer] takes apart a compiler bomb in a recent blog post.
If you haven’t heard of a compiler bomb, perhaps you’ve heard of a zip bomb. This is a small zip file that “explodes” into a very large file. A compiler bomb is a small piece of C code that will blow up a compiler — in this case, specifically, gcc. [Code Explainer] didn’t create the bomb though, that credit goes to [Digital Trauma].
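To get a feel for the asymmetry a zip bomb exploits (this is a generic illustration, not [Digital Trauma]’s gcc bomb), you can compress a highly repetitive payload and compare sizes. Deflate handles runs of identical bytes extremely well, so megabytes of zeros collapse to a few kilobytes — a real zip bomb just scales this idea up, often with nested archives:

```python
import io
import zipfile

# 10 MB of zeros: maximally repetitive, so it compresses to almost
# nothing. A zip bomb pushes this same asymmetry to absurd extremes.
payload = b"\x00" * (10 * 1024 * 1024)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("zeros.bin", payload)

compressed_size = len(buf.getvalue())
ratio = len(payload) / compressed_size
print(f"{len(payload)} bytes -> {compressed_size} bytes "
      f"(ratio roughly {ratio:.0f}:1)")
```

The compiler bomb works on the same principle in reverse: a tiny input that forces the tool to materialize an enormous output.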
Machines – is there anything they can’t learn? 20 years ago, the answer to that question would be very different. However, with modern processing power and deep learning tools, it seems that computers are getting quite nifty in the brainpower department. In that vein, a research group attempted to use machine learning tools to predict stock market performance, based on publicly available earnings documents.
The team used the Azure Machine Learning Workbench to build their model, one of many tools now out in the marketplace for such work. To train their model, earnings releases were combined with stock price data before and after the announcements were made. Natural language processing was used to interpret the earnings releases, with steps taken to purify the input by removing stop words, punctuation, and other ephemera. The model then attempted to find a relationship between the language content of the releases and the following impact on the stock price.
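The text-cleanup step described above can be sketched with nothing but the standard library. This is not the team’s Azure pipeline — the stop-word list here is a tiny hypothetical one, and real systems pull much larger lists from NLP toolkits — but it shows the shape of the preprocessing: lowercase, strip punctuation, and drop filler words before any modeling happens:

```python
import re
from collections import Counter

# Tiny, hypothetical stop-word list; production pipelines use far
# larger ones sourced from NLP libraries.
STOP_WORDS = {"the", "a", "an", "of", "to", "in", "and", "for", "we", "our"}

def preprocess(text: str) -> list[str]:
    """Lowercase, keep only word characters, drop stop words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

# A made-up snippet in the style of an earnings release.
release = ("We are pleased to report record revenue growth "
           "and strong demand for our products in the quarter.")
tokens = preprocess(release)
print(Counter(tokens).most_common(3))
```

What survives is the content-bearing vocabulary (“revenue”, “growth”, “demand”), which is what the model actually gets to learn from.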
Particularly interesting were the vocabulary issues the team faced throughout the development process. In many industries there is a significant amount of jargon – that is, vocabulary that is highly specific to the topic in question. The team decided to work around this by comparing stocks on an industry-by-industry basis. There’s little reason to be looking at phrases like “blood pressure medication” and “kidney stones” when you’re comparing stocks in the defence electronics industry, after all.
With a model built, the team put it to the test. Stocks were sorted into three bins — low performing, middle performing, and high performing. Their most successful result was a 62% success rate at identifying low-performing stocks, well above the roughly 33% expected by chance with three equal bins. This suggests that there’s plenty of scope for further improvement in this area. As with anything in the stock market space, expect development in this area to continue at a furious pace.
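The binning and the chance baseline are worth making concrete. The sketch below uses made-up post-announcement returns (not the team’s data) to sort stocks into terciles and show why always guessing one bin only succeeds about a third of the time — the baseline that the 62% figure is measured against:

```python
import random

random.seed(7)

# Hypothetical post-announcement returns for 300 stocks.
returns = [random.gauss(0.0, 0.05) for _ in range(300)]

# Rank the stocks and split into three equal-sized bins.
ranked = sorted(range(len(returns)), key=lambda i: returns[i])
third = len(ranked) // 3
bins = {}
for rank, idx in enumerate(ranked):
    if rank < third:
        bins[idx] = "low"
    elif rank >= 2 * third:
        bins[idx] = "high"
    else:
        bins[idx] = "middle"

# Always guessing "low" is right for exactly one third of the stocks:
# that's the chance baseline for a three-bin classifier.
baseline = sum(1 for label in bins.values() if label == "low") / len(bins)
print(f"chance baseline: {baseline:.0%}")
```

Any classifier scoring meaningfully above that baseline on held-out data is extracting real signal, which is what makes the 62% result notable.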
We’ve seen machine learning do great things before, too – even creative tasks, like naming tomatoes.
Today we take the concept of a centralized software repository for granted. Whether it’s apt or the App Store, pretty much every device we use today has a way to pull in applications without the user having to manually search for them in the wilds of the Internet. Not only is this more convenient for the end user, but at least in theory it’s more secure, since you won’t be pulling binaries off of some random website.
But centralized software distribution doesn’t just benefit the user, it can help developers as well. As platforms like Steam have shown, once you lower the bar to the point that all you need to get your software on the marketplace is a good idea, smaller developers get a chance to shine. You don’t need to find a publisher or pay out of pocket to have a bunch of discs pressed, just put your game or program out there and see what happens. Markus “Notch” Persson saw his hobby project Minecraft turn into one of the biggest entertainment franchises in decades, but one has to wonder if it would have ever gotten released commercially if he first had to convince a publisher that somebody would want to play a game about digging holes.
In the days before digital distribution was practical, things were even worse. If you wanted to sell your game or program, it needed to be advertised somewhere, needed to be put on physical media, and it needed to get shipped out to the customer. All this took capital that would easily be beyond many independent developers, to say nothing of single individuals.
But at the recent Vintage Computer Festival East, [Allan Bushman] showed off relics from a little known chapter of early home computing: the Atari Program Exchange (APX). In a wholly unique approach to software distribution at the time, individuals were given a platform by which their software would be advertised and sold to owners of 8-bit machines such as the Atari 400/800 and later XL series computers. In the early days, when the line between computer user and computer programmer was especially blurry, the APX let anyone with the skill turn their ideas into profit.