The relatively inexpensive K40 laser cutter/engraver machines from China have brought laser cutting to the masses, but they are not without their faults. Sure, they’re only powerful enough for the lightest cutting tasks, but on top of that, their bundled software is inflexible and disappointing. If your workshop or hackspace has one of these machines languishing in the corner, then the release of a new piece of software, K40 Whisperer from [Scorch], is an interesting and welcome development.
He tells us that the reverse engineering process required to understand the K40’s protocol was non-trivial, given that it does not use handy decimal numbers to issue commands; a spreadsheet was used to collate data packets and spot repeating patterns to analyse the inner workings. Feature-wise, the software reads SVG and DXF files, and can split SVGs by colour. It has a halftone algorithm for rendering grey scales, and cuts from the inside of each shape first so that cut pieces don’t drop out of the material before the job is done. Currently it works with the stock M2 Nano controller board and is available as a Windows download, though it can also be compiled for Linux distributions or macOS. He is asking owners to test it with as many machines as possible to ensure compatibility with other boards.
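The post doesn’t say which halftoning method K40 Whisperer uses, but ordered (Bayer) dithering is a common way to turn grey scales into the on/off burn pattern a laser needs. A minimal sketch in Python with NumPy (our own illustration, not [Scorch]’s code):

```python
import numpy as np

# Classic 4x4 Bayer matrix, normalised to thresholds strictly inside (0, 1)
BAYER = (np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) + 0.5) / 16.0

def halftone(gray):
    """Map a greyscale image (0 = white, 1 = black) to a burn/no-burn mask.

    Each pixel fires the laser (True) only where its darkness exceeds the
    tiled threshold, so darker regions produce denser dots.
    """
    h, w = gray.shape
    tiled = np.tile(BAYER, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return gray > tiled

# Example: an 8x8 ramp from white to black
mask = halftone(np.linspace(0.0, 1.0, 64).reshape(8, 8))
```

Because the threshold pattern is fixed, the same image always engraves the same way, which matters when you re-run a job on the same workpiece.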
He has posted a video of K40 Whisperer in action, which you can see below the break.
Continue reading “Take Control Of Your Cheap Laser Cutter”
Silent film star [Lon Chaney] had the nickname “man of a thousand faces.” The Try It Online website (tio.run) might well be the site of a hundred languages. Well, in all fairness, they only have 97 “practical” languages, though they also have 172 “recreational” languages; “the site of 269 languages” just doesn’t trip off the tongue, does it? The site lets you run some code in each of those languages from inside your browser.
By the site’s definition, practical languages include things like C, Java, Python, and Perl. There’s also old-school stuff like FOCAL-69, Fortran, Algol, and APL. There are several flavors of assembly and plenty of other choices. On the recreational side, you can find Numberwang, LOLCODE, and quite a few we’ve never heard of.
Continue reading “The Site of a Hundred Languages”
The documentation is a bit sparse but readable. You simply define the function you want to execute and the dimensions of the problem. You can specify one, two, or three dimensions, as suits your problem space. When you execute the associated function it will try to run the kernels on your GPU in parallel. If it can’t, it will still get the right answer, just slowly.
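The library isn’t named here, but the pattern it describes — a kernel function applied at every point of a 1-, 2-, or 3-D index space, with a slow CPU fallback when no GPU is available — can be sketched in plain Python. All names below are illustrative, not the library’s actual API:

```python
import itertools

def run_kernel(kernel, dims, out):
    """Apply `kernel` at every index of a 1-, 2-, or 3-D problem space.

    A GPU implementation would launch these invocations in parallel;
    this CPU fallback simply loops, producing the same answer, just slowly.
    """
    for idx in itertools.product(*(range(d) for d in dims)):
        out[idx] = kernel(*idx)

# Example: a 2-D "kernel" that fills a grid with coordinate products
grid = {}
run_kernel(lambda x, y: x * y, (3, 4), grid)
```

The key property is that the kernel only depends on its own index, which is what makes every invocation safe to run in parallel.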
Every self-respecting hacker has an automation hack somewhere in their bag of tricks. Plenty of modern technologies facilitate this sort of thing: GPS, scripting apps, and even IFTTT. In an interesting hack, [Nick Lee] has combined iBeacons with a reverse-engineered Starbucks API to create an automated morning routine.
By creating a mobile app that scans for iBeacons, [Nick Lee] was able to reduce the effort made every morning while heading to his office. When the app encounters a relevant beacon, a NodeJS app sitting in the cloud is triggered. This consequently leads to desired actions like ordering an Uber ride and placing an order for an iced latte.
[Nick Lee] shares the code for the Starbucks application on GitHub for anyone who wants to order their favorite cup of joe automatically. This project could easily be expanded to work with GPS or even RFID tags, and if you feel like adding IoT to a coffee machine, you could automate all of your beverage requirements in one go.
Machine is an IDE for building machine learning systems using TensorFlow. You can sign up for the alpha, but first, have a look at the video below to see what it is all about.
You’ll see in the video that you can import data for a model and then do training (in this case, finding a mustache in an image). The IDE invites an iterative approach to development, since you can alter parameters, run experiments, and see the results.
The IDE syncs with “the cloud” so you can work on it from multiple computers and roll back to previous results easily. We don’t know when the IDE will leave alpha status (or beta, for that matter), but the team’s goal is to release a free version of Machine to encourage widespread adoption.
If you want to learn more about TensorFlow, you are in the right place. We’ve also covered a bare-bones project if you’d rather get started that way. You can also find some good background material going all the way back to the early perceptron-based neural networks.
[153Armstrong] did a short post on how easy it is to generate waveforms using Python. We agree it is simple, but actually, it isn’t so much Python per se as some pretty cool libraries (SciPy, in particular) that do all the hard work. That may be splitting hairs, but it is worth noting that SciPy (pronounced “Sigh Pie”) also does other handy tricks like Fourier transforms. You can see a video of his results below.
The code is simple, and one of the commenters pointed out an even more efficient way to write the data to a WAV file. The basic idea is to create an array of samples in a buffer using NumPy, the numerical array library that underpins SciPy.
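As a rough sketch of the approach (the specific frequency, rate, and amplitude here are our own choices, not [153Armstrong]’s): build the sample array with NumPy, then hand it to scipy.io.wavfile.write:

```python
import numpy as np
from scipy.io import wavfile

rate = 44100        # samples per second
duration = 1.0      # seconds
freq = 440.0        # A4, in Hz

# One sample per time step, scaled into the 16-bit signed range
t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)
samples = (0.5 * np.sin(2.0 * np.pi * freq * t) * 32767).astype(np.int16)

wavfile.write("tone.wav", rate, samples)
```

Swapping the `np.sin` call for SciPy’s `scipy.signal.square` or `scipy.signal.sawtooth` gives other waveform shapes with the same structure.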
Continue reading “Simple Wave Generation in Python (and SciPy)”
What if every time you learned something new, you forgot a little of what you knew before? That sort of overwriting doesn’t happen in the human brain, but it does in artificial neural networks. It’s appropriately called catastrophic forgetting. So why are neural networks so successful despite this? How does this affect the future of things like self-driving cars? Just what limit does this put on what neural networks will be able to do, and what’s being done about it?
Numerical weights in an artificial neural network
Neurons in the brain
The way a neural network stores knowledge is by setting the values of weights (the lines in between the neurons in the diagram). That’s what those lines literally are, just numbers assigned to pairs of neurons. They’re analogous to the axons in our brain, the long tendrils that reach out from one neuron to the dendrites of another neuron, where they meet at microscopic gaps called synapses. The value of the weight between two artificial neurons is roughly like the number of axons between biological neurons in the brain.
To understand the problem, and the solutions below, you need to know a little more detail.
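A toy illustration of the effect (entirely our own construction, not from the article): train a single weight on task A, then reuse it for task B, and watch the task-A error come back:

```python
import numpy as np

x = np.array([0.5, 1.0, 1.5])   # shared inputs for both tasks

def train(w, y, lr=0.1, steps=200):
    """Plain gradient descent on mean squared error for y ~ w * x."""
    for _ in range(steps):
        w -= lr * 2.0 * np.mean((w * x - y) * x)
    return w

def error(w, y):
    return np.mean((w * x - y) ** 2)

y_a = 2.0 * x      # task A: the network should learn w = 2
y_b = -3.0 * x     # task B: the network should learn w = -3

w = train(0.0, y_a)            # after task A, w is close to 2
err_a_before = error(w, y_a)   # essentially zero

w = train(w, y_b)              # task B reuses the same weight...
err_a_after = error(w, y_a)    # ...so task A's knowledge is overwritten
```

With only one weight the overwriting is total; real networks have many weights, but training on a new task still pulls shared weights away from the values the old task depended on.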
Continue reading “Catastrophic Forgetting: Learning’s Effect on Machine Minds”