All About Eve

Most programming languages today look fairly similar. There are small differences, of course (Python uses whitespace for block structure, and Ruby and Perl have some odd-looking constructs). In the 1960s and 1970s, though, a lot of programming languages were pretty cryptic. Algol, APL, and LISP are great examples of unusual-looking programming languages. Even FORTRAN and PL/I were hard to read. RPG and COBOL were attempts to make programming more accessible, although you could argue that neither of them took over the world. Most programming languages today bear more similarity to FORTRAN than to either of those two languages.

A new programming language, Eve, claims to be based on years of research into programming from a human perspective instead of the computer’s. The result is a language that works by pattern matching instead of the usual flow of control. It is also designed to live inside Markdown documents that can serve as documentation. You can see a video about Eve below.

Neither of these ideas is totally new. SNOBOL, AWK, and Prolog all involve some pattern matching. [Donald Knuth] was promoting literate programming back in the 1980s. However, Eve understands modern constructs like web browsers.

Continue reading “All About Eve”

Machine Learning: Foundations

When you want a person to do something, you train them. When you want a computer to do something, you program it. However, there are ways to make computers learn, at least in some situations. One technique that makes this possible is the perceptron learning algorithm. A perceptron is a computer simulation of a neuron, and there are various ways to change the perceptron’s behavior based on either example data or a method to determine how good (or bad) some outcome is.

What’s a Perceptron?

I’m no biologist, but apparently a neuron has a bunch of inputs, and if the level of those inputs reaches a certain threshold, the neuron “fires,” which means it stimulates the input of another neuron further down the line. Not all inputs are created equal: in the mathematical model, each input has its own weight. Input A might be on a hair trigger, while it might take inputs B and C firing together to wake up the neuron in question.
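To make the idea concrete, here’s a minimal Python sketch of a single perceptron with the classic learning rule, trained on the logical AND function as a toy problem. This is our own illustration rather than code from the series:

```python
# Minimal perceptron sketch: weighted inputs, a threshold "firing" rule,
# and the classic perceptron learning update. Toy problem: logical AND.
import random

def fire(weights, bias, inputs):
    """Output 1 if the weighted sum of inputs crosses the threshold, else 0."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Training examples: ((input A, input B), expected output)
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.1  # learning rate

for _ in range(100):  # plenty of passes for this tiny problem
    for inputs, target in examples:
        error = target - fire(weights, bias, inputs)
        # Nudge each weight (and the bias) in the direction that reduces the error
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print(weights, bias)
print([fire(weights, bias, inputs) for inputs, _ in examples])  # expect [0, 0, 0, 1]
```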
Continue reading “Machine Learning: Foundations”

Estimate Your English Vocabulary Using Python

We take our mother tongue for granted, a language we learn as young children without realizing the effort involved. It is only when, as adults, we try to pick up another language that we fully appreciate how much hard work goes into each acquired word.

Depending on who you listen to, estimates vary as to the size of a typical native English speaker’s vocabulary. The ballpark figures seem to put most adults under 20 thousand words, while graduates achieve somewhere around 23 thousand words. It’s a subject [Alex Eames] became interested in after reading a BBC article on it, and he decided to write his own software to produce a personal estimate.

His Python script takes the Scrabble word list and presents the user with a series of words, asking for each one whether they know it. After a hundred words have been presented, it calculates an estimate of the size of the user’s vocabulary. [Alex] wrote it on and for the Raspberry Pi, but it should work quite happily on any platform with Python 3. It certainly had no problem with our Ubuntu-based PC.
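To give a flavor of how such an estimate can be made, here’s a rough Python 3 sketch of the sampling approach: draw 100 random words from a word list, ask about each one, and scale the fraction you know up to the size of the whole list. This is our own illustration, not [Alex]’s script, and the word-list filename is just a placeholder:

```python
# Rough vocabulary estimate by random sampling from a word list.
# Assumes a plain-text list (e.g. the Scrabble word list) at "wordlist.txt",
# one word per line. Filename and sample size are placeholders.
import random

with open("wordlist.txt") as f:
    words = [line.strip() for line in f if line.strip()]

sample = random.sample(words, 100)
known = 0
for word in sample:
    answer = input(f"Do you know '{word}'? [y/n] ")
    if answer.strip().lower().startswith("y"):
        known += 1

# If you know k of the 100 sampled words, the naive estimate of how many
# words you know from this list is (k / 100) * len(words).
estimate = known / len(sample) * len(words)
print(f"Estimated vocabulary: roughly {estimate:.0f} words")
```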

There is plenty of opportunity for bragging over the size of one’s vocabulary with a script like this one, but it’s something of a statistical leveler: if you are truthful in your responses, it will almost certainly put you about where you would expect for your age or level of education. If you want to know the result this script returned for a Hackaday scribe, for example, the answer is 23,554.

This subject is a slight departure into software from our usual hardware subject matter, but it’s one of those tests that becomes rather a consuming interest when performed competitively among a group of friends. How well will you fare?

Via [Recantha]

Scanning Parts Into KiCad

You do not know how to make a PCB unless you can make your own parts. [Jan] knows this, but like everyone else he checked out the usual online sources for a footprint for an SD card socket before making his own. It turns out this SD card socket, bought from an online marketplace, was completely undocumented. Not only was an Eagle or KiCad footprint unavailable, but CAD files showing the dimensions of the part were non-existent. A solution had to be devised.

Instead of taking calipers and finely measuring all the pads on this SD card socket – a process that would surely fail – [Jan] decided to use a flatbed scanner to trace out the part. The part was placed on the glass and scanned at 300 dpi with a convenient reference object (a public transport card) in the same picture. This picture was imported into a CAD package, scaled to the correct ratio, and exported as a DXF. Since KiCad readily imports DXF files, the drawing was easily brought in, traced over, and a new part created.
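As a sanity check on that scaling step, a few lines of Python will turn the reference card’s known width into a millimetres-per-pixel figure and compare it with what a 300 dpi scan should nominally give. This is our own back-of-the-envelope helper, not part of [Jan]’s workflow, and the pixel measurements below are hypothetical:

```python
# Derive the scan's true scale from a reference object of known size.
# Most transport and bank cards follow ISO/IEC 7810 ID-1: 85.60 mm wide.
CARD_WIDTH_MM = 85.60
card_width_px = 1012      # hypothetical: card width measured in the scanned image
scan_dpi = 300            # resolution the scan was made at

mm_per_pixel = CARD_WIDTH_MM / card_width_px
nominal_mm_per_pixel = 25.4 / scan_dpi  # what 300 dpi should give in theory

print(f"Measured scale: {mm_per_pixel:.4f} mm/px")
print(f"Nominal scale:  {nominal_mm_per_pixel:.4f} mm/px")

# Any feature measured in pixels on the scan then converts to millimetres:
pad_pitch_px = 30         # hypothetical pixel measurement of the pad pitch
print(f"Pad pitch: {pad_pitch_px * mm_per_pixel:.2f} mm")
```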

From start to finish, making the footprint for this no-name, off-brand SD card socket took fifteen minutes. That’s nothing compared to the time it would take to manually measure each of the pads, draw a footprint, and print it out at 1:1 scale several times to check that everything matched up. It’s awesome work, and a great reminder that the best tools are usually right in front of you.

Hallucinating Machines Generate Tiny Video Clips

Hallucination is the erroneous perception of something that’s actually absent – or, in other words, a possible interpretation of training data. Researchers from MIT and UMBC have developed and trained a generative machine-learning model that learns to generate tiny videos at random. The hallucination-like, 64×64 pixel clips are somewhat plausible, but also a bit spooky.

The machine-learning model behind these artificial clips is capable of learning from unlabeled “in-the-wild” training videos and relies mostly on the temporal coherence of subsequent frames as well as the presence of a static background. It learns to disentangle foreground objects from the background and extracts the overall dynamics from the scenes. The trained model can then be used to generate new clips at random (as shown above), or from a static input image (as shown in pairs below).
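Conceptually, that disentangling boils down to compositing a generated foreground video over a single static background frame with a per-pixel, per-frame mask. Here’s a tiny NumPy sketch of just that compositing step, with random arrays standing in for what the generator network would actually produce; it is our own illustration, not the team’s Torch7 code:

```python
# Two-stream compositing sketch: mask * foreground + (1 - mask) * static background.
# Random arrays stand in for the generator's outputs; shapes follow the
# 32-frame, 64x64 clips described in the article.
import numpy as np

frames, height, width, channels = 32, 64, 64, 3

foreground = np.random.rand(frames, height, width, channels)  # moving content
background = np.random.rand(1, height, width, channels)       # one static frame
mask = np.random.rand(frames, height, width, 1)                # 0 = background, 1 = foreground

# Each frame blends foreground over the shared background according to the mask.
video = mask * foreground + (1.0 - mask) * background
print(video.shape)  # (32, 64, 64, 3)
```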

Currently, the team limits the clips to a resolution of 64×64 pixels and 32 frames in duration in order to decrease the amount of required training data, which is still at 7 TB. Despite obvious deficiencies in terms of photorealism, the little clips have been judged “more realistic” than real clips by about 20 percent of the participants in a psychophysical study the team conducted. The code for the project (Torch7/LuaJIT) can already be found on GitHub, together with a pre-trained model. The project will also be shown in December at the 2016 NIPS conference.

Grand Theft Auto V Used To Teach Self-Driving AI

For all the complexity involved in driving, it becomes second nature to respond to pedestrians, environmental conditions, even the basic rules of the road. When it comes to AI, teaching machine learning algorithms how to drive in a virtual world makes sense when the real one is packed full of squishy humans and other potential catastrophes. So, why not use the wildly successful virtual world of Grand Theft Auto V to teach machine learning programs to operate a vehicle?

The hard problem with this approach is getting a large enough sample for the machine learning to be viable. The idea is this: the virtual world provides a far more efficient solution to supplying enough data to these programs compared to the time-consuming task of annotating object data from real-world images. In addition to scaling up the amount of data, researchers can manipulate weather, traffic, pedestrians and more to create complex conditions with which to train AI.

It’s pretty easy to teach the “rules of the road” — we do it with 16-year-olds all the time. But those earliest drivers have already spent a lifetime observing the real world and watching parents drive. The virtual world inside GTA V is fantastically realistic. Humans are great pattern recognizers, and fickle gamers would cry foul at anything that doesn’t mirror real life. What we’re left with is a near-perfect source of test cases for machine learning to be applied to the hard part of self-driving: understanding the vastly variable world every vehicle encounters.

A team of researchers from Intel Labs and Darmstadt University in Germany created a program that automatically indexes the virtual world (as seen above), creating useful data for a machine learning program to consume. This isn’t a complete substitute for real-world experience, mind you, but the freedom to make a few mistakes before putting an AI behind the wheel of a vehicle has the potential to speed up the development of autonomous vehicles. Read the paper the team published, Playing for Data: Ground Truth from Video Games.

Continue reading “Grand Theft Auto V Used To Teach Self-Driving AI”

Commanding Kerbals With A Physical Interface

Kerbal Space Program will have you hurling little green men into the wastes of outer space, landing expended boosters back on the launchpad, and using resources on the fourth planet from the Sun to bring a crew back home. Kerbal is the greatest space simulator ever created; it teaches orbital mechanics better than the Air Force textbook, but it is missing one thing: switches and blinky LEDs.

[SgtNoodle] felt this severe oversight by the creators of Kerbal could be remedied by building his Kerbal Control Panel, which adds physical buttons, switches, and a real 6-axis joystick for roleplaying as an Apollo astronaut.

The star of this build is the custom six-axis joystick, used for translation control when docking, maneuvering, or simply puttering around in space. Four-axis joysticks are easy, but to move forward and backward, [SgtNoodle] replaced the shaft of a normal arcade joystick with a carriage bolt, added a washer on one end, and used two limit switches to give this MDF cockpit Z+ and Z- control.

The rest of the build is equally well detailed, with a CNC’d front panel, toggle switches, and missile-switch covers, everything connected to an Arduino Mega. The Arduino interfaces the switches to the game through the kRPC mod, which exposes a script-driven interface to KSP. So, toggling the landing gear switch, for instance, triggers a script which tells KSP to lower your landing gear prior to a nice, safe landing. Or, more likely, a terrifying crash.
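To give an idea of what that glue can look like, here’s a minimal Python sketch using kRPC’s Python client to mirror a switch message from the Arduino onto the active vessel’s landing gear. This is our own illustration, not [SgtNoodle]’s code; the serial port name and the one-character protocol are assumptions:

```python
# Mirror a physical gear switch onto KSP's landing gear via kRPC.
# The serial port and the "G"/"g" protocol are made up for this example.
import serial   # pyserial
import krpc

conn = krpc.connect(name="Control Panel")
vessel = conn.space_center.active_vessel
panel = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

while True:
    event = panel.read(1)
    if event == b"G":        # gear switch flipped up
        vessel.control.gear = True
    elif event == b"g":      # gear switch flipped down
        vessel.control.gear = False
```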