Solving Mazes with Graphics Cards

What if we told you that you probably have more computers than you think? And we’re not talking about things that are computers without looking like one, like most modern cars or certain lightbulbs. We’re talking about the powerful machine hiding in your desktop computer called a ‘graphics card’. In the ordinary gaming rig, it’s common for the graphics card to be far more powerful than the machine it’s built into. In his tutorial, [Viktor Chlumský] demonstrates how to harness your GPU’s power to solve a maze.

Software that runs on a GPU is called a shader. In this example, a shader is shown that finds its way through a maze. We also get to catch a glimpse of the limitations that make this field of software special: [Viktor]’s solution has to work with only four variables, because all information is stored in the red, green, blue and alpha channels of an image. The alpha channel represents the boundaries of the maze. The red and green channels are used to broadcast waves from the beginning and end points of the maze. Where those two waves meet marks the shortest solution, which is captured in the blue channel.

Despite having tons of cores and plenty of memory, programming shaders feels a lot like working on microcontrollers. See for yourself in the maze-solving walkthrough below.

Continue reading “Solving Mazes with Graphics Cards”

Educational Robot for Under $100

While schools have been using robots to educate students in the art of science and engineering for decades now, not every school or teacher can afford to put one of these robots in the hands of their students. For that reason, it’s important to not only improve the robots themselves, but to help drive the costs down to make them more accessible. The CodiBot does this well, and comes in with a price tag well under $100.

The robot itself comes pre-assembled, and while it might seem like students would miss out on actually building the robot, its primary goal is to teach coding skills. Some things do need to be connected, though, such as the Arduino and other wires, but from there it’s easy to program the robot to do any number of tasks such as obstacle avoidance and maze navigation. The robot can be programmed using drag-and-drop block programming (similar to Scratch) but can also be programmed the same way any other Arduino can be.

With such a high feature count and low price tag, this might be the key to getting more students exposed to programming in a more exciting and accessible way than is currently available. Of course, if you have a little bit more cash lying around your school, there are some other options available to you as well.

Pogo Pin Serial Adapter Thing

A few weeks ago, I was working on a small project of mine, and I faced a rather large problem: I had to program nearly five hundred badges in a week. I needed a small programming adapter that would allow me to stab a few pads on a badge with six pogo pins, press a button, and move on to the next badge.

While not true for all things in life, sometimes you need to trade quality for expediency. This is how I built a terrible but completely functional USB to serial adapter to program hundreds of badges in just a few hours.

Continue reading “Pogo Pin Serial Adapter Thing”

Using Modern C++ Techniques with Arduino

C++ has been quickly modernizing itself over the last few years. Starting with the introduction of C++11, the language has made a huge step forward and things have changed under the hood. To the average Arduino user, some of this is irrelevant, maybe most of it, but the language still gives us some nice features that we can take advantage of as we program our microcontrollers.

Modern C++ allows us to write cleaner, more concise code, and make the code we write more reusable. The following are some techniques using new features of C++ that don’t add memory overhead, reduce speed, or increase size because they’re all handled by the compiler. Using these features of the language you no longer have to worry about specifying a 16-bit variable, calling the wrong function with NULL, or peppering your constructors with initializations. The old ways are still available and you can still use them, but at the very least, after reading this you’ll be more aware of the newer features as we start to see them roll out in Arduino code.
Continue reading “Using Modern C++ Techniques with Arduino”

Program Your Brain, Hack Your Way to Productivity

Most people wish they were more productive. Some buckle down and leverage some rare facet of their personality to force the work out. Some of them talk with friends. Some go on vision quests. There are lots of methods for lots of types of people. Most hackers, I’ve noticed, look for a datasheet. An engineer’s reference. We want to solve the problem like we solve technical problems.

It’s got the cover equivalent of click-bait, but the centimeter thick bibliography listing research sources at the back won me over.

There were three books that gave me the first hints at how to look objectively at my brain and start to hack on it a little. These were The Power of Habit by Charles Duhigg, Flow by Mihaly Csikszentmihalyi, and Getting Things Done by David Allen.

I sort of wandered into these books along a haphazard path. The first I encountered was The Power of Habit, which I found to be a bit of a revelation. It presented the idea of habits as functions in the great computer program that makes up a person. The brain sees that you’re doing a task over and over again and just learns to do it, and it keeps optimizing that program over time. All a person needs to do is trigger the habit loop and it will run.

For example: Typing. At first you either take a course or, if your parents left you alone with a computer for hours on end, hunt-and-peck your way to a decent typing speed. It involves a lot of looking down at the keyboard. Eventually you notice that you don’t actually need to look at the keyboard at all. Depending on your stage you may still be “t-h-i-n-k-i-n-g”, mentally placing each letter as you type. However, eventually your brain begins to abstract this away until it has stored, somewhere, a combination of hand movements for every single word or key combination you typically use. It’s only when you have to spell a new word that you fall back on older programs.

Continue reading “Program Your Brain, Hack Your Way to Productivity”

Taking a U2F Hardware Key from Design to Production

Building a circuit from prototyping to printed circuit board assembly is within the reach of pretty much anyone with the will to get the job done. If that circuit turns out to be something that everyone else wants, though, the job suddenly gets much more complex. This is what happened to [Conor], who started with an idea to create two-factor authentication tokens and ended up manufacturing and selling them on Amazon. He documented his trials and tribulations along the way, and it’s both an interesting and perhaps cautionary tale.

[Conor]’s tokens themselves are interesting in their simplicity: they use an Atmel ATECC508A specifically designed for P-256 signatures and keys, and the cheapest USB-enabled microcontroller he could find, a Silicon Labs EFM8UB1. His original idea was to solder all of the tokens over the course of one night, which was of course overly optimistic. Instead, he had the tokens fabricated and assembled before being shipped to him for programming.

Normally the programming step would be straightforward, but using identical software on every token would compromise their security. He wrote a script that works with the Atmel chip to create a unique attestation certificate for each one. He was able to cut a significant amount of time off of the programming step by using the computed values with a programming jig he built to flash three units concurrently. This follows the same testing and programming path that [Bob Baddeley] advocated for in his Tools of the Trade series.

From there [Conor] just needed to get set up with Amazon. This was a process worthy of its own novel, with Amazon requiring an interesting amount of paperwork from [Conor] before he was able to proceed. Then there was an issue of an import tariff, but all-in-all everything seems to have gone pretty smoothly.

Creating a product from scratch like this can be an involved process. In this case it sounds like [Conor] extracted value from having gone through the entire process himself. But he also talks about a best-case-scenario margin of about 43%. That’s a tough bottom line, but a good lesson for anyone looking at building low-cost electronics.

Gawking Hex Files

Last time I talked about how to use AWK (or, more probably, the GNU version known as GAWK) to process text files. You might be thinking: why do I care? Hardware hackers don’t need text files, right? Maybe they do. I want to talk about a few common cases where AWK can process things that are more up the hardware hacker’s alley.

The Simple: Data Logs

If you look around, a lot of data loggers and test instruments do produce text files. If you have a text file from your scope or a program like SIGROK, it is simple to slice and dice it with AWK. Your machines might not always put out nicely formatted text files. That’s what AWK is for.

AWK makes the default assumption that fields break on whitespace and end with line feeds. However, you can change those assumptions in lots of ways. You can set FS and RS to change the field separator and record separator, respectively. Usually, you’ll set this in the BEGIN action although you can also change it on the command line.

For example, suppose your test file uses semicolons between fields. No problem. Just set FS to “;” and you are ready to go. Setting FS to a newline will treat the entire line as a single field. Instead of delimited fields, you might also run into fixed-width fields. For a file like that, you can set FIELDWIDTHS.
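
For instance, here are two minimal sketches, one for each case (the separator and the column widths are made up for illustration). For semicolon-separated fields:

BEGIN { FS = ";" }     # fields are separated by semicolons
      { print $1, $3 } # print the first and third field of each record

And for fixed-width columns:

BEGIN { FIELDWIDTHS = "10 6 8" }  # three columns: 10, 6, and 8 characters wide
      { print $2 }                # print the middle column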

If the records aren’t delimited but are a fixed length, things are a bit trickier. Some people use the Linux utility dd to break the file apart into lines by the number of bytes in each record. You can also set RS to a regular expression that matches a fixed number of any characters and then use the RT variable (see below) to find out what those characters were. There are other options, and even ways to read multiple lines. The GAWK manual is your friend if you have these cases.

BEGIN {
  RS = ".{10}"   # records are 10 characters
}

{
  $0 = RT        # RT holds the text that actually matched RS
}

{
  print $0       # do what you want here
}

Once you have records and fields sorted, it is easy to do things like average values, detect values that are out of limit, and just about anything else you can think of.
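
For instance, a minimal sketch, assuming the reading of interest is in the second field and that anything outside 0 to 100 counts as out of limit (both assumptions are purely for illustration):

$2 < 0 || $2 > 100 { print "Out of limit at line " NR ": " $0 }
                   { sum += $2; count++ }
END                { if (count) printf("Average: %.2f over %d samples\n", sum / count, count) }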

Spreadsheet Data Logs

Some tools output spreadsheets. AWK isn’t great at handling spreadsheets directly. However, a spreadsheet can be saved as a CSV file, and AWK can chew those up easily. CSV is also an easy format to produce from an AWK script and then read into a spreadsheet, where you can easily make nice graphs if you don’t want to use GNUPlot.
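
Producing CSV is mostly a matter of setting the output field separator; a minimal sketch that turns whitespace-separated input into comma-separated output looks like this:

BEGIN { OFS = "," }       # join output fields with commas
      { $1 = $1; print }  # assigning to $1 forces $0 to be rebuilt using OFS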

Simplistically, setting FS to a comma will do the job. If all you have is numbers, this is probably enough. If you have strings, though, some programs put quotes around strings (that may contain commas or spaces). Some only put quotes around strings that have commas in them.

To work around this problem cleanly, AWK offers an alternate way to define fields. Normally, FS tells AWK what characters separate fields. However, you can set FPAT to define what a field itself looks like. In the case of a CSV file, a field is either a run of characters that are not commas, or a double quote, any characters that are not a double quote, and a closing double quote.

The manual has a good example:

BEGIN {
  FPAT = "([^,]+)|(\"[^\"]+\")"
}

{
  print "NF = ", NF
  for (i = 1; i <= NF; i++) {
    printf("$%d = <%s>\n", i, $i)
  }
}

This isn’t perfect. For example, escaped quotes don’t work right, and neither does quoted text with newlines in it. The manual has some changes that remove quotes and handle empty fields, but the example above works for most common cases. Often the easiest approach is to change the delimiter in the source program to something unique instead of a comma.

Hex Files

Another text file common in hardware circles is a hex file. That is a text file that represents the hex contents of a programmable memory (perhaps embedded in a microcontroller). There are two common formats: Intel hex files and Motorola S records. AWK can handle both, but we’ll focus on the Intel variant.

Old versions of AWK didn’t work well with hex input, so you’d have to resort to building arrays to convert hex digits to numbers. You still see that sometimes in old code or code that strives to be compatible. However, GNU AWK has the strtonum function, which explicitly converts a string to a number and understands the 0x prefix. So a highly compatible two-digit hex function looks like this (not including the code to initialize the hexdigit array):

function hex2dec(x) {
  return (hexdigit[substr(x,1,1)]*16)+hexdigit[substr(x,2,1)]
}
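
For completeness, the hexdigit lookup table that version relies on can be filled in with a short BEGIN rule; one possible sketch that accepts both lower- and upper-case digits:

BEGIN {
  digits = "0123456789abcdef"
  for (i = 1; i <= 16; i++) {
    c = substr(digits, i, 1)
    hexdigit[c] = i - 1           # '0'-'9' and 'a'-'f'
    hexdigit[toupper(c)] = i - 1  # accept 'A'-'F' as well
  }
}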

If you don’t mind requiring GAWK, it can look like this:

function hex2dec(x) {
  return strtonum("0x" x);
}

In fact, the last function is a little better (and misnamed) because it can handle any hex number regardless of length (up to whatever limit is in GAWK).

Hex output is simple since you have printf and the %x format specifier. Below is an AWK script that chews through a hex file and provides a byte count for the entire file, plus a breakdown of the segments (that is, non-contiguous memory regions).

BEGIN {
  ct = 0
  adxpt = ""
}

function hex4dec(y) {
  return strtonum("0x" y)
}

function hex2dec(x) {
  return strtonum("0x" x)
}

# Match type-00 (data) records: a colon, a two-digit byte count,
# a four-digit address, and the 00 record type.
/:[[:xdigit:]][[:xdigit:]][[:xdigit:]][[:xdigit:]][[:xdigit:]][[:xdigit:]]00/ {
  ad = hex4dec(substr($0, 4, 4))   # load address of this record
  if (ad != adxpt) {               # not contiguous with the last record? start a new segment
    block[++n] = ad
    adxpt = ad
  }
  l = hex2dec(substr($0, 2, 2))    # number of data bytes in this record
  blockct[n] = blockct[n] + l
  adxpt = adxpt + l
  ct = ct + l
}

END {
  printf("Count=%d (0x%04x) bytes\t%d (0x%04x) words\n\n", ct, ct, ct/2, ct/2)
  for (i = 1; i <= n; i++) {
    printf("%04x: %d (0x%x) bytes\t", block[i], blockct[i], blockct[i])
    printf("%d (0x%x) words\n", blockct[i]/2, blockct[i]/2)
  }
}

This shows a few AWK features: the BEGIN action, user-defined functions, the use of named character classes ([[:xdigit:]] matches a hex digit) and arrays (block and blockct use numeric indices even though they don’t have to). In the END action, the summary uses printf statements for both decimal and hex output.

Once you can parse a file like this, there are many things you could do with the resulting data. Here’s an example of some similar code that does a sanity check on hex files.

Binary Files

Text files are fine, but real hardware uses binary files that people (and AWK) can’t easily read, right? Well, maybe people, but AWK can read binary files in a few ways. You can use getline in the BEGIN part of the script and control how things are read directly. You can also use the RS/RT trick mentioned above to read a specific number of bytes. There are a few other AWK-only methods you can read about if you are interested.

However, the easiest way to deal with binary files in AWK is to convert them to text using something like the od utility. This is a program available with Linux (or Cygwin, and probably other Windows toolkits) that converts a binary file into different readable formats. You probably want hex, so that’s the -t x2 option for 16-bit words (or x1 for individual bytes). However, the output is made for humans, not machines, so when a long run of identical lines occurs, od omits them, replacing all the missing lines with a single asterisk. For AWK use, you want the -v option to turn that behavior off. There are other options to change the output radix of the address, swap bytes, and more.

Here are a few lines from a random binary file:

0000000 d8ff e0ff 1000 464a 4649 0100 0001 0100
0000020 0100 0000 dbff 4300 5900 433d 434e 5938
0000040 484e 644e 595e 8569 90de 7a85 857a c2ff
0000060 a1cd ffde ffff ffff ffff ffff ffff ffff
0000100 ffff ffff ffff ffff ffff ffff ffff ffff
0000120 ffff ffff ffff ffff ffff 00db 0143 645e
0000140 8564 8575 90ff ff90 ffff ffff ffff ffff
0000160 ffff ffff ffff ffff ffff ffff ffff ffff
0000200 ffff ffff ffff ffff ffff ffff ffff ffff
0000220 ffff ffff ffff ffff ffff ffff ffff c0ff
0000240 1100 0108 02e0 0380 2201 0200 0111 1103
0000260 ff01 00c4 001f 0100 0105 0101 0101 0001
0000300 0000 0000 0000 0100 0302 0504 0706 0908
0000320 0b0a c4ff b500 0010 0102 0303 0402 0503
0000340 0405 0004 0100 017d 0302 0400 0511 2112
0000360 4131 1306 6151 2207 1471 8132 a191 2308

This is dead simple to parse with AWK. The address will be $1 and the data will be in $2, $3, and so on. You can convert the file yourself, use a pipe in the shell, or, if you want a clean solution, have AWK run od as a subprocess. Since the input is text, all of AWK’s regular expression features still work, which is useful.
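
As a minimal sketch (the file name and the idea of counting 0xffff filler words are just for illustration), something like this tallies up the words in a dump:

# Run as:  od -v -t x2 file.bin | gawk -f words.awk
# $1 is the od address; $2 through $NF are 16-bit words in hex.
{
  for (i = 2; i <= NF; i++)
    if (strtonum("0x" $i) == 65535)   # 0xffff, often a sign of erased flash
      blank++
  total += NF - 1
}
END { printf("%d of %d words are 0xffff\n", blank, total) }

If you would rather skip the shell pipe, the same od command can be launched from inside the script with "od -v -t x2 file.bin" | getline.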

Writing binary files is easy, too, since printf can output nearly anything. An alternative is to use xxd instead of od; it can convert binary files to text, but it can also do the reverse.
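
As a rough sketch of the printf approach, here is one way to turn an od -t x2 dump like the one above back into raw bytes. It assumes little-endian word order (as in that dump) and should be run under LC_ALL=C so that printf "%c" emits single bytes rather than multibyte characters:

# Usage:  LC_ALL=C gawk -f tobin.awk dump.txt > out.bin
{
  for (i = 2; i <= NF; i++) {              # $1 is the od address, so skip it
    v = strtonum("0x" $i)
    printf("%c%c", v % 256, int(v / 256))  # low byte first for little-endian words
  }
}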

Full Languages

There’s an old saying that if all you have is a hammer, everything looks like a nail. I doubt that AWK is the best tool for building full languages, but it can be a component of some quick and dirty hacks. For example, the universal cross assembler uses AWK to transform assembly language files into an internal format the C preprocessor can handle.

Since AWK can call out to external programs easily, it would be possible to write things that, for example, processed a text file of commands and used them to drive a robot arm. The regular expression matching makes text processing easy and external programs could actually handle the hardware interface.
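
A toy sketch of that idea (the robot-cli program and the command format are entirely hypothetical):

# Translate "MOVE x y" and "GRIP n" lines into commands piped
# to a hypothetical external robot-cli program.
/^MOVE/ { print "move", $2, $3 | "robot-cli" }
/^GRIP/ { print "grip", $2 | "robot-cli" }
END     { close("robot-cli") }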

Think that’s far-fetched? We’ve covered stranger AWK use cases, including a Wolfenstein-like game that uses 600 lines of AWK script (as seen to the right).

So, sure, it’s software, but it has that Swiss Army knife quality that makes it a useful tool for software and hardware hackers alike. Of course, other tools like Perl, Python, and even C or C++ can do more, but often at a price in complexity and learning curve. AWK isn’t for every job, but when it works, it works well.