Tiny Neural Network Library In 200 Lines Of Code

Neural networks have gone mainstream with a lot of heavy-duty — and heavy-weight — tools and libraries. What if you want to fit a network into a little computer? There’s tinn — the tiny neural network. If you can compile 200 lines of standard C code with a C or C++ compiler, you are in business. There are no dependencies on other code.

On the other hand, there’s not much documentation, either. However, between the header file and two examples, you should be able to figure it out. After all, it isn’t much code. The example in the repository directs you to download a handwritten digit recognition dataset from the Internet. Once it has trained on that data, it shows you the expected output for the first item in the data set, then processes that item and shows you the result.

For simplicity, the test program just uses the first training item. However, keep in mind that the program shuffles the data during training, so you won’t always get the same result. Here’s an example output:

0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 
0.000294 0.000581 0.000000 0.000045 0.000101 0.002943 0.000000 0.000000 0.000408 0.998187

The top row is the expected result. All the numbers are zero except the last one because this is a number “9” input. The bottom row shows the results of the network. Most of the values are not zero, but are close to it. The last value is not quite one, but it is close.
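In other words, the network’s answer is simply the index of the largest output value. Here’s a minimal sketch of a hypothetical argmax helper (not part of Tinn’s API, just something you could write yourself) that turns the output array into a digit:

/* Return the index of the largest value in an array of count floats.
   For a one-hot style output like the row above, that index is the
   predicted digit, e.g. argmax(out, 10) gives 9 here. */
static int argmax(const float* out, const int count)
{
    int best = 0;
    for(int i = 1; i < count; i++)
        if(out[i] > out[best])
            best = i;
    return best;
}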

You might prefer looking at the much simpler program in the README file:

#include "Tinn.h"
#include <stdio.h>

#define len(a) ((int) (sizeof(a) / sizeof(*a)))

int main()
{
   float in[] = { 0.05, 0.10 };
   float tg[] = { 0.01, 0.99 };
 /* Two hidden neurons */
   const Tinn tinn = xtbuild(len(in), 2, len(tg));
   for(int i = 0; i < 1000; i++)
     {
     float error = xttrain(tinn, in, tg, 0.5);
     printf("%.12f\n", error);
     }
   xtfree(tinn);
   return 0;
}

The first array holds the inputs and the second holds the expected outputs. This simple program doesn’t actually use the network it trains, but a call to the xtpredict function would be easy to add.
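If you wanted to try it, here’s a hedged sketch, assuming (as the repository’s examples suggest) that xtpredict takes the trained network plus an input array and returns a pointer to the output array. It could go right before the xtfree call:

    /* Sketch only: run the trained network on the training input and
       compare what it produces against the targets. */
    const float* out = xtpredict(tinn, in);
    for(int i = 0; i < len(tg); i++)
        printf("target %f, predicted %f\n", tg[i], out[i]);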

If you want some more reading on neural networks, we can help you. There’s also a rundown of other tools and techniques available.

20 thoughts on “Tiny Neural Network Library In 200 Lines Of Code”

  1. “Neural networks have gone mainstream with a lot of heavy-duty — and heavy-weight — tools and libraries. ”

    And of course there’s the hardware that Xilinx, Google, and others are producing.

      1. It’s only a “monster” price-wise. Baidu has been using boxes with up to 64 custom Deep Learning Accelerators for a while now.

        With the DGX-2, about 60% of the money you spend goes into the NVLINK interconnect – which you don’t even need for AI workloads. Standard Deep Learning boxes with 16 GPU slots go for a fraction of the DGX-2.

  2. I google, therefore I am… ? Or… Imagination is more important than intelligence.
    All these [AI’s] offer is some discernment. Of course that is a grand quality. When I was 4, my grandfather (a psychiatrist) said: there are a great many excellent doctors, but few have [enough] discernment. So… once AI is conquered, there are at least 2 more levels to, uh… hack.

  3. Maybe worth noting that the network architecture is fixed in the library: one hidden layer, sigmoid activation, MSE cost function… Only the widths of the layers can be changed. Still, a nice example of how simple an MLP actually is (see the sketch after these comments).

  4. After all of the hype, it was interesting to me to learn how simple neural networks can be at their core. It’s also interesting that training them is the hard part. Once they are trained, they can be run on fairly light-weight hardware.
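
To illustrate the point made in the third comment, here is a minimal sketch of what such a fixed, one-hidden-layer, sigmoid-activated forward pass boils down to. This is not Tinn’s actual source, just the same idea written out under those assumptions (Tinn also carries bias terms and the backpropagation that trains the weights):

#include <math.h>

/* Sigmoid activation applied at both layers. */
static float sigmoid(const float x)
{
    return 1.0f / (1.0f + expf(-x));
}

/* Forward pass of a one-hidden-layer MLP:
   nips inputs -> nhid hidden sigmoid units -> nops sigmoid outputs.
   w1 is nhid x nips and w2 is nops x nhid, stored row by row. */
static void forward(const float* in, const float* w1, const float* w2,
                    float* hid, float* out,
                    const int nips, const int nhid, const int nops)
{
    for(int h = 0; h < nhid; h++)
    {
        float sum = 0.0f;
        for(int i = 0; i < nips; i++)
            sum += w1[h * nips + i] * in[i];
        hid[h] = sigmoid(sum);
    }
    for(int o = 0; o < nops; o++)
    {
        float sum = 0.0f;
        for(int h = 0; h < nhid; h++)
            sum += w2[o * nhid + h] * hid[h];
        out[o] = sigmoid(sum);
    }
}

Training, as the last comment points out, is where the real work is: backpropagating the error through those two layers and nudging the weights, which is roughly what the rest of the library’s 200 lines are for.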


