Neural networks have gone mainstream with a lot of heavy-duty — and heavy-weight — tools and libraries. What if you want to fit a network into a little computer? There’s tinn — the tiny neural network. If you can compile 200 lines of standard C code with a C or C++ compiler, you are in business. There are no dependencies on other code.
On the other hand, there’s not much documentation, either. However, between the header file and two examples, you should be able to figure it out. After all, it isn’t much code. The example in the repository directs you to download a handwriting number recognition dataset from the Internet. Once it has trained on that data, it prints the expected output for the first item in the data set, then runs that same item through the network and prints the result.
For simplicity, the test program just uses the first training item. However, keep in mind that the program shuffles the data during training, so you won’t always get the same result. Here’s an example output:
0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000
0.000294 0.000581 0.000000 0.000045 0.000101 0.002943 0.000000 0.000000 0.000408 0.998187
The top row is the expected result. All the numbers are zero except the last one because this is a number “9” input. The bottom row shows the results of the network. Most of the values are not zero, but are close to it. The last value is not quite one, but it is close.
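If you want the network’s answer as a single digit rather than ten floating-point values, a hypothetical helper like this (not part of tinn, just an illustration) would do; it simply picks the index of the largest output:

/* Hypothetical helper, not part of tinn: returns the index of the
   largest value in an n-element output row, i.e. the predicted digit. */
int argmax(const float* out, int n)
{
    int best = 0;
    for(int i = 1; i < n; i++)
        if(out[i] > out[best])
            best = i;
    return best; /* for the bottom row above, this is 9 */
}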
You might prefer looking at the much simpler program in the README file:
#include "Tinn.h" #include <stdio.h> #define len(a) ((int) (sizeof(a) / sizeof(*a))) int main() { float in[] = { 0.05, 0.10 }; float tg[] = { 0.01, 0.99 }; /* Two hidden neurons */ const Tinn tinn = xtbuild(len(in), 2, len(tg)); for(int i = 0; i < 1000; i++) { float error = xttrain(tinn, in, tg, 0.5); printf("%.12f\n", error); } xtfree(tinn); return 0; }
The first array holds the inputs and the second array holds the target outputs. This simple program doesn’t actually use the network it trains, but the xtpredict function would be easy to add.
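A few extra lines would show what the trained network actually produces. This is a minimal sketch, assuming xtpredict takes the trained Tinn and an input array and returns a pointer to the network’s output values, as declared in Tinn.h:

/* Inside the example above, after the training loop and before xtfree(tinn): */
const float* pd = xtpredict(tinn, in);
for(int i = 0; i < len(tg); i++)
    printf("%f ", pd[i]);
printf("\n"); /* values should land near the targets 0.01 and 0.99 */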
If you want some more reading on neural networks, we can help you. There’s also a rundown of other tools and techniques available.
“Neural networks have gone mainstream with a lot of heavy-duty — and heavy-weight — tools and libraries. ”
And of course there’s the hardware that Xilinx, Google, and others are producing.
And this absolute monster:
https://www.nvidia.com/en-us/data-center/dgx-2/
It’s only a “monster” price-wise. Baidu has been using boxes with up to 64 custom Deep Learning Accelerators for a while now.
With the DGX-2, about 60% of the money you spend goes into the NVLINK interconnect – which you don’t even need for AI workloads. Standard Deep Learning boxes with 16 GPU slots go for a fraction of the DGX-2.
Still pretty f*ckn rad though :-) I love the hilarity of some of the numbers in its spec sheet. Weird form factor (or maybe I’ve just never seen it before).
It says “fucking” for those curious
Odd mix of curly bracket conventions there.
I’ll blame WordPress.
Says the AI… I’m on to you.
I google, therefore I am… ? Or… Imagination is more important than intelligence.
All these [AIs] offer is some discernment. Of course that is a grand quality. When I was 4, my grandfather (a psychiatrist) said: there are a great many excellent doctors, but few have [enough] discernment. So… once AI is conquered, there are at least two more levels to, uh… hack.
Neural networks are absolutely amazing; it’s great to see such a lightweight library.
I took an unconventional approach to making a neural network library of my own.
Check it out here: https://bit.ly/neuralDuino
Here’s another simple one which only needs 187 lines of code:
https://github.com/MKesenheimer/SimpleNN
Cool. Thanks for posting that up!
Maybe worth noting that the network architecture is fixed in the library: one hidden layer, sigmoid activation, MSE cost function… Only the widths of the layers can be changed. Still, a nice example of how simple an MLP actually is.
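For anyone curious what that fixed shape amounts to, here is a rough sketch (illustrative C, not tinn’s actual source, with made-up weights) of a forward pass through a one-hidden-layer MLP with sigmoid activations:

#include <math.h>
#include <stdio.h>

/* Illustrative only -- not tinn's code. One hidden layer, sigmoid
   activations, as the comment above describes. Weights are invented. */
static float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); }

int main(void)
{
    const float in[2]    = { 0.05f, 0.10f };
    const float wh[2][2] = { { 0.15f, 0.20f }, { 0.25f, 0.30f } }; /* input -> hidden */
    const float wo[2][2] = { { 0.40f, 0.45f }, { 0.50f, 0.55f } }; /* hidden -> output */
    const float bh = 0.35f, bo = 0.60f;
    float hid[2], out[2];

    for(int j = 0; j < 2; j++)  /* hidden layer */
        hid[j] = sigmoid(bh + wh[j][0] * in[0] + wh[j][1] * in[1]);
    for(int k = 0; k < 2; k++)  /* output layer */
        out[k] = sigmoid(bo + wo[k][0] * hid[0] + wo[k][1] * hid[1]);

    printf("%f %f\n", out[0], out[1]);
    return 0;
}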
fann is better
Here is the Python version of TINN:
https://github.com/nvalis/pyTinn
Nothing there, only a readme!
Someone did a quick commit! It’s there now.
So…modest adaptive smarts for our favorite project chips (Pi/Atmel et al.)?
This sounds really familiar. Obviously the git tree is not that old but did the project itself start some time in the 1990s?
After all of the hype, it was interesting to me to learn how simple neural networks can be at their core. It’s also interesting that training them is the hard part. Once they are trained, they can be run on fairly light-weight hardware.