A Neural Net For A Graphing Calculator?

[Image: a light grey graphing calculator with a dark grey screen surround; the monochrome LCD reads "Input: ENEB / Result 1: BEEN, Confidence 1: 14% / Result 2: Good, Confidence 2: 12% / Press ENTER key..."]

Machine learning and neural nets can be pretty handy, and people continue to push the envelope of what they can do, both in high-end server farms and on much slower systems. At the extreme end of the spectrum is [ExploratoryStudios]’s Hermes Optimus Neural Net for a TI-84 Plus Silver Edition.

This neural net is set up as an autocorrect system that takes four-character inputs and matches them against a library of twelve words. That’s not a lot, but we’re talking about a device with 24 kB of RAM, so the little machine is doing its best. Perhaps more interesting than any practical output is the puzzle-solving involved in getting this to work within the memory constraints.

The neural net “employs a feedforward neural network with a precisely calibrated 4-60-12 architecture and sigmoid activation functions.” This yields roughly 85% accuracy at identifying and correcting the given target words. We also appreciate the readout of the net’s confidence, something that seems to have gone out the window with many newer “AI” systems.
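
For a sense of scale, a 4-60-12 network holds only 4×60 + 60 + 60×12 + 12 = 1,032 weights and biases, which is what makes it plausible to squeeze into the calculator's RAM. Below is a minimal sketch of that forward pass in Python/NumPy; the letter-to-number encoding, the random placeholder weights, and half of the word list are our own assumptions for illustration, not [ExploratoryStudios]'s actual implementation.

```python
import numpy as np

# Twelve-word library. The first six appear in the article and comments;
# the rest are hypothetical stand-ins for whatever the real app ships.
WORDS = ["BEEN", "GOOD", "BACK", "LIKE", "JUST", "ONLY",
         "WORK", "OVER", "THEM", "THAN", "SOME", "TIME"]

def encode(word):
    """Map a 4-letter word to 4 scalars in [0, 1] (A=0 ... Z=25).
    This encoding is an assumption; the calculator app may differ."""
    return np.array([(ord(c) - ord('A')) / 25.0 for c in word.upper()])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 4-60-12 architecture: 4*60 + 60 + 60*12 + 12 = 1,032 parameters total.
# Random placeholder weights stand in for the app's trained weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(60, 4)), np.zeros(60)   # input  -> hidden
W2, b2 = rng.normal(size=(12, 60)), np.zeros(12)  # hidden -> output

def forward(word):
    """One forward pass; each sigmoid output is read as a confidence."""
    h = sigmoid(W1 @ encode(word) + b1)
    return sigmoid(W2 @ h + b2)

conf = forward("ENEB")                 # the mistyped "BEEN" from the screenshot
best = np.argsort(conf)[::-1][:2]      # top two matches
for i in best:
    print(f"{WORDS[i]}: {conf[i] * 100:.0f}% confidence")
```

With random placeholder weights the printed confidences are meaningless until the net is trained, but the parameter count stays the same, and roughly a thousand numbers is small enough to fit comfortably alongside the program in 24 kB.
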

We’ve seen another TI-84 neural net for handwriting recognition, but is the current crop of AI still headed in the wrong direction?

3 thoughts on “A Neural Net For A Graphing Calculator?”

  1. Works on my TI-84 Plus, no Silver Edition needed. Just needs a lot of memory.

    Although I’m also not sure if it works correctly. The library contains the word ‘BACK’. And if I try to classify ‘BOCK’, I would have expected ‘BACK’ with some high confidence. However, the app returns ‘LIKE’ with a 97% confidence, and then ‘BEEN’ with a confidence of 0%.

    Likewise, the library contains ‘JUST’, but inputting ‘JEST’ comes back with ‘GOOD’ at a confidence of 54%, and ‘ONLY’ at a confidence of 25%.

    Entering ‘JUST’ does return ‘JUST’ with a confidence of 99%.

    This is using the default weights included with the application. Maybe some more training would yield better results; neural nets are always quite fuzzy things.

    1. Hello there! I am the developer. There were some bugs in the system that have since been worked out; it should now work correctly, with correct confidence scores. A kind individual I have linked to on my website helped me fix everything up. I hope you will give it another try, and if you have any questions please do shoot me an email, which is available in the top right corner of my webpage via the mail button.

    2. Correction to my prior comment. I have tested it and you are correct. The neural network seems to have better associations between mistyped letters that are adjacent, so it is processing the words based on the letters’ relative locations to one another, and mistyped letters that are far apart seem to confound the network a bit. I will now begin improving the model for distantly mistyped characters.
