Neural Network In Glass Requires No Power, Recognizes Numbers

We’ve all come to accept neural networks doing jobs such as handwriting recognition. The basics have been in place for years, and the recent increase in computing power and parallel processing has made it a very practical technology. At the core, though, it is still a digital computer moving bits around just like any other program. That isn’t the case with a new neural network fielded by researchers from the University of Wisconsin, MIT, and Columbia. This panel of special glass requires no electrical power, and it can recognize gray-scale handwritten numbers.

The glass contains precisely controlled inclusions, such as air holes or impurities like graphene. When light strikes the glass, complex wave patterns occur, and the light becomes most intense in one of ten areas on the glass. Each of those areas corresponds to a digit. The post’s images show, for example, two variations of the light pattern recognizing a two on the glass.
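As a rough intuition for how a fixed slab of glass can classify anything (a toy mental model, not the researchers’ actual design): propagation through linear optics acts like a complex-valued matrix applied to the incoming light field, and detectors simply sum intensity in ten zones. A hedged numpy sketch, with a random matrix standing in for the engineered glass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: propagation through a fixed linear optical medium acts as a
# complex "transmission matrix" on the incoming light field. Designing the
# glass amounts to choosing this matrix; here it is simply random.
n_pixels = 28 * 28   # flattened input image, treated as a field amplitude
n_outputs = 100      # output plane: 10 digit zones x 10 samples each
T = rng.normal(size=(n_outputs, n_pixels)) + 1j * rng.normal(size=(n_outputs, n_pixels))

def classify(image):
    """Propagate the field through the medium and pick the brightest zone."""
    field_out = T @ image.ravel()        # linear optics: one complex matrix product
    intensity = np.abs(field_out) ** 2   # detectors measure intensity, not amplitude
    zone_power = intensity.reshape(10, -1).sum(axis=1)  # total power per digit zone
    return int(np.argmax(zone_power))    # the brightest zone names the digit

print("predicted digit:", classify(rng.random((28, 28))))
```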

With a training set of 5,000 images, the network was able to correctly identify 79% of 1,000 input images. The team thinks it could do better by allowing looser constraints on the glass manufacturing. They started with very strict design rules to help get a working device, but they will evaluate ways to improve the recognition rate without making the glass too difficult to produce. The team also plans to create a network in 3D.

If you want to learn more about traditional neural networks, we have seen plenty of starter projects. If TensorFlow is too much to swallow, try these 200 lines of C code.

59 thoughts on “Neural Network In Glass Requires No Power, Recognizes Numbers”

    1. Could you imagine if you bought a special ‘compute’ cable and it converted, say, raw video to x265 as it moves from the storage medium to the computer, or camera sensor data to encoded files on an SD card…

      And yes, that example probably wouldn’t work because of the complexity, but something like this could be the beginning of light-based computers. Intel has been working on those for decades, but this is an actual proof of getting something useful from light-based computation. Fucking awesome.

  1. Interestingly, you can do something very similar to this with holograms, which are pretty close to being Fourier transforms of the light field. The idea is that you take a hologram of each character you want to recognize, then a hologram of the page, put them on top of each other, shine a laser on the assembly, and tada! You get a bright spot where the character was recognized. The brighter, the better the recognition. It’s true “optical” character recognition.
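    For the curious, the stacked-hologram trick amounts to matched filtering: correlate the page against a character template in the Fourier domain and look for a bright peak. A minimal numpy sketch of the idea (the toy page and template are invented for illustration):

    ```python
    import numpy as np

    def correlate(page, template):
        """Optical-style correlation: multiply spectra, inverse transform.

        A bright peak in the result marks where the template appears on the
        page, which is what the stacked-hologram setup does with real light.
        """
        padded = np.zeros_like(page)          # zero-pad template to page size
        padded[: template.shape[0], : template.shape[1]] = template
        spectrum = np.fft.fft2(page) * np.conj(np.fft.fft2(padded))
        return np.abs(np.fft.ifft2(spectrum))

    # Toy usage: a blank "page" with an 8x8 "character" stamped at (40, 25).
    page = np.zeros((128, 128))
    page[40:48, 25:33] = 1.0
    peak = np.unravel_index(np.argmax(correlate(page, np.ones((8, 8)))), page.shape)
    print("brightest spot at:", peak)         # prints (40, 25)
    ```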

  2. Correction: IN A SIMULATION, so all the “The glass contains precisely” needs to be “The glass would contain precisely”
    And the big deal is not that this is an unpowered glass neural net but that it’s not a DIGITAL neural net.
    There was no network actually fielded, nor any proposed method of manufacturing such a neural network, just the hypothesis that such an optomechanical network could be created, along with their simulation and training data that they say proves their hypothesis, or something. Look, I just actually read more than the synopsis and the captions under the graphics; it’s not hard.
    I might not understand the science but I speak fluent Scientist.

    1. Look for a company called LightOn. They are doing much better than this and it’s working in real life.
      BTW, this is not an unpowered method, since it needs light to process information (just as a GPU needs electricity).

      1. It’s as unpowered as a crystal radio, or an NFC card, or a magnetic pickup, or a book for that matter: the method of data transmission provides the power.
        Yeah, I said book: it’s engineered to selectively reflect specific bands of electromagnetic waves in a way that conveys information, and it ceases to transmit data when deprived of those wavelengths.

        I should also note that there are both active and passive optical networks, and since this one requires no externally powered light amplification and relies only on passively received light, it can be termed an “unpowered” network.

      1. I wasn’t trying to cast doubt with the neutral tone in my first post, just trying to characterize my own layperson understanding of the technology.
        Also, the example you cite in the first sentence of the second paragraph says “[…] where the neural network is physically formed by multiple layers of diffractive surfaces […]”, making it fundamentally different from the tech here.
        Mostly, what this paper seems to be is a concept for the miniaturization and cost reduction of optical neural nets: using a piece of continuous material that processes in all directions, not just front to back, “[…] by tapping into sub-wavelength features”, and doing it all “using direct laser writing”.

        All in all, the paper says “here is a bunch of stuff unlike what is being done right now, and here is a computer model that seems to support it being possible,” but they produced no end product or concrete path to one.

  3. Any chance this, combined with holograms in some way, could be even more amazing? Just a random thought, and holograms seem like lasers did back in the day. We had them, but until tech advanced, they were often solutions looking for problems.

    This looks like one of those basic advancements that might lay down a path for the hologram.

  4. The idea of using analog optics for image recognition isn’t new at all. Holograms (diffraction gratings) can “convert” images both ways: a spot of light into a character as well as a character into a dot at an exact position. The grating is prepared so that different characters (images) produce spots at different positions, which can be detected with an array of photosensors.

  5. This isn’t a “neural network in glass”; it is the resulting logic implemented as an optical interference device.

    There’s a great difference, because the neural network is, or should be, a device that can alter itself to adapt to training. If you fix the algorithm, then it’s just a program, or a piece of glass with specific refraction properties.

    You can implement the same thing by putting pegs on a board and dropping ball bearings through it – the locations of the pegs are solved by the neural network simulation while it is being trained, but the pegboard itself isn’t the network. It’s just a board with pegs that implements the solution found by the network. This is simple: if you train a neural network to implement an OR gate, and then build the OR gate as described by the network, then you’ve simply built an OR gate.
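    To make the OR-gate point concrete, here is a minimal sketch of exactly that scenario: a single perceptron trained on the OR truth table, then run with its weights frozen. What remains after training is, as argued above, just a hard-wired OR gate:

    ```python
    import numpy as np

    # OR truth table.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 1], dtype=float)

    w, b = np.zeros(2), 0.0
    for _ in range(20):                 # classic perceptron learning rule
        for xi, yi in zip(X, y):
            pred = float(w @ xi + b > 0)
            w += (yi - pred) * xi       # weights only change during training
            b += yi - pred

    # Training is over: w and b are now constants. The "network" below is
    # nothing but a fixed OR gate, which is the point being made above.
    def gate(a, c):
        return int(w @ np.array([a, c], dtype=float) + b > 0)

    print([gate(a, c) for a, c in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
    ```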

    Calling it a neural network is a smoke-and-mirrors thing that AI researchers and reporters like to use to “simplify” it to the public, but it ends up obfuscating the matter. In reality, confusing the logic produced by the network with the network producing the logic is the same as saying that a book is telling a story, rather than the author of the book. It’s a trick of language.

    1. Exactly. From the paper: “Although one could envision in situ training of NNM using tunable optical materials [3], here we focus on training in the digital domain and use NNM only for inference.”

    2. “There’s a great difference, because the neural network is, or should be, a device that can alter itself to adapt to training.”

      An SLNN or NN undergoing training will modify internal weights, but once a network is trained you can operate it with no further modifications (“inference mode” in modern parlance). There is NO requirement for a NN to continue to modify weights in operation to be called a neural network. There are all sorts of ways to implement a trained NN, including many electronic methods, mechanical methods, optical methods, and organic methods (not just neurons, but fun things like guided fungal growth or independent agents).

      1. Technically wrong. A “trained” neural network is just generic software. A neural network, by definition, is self-modifying. Trained neural networks are what the neurology department would call “brain-dead”.
        There’s a reason every single “practical” ML image-recognition system out there sucks 10 kinds of balls: they were trained and then “locked-in”.
        That’s where every single one of them fails. They are dumb. Figuratively brain-dead.
        There’s not a single system out there in use that isn’t terrible. Google and Facebook are particularly terrible, despite the stupid amounts of hype around their hilariously bad “AI”. Google, the AI-first company. lmao

        Of course, the very thought of a self-modifying NN being run live out in the wild scares the absolute pants off any AI researcher, because if any of it is responsible for human life, it WILL fail. Spectacularly fail.
        A middle ground is the best solution: a collection of systems trained differently that converge on a proper solution to a scenario. No agreement = full stop. (Gracefully, that is; god forbid a car yanking its handbrake in the middle of an interstate!)

        1. Armchair indeed. Have you used a NN in an application? Because what you are describing is a human neural network, not a machine-learned neural network. At the heart of this application, the ML-NN is nothing more than a classifier (from a set of known possible classes, determine which best characterizes a given input). To date there is no scheme capable of training an ML-NN “on the fly” while it is being used as a classifier; you are either training it or using it, never both at once.

          You are confusing theory with practice.

      2. > There is NO requirement for a NN to continue to modify weights in operation to be called a neural network.

        And that’s the semantic magic trick right there.

        You can call it whatever you like, but if you train an NN to act as a logic gate and then stop the training, the remaining program or algorithm no longer functions as a neural network – it functions as a logic gate. It IS a logic gate.

      3. Besides, before you operate the network in “inference mode”, what usually happens first is:

        1) Pruning. You get rid of all the bits that don’t contribute to the correct answers so you don’t have to compute them.
        2) Fusing. Multiple layers or multiple nodes of the network are fused into single computational steps wherever possible.

        After the training, you distill the network down to a simpler form that can be computed more efficiently. If your trained function is something simple like a logic OR gate, after the pruning and fusing steps you’d ideally be left with a very short program that sums the inputs, and any resemblance to the neural network would vanish.
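        A compact sketch of both steps, assuming plain numpy and two purely linear layers with no nonlinearity between them (which is what makes the fusion legal):

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        W1 = rng.normal(size=(64, 784))     # layer 1 weights
        W2 = rng.normal(size=(10, 64))      # layer 2 weights

        # 1) Pruning: zero out weights too small to matter for the answers.
        keep = lambda W, t=0.01: np.where(np.abs(W) > t, W, 0.0)
        W1, W2 = keep(W1), keep(W2)

        # 2) Fusing: two consecutive linear layers collapse into one matrix,
        #    so inference is a single multiply instead of two.
        W_fused = W2 @ W1

        x = rng.normal(size=784)
        assert np.allclose(W_fused @ x, W2 @ (W1 @ x))
        ```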

  6. Something somewhat similar is available in reverse. There is a sundial design out there that displays the time digitally depending on the direction from which sunlight enters.
    I am not mentioning this to detract from this work; it is actually very different, apart from having the same sort of “wow, cool” factor.
    https://en.wikipedia.org/wiki/Digital_sundial
    A quick google suggests that there are 3D-printable designs out there now.

  7. Guess this is combinational processing in glass: no sequential logic, just combinational. I think it would be easier to just program the combinational logic into an FPGA. But it is worth mentioning that both the glass and the FPGA contain silicon.

  8. This is great. Looking at the light moving through the glass is like observing the tensor coefficients varying as they move from one layer to another. This is a visualisation of the tensor flow itself. A lot easier to grasp than a bunch of numbers in a matrix.

    1. By the way, it is very similar to light going through a hologram. As the wavefronts move through space, there is reinforcement and weakening at different points in the 3D space, recreating the illusion of 3D objects.

      I think it should be possible to encode an AI pattern recogniser in a hologram.

  9. Their NN accuracy is limited by current IC manufacturing methods. Traditional mask-based optical manufacturing technologies are weak at rendering arbitrary patterns with lots of local variation. That’s why IC designs are strictly gridded, with orientations in one direction, in advanced technology “nodes” (everything ≤90 nm). The projection lens that demagnifies the mask image filters away the critical information needed for this application. Ideally you would need an e-beam direct-write method for that, but a single e-beam is too slow.

  10. Amazing, though it is still a simulation. A few weeks ago I saw a measuring device created by laser-induced disorder in the glass structure. Pushing the idea a bit further: for a fixed system, the learned AI could be put on a glass plate with some optics and serve up complex decisions for practically no power consumption. My system (classic simulation-based) needs about 1.5 kW, 24/7, and a fair amount of maintenance. An upgrade or repair would be nearly as simple as replacing a fuse. Oh, Stargate memories are popping up xD

  11. This is just a glorified sieve: there is no memory capability and there are no accumulators, so series data does not work with it without preprocessing via a Fourier transform.
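    For a taste of what such preprocessing could look like (a generic spectral front-end, not anything from the paper): a memoryless classifier cannot integrate over time, but the Fourier magnitudes of a window lay the temporal structure out all at once:

    ```python
    import numpy as np

    def spectral_features(signal, n_bins=32):
        """Turn a 1-D time series into a fixed-size feature vector.

        A memoryless classifier (like the glass) cannot accumulate state,
        so the temporal structure is folded into magnitude spectra up front.
        """
        return np.abs(np.fft.rfft(signal))[:n_bins]

    t = np.linspace(0, 1, 256, endpoint=False)
    features = spectral_features(np.sin(2 * np.pi * 5 * t))
    print("dominant bin:", np.argmax(features))   # bin 5 for a 5 Hz tone over 1 s
    ```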

  12. OK, it is “just” a fixed algorithm. However, what strikes me is the processing speed of this number-recognition device. Has anyone noticed that it operates at the speed of light? This is probably the fastest “number cruncher” that is physically possible. No machine using any kind of electric circuit will ever surpass this one.

    1. Yeah, the speed is insane. It wasn’t emphasized in this article, but in the last analog-computer article it was:

      https://hackaday.com/2019/05/03/swiss-cheese-metamaterial-is-an-analog-computer/

      “The computation is very fast. Using microwaves, the answer comes out in a few hundred nanoseconds — a speed a conventional computer could not readily match. The team hopes to scale the system to use light which will speed the computation into the picosecond range. “
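      A quick back-of-envelope check on those numbers, assuming a slab a few centimeters thick and a typical glass refractive index of about 1.5:

      ```python
      c = 3.0e8        # speed of light in vacuum, m/s
      n = 1.5          # typical refractive index of glass (assumed)
      length = 0.05    # 5 cm slab: a guess at the panel thickness

      transit = length * n / c
      print(f"transit time: {transit * 1e12:.0f} ps")   # ~250 ps per inference
      ```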

  13. If they can do this, which is amazing, it seems to me they could have two points for input and make a NAND, or any of the sixteen possible two-input logic gates. Then you would have the building blocks for more complexity.
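    As a sanity check on that idea, NAND alone is already functionally complete, so two input points plus one reliable gate really would be a universal building block. A toy sketch deriving a few of the other gates from NAND:

    ```python
    def nand(a, b):
        return 1 - (a & b)

    # Classic constructions showing NAND is functionally complete.
    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_(a, b), or_(a, b), xor_(a, b))
    ```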
