We keep seeing more and more TensorFlow neural network projects. We also keep seeing more and more things running in the browser. You don’t have to be Mr. Spock to see this one coming. TensorFire runs neural networks in the browser and claims that WebGL lets it run as quickly as it would natively on the user’s desktop computer. The main page is a demo that stylizes images, but if you want more detail, you’ll probably want to visit the project page instead. You might also enjoy the video from one of the creators, [Kevin Kwok], below.
TensorFire has two parts: a low-level language for writing massively parallel WebGL shaders that operate on 4D tensors, and a high-level library for importing models from Keras or TensorFlow. The authors claim it will work on any GPU and, in some cases, will actually be faster than running native TensorFlow.
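Operating on 4D tensors from WebGL is less straightforward than it sounds, since WebGL only exposes 2D textures — so a library like this has to flatten tensor indices into texture coordinates before a shader can touch the data. Here’s a minimal Python sketch of one possible flattening scheme; the layout and function names are illustrative, not TensorFire’s actual implementation:

```python
# Illustrative sketch: flatten a 4D tensor index (n, h, w, c) into 2D
# "texture" coordinates, the kind of mapping a WebGL backend must do
# because GL textures are 2D grids of pixels.  This is a simple
# row-major packing chosen for clarity -- not TensorFire's real layout.

def tensor_to_texel(n, h, w, c, shape):
    """Map a 4D index into (x, y) on a 2D texture.

    shape = (N, H, W, C).  We pack (n, h) along y and (w, c) along x.
    """
    N, H, W, C = shape
    y = n * H + h          # rows: batch-major, then height
    x = w * C + c          # columns: width-major, then channels
    return x, y

def texel_to_tensor(x, y, shape):
    """Inverse mapping: recover the 4D index from texture coordinates."""
    N, H, W, C = shape
    n, h = divmod(y, H)
    w, c = divmod(x, C)
    return n, h, w, c

# A (2, 4, 4, 3) tensor becomes a (W*C) x (N*H) = 12 x 8 texture.
shape = (2, 4, 4, 3)
x, y = tensor_to_texel(1, 2, 3, 1, shape)
assert texel_to_tensor(x, y, shape) == (1, 2, 3, 1)
```

The round trip has to be lossless, since the shader reads its inputs and writes its outputs through the same mapping.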
This is a logical progression of using WebGL to do browser-based parallel processing, which we’ve covered before. The work has been done by a group of recent MIT graduates who applied for (and received) an AI Grant for their work. We wonder if some enterprising Hackaday readers might not get some similar financing (be aware, you have to apply by the end of August).
If you have been itching to learn more about TensorFlow, we’ve covered it in depth. If you want the bare-bones example, we’ve looked at that, too.
Thanks [Patrick] for the tip.
Does anyone have a good manual/tutorial/read that explains neural networks to a total newcomer? Sorry for sidetracking the comments.
This should get you started: https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/
Uh oh, extremely impressive but then… Skynet coming to a browser near you.
So basically they’re saying that WebGL does what it’s supposed to do?
No, WebGL is meant to render textured triangles. Using it for neural networks is a huge waste of processing power because for every convolution operation, a bunch of pixels have to be rasterized, which would not be required if major browser vendors could get their shit together and provide a real GPU API instead of this gimped piece of decade-old garbage that is WebGL.
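For what it’s worth, the fragment-shader approach being complained about boils down to computing every output pixel independently: each shader invocation reads a neighborhood of an input texture and writes exactly one output texel. Here’s a plain-Python sketch of that per-pixel pattern for a convolution — a real implementation would be GLSL, and the 3×3 box-blur kernel here is just an illustrative example:

```python
# Sketch of a fragment-shader-style 2D convolution: just as the GPU
# rasterizes one fragment per output pixel, this computes each output
# element independently from a neighborhood of the input.
# Pure Python for clarity; a shader would express the inner loops in GLSL.

def conv2d_per_pixel(image, kernel):
    """'Valid' 2D convolution (really cross-correlation, as in most
    NN frameworks): each output pixel is a weighted sum of the input
    window under the kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):            # one "fragment" per output pixel
        for x in range(ow):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# 3x3 box-blur kernel applied to a 4x4 ramp image (illustrative values)
img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
k = [[1.0 / 9] * 3 for _ in range(3)]
blurred = conv2d_per_pixel(img, k)
```

The per-pixel independence is exactly what makes it parallelize on a GPU — and the rasterization machinery wrapped around it is the overhead the comment above is objecting to.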
An associate of mine did propose that but the vendor he ‘works’ for won’t even build for modern architectures.
Man, I thought this was going to be a machine-learning accelerated web browser, not a machine-learning app running in a browser. And here I had all of my “you don’t want your computer learning from what you browse on the Internet” jokes all ready.
Well, many sites such as YouTube and PornHub and all sorts of other “content consumption” sites already do this to improve their marketing and relevance. Advertisements do it as well. Not sure it qualifies as machine learning so much as basic aggregation of categories of content though. People who like this balloon fetish video probably also tend to like this belly inflation video for example so the algorithm puts it and many others out there for the user to view, rather than a completely unrelated amateur dinosaur fetish video, for example. That kind of thing happens all the time.
“you don’t want your computer learning from what you browse on the Internet”
Especially if it is Hackaday, it will start hacking itself! And then this article comes along, and it gets the idea to start hacking you by mixing meaningless phrases into the text you are trying to read.
Maybe we should start asking, what can’t one do with a browser, since we’re bound and determined to cram everything else in?
Reminds me of the guy who can remote start his car from an Excel Workbook.
https://www.youtube.com/watch?v=ax2UBISNv2A
Hmmm, “Yo, dawg, …”
I guess it would screech to a halt.
Just crashes whenever I use this with anything but the stock images.
Same. Crashes WebGL in Chrome and my AMD display driver.
This is very impressive, and actually runs much, much faster than the original Python implementation.
It also didn’t take 3 hours of following broken build instructions or putzing around with Docker.