Google Launches AI Platform That Looks Remarkably Like A Raspberry Pi

Google has been promising new hardware for machine learning at the edge, and now it's finally here. The thing you're going to take away from this is that Google built a Raspberry Pi with machine learning. This is Google's Coral, an Edge TPU platform built around a custom-made ASIC designed to run machine learning algorithms 'at the edge', on a board that looks an awful lot like a Raspberry Pi.

This new hardware was launched ahead of the TensorFlow Dev Summit, and it revolves around machine learning and 'AI' in embedded applications, specifically power- and computationally-limited environments. This is 'the edge' in marketing speak, and already we've seen a few products designed from the ground up to run ML algorithms and inference in embedded applications. There are RISC-V microcontrollers with machine learning accelerators available now, and Nvidia has been working on this for years. Now Google is throwing its hat into the ring with a custom-designed ASIC that accelerates TensorFlow. It just so happens that the board looks like a Raspberry Pi.

What’s On The Board

On board the Coral dev board is an NXP i.MX 8M SoC with a quad-core Cortex-A53 and a Cortex-M4F. The GPU is listed as 'Integrated GC7000 Lite Graphics'. RAM is 1 GB of LPDDR4, flash is 8 GB of eMMC, and WiFi and Bluetooth 4.1 are included. Connectivity is provided through USB, with Type-C OTG, a Type-C power connection, a Type-A 3.0 host, and a micro-B serial console. There's also Gigabit Ethernet, a 3.5 mm audio jack, a microphone, full-size HDMI, 4-lane MIPI-DSI, and 4-lane MIPI-CSI2 camera support. The GPIO pins are exactly — and I mean exactly — like the Raspberry Pi GPIO pins. The GPIO pins provide the same signals in the same places, although due to the different SoCs, you will need to change a line or two of code defining the pin numbers.
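
If you're porting a Pi script, the change really is just the pin definition. Here's a minimal sketch of blinking one of those header pins, assuming the board's Linux image exposes GPIO through the generic python-periphery library; the kernel GPIO line number below is hypothetical, so check the board's pinout for the real mapping.

```python
# Minimal sketch: blink the pin that sits where the Pi's BCM 18 lives.
# Assumes the python-periphery library; the Linux GPIO line number below
# is hypothetical -- check the board's pinout documentation for the real one.
import time
from periphery import GPIO

CORAL_GPIO_LINE = 73  # hypothetical kernel GPIO number; on a Pi this would be BCM 18

led = GPIO(CORAL_GPIO_LINE, "out")
try:
    for _ in range(10):
        led.write(True)
        time.sleep(0.5)
        led.write(False)
        time.sleep(0.5)
finally:
    led.close()
```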

You might be asking why Google would build a Raspberry Pi clone. That answer comes in the form of a machine learning accelerator chip planted on the board. Machine learning and AI chips were popular in the 80s, and everything old is new again, I guess. The Google Edge TPU coprocessor has support for TensorFlow Lite, or 'machine learning at the edge'. The point of TensorFlow Lite isn't to train a system, but to run inference on an existing model. It'll do facial recognition.
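
For the curious, one way to run inference on the Edge TPU is through the standard TensorFlow Lite interpreter with the Edge TPU delegate loaded and a model that has been compiled for the chip. A minimal sketch, assuming the tflite_runtime package and the libedgetpu runtime are installed; the model file name is hypothetical.

```python
# Minimal sketch of running a pre-compiled ("_edgetpu") TensorFlow Lite model
# through the Edge TPU delegate. The model file name is hypothetical; assumes
# the tflite_runtime package and the libedgetpu runtime are installed.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",          # hypothetical file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Stand-in for a real camera frame, shaped to whatever the model expects.
frame = np.random.randint(0, 256, size=input_details["shape"], dtype=np.uint8)

interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details["index"])[0]
print("top class:", int(np.argmax(scores)))
```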

The Coral dev board is available for $149.00, and you can order it on Mouser. As of this writing, there are 1320 units on order at Mouser, with a delivery date of March 6th (search for Mouser part number 212-193575000077).

Also In Dongle Form

There’s also another device in the hardware portfolio, called a USB accelerator, which we can only assume is the Edge TPU connected to a USB cable. This USB accelerator will work with the Raspberry Pi — that’s from Google’s product copy, by the way — and will get you started on machine learning inference with the Google-designed Edge TPU. The price for this USB accelerator is $75.

We would like to congratulate the Raspberry Pi Foundation for creating something so ubiquitous that even Google feels the need to ride its coattails.

80 thoughts on “Google Launches AI Platform That Looks Remarkably Like A Raspberry Pi”

        1. My guess (need to check) is that the GPIO header’s distance from the standoff holes is identical, as is the general board footprint.

          None of the board edge connectors match though, and that heatsink would kill interoperability with most HATs.

    1. One thing is for sure: with this author’s attitude and how aggressively he responds to his readers, I will be sure to avoid any articles written by Brian Benchoff.

      1. lol. I’ll call that a successful Benchoffizm.

        @[Garret], please meet Brian [Benchoff]. He’s cheeky and entertaining, has colorful language, a silver tongue, and cast iron ….

    2. You’re acting as though you have no agency or responsibility in this matter of despicable publishing practices, when in fact you have the most of anyone here. Stop absolving yourself, which you have done multiple times when this has come up in the past. Stop passing the buck.

      1. I grabbed the man by the collar and shook him violently. ‘You have personal agency’, I screamed in his face.

        Tears welled up in his eyes, and a whimper of denial escaped his lips.

    3. Part of the problem, I guess, is that it’s not really possible to tell if a headline is clickbait without having read the accompanying article. Perhaps installing an ad-blocker and spamming the comments with crud is a better path to personal empowerment.

      1. One partial solution is to avoid all the articles whose titles contain a question mark (?), for example “Will XYZ be your next video card?”. 99.99% of those are ads masked as articles, or worse, a “sequence of words” that does not deserve your time.
        Another rule: when the title dangles a curiosity, for example “This man lost 20 kilos in 1 month, discover how”. Those are the classic “click to find out” titles and they are 100% clickbait, so skip them.
        Another rule: when the title promises something too nice to be real, it is not real, so skip to the next title.
        Being a responsible reader is a duty, and it is not an easy one: read the title, THINK ABOUT IT, wait 10 seconds, read it again, think again, and then decide whether to click. Compulsive clickers are the fortune of these clickbaiters; it is up to the reader to pick the diamonds out of the shit.

    4. I really don’t like your overt embrace of pandering to lowest-common-denominator journalism. While you might feel absolved by full-disclosure honesty, it comes across as ugly cynicism to the readers. These are the same readers you depend on for income, yet it seems you don’t respect their intellectual curiosity.

      I’ll help out with the more pertinent info, as I’ve been waiting for the Edge TPU hardware to be released for almost a year now. The USB dongle will ‘work’ with a Raspberry Pi, as the clickbait composer has copy-pasted, but it’s less performant than the SoC board. The USB interface creates a CPU-bound bottleneck on the rPi. For image recognition tasks, the frames per second will be much higher with the full-blown SoC board than with the USB dongle implementation. It’s good enough for a proof of concept, but won’t give the real performance you’ll see with the SoC dev board.

      1. The huge majority of readers are “compulsive clickers” who feel robbed if they don’t click on every link possible. This clickbaiter doesn’t care at all if 1% of readers turn on him.
        Of that 1%, many will forget or forgive, so only a very few readers will actually be lost.
        How many of you remember the scandal of the rootkit (read: “virus”) that SONY placed on some of their CDs?
        https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
        Almost no one, I suppose. But I remember it very well; I remember how they tried to hide it and absolve themselves. No mercy. From that day on I skipped all SONY-labeled products. How many people did the same as me? I think close to zero.
        People are a flock of sheep: they know it (they: Sony, Google, Microsoft, on down to this Brian), and they take advantage of it.

    5. Just because a thing (clickbait) works, it does not mean that you can use it.
      There are some people who have given me problems, and yes, shooting them would be a way to solve those problems at the root, right?
      OK, clickbait is still not illegal, but that does not entitle you to use it. Still, “time is a gentleman” (meaning: you will get what you deserve), and I like to think that whatever one gives to the world comes boomeranging back, one way or another. The Crocodile Karma can bite your ass at any time, in any form: your car stolen by a thief, fatal news from your lung X-ray, something like that.
      Sleep worried, man.

    6. Noooo, you start with clickbait and then continue by triggering a flame war! :) You just became the most-read writer on HAD. At least by the statistics….
      So basically you’re saying: you are an a..hole and you don’t care. So be it then! :D

  1. Calling this a “Raspberry Pi clone” really takes a lot of imagination … The only RPi-relevant bit is the GPIO layout, which makes complete sense because it allows you to use a lot of RPi HATs. Plus the connector is standard, so adding whatever else is needed is easy.

    Otherwise the SoC is fairly different (even though it is the same A53 core), there is a very different GPU, there is an extra Cortex-M4 on board, plus the AI ASIC.

    That’s a bit like calling a Ferrari a clone of a Fiat because both have 4 wheels and the name starts with “F”.

    1. Pretty much, and the price isn’t that bad for what one gets. Just make certain the documentation is good and plentiful, and that it’s supported for a long time, because most people aren’t going to throw this away for the next big thing.

    2. Well, you have to admit, it is rectangular (just like the Raspberry Pi), and it has a header (just like the Raspberry Pi), and it has network and USB ports (just like…).
      However… it also has a screw terminal, which no Raspberry Pi has, and I’m pretty sure it has more parts the rasp doesn’t have. So making the comparison with the rasp is understandable; I think it would only be fair to concentrate on the function of the board and not the shape and common parts.
      I wonder what the future brings us with this new board. I’m pretty sure we’ll read another article within a few months.

    3. It’s absolutely not a Pi clone, although the pinout is identical and the form factor certainly rhymes with “Raspberry”.

      But that was Brian’s point. How wacky is it that Google is making their “edge” computing thing look like a Pi? Are they trying to get into the maker market simply by copying the form factor, and hoping to get all of the good associations with the Pi’s software environment simply by making it look similar? My guess: yeah. That’s marketing for ya.

        1. ….and this was the most viewed post on Hackaday yesterday, including views on main page.

          I’m here to do a job. I’m here to sell eyeballs. I’m very good at my job.

    4. It’s the Raspberry Pi form factor, primary CPU, and GPIO. That’s a lot more than the 4-tires comparison. It’s not so much Fiat vs Ferrari, but stock Chevy 1500 vs lifted, blown Chevy 1500 with a winch grill.

  2. “Machine learning and AI chips were popular in the 80s and everything old is new again, I guess.”

    For that to be true, then that would mean we’ve learned nothing in the intervening decades.

      1. Not to be difficult, but I don’t see why it should be similar in price. I believe their point is not to build a clone; that is a side effect of their primary goal: getting Edge TPUs out.

        Just guessing, but it seems likely to me that they needed to get around the bottlenecks the Pi has, but that they like many of the other aspects, hence their board is broadly similar.

      1. You know how many weird looks I get from people when I tell them that “cloud” computing was all the rage in the 60s & 70s? It’s the same idea – “dumb” terminal at the user end, big powerful machines at the “cloud” end. The major difference, aside from how powerful everything is, is that the “dumb” terminal isn’t dumb at all.

        1. The funny thing about most smart devices is that they’re not actually all that smart; most of the heavy lifting is done on a remote server, which might even be a mainframe.

    1. Surprisingly, most AI concepts have been around since the late 1970s; it’s only recently that processing and storage have become cheap enough to make them practical.

  3. Python library with C++ to be released. Did we wake up in a parallel time-line with an alternate reality?

    Some of us are not happy with Python encroaching on C/C++ territory just because you have a ton of powerful hardware ready to burn through clock cycles.

      1. Even without HW acceleration, the thinky parts of a Python program are generally done in C or C++ by Qt, NumPy, custom extensions, and a bazillion other libraries.

        Most things seem to have an outer loop in the 10 to 60 Hz range that Python can totally handle, and when it can’t, the Python libs for this kind of thing are usually bindings to C++, which you can usually just use directly.

    1. As well as you train the network. BTW, you still need powerful GPUs to train custom networks, but once you have all the coefficients, running the classifier should be much faster on this board than on any RasPi.

      1. Is there an image classification benchmark to measure performance (speed, not accuracy)? Seems like there are quite a few options these days between the Edge TPU, the Intel USB stick, and various CUDA or OpenCL options. (A rough timing sketch follows at the end of this thread.)

        1. Any of the pretrained ImageNet models designed to be efficient would be a good candidate. MobileNet, for instance, is available for TensorFlow and Keras, and will run on Coral and Jetson at least. Don’t know about Nervana.

      1. It’s not remotely a fair fight to compare it to an ordinary Raspberry Pi – a sensible comparison would at least be a Movidius ASIC combined with a Raspberry Pi, or one of the Nvidia offerings like the Jetson TX2 or Jetson Nano.
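
      To put rough numbers on the speed question above, the simplest apples-to-apples test is to time repeated invocations of the same MobileNet-class model on each device. This is only a sketch, assuming tflite_runtime on the target; the model file name is hypothetical, and for a plain-CPU Raspberry Pi baseline you’d use the non-Edge-TPU version of the model and skip the delegate.

      ```python
      # Rough inference-speed benchmark: warm up once, then time N invocations.
      # Model file name is hypothetical. For a CPU baseline, use the
      # non-Edge-TPU model file and omit the delegate argument.
      import time
      import numpy as np
      from tflite_runtime.interpreter import Interpreter, load_delegate

      interpreter = Interpreter(
          model_path="mobilenet_v2_edgetpu.tflite",
          experimental_delegates=[load_delegate("libedgetpu.so.1")],
      )
      interpreter.allocate_tensors()
      inp = interpreter.get_input_details()[0]
      frame = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
      interpreter.set_tensor(inp["index"], frame)

      interpreter.invoke()                      # warm-up run, not timed
      runs = 100
      start = time.perf_counter()
      for _ in range(runs):
          interpreter.invoke()
      elapsed = time.perf_counter() - start
      print(f"{runs / elapsed:.1f} inferences per second")
      ```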

    1. Likewise Canada. ” Mouser does not presently [sic] sell this product in your region.”
      (which I suppose you could parse to mean they currently sell it, but I’m guessing that’s not true either)
      Darn. I even had my credit card out.

      1. Or NZ. I sent Mouser an email pointing out that we do actually exist, despite not being on most maps. I also suggested that as our queen used to lease the whole damn Island, perhaps they could send me one for old time’s sake. Basically I begged them to sell me one. We’ll see how that goes…

  4. Screw the ML accelerator, just using this thing as an i.MX8M mobo for a DIY laptop is a decent deal. It doesn’t have a ton of RAM but it’d be happy running a basic desktop with a lightweight browser, at least.

    1. I’d still rather see a flat-pack MACCHIATObin with SO-DIMM, a battery controller, and MXM. Ports all on one side, support for a decent GPU, quad A72, 16 GB DDR4, and in place of a 480 Mbps (theoretical) cap, 23.5 Gbps of network connectivity in each direction. That’s my kind of laptop.

    2. For that use I don’t see a reason to pick this over the RPi, since it’s got a similar processor and 2 GB of RAM for a quarter of the price. Perhaps I’m overlooking some other benefits; correct me if I’m wrong.

  5. The fact that this is based around a Freescale i.MX makes it a lot more “open” from the bottom up compared to an RPi. You can actually get full datasheets for the silicon, reference designs and schematics, and buy the chips easily!

  6. I very briefly considered using Google Cloud TPU for training. The prices looked good and the web interface was easy to navigate, but digging deeper I started to look for community support and how to upload my own data. Community support consists of ‘look it up on Stack Exchange or Google Groups’ ….. so that seems to mean ‘no community support’. Now try to find the means to upload your own data …. Hmmmmmn …… maybe I need to get my eyesight tested, but as far as I can tell, the provision is not there yet.

    So it appears this gadget will do some of the standard out-of-the-box stuff, but that could very quickly become quite boring. Compare this to other systems, e.g. the Intel Myriad X, where you can train your own images, albeit with a lot of struggle, or even better the Nvidia Jetson, where training custom image sets is a breeze, community support is vibrant, and the cost is reassuringly expensive.

    1. https://cdn.hackaday.io/images/8638641551864620528.jpg

      The Jetson TX2 developer board is fine for getting started, but really, I should have bought the TX2 Jetson module separately and bolted it straight to the Connecttech Orbitty carrier as shown above. The size is approximately the same as an RPi, except a lot deeper (Z axis).

      The system has all the features required for deployment and can even be used for training custom image sets. Most important are the Ethernet connector for flashing CUDA etc., and USB 3 for connecting a camera.

  7. Using the form factor of the Pi means that people will feel familiar with it; it’s easier to learn something when there are some common points of reference to something you know. Not sure why anybody would be surprised by the choice of the RPi, since both the voice and vision kits from Google AIY used them, and this is just a continuation of that project.

  8. Wow…interesting…VERY interesting…Thanks for the article!! If society is to survive mass automation and AI implementation to avoid employing human citizens, workers, we must DECENTRALIZE manufacturing by enabling more citizens with more manufacturing capabilities and greater local self-reliance. This article is an example of spreading the knowledge critical to decentralizing manufacturing…as is the entire “maker movement.” This means local, small scale manufacturing can produce the same quality products and designs as large scale AI automated factories designed to “disemploy” the rest of us! Meanwhile, be excited about AMERICAN manufacturing…let’s bring our jobs back home!!

  9. The price and specs seem decent compared to RPi + Intel Movidius so long as it really is a pure edge solution. For me, the problem is that I don’t trust Google not to use this for “Surveillance Capitalism”, vacuuming up data, and the RPi platform is so well supported. So, I will stick to RPi + Movidius for now. I have been burned too often with great hardware that has bad support or requires the cloud to work effectively.

  10. The i.MX 8M SoC (without the AI part) is the perfect candidate for a better Raspberry Pi clone! (read: Model D)
    4× ARMv8 Cortex-A53 just like the RPi 3, but at 1500 MHz (+100 MHz),
    and an ARM M4 to emulate the RPi firmware! (with twice the SRAM, at 256 KB)
    But it also has built-in gigabit Ethernet, OpenGL ES 3, Vulkan, an HEVC encoder/decoder,
    4K HDMI, 2× USB 3 with USB-C support, and 2× PCIe (for an M.2 SSD).

    It just needs new firmware that boots to the ARM M4 and emulates the mailbox for backward compatibility… and Bob’s your uncle.

    The only minuses are that it seems to be missing TV-out, and the price.

    1. I am curious too about the different abilities of this TPU vs the Kendryte K210 “Convolutional Neural Network” accelerator. However, they seem to target very different market segments as there is only 8MB on the Kendryte (vs 1GB on this Google TPU board – plus 8GB flash too). Kendryte seems more for IoT “device” products (can’t run “real” Linux).

  11. Great for mass surveillance “asset classification” in Google’s Surveillance City testing ground.. The new cameras will be a little bit bigger, but oh, think of the possibilities!

    “Alexa, identify target.. cross reference gait profile, and browsing fingerprint.”

    “Target identified.. here are their browsing trends..”

  12. One thing I never understood about all these “AI accelerator” ASICs: each company has their own flavor, yet I’ve never seen any explanation of what they do or how they work. Each one has their own spin on the ISA, thus each one has their own IDE.

    Aren’t they really just little “compute units” that take some data, pass it through their instruction set, and pass back results? Other than the ISAs being proprietary and basically unknown, can you not just use them like some parallel processing unit, like a GPU?

    (Correct me if I’m wrong here, but I can find very little about these things, only that they exist and “here’s a dev kit, start coding”.)

    1. Yes, they are exactly like that – a small CPU with a giant coprocessor bolted on. Google has a couple of versions of the TPU. The TPU1 implemented specialised arithmetic (e.g. 8-bit fixed-point multiplication and built-in instructions for common activation functions), could multiply 64K numbers per cycle, and had 24 MB of “register” (scratch) space – stuff they thought was “sufficiently accurate” for NN calcs but also “sufficiently simple” to allow a very fast hardware implementation. Details here: https://cloud.google.com/blog/products/gcp/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu.

      IIRC, TPU2 added some other arithmetic precisions and some more instructions (based on their experience with TPU1), although the most important improvement was using HBM for the memory system (which had been the bottleneck). This one – TPU3? – no doubt makes different cost/speed tradeoffs again. The TensorFlow compiler knows and JITs the instruction sets (much like Nvidia’s drivers are the only things that know the differences between their video cards).
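
      To make the 8-bit trick concrete, here is a tiny NumPy sketch of the quantize/multiply/rescale scheme described above. It illustrates the general idea of integer quantization, not Google’s actual implementation.

      ```python
      # Illustration of 8-bit quantized matrix multiplication: map floats to uint8,
      # multiply/accumulate in integers (what the matrix unit does in hardware),
      # then rescale back to floating point. Not Google's actual implementation.
      import numpy as np

      def quantize(x, scale, zero_point):
          """Map float values onto 0..255 using a scale and a zero point."""
          return np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)

      weights = np.random.randn(64, 64).astype(np.float32)
      activations = np.random.randn(64, 64).astype(np.float32)

      w_scale = np.abs(weights).max() / 127.0
      a_scale = np.abs(activations).max() / 127.0
      w_q = quantize(weights, w_scale, 128)
      a_q = quantize(activations, a_scale, 128)

      # Integer multiply-accumulate into wide (32-bit) accumulators...
      acc = (w_q.astype(np.int32) - 128) @ (a_q.astype(np.int32) - 128)
      # ...then one rescale back to real units.
      approx = acc * (w_scale * a_scale)

      exact = weights @ activations
      print("worst-case error vs float matmul:", np.abs(approx - exact).max())
      ```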
