Ztachip Accelerates TensorFlow And Image Workloads

[Vuong Nguyen] clearly knows his way around artificial intelligence accelerator hardware, having created ztachip: an open-source accelerator platform for AI and traditional image-processing workloads. Ztachip (pronounced “zeta-chip”) contains an array of custom processors and is not tied to one particular architecture. It implements a new tensor programming paradigm of [Vuong]’s own design, which can accelerate TensorFlow tasks but is not limited to them. In fact, it can process TensorFlow workloads in parallel with non-AI tasks, as the video below shows.

A RISC-V core, based on the VexRiscv design, is used as the host processor that handles distributing the application across the accelerator. VexRiscv is quite interesting in its own right. Written in SpinalHDL (an HDL embedded in Scala), it’s super configurable, and a single sbt command (for example, sbt "runMain vexriscv.demo.GenFull") produces a Verilog core ready to drop into the design.

A Digilent Arty A7, an Arducam, and a VGA PMOD are all you need

From a hardware design perspective, the RISC-V core hooks up to an AXI crossbar, with the AXI-Lite buses multiplexed as is usual in the AMBA AXI ecosystem. The Ztachip core and a DDR3 controller hang off the crossbar as well, together with a camera interface and a VGA video output.

Apart from the FPGA-specific DDR3 controller and AXI crossbar IP, the rest of the design is generic RTL, which is good news for portability. The demo below deploys onto an Artix-7-based Digilent Arty A7 with a VGA PMOD module, with little else needed. Pre-built Xilinx IP is provided, but targeting a different FPGA shouldn’t be a huge task for the experienced FPGA ninja.

Ztachip top-level architecture

The magic happens in the Ztachip core, which is mostly an array of Pcores. Each Pcore has both vector and scalar processing capability, making it super flexible. The Tensor Engine (internally, the “dataplane processor”) is in charge here, feeding instructions from the RISC-V core into the Pcore array together with image data, as well as streaming video data back out. The camera is only a 0.3 MP Arducam and the video output is VGA resolution, but give it a bigger FPGA and those limits could be raised.

This domain-specific approach uses a heavily modified C-like language (with a custom compiler) to describe the application to be distributed across the accelerator array. We couldn’t find any documentation on the language itself, but there are a few example algorithms.

The demo video shows a real-time mix of four algorithms running in parallel: object classification (Google’s TensorFlow MobileNet-SSD, a pre-trained AI model), Canny edge detection, Harris corner detection, and optical flow, which gives it a Predator-like motion vision.
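For a sense of what those four algorithms involve, here’s a rough desktop-class equivalent in Python with OpenCV. To be clear, this is not ztachip code, just the same vision pipeline as a PC would run it, and the MobileNet-SSD model file names are placeholders:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                      # webcam stand-in for the Arducam
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Placeholder model files -- any TensorFlow MobileNet-SSD export will do
    net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                        "ssd_mobilenet_v2.pbtxt")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        edges = cv2.Canny(gray, 100, 200)                         # Canny edges
        corners = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)  # Harris corners
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)  # dense optical flow

        net.setInput(cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True))
        detections = net.forward()                                # MobileNet-SSD boxes

        prev_gray = gray
        cv2.imshow("edges", edges)
        if cv2.waitKey(1) == 27:                                  # Esc to quit
            break

On a PC these stages would typically run one after another on a CPU or GPU; the point of ztachip is running all four concurrently across its Pcore array.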

[Vuong] reckons that, efficiency-wise, it is 5.5x more computationally efficient than a Jetson Nano and 37x more than Google’s Edge TPU. These are bold claims, to say the least, but who are we to argue with such a clearly talented engineer?

We cover many AI-related topics, like this AI-assisted tap-typing gadget, for starters. And not wanting to forget about the original AI hardware, the good old-fashioned neuron, we’ve got that covered as well!


Blog Title Optimizer Uses AI, But How Well Does It Work?

[Max Woolf] sometimes struggles to create ideal headlines for his blog posts, and decided to apply his experience with machine learning to the problem. He asked: could an AI be trained to optimize his blog titles? It is a fascinating application of natural language processing, and [Max] explains all about what it does and how it works.

The machine learning model [Max] uses is GPT-3, a large language model that generates natural-seeming human language and can be tweaked in various ways. [Max] uses OpenAI’s GPT-3 API (which, by the way, is much easier to experiment with than one might think), and here is the basic workflow for his title optimizer, sketched in code after the list:

  1. The optimizer takes a blog post title as input.
  2. OpenAI’s pre-trained GPT-3 engine is used to generate six alternate titles.
  3. For each of those alternate titles, a fine-tuned version of GPT-3 is consulted to judge how “good” it is, based on custom training data. (“Good” in this context means “similar to titles of successful submissions on Hacker News”, but more on that in a moment.)
  4. The results are printed.
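
Here’s a minimal sketch of that loop using the 2022-era OpenAI Python client. The model names, prompt wording, and fine-tune identifier are our own placeholders, not [Max]’s actual code:

    import openai

    openai.api_key = "sk-..."  # your OpenAI API key

    def generate_alternates(title, n=6):
        """Step 2: ask stock GPT-3 for n alternate titles."""
        resp = openai.Completion.create(
            model="text-davinci-002",   # placeholder engine choice
            prompt=f"Rewrite this blog post title to be more compelling:\n{title}\n",
            n=n,
            max_tokens=32,
            temperature=0.9,            # high temperature for variety
        )
        return [choice.text.strip() for choice in resp.choices]

    def judge(title):
        """Step 3: score a title with a fine-tuned GPT-3 classifier."""
        resp = openai.Completion.create(
            model="davinci:ft-yourorg-2022",   # placeholder fine-tune name
            prompt=f"{title}\n\n###\n\n",      # fine-tuning prompt separator
            max_tokens=1,
        )
        return resp.choices[0].text.strip()    # e.g. a "good" / "bad" label

    original = "Blog Title Optimizer Uses AI, But How Well Does It Work?"
    for candidate in [original] + generate_alternates(original):
        print(judge(candidate), candidate)     # step 4: print the results

The interesting part is step 3: rather than trusting GPT-3’s own taste, the judge model is fine-tuned on real Hacker News titles and their performance, which is what anchors “good” to something measurable.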



Hackaday Links: August 14, 2022

What’s this? News about robot dogs comes out, and there’s no video of the bots busting a move on the dance floor? Nope — it looks like quadruped robots are finally going to work for real, with “ground drones” being deployed to patrol Cape Canaveral. Rather than the familiar and friendly Boston Dynamics “Big Dog” robot, the US Space Force went with Ghost Robotics Vision 60 Q-UGVs, or “quadruped unmanned ground vehicles.” The bots share the same basic layout as Big Dog but have a decidedly more robust, and somehow more sinister, appearance. The dogs are IP67-rated for all-weather use and will be deployed for “damage assessments and patrols,” whatever that means. Since this is the same dog that has had a gun mounted on it, though, we’d be careful not to stray too far from the tours at Kennedy Space Center.


Microsoft’s New Simulator Helps Train Drone AIs

Testing any kind of project in the real world is expensive. You have to haul people and equipment around, which costs money, and if you break anything, you have to pay for that too! Simulation tends to come first. Making mistakes in a simulation is much cheaper, and the lessons learned can later be verified in the real world. If you want to learn to fly a quadcopter, the best thing to do is get some time behind the sticks of a simulator before you even purchase anything with physical whirly blades.

Oddly enough, the same goes for AI. Microsoft has built a simulation product, by the name of Project AirSim, to aid the development of artificial intelligence systems for drones. It aims to provide a comprehensive environment for testing drone AI systems, making development faster, cheaper, and more practical.
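Project AirSim builds on Microsoft’s earlier open-source AirSim simulator, and while we haven’t dug into the new product’s API, classic AirSim gives a flavor of the approach: the simulated drone is fully scriptable, so an AI agent can fly thousands of missions without bending a single prop. Here’s a minimal sketch using the original AirSim Python client (Project AirSim’s own interface may well differ):

    import airsim

    client = airsim.MultirotorClient()   # connect to the simulator over RPC
    client.confirmConnection()
    client.enableApiControl(True)        # hand control to the script
    client.armDisarm(True)

    client.takeoffAsync().join()         # blocking takeoff
    client.moveToPositionAsync(-10, 10, -10, 5).join()  # NED coordinates, 5 m/s

    # grab a camera frame to feed a vision / AI pipeline
    png = client.simGetImage("0", airsim.ImageType.Scene)

    client.landAsync().join()
    client.armDisarm(False)
    client.enableApiControl(False)

Swap the human at the sticks for a training loop, and the same interface becomes a data generator for a drone AI.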


AI Creates Your Spreadsheets, Sometimes

We’ve been interested in looking at how AI can process things other than silly images. That’s why the “Free AI Bot that Generates the Excel Formula for Any Problem” caught our eye. Based on GPT-3, it supposedly transforms your problem description into a formula suitable for Excel or Google Sheets.

Our first prompt didn’t work out very well. But that was sort of our fault. When they say “Excel formula” they mean that quite literally. So trying to describe the actual result you want in terms of columns or rows seems to be beyond it. Not realizing that, we asked:

If the sum of column H is greater than 50, multiply column A by 0.33

And got:

=IF(SUM(H:H)>50,A*0.33,0)

Which is close, but not really how anyone even mildly proficient with Excel would interpret that request; most users would read it as a per-row operation, something like =IF(SUM(H:H)>50,A1*0.33,A1) filled down an output column. But that’s not fair. It really needs to be a y=f(x) sort of problem, we suppose.

A Better Try


AI Image Generation Sharpens Your Bad Photos And Kills Photography?

We don’t fully understand the appeal of asking an AI for a picture of a gorilla eating a waffle while wearing headphones. However, [Micael Widell] shows something in a recent video that might be the best use we’ve seen yet of DALL-E 2. Instead of concocting new photos, you can apparently use the same technology for cleaning up your own rotten pictures. You can see his video, below. The part about DALL-E 2 editing is at about the 4:45 mark.

[Nicholas Sherlock] fed the AI a picture of a fuzzy ladybug and asked it to bring the subject into focus. It did. He also fed in some other pictures and asked it to make subtle variations of them. It did a pretty good job of that, too.


A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot

While it is hard to tell with a photo, this robot looks more like a model of an old-fashioned clock than anything resembling a Nixie tube. It’s the kind of project that could have been created by anyone with a little bit of Arduino tinkering experience. In this case, the 3D printer used by the Nixie clock project is a Prusa i3 (which is the same printer used to make the original Nixie tubes).

The Nixie clock project was started by a couple of students from the University of Washington who were bored one day and decided to have a go at creating their own timepiece. After a few prototypes and tinkering around with the code, they came up with a design for the clock that was more functional than ornate.

The result is a great example of how one can create a functional and aesthetically pleasing project with a little bit of free time.

Confused yet? You should be.

If you’ve read this far then you’re probably scratching your head and wondering what has come over Hackaday. Should you not have already guessed, the paragraphs above were generated by an AI — in this case Transformer — while the header image came courtesy of the popular DALL-E Mini, now rebranded as Craiyon. Both of them were given the most Hackaday title we could think of, “A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot”, and told to get on with it. This exercise was sparked by curiosity following the viral success of AI generators, which posed the question of whether an AI could make a passable stab at a Hackaday piece. Transformer works on a prompt model in which the operator is offered a choice of several sentence fragments at each step, so the text reflects those choices, but the choosing could equally have fallen on any of the options.

To a Hackaday writer, the text is both reassuring, because it doesn’t manage to convey anything useful, and slightly shocking, because from just that single prompt it has created clear, meaningful sentences which on another day might have flowed from a Hackaday keyboard as part of a real article. It’s likely that we’ve found our way into whatever corpus trained its model, and likely too that subject matter so Hackaday-targeted would cause it to zero in on that part of its source material, but despite that it’s unnerving to realise that a computer somewhere might just have your number. For now, though, Hackaday remains safe at the keyboards of a group of meatbags.

We’ve considered the potential for AI garbage before, when we looked at GitHub Copilot.