AI Makes Hyperbolic Brain Hats A Reality

It isn’t often that the world of Hackaday intersects with the world of crafting, which is perhaps a shame because many of the skills and techniques of the two have significant overlap. Crochet, for instance, has rarely featured here, but that is about to change with [Janelle Shane]’s HAT3000, a neural network trained to produce crochet hat patterns.

Taking the GPT-2 neural network trained on Internet text and further training it on a stack of crochet hat patterns, she was able to generate AI-designed hats, which her friends on the Ravelry yarn forum set to work crocheting into real hats. It’s a follow-up to a previous knitting-based project, and instead of producing the hats you might expect, it goes off into flights of fancy. Some are visibly hat-like, while others turn into avant-garde creations that defy any attempt to match them to real heads. A whole genre of hyperbolic progressions of crochet rows produces hats with organic folds that begin to resemble brains, taxing both the stamina of the person doing the crochet and their supply of yarn.
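Fine-tuning a pre-trained language model on a niche corpus like this is well within hobbyist reach these days. Here’s a minimal sketch of the general approach using the Hugging Face transformers library; to be clear, this isn’t [Janelle]’s actual code, and the corpus file name and training settings are placeholders we’ve made up for illustration:

```python
# Sketch: fine-tune GPT-2 on a plain-text file of crochet hat patterns.
# "hat_patterns.txt" is a hypothetical corpus; hyperparameters are illustrative.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          TextDataset, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Chop the pattern corpus into fixed-length training blocks.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="hat_patterns.txt",
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hat3000",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("hat3000")
```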

Perhaps most amusingly, the neural network retains the ability to produce text, but when it does so it now inevitably steers the subject back to crochet hats. A Harry Potter sentence spawns a passage of something she aptly describes as “terrible crochet-themed erotica”, and such is the influence of the crochet patterns that this purple prose can even include enough crochet instructions to be crochetable. It would be fascinating to see whether a similar model trained with G-code from Thingiverse would produce printable designs. What would an AI make of Benchy, for example?
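You can reproduce that subject-steering behaviour yourself by sampling from a fine-tuned model with an off-topic prompt. A hedged sketch, continuing from the training example above (the “hat3000” directory is the hypothetical output of that sketch, and the prompt is our own invention):

```python
# Sketch: prompt the fine-tuned model with something off-topic and watch it
# steer back to crochet. "hat3000" is the hypothetical output directory from
# the training sketch above.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("hat3000")

inputs = tokenizer("Harry Potter drew his wand and", return_tensors="pt")
outputs = model.generate(**inputs,
                         max_length=120,
                         do_sample=True,   # sample for variety, not greedy decoding
                         temperature=0.9,  # a little extra chaos
                         top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```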

We’ve been entertained by [Janelle]’s AI work before, both naming tomato varieties, and creating pie recipes.

Thanks [Laura] for the tip.

Machine Learning With Microcontrollers Hack Chat

Join us on Wednesday, September 11 at noon Pacific for the Machine Learning with Microcontrollers Hack Chat with Limor “Ladyada” Fried and Phillip Torrone from Adafruit!

We’ve gotten to the point where a $35 Raspberry Pi can be a reasonable alternative to a traditional desktop or laptop, and microcontrollers in the Arduino ecosystem are getting powerful enough to handle some remarkably demanding computational jobs. But there’s still one area where microcontrollers seem to be lagging a bit: machine learning. Sure, there are purpose-built edge-computing SBCs, but wouldn’t it be great to be able to run AI models on versatile and ubiquitous MCUs that you can pick up for a couple of bucks?

We’re moving in that direction, and our friends at Adafruit Industries want to stop by the Hack Chat and tell us all about what they’re working on. In addition to Ladyada and PT, we’ll be joined by Meghna Natraj, Daniel Situnayake, and Pete Warden, all from the Google TensorFlow team. If you’ve got any interest in edge computing on small form-factor computers, you won’t want to miss this chat. Join us, ask your questions about TensorFlow Lite and TensorFlow Lite for Microcontrollers, and see what’s possible in machine learning way out on the edge.
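If you’d like a taste of the workflow before the chat, the single-board-computer side of TensorFlow Lite is only a few lines of Python. A minimal sketch of running an already-converted model on something like a Raspberry Pi; “model.tflite” stands in for whatever model you’ve converted, and the microcontroller flavour uses a C++ API that follows the same load, allocate, invoke pattern:

```python
# Sketch: run a converted TensorFlow Lite model with the lightweight
# tflite_runtime interpreter. "model.tflite" is a placeholder file name.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
shape = input_details[0]["shape"]
dtype = input_details[0]["dtype"]
interpreter.set_tensor(input_details[0]["index"],
                       np.zeros(shape, dtype=dtype))

interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```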

Our Hack Chats are live community events in the Hackaday.io Hack Chat group messaging. This week we’ll be sitting down on Wednesday, September 11 at 12:00 PM Pacific time. If time zones have got you down, we have a handy time zone converter.

Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.

Largest Chip Ever Holds 1.2 Trillion Transistors

We get it, press releases are full of hyperbole. Cerebras recently announced they’ve built the largest chip ever. The chip has 400,000 cores and contains 1.2 trillion transistors on a die over 46,000 square mm in area. That’s roughly the same as a square about 8.5 inches on each side. But honestly, the WSE — Wafer Scale Engine — is just most of a wafer not cut up. Typically a wafer will have lots of copies of a device on it and it gets split into pieces.
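The back-of-the-envelope arithmetic on that claim checks out:

```python
# Quick check on the "8.5 inches on a side" claim for a 46,000 mm^2 die.
import math

area_mm2 = 46_000
side_mm = math.sqrt(area_mm2)   # about 214.5 mm
side_in = side_mm / 25.4        # about 8.4 inches
print(f"{side_mm:.1f} mm, or {side_in:.2f} inches, per side")
```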

According to the company, the WSE is 56 times larger than the largest GPU on the market. The chip boasts 18 gigabytes of storage spread around the massive die. The problem isn’t so much making such a beast, although even that is tricky: a normal wafer is allowed a certain number of bad spots, so a design that uses the whole wafer has to be able to route around its defects. The real problems come from things such as interconnection and thermal management.

Continue reading “Largest Chip Ever Holds 1.2 Trillion Transistors”

Brain-Computer Interfaces: Separating Fact From Fiction On Musk’s Brain Implant Claims

When it comes to something as futuristic-sounding as brain-computer interfaces (BCI), our collective minds tend to zip straight to scenes from countless movies, comics, and other works of science-fiction (including more dystopian scenarios). Our mind’s eye fills with everything from the Borg and neural interfaces of Star Trek, to the neural recording devices with parent-controlled blocking features from Black Mirror, and of course the enslavement of the human race by machines in The Matrix.

And now there’s this Elon Musk guy, proclaiming that he’ll be wiring up people’s brains to computers starting next year, as part of this other company of his: Neuralink. Here the promises and imaginings are truly straight from the realm of sci-fi, ranging from ‘reading and writing’ the brain, to curing brain diseases, to merging human minds with artificial intelligence. How much of this is just investor speak? Please join us as we take a look at BCIs, neuroprosthetics, and what we can expect of these technologies in the coming years.

Continue reading “Brain-Computer Interfaces: Separating Fact From Fiction On Musk’s Brain Implant Claims”

Neural Network In Glass Requires No Power, Recognizes Numbers

We’ve all come to terms with a neural network doing jobs such as handwriting recognition. The basics have been in place for years and the recent increase in computing power and parallel processing has made it a very practical technology. However, at the core level it is still a digital computer moving bits around just like any other program. That isn’t the case with a new neural network fielded by researchers from the University of Wisconsin, MIT, and Columbia. This panel of special glass requires no electrical power, and is able to recognize gray-scale handwritten numbers.

Continue reading “Neural Network In Glass Requires No Power, Recognizes Numbers”

Keep Pesky Cats At Bay With A Machine-Learning Turret Gun

It doesn’t take long after getting a cat in your life to learn who’s really in charge. Cats do pretty much what they want to do, when they want to do it, and for exactly as long as it suits them. Any correlation with your wants and needs is strictly coincidental, and subject to change without notice, because cats.

[Alvaro Ferrán Cifuentes] almost learned this the hard way, when his cat developed a habit of exploring the countertops in his kitchen and nearly turned on the cooktop while he was away. To modulate this behavior, [Alvaro] built this AI Nerf turret gun. The business end of the system is just a gun mounted on a pan-tilt base made from 3D-printed parts and a pair of hobby servos. A webcam rides atop the gun and feeds into a PC running software that implements the YOLOv3 localization algorithm. The program finds the cat, tracks its centroid, and swivels the gun to match it. If the cat stays in the no-go zone above the countertop for three seconds, he gets a dart in his general direction. [Alvaro] found that the noise of the gun tracking him was enough to send the cat scampering, proving that cats are capable of learning as long as it suits them.
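For anyone wanting to torment their own cat, the detection side of a build like this is surprisingly approachable. A rough sketch of the tracking loop using OpenCV’s DNN module (this isn’t [Alvaro]’s actual code; the weight and config files are the standard Darknet YOLOv3 releases, and the servo and firing logic are stubbed out):

```python
# Sketch: YOLOv3 cat detection with OpenCV's DNN module, roughly the shape
# of the tracking loop described above. The servo command is left as a stub.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()
CAT_CLASS_ID = 15  # 'cat' in the COCO class list YOLOv3 is trained on

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(layer_names):
        for det in output:
            scores = det[5:]
            if np.argmax(scores) == CAT_CLASS_ID and scores[CAT_CLASS_ID] > 0.5:
                # Detections give centre x/y as fractions of the frame.
                cx, cy = int(det[0] * w), int(det[1] * h)
                # Here you would map (cx, cy) to pan/tilt servo angles and,
                # after a three-second dwell in the no-go zone, fire a dart.
                print("cat at", cx, cy)
```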

We like this build and appreciate any attempt to bring order to the chaos a cat can bring to a household. It also puts us in mind of [Matthias Wandel]’s recent attempt to keep warm in his shop, although his detection algorithm was much simpler.

Continue reading “Keep Pesky Cats At Bay With A Machine-Learning Turret Gun”

Wimbledon 2019: IBM’s Slammtracker AI Technology Heralds The Demise Of The Human Player

Whilst we patiently wait for the day that Womble-shaped robots replace human tennis players at Wimbledon, we can admire the IBM-powered AI technology that the organisers of the Wimbledon tennis tournament use to enhance the experience for TV and phone viewers.

As can be expected, the technology tracks the ball and analyses player gestures and crowd cheers and boos, but it can’t yet discern more subtle player behaviour such as serving an ace or the classic John McEnroe ‘smash your racket on the ground’ stunt. Currently a large number of expert human sidekicks are required to record these facets and manually upload them into the huge Watson-driven analytics system.

Phone apps are possibly the best place to see the results of the IBM Slammtracker system, and are perfect for the casual tennis trainspotter. It would be interesting to see the system’s intrinsic AI bias at work – whether it can compensate for the louder cheering that greets a popular celebrity, as opposed to the skill, or fluke shot, of a rank outsider. We also wonder if it will be misogynistic – will it focus on men rather than women in the mixed doubles, or the other way round? Will it be racist? And when will the umpires be replaced with 100% AI?

Finally, whilst we at Hackaday appreciate the value of sport and exercise and the technology behind the apps, many of us have no time to mindlessly watch a ball go backwards and forwards across our screens, even if it is accompanied by satisfying grunts and the occasional racket-to-ground smash. We’d much rather entertain ourselves with the idea of building the robots that will surely one day make watching human tennis players a thing of the past.