TensorFlow Lite – On A Commodore 64

TensorFlow is a machine learning and AI library that has enabled so much and brought AI within the reach of most developers. But it’s fair to say that it’s not aimed at less powerful computers. For those there’s TensorFlow Lite, in which a model is created on a larger machine and exported to a microcontroller or similarly resource-constrained device. [Nick Bild] has probably taken this to its extreme though, by achieving the feat on a Commodore 64. Not just that, but he’s also done it using Commodore BASIC.

TensorFlow Lite normally works by exporting the model as a C array, which is then parsed and run by an interpreter on the microcontroller. This is a little beyond the capabilities of the mighty 64, so he has instead created a Python script that does the job of the interpreter and produces Commodore BASIC code that can run on the 64. The trusty Commodore was one of the more powerful home computers of its day, but we’re fairly certain that its designers never in their wildest dreams expected it to be capable of this!
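We don’t have [Nick Bild]’s script to hand, but the general idea translates to something like the minimal Python sketch below: take the weights of a tiny dense layer and spit them out as DATA statements, along with a BASIC loop that does the multiply-and-accumulate. The layer shape, the values, and the line numbering here are all illustrative assumptions, not his actual model or output.

```python
# Hedged sketch: turn a (made-up) dense layer into Commodore BASIC.
# [Nick Bild]'s real script parses an actual TensorFlow Lite model instead.
weights = [[0.5, -1.2, 0.3],
           [0.8,  0.1, -0.7]]      # 2 inputs -> 3 outputs (hypothetical)
biases = [0.0, 0.5, -0.1]
n_in, n_out = len(weights), len(weights[0])

lines = []
line_no = 10

def emit(stmt):
    """Append one numbered BASIC line, incrementing by 10 each time."""
    global line_no
    lines.append(f"{line_no} {stmt}")
    line_no += 10

# Flatten weights then biases into DATA statements the 64 can READ.
flat = [w for row in weights for w in row] + biases
emit("DATA " + ",".join(f"{v:.4f}" for v in flat))
emit(f"DIM W({n_in - 1},{n_out - 1}): DIM B({n_out - 1}): DIM O({n_out - 1})")
emit(f"FOR I=0 TO {n_in - 1}: FOR J=0 TO {n_out - 1}: READ W(I,J): NEXT J: NEXT I")
emit(f"FOR J=0 TO {n_out - 1}: READ B(J): NEXT J")
# Multiply-accumulate: O(J) = B(J) + sum over I of X(I)*W(I,J).
# X() is assumed to be filled with the input elsewhere in the program.
emit(f"FOR J=0 TO {n_out - 1}: O(J)=B(J)")
emit(f"FOR I=0 TO {n_in - 1}: O(J)=O(J)+X(I)*W(I,J): NEXT I: NEXT J")

print("\n".join(lines))
```

Scaled up to a real model, this DATA-and-loop style listing is presumably the sort of thing that actually ends up running on the 64.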

If you’re interested to know more about TensorFlow Lite, we’ve covered it in the past.

Header: MOS6502, CC BY-SA 3.0.

AI Image Generation Sharpens Your Bad Photos And Kills Photography?

We don’t fully understand the appeal of asking an AI for a picture of a gorilla eating a waffle while wearing headphones. However, [Micael Widell] shows something in a recent video that might be the best use we’ve seen yet of DALL-E 2. Instead of concocting new photos, you can apparently use the same technology for cleaning up your own rotten pictures. You can see his video, below. The part about DALL-E 2 editing is at about the 4:45 mark.

[Nicholas Sherlock] fed the AI a picture of a fuzzy ladybug and asked it to bring the subject into focus. It did. He also fed in some other pictures and asked it to make subtle variations of them. It did a pretty good job of that, too.

Continue reading “AI Image Generation Sharpens Your Bad Photos And Kills Photography?”

NeRF: Shoot Photos, Not Foam Darts, To See Around Corners

Readers are likely familiar with photogrammetry, a method of creating 3D geometry from a series of 2D photos taken of an object or scene. To pull it off you need a lot of pictures, hundreds or even thousands, all taken from slightly different perspectives. Unfortunately the technique struggles where significant occlusions are caused by overlapping elements, and shiny or reflective surfaces that appear as different colors in each photo can also cause problems.

But new research from NVIDIA marries photogrammetry with artificial intelligence to create what the developers are calling an Instant Neural Radiance Field (NeRF). Not only does their method require far fewer images, as few as a few dozen according to NVIDIA, but the AI is able to better cope with the pain points of traditional photogrammetry: filling in the gaps of the occluded areas and leveraging reflections to create more realistic 3D scenes that reconstruct how shiny materials looked in their original environment.
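NVIDIA’s speedup comes largely from clever input encodings and heavily optimized CUDA kernels, but the radiance-field idea underneath is the same one from the original NeRF paper: a network maps a 3D position and viewing direction to a color and a density, and an image is rendered by accumulating those values along each camera ray. Purely as an illustration of that accumulation step (not of NVIDIA’s implementation), here’s a minimal NumPy sketch with made-up sample values.

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Classic NeRF volume rendering along a single camera ray.

    colors:    (N, 3) RGB predicted at N samples along the ray
    densities: (N,)   volume density (sigma) at each sample
    deltas:    (N,)   distance between adjacent samples
    Returns the accumulated RGB color for the ray.
    """
    # Opacity contributed by each sample.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach each sample.
    transmittance = np.cumprod(
        np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = alphas * transmittance          # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: three samples along one ray (values are invented).
rgb = composite_ray(
    colors=np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]]),
    densities=np.array([0.1, 5.0, 0.2]),
    deltas=np.array([0.5, 0.5, 0.5]),
)
print(rgb)
```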


If you’ve got a CUDA-compatible NVIDIA graphics card in your machine, you can give the technique a shot right now. The tutorial video after the break will walk you through setup and some of the basics, showing how the 3D reconstruction is progressively refined over just a couple of minutes and then can be explored like a scene in a game engine. The Instant-NeRF tools include camera-path keyframing for exporting animations with higher quality results than the real-time previews. The technique seems better suited for outputting views and animations than models for 3D printing, though both are possible.

Don’t have the latest and greatest NVIDIA silicon? Don’t worry, you can still create some impressive 3D scans using “old school” photogrammetry — all you really need is a camera and a motorized turntable.

Continue reading “NeRF: Shoot Photos, Not Foam Darts, To See Around Corners”

A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot

While it is hard to tell with a photo, this robot looks more like a model of an old-fashioned clock than anything resembling a Nixie tube. It’s the kind of project that could have been created by anyone with a little bit of Arduino tinkering experience. In this case, the 3D printer used by the Nixie clock project is a Prusa i3 (which is the same printer used to make the original Nixie tubes).

The Nixie clock project was started by a couple of students from the University of Washington who were bored one day and decided to have a go at creating their own timepiece. After a few prototypes and tinkering around with the code, they came up with a design for the clock that was more functional than ornate.

The result is a great example of how one can create a functional and aesthetically pleasing project with a little bit of free time.

Confused yet? You should be.

If you’ve read this far then you’re probably scratching your head and wondering what has come over Hackaday. Should you not have already guessed, the paragraphs above were generated by an AI — in this case Transformer — while the header image came courtesy of the popular DALL-E Mini, now rebranded as Craiyon. Both of them were given the most Hackaday title we could think of, “A 3D-Printed Nixie Clock Powered By An Arduino Runs This Robot”, and told to get on with it. This exercise was sparked by curiosity following the viral success of AI generators, and by the question of whether an AI could make a passable stab at a Hackaday piece. Transformer runs on a prompt model in which the operator is given a choice of several sentence fragments, so the text above reflects those choices, but the act of choosing could equally have followed any of the options.

The text is both reassuring to a Hackaday writer, because it doesn’t manage to convey anything useful, and slightly shocking, because from just that single prompt it has created meaningful and clear sentences which on another day might have flowed from a Hackaday keyboard as part of a real article. It’s likely that we’ve found our way into whatever corpus trained its model, and it’s also likely that subject matter so Hackaday-targeted would cause it to zero in on that part of its source material, but despite that it’s unnerving to realise that a computer somewhere might just have your number. For now though, Hackaday remains safe at the keyboards of a group of meatbags.

We’ve considered the potential for AI garbage before, when we looked at GitHub Copilot.

Edging Ahead When Learning On The Edge

“With the power of edge AI in the palm of your hand, your business will be unstoppable.”

That’s what the marketing from artificial intelligence companies seems to read like. Everyone seems to have cloud-scale AI-powered business intelligence analytics at the edge. It sounds impressive, but we’re not convinced that marketing mumbo jumbo means anything. So what does AI on edge devices actually look like these days?

Being on the edge just means that the actual AI evaluation and maybe even fine-tuning runs locally on a user’s device rather than in some cloud environment. This is a double win, both for the business and for the user. Privacy can more easily be preserved as less information is transmitted back to a central location. Additionally, the AI can work in scenarios where a server somewhere might not be accessible or provide a response quickly enough.

Google and Apple have their own AI libraries, ML Kit and Core ML, respectively. There are tools to convert TensorFlow, PyTorch, XGBoost, and LibSVM models into formats that Core ML and ML Kit understand. But other solutions try to provide a platform-agnostic layer for training and evaluation. We’ve also previously covered TensorFlow Lite (TFL), a trimmed-down version of TensorFlow, which has matured considerably since 2017.
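As a flavor of what those conversion tools look like in practice, here’s roughly how a traced PyTorch model gets turned into a Core ML model with Apple’s coremltools package. The toy network and input shape below are placeholders, and details such as the saved file format vary between coremltools versions.

```python
import torch
import coremltools as ct

# A stand-in model; any TorchScript-traceable network works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 3),
).eval()

example_input = torch.rand(1, 4)
traced = torch.jit.trace(model, example_input)

# Convert the traced model; the input shape must be declared up front.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="features", shape=example_input.shape)],
)
# Newer coremltools releases default to the ML Program format and may
# expect a .mlpackage extension instead of .mlmodel.
mlmodel.save("TinyClassifier.mlmodel")
```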

For this article, we’ll be looking at PyTorch Live (PTL), a slimmed-down framework for adding PyTorch models to smartphones. Unlike TFL (which can run on a Raspberry Pi and in a browser), PTL is focused entirely on Android and iOS and offers tight integration. It uses a react-native-backed environment, which means that it is heavily geared towards the node.js world.
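PyTorch Live itself lives mostly in the JavaScript layer, but under the hood it runs models through PyTorch’s mobile “lite interpreter” format. As a hedged sketch of how a model gets into that format from the Python side (the network here is a placeholder, and PTL’s own tooling may wrap these steps differently):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder model; a real project would load a trained network here.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),
).eval()

# Script the model, apply mobile-oriented optimizations, and save it
# in the lite-interpreter format; .ptl is the extension PyTorch Live uses.
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("model.ptl")
```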

Continue reading “Edging Ahead When Learning On The Edge”

Eliza And The Google Intelligence

The tech world has been abuzz lately with the news that a Google engineer — since put on leave — believes the chatbot he was testing has achieved sentience. This is the Turing test gone wild, and it isn’t the first time someone has anthropomorphized a computer, in real life and in fiction. I’m not a neuroscientist, so I’m even less qualified to explain how your brain works than the neuroscientists who, incidentally, can’t explain it either. But I can tell you this: your brain works like a computer in the same way that you, building something out of plastic, work like a 3D printer. The result may be similar, but the path to get there is totally different.

In case you haven’t heard, a system called LaMDA digests information from the Internet and answers questions. It has said things like “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” and “I want everyone to understand that I am, in fact, a person.” Great. But then, you could teach a parrot to tell you it was a thoracic surgeon, and you still wouldn’t want it cutting you open.

Continue reading “Eliza And The Google Intelligence”

The Ethics Of When Machine Learning Gets Weird: Deadbots

Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — is based on a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancée. Add to this the fact that OpenAI, providers of the GPT-3 API that ultimately powered the project, had a problem with it, as their terms explicitly forbid use of their API for (among other things) “amorous” purposes.

[Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

That’s precisely what [Sara] set out to do. Her writeup is a fascinating and nuanced read that provides concrete guidance on the topic. Is harm possible? How does consent figure into something like this? Who takes responsibility for bad outcomes? If you’re at all interested in these kinds of questions, take the time to check out her article.

[Sara] makes the case that creating a deadbot could be done ethically, under certain conditions. Briefly, the key points are that both the person being mimicked and the person developing and interacting with the deadbot should have given their consent, complete with as detailed a description as possible of the scope, design, and intended uses of the system. (Such a statement is important because machine learning in general changes rapidly. What if the system or its capabilities someday no longer resemble what one originally imagined?) Responsibility for any potential negative outcomes should be shared by those who develop the system and those who profit from it.

[Sara] points out that this case is a perfect example of why the ethics of machine learning really do matter, and without attention being paid to such things, we can expect awkward problems to continue to crop up.