AI Makes Linux Do What You Mean, Not What You Say

We’ve always been envious of the computers on the Star Trek Enterprise. You can ask them a hazy question and they will — usually — figure out what you want. Even the automatic doors seemed to know the difference between someone walking into a turbolift and someone being thrown into the door during a fight. [River] decided to try out his new API keys for the private beta of an AI service that generates Linux commands from a plain-English description. How does it work? Watch the video below and find out.

Some examples work fairly well. In response to “email the Rickroll video to Jeff Bezos,” the system produced a curl command and an e-mail to what we assume is the right place. “Find all files in the current directory bigger than 1 GB” works, too.
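For reference, the classic hand-rolled answer to that second request is a shell one-liner like `find . -type f -size +1G`; here’s the same idea as a small Python sketch (our own, not the service’s output):

```python
# Our own take on "find all files bigger than 1 GB", not the AI's output:
# walk the current directory tree and report any file over the threshold.
from pathlib import Path

GIGABYTE = 1024 ** 3
for path in Path(".").rglob("*"):
    if path.is_file() and path.stat().st_size > GIGABYTE:
        print(path)
```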

Continue reading “AI Makes Linux Do What You Mean, Not What You Say”

Imaging The Past With Time-Travel Rephotography

Have you ever noticed that people in old photographs look a bit weird? Deep wrinkles, sunken cheeks, and exaggerated blemishes are commonplace in photos taken up to the early 20th century. Surely not everybody looked like this, right? Maybe it was an odd makeup trend — was it just a fashionable look back then?

Not quite — it turns out that the culprit here is the film itself. The earliest glass-plate emulsions used in photography were only sensitive to the highest-frequency light, that which fell in the blue to ultraviolet range. Since human skin owes much of its color to red blood, this posed a real problem. While some of the historical figures we see in old photos may have benefited from an improved skincare regimen, the primary source of their haunting visage was that the photographic techniques of the day were simply incapable of capturing skin properly. This led to the sharp creases and dark lips we’re so used to seeing.
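You can approximate that spectral response yourself: since those early emulsions recorded mostly blue light, rendering only the blue channel of a modern color portrait gives a rough facsimile of the effect. A minimal sketch using Pillow, with an illustrative file name:

```python
# Crude simulation of early blue-sensitive film: keep only the blue
# channel, so anything reddish (lips, flushed skin, blemishes) goes dark.
from PIL import Image

photo = Image.open("portrait.jpg").convert("RGB")  # illustrative file name
_, _, blue = photo.split()                         # discard red and green
blue.save("early_film_look.png")                   # grayscale "period" render
```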

Of course, primitive film isn’t the only thing separating antique photos from the 42-megapixel behemoths your camera can take nowadays. Film processing steps had the potential to introduce dust and other blemishes to the image, and over time the prints can fade and age in a variety of ways that depend on the chemicals they were processed in. Rolled together, all of these factors make it difficult to paint an accurate portrait of some of history’s famous faces. Before you start to worry that you’ll never know just what Abraham Lincoln looked like, you might consider taking a stab at Time-Travel Rephotography.

Amazingly, Time-Travel Rephotography is a technique that actually lives up to its very cool name. It uses a neural network (specifically, the StyleGAN2 framework) to take an old photo and project it into the space of high-res modern photos the network was trained on. This allows it to perform colorization, skin correction, upscaling, and various noise reduction and filtering operations in a single step, with remarkable results. Make sure you check out the project’s website to see some of the outputs at full resolution.
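At its core, that projection step is GAN inversion: optimize a latent code until the generator’s output matches the damaged photo, then read the restored image off the generator. Here’s a minimal sketch of that loop, with a toy network standing in for StyleGAN2 and plain pixel loss (the real project layers perceptual losses and a film-degradation model on top):

```python
# GAN-inversion sketch: find a latent code w whose generated image matches
# the target photo. A toy generator stands in for StyleGAN2 here.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for StyleGAN2's synthesis network."""
    def __init__(self, latent_dim=512, side=64):
        super().__init__()
        self.side = side
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * side * side), nn.Sigmoid(),
        )
    def forward(self, w):
        return self.net(w).view(-1, 3, self.side, self.side)

G = ToyGenerator()
target = torch.rand(1, 3, 64, 64)             # the antique photo, as a tensor
w = torch.zeros(1, 512, requires_grad=True)   # latent code to optimize
opt = torch.optim.Adam([w], lr=0.01)

for step in range(200):
    opt.zero_grad()
    loss = ((G(w) - target) ** 2).mean()      # real system: perceptual losses
    loss.backward()
    opt.step()

restored = G(w)  # an image drawn from the "modern photo" manifold
```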

We’ve seen AI upscaling before, but this project takes it to the next level by completely restoring antique photographs. We’re left wondering what techniques will be available 100 years from now to restore JPEGs stored way back in 2021, bringing them up to “modern” viewing standards.

Thanks to [Gus] for the tip!

Continue reading “Imaging The Past With Time-Travel Rephotography”

Is That A Cat Or Not?

Pandemic-induced boredom takes people in many different ways. Some of us go for long walks, others learn to speak a new language, while yet more unleash their inner gaming streamer. [Niklas Fauth] has taken a break from his other projects by creating a very special project indeed. A cat detector! No longer shall you ponder whether or not the object or creature before you is a cat; now that existential question can be answered by a gadget.

This is more of a novelty project than one of special new tech: he’s taken what looks to be the shell from a cheap infra-red thermometer and put a Raspberry Pi Zero with camera and a small screen into it. This in turn runs TensorFlow with the COCO-SSD object identification model. The device has a trigger; when it’s pressed, the device photographs whatever it’s pointed at and applies the model to detect whether the subject is a cat or not. The video posted to Twitter is below the break, and we can’t dispute its usefulness in the feline-spotting department.
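If you want to roll your own, the detection side is short. A hedged sketch using a COCO-trained SSD from TensorFlow Hub (the model URL, score threshold, and file name are our assumptions; [Niklas]’s exact setup on the Pi may differ):

```python
# "Is that a cat?" with a COCO-trained SSD. The TF Hub model URL and
# the score threshold are assumptions, not [Niklas]'s exact build.
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")
COCO_CAT_ID = 17  # "cat" in the 90-class COCO label map

def is_cat(image_path, threshold=0.5):
    img = tf.io.decode_image(tf.io.read_file(image_path), channels=3)
    result = detector(tf.expand_dims(img, 0))           # batch of one
    classes = result["detection_classes"][0].numpy().astype(int)
    scores = result["detection_scores"][0].numpy()
    return any(c == COCO_CAT_ID and s >= threshold
               for c, s in zip(classes, scores))

print("Cat!" if is_cat("subject.jpg") else "Not a cat.")
```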

[Niklas] has featured here more than once in the past. This is not his only pandemic project, either.

Continue reading “Is That A Cat Or Not?”

Machine Learning Helps You Track Your Internet Misery Index

We all seem to intuitively know that a lot of what we do online is not great for our mental health. Hang out on enough social media platforms and you can practically feel the changes your mind inflicts on your body as a result of what you see — the racing heart, the tight facial expression, the clenched fists raised in seething rage. Not on Hackaday, of course — nothing but sweetness and light here.

That’s all highly subjective, of course. If you’d like to quantify your online misery more objectively, take a look at the aptly named BrowZen, a machine learning application by [Nick Bild]. Built around an NVIDIA Jetson Xavier NX and a web camera, BrowZen captures images of the user’s face periodically. The expression on the user’s face is classified using a facial recognition model trained to recognize expressions associated with emotions like anger, surprise, fear, and happiness. The app captures your mood and which website you’re currently looking at and stores the results in a database. Handy charts let you know which sites are best for your state of mind; it’s not much of a surprise that Twitter induces rage while Hackaday pushes [Nick]’s happiness button. See? Sweetness and light.
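The overall loop is simple to picture: grab a frame, classify the expression, note the active site, log it. A rough sketch of that plumbing (the emotion model and the way the current website is read are placeholders; [Nick]’s Jetson implementation differs in detail):

```python
# BrowZen-style capture/classify/log loop, heavily simplified.
import sqlite3
import time
import cv2  # pip install opencv-python

def classify_emotion(frame):
    # Placeholder: a real build runs a trained facial-expression model
    # here and returns e.g. "anger", "surprise", "fear", or "happiness".
    return "happiness"

def active_site():
    # Placeholder: somehow ask the browser which site is in front.
    return "hackaday.com"

db = sqlite3.connect("browzen.db")
db.execute("CREATE TABLE IF NOT EXISTS mood (ts REAL, site TEXT, emotion TEXT)")

cam = cv2.VideoCapture(0)            # the web camera
ok, frame = cam.read()               # periodic capture; one frame shown here
if ok:
    db.execute("INSERT INTO mood VALUES (?, ?, ?)",
               (time.time(), active_site(), classify_emotion(frame)))
    db.commit()
cam.release()
```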

Seriously, we could see something like this being very useful for psychological testing, marketing research, or even medical assessments. This adds to [Nick]’s array of AI apps, which range from tracking which surfaces you touch in a room to preventing you from committing a fireable offense on a video conference.

Continue reading “Machine Learning Helps You Track Your Internet Misery Index”

Machine Learning In The Kitchen Makes For Tasty Mashup Desserts

What did you do during lockdown? A whole lot of people turned to baking in between trips to the store to search for toilet paper and hand sanitizer. Many of them baked bread for some reason, but like us, [Sara Robinson] turned to sweeter stuff to get through it.

The first Cakie ever made. Image via Google Cloud

Her pandemic ponderings wandered into existential baking territory: what separates baked goods from each other, categorically speaking? What is the science behind the crunchiness of cookies, the sponginess of cake, and the fluffiness of bread?

As a developer advocate for Google Cloud, [Sara] turned to machine learning to figure out why the cookie crumbles. She collected 33 recipes each of cookies, cake, and bread and built a TensorFlow model to analyze them, which expressed each recipe’s cookie/cake/bread lineage as a set of percentages. Not only was the model able to accurately classify recipes by type, [Sara] was also able to use it to come up with a 50/50 cookie-cake hybrid recipe. The AI delivered a list of ingredients, to which she added vanilla extract and chocolate chips for flavor. From there, she had to wing it and come up with her own baking directions for the Cakie.
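The modeling idea is easy to sketch: ingredient proportions in, a softmax over {cookie, cake, bread} out, and the softmax output itself reads as the percentage lineage. A toy version with invented features and data (the real model and dataset are [Sara]’s):

```python
# Recipe classifier sketch: ingredient amounts -> cookie/cake/bread mix.
# Features and training data here are invented for illustration.
import numpy as np
import tensorflow as tf

# Each row: normalized amounts of flour, sugar, butter, eggs, yeast, milk.
X = np.random.rand(99, 6).astype("float32")   # 33 recipes per class
y = np.repeat([0, 1, 2], 33)                  # 0=cookie, 1=cake, 2=bread

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(6,)),
    tf.keras.layers.Dense(3, activation="softmax"),  # the lineage percentages
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=20, verbose=0)

# A ~50% cookie / 50% cake output is exactly what points toward a "Cakie".
print(model.predict(X[:1]))
```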

Continue reading “Machine Learning In The Kitchen Makes For Tasty Mashup Desserts”

Hide And Seek AI Shows Emergent Tool Use

Machine learning has come a long way in the last decade, as it turns out that throwing huge wads of computing power at piles of linear algebra makes creating artificial intelligence relatively easy. OpenAI has been working in the field for a while now, and recently observed some exciting behaviour in a hide-and-seek game it built.

The game itself is simple; two teams of AI bots play a game of hide-and-seek, with the red bots being rewarded for spotting the blue ones, and the blue ones being rewarded for avoiding their gaze. Initially, nothing of note happens, but as the bots randomly run around, they slowly learn. Over millions of trials, the seekers first learn to find the hiders, while the hiders respond by building barriers to hide behind. The seekers then learn to use ramps to vault over them, while the blue bots learn to bend the game’s physics and throw them out of the playfield. It ends with the seekers learning to skate around on blocks and the hiders building tight little barriers. It’s a continual arms race of techniques between the two sides, organically developed as the bots play against each other over time.
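The reward structure really is that simple; everything else is emergent. In toy form (our reading of the rule as described, not OpenAI’s code, which also has team rewards, a preparation phase, and full physics):

```python
# Per-step hide-and-seek rewards as described: seekers are paid when any
# hider is visible, hiders are paid the mirror image. A toy rendering.
def step_rewards(any_hider_visible: bool, n_seekers: int, n_hiders: int):
    seeker_r = 1.0 if any_hider_visible else -1.0
    hider_r = -seeker_r
    return [seeker_r] * n_seekers + [hider_r] * n_hiders

print(step_rewards(True, 2, 2))   # [1.0, 1.0, -1.0, -1.0]
```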

It’s a great study, and it’s particularly interesting to note how much longer it takes behaviours to develop when the team switches from a basic fixed scenario to a changeable world with more variables. We’ve seen other interesting gaming efforts with machine learning, too – like teaching an AI to play Trackmania. Video after the break.

Continue reading “Hide And Seek AI Shows Emergent Tool Use”

AI Learns To Drive Trackmania

Machine learning has long been a topic of interest for humanity, but only in recent years have we had broad access to enough computing power to enable the average person to dive in. [Yosh] recently decided to put an AI to work learning how to race in Trackmania.

After early experiments with supervised learning, [Yosh] decided to implement a genetic algorithm to produce an AI to drive in the game. The AI takes distances from the track walls as inputs, and produces steering and accelerator values as outputs. Starting with 100 AIs in generation 1, [Yosh] iterated by choosing the AIs that covered the longest distance in 13 seconds. Once the AIs started to get the hang of the first few corners, he changed the training to instead prioritize the lowest time taken to traverse each of the checkpoints along the track.
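In skeleton form, that training loop looks something like the sketch below. Everything game-specific is stubbed out: real fitness comes from actually running Trackmania for 13 seconds (later, from checkpoint times), and the network shape is our guess, not [Yosh]’s:

```python
# Genetic-algorithm skeleton: each "driver" is a weight matrix mapping
# wall-distance sensors to (steering, accelerator). Fitness is a stub.
import numpy as np

POP, N_SENSORS, N_OUTPUTS = 100, 8, 2

def drive(weights, sensors):
    return np.tanh(sensors @ weights)   # (steering, accelerator) in [-1, 1]

def fitness(weights):
    # Stub: in the real setup this is distance covered in 13 s of play,
    # later the time taken between checkpoints.
    steer, accel = drive(weights, np.ones(N_SENSORS))
    return accel - abs(steer)           # dummy: full throttle, wheels straight

pop = [np.random.randn(N_SENSORS, N_OUTPUTS) for _ in range(POP)]
for generation in range(100):
    ranked = sorted(pop, key=fitness, reverse=True)
    parents = ranked[:10]               # keep the best drivers
    pop = [p + 0.1 * np.random.randn(N_SENSORS, N_OUTPUTS)
           for p in parents for _ in range(POP // len(parents))]
```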

The AI improved over time, and over 100 generations, got down to a 23.48s time on the test track, versus 19.63s for [Trabadia], a talented human. We’d love to see how much better the AI could do with more training. [Yosh] is trying more experiments, like providing extra feedback in the AI fitness function to keep it from hitting the walls. It’s not the first time we’ve seen a genetic algorithm used to train a racing AI, either. Video after the break.

Continue reading “AI Learns To Drive Trackmania”