Machine Learning Helps You Track Your Internet Misery Index

We all seem to intuitively know that a lot of what we do online is not great for our mental health. Hang out on enough social media platforms and you can practically feel the changes your mind inflicts on your body as a result of what you see — the racing heart, the tight facial expression, the clenched fists raised in seething rage. Not on Hackaday, of course — nothing but sweetness and light here.

That’s all highly subjective, of course. If you’d like to quantify your online misery more objectively, take a look at the aptly named BrowZen, a machine learning application by [Nick Bild]. Built around an NVIDIA Jetson Xavier NX and a web camera, BrowZen captures images of the user’s face periodically. Each image is run through an expression classification model trained to recognize expressions associated with emotions like anger, surprise, fear, and happiness. The app records your mood alongside the website you’re currently looking at and stores the results in a database. Handy charts let you know which sites are best for your state of mind; it’s not much of a surprise that Twitter induces rage while Hackaday pushes [Nick]’s happiness button. See? Sweetness and light.
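The overall loop is simple enough to sketch out. Below is a rough Python outline of the capture-classify-log cycle, where classify_emotion() and active_url() are hypothetical stand-ins for the expression model and the browser lookup — this is not [Nick]’s actual code, just the shape of the idea.

```python
# Minimal sketch of the BrowZen idea (not the project's actual code): every few
# minutes, grab a webcam frame, classify the expression, note the active
# website, and log the pair to SQLite.
import sqlite3
import time

import cv2


def classify_emotion(frame) -> str:
    # Placeholder: a real version would run a trained expression classifier
    # (anger, surprise, fear, happiness, ...) on the frame.
    return "neutral"


def active_url() -> str:
    # Placeholder: a real version would query the browser for the URL of the
    # currently focused tab.
    return "https://hackaday.com"


db = sqlite3.connect("browzen.db")
db.execute("CREATE TABLE IF NOT EXISTS mood (ts REAL, url TEXT, emotion TEXT)")

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if ok:
            db.execute(
                "INSERT INTO mood VALUES (?, ?, ?)",
                (time.time(), active_url(), classify_emotion(frame)),
            )
            db.commit()
        time.sleep(300)  # sample every five minutes
finally:
    cap.release()
```

From there, the charts are just queries grouping emotions by URL.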

Seriously, we could see something like this being very useful for psychological testing, marketing research, or even medical assessments. This adds to [Nick]’s array of AI apps, which range from tracking which surfaces you touch in a room to preventing you from committing a fireable offense on a video conference.

Continue reading “Machine Learning Helps You Track Your Internet Misery Index”

Machine Learning In The Kitchen Makes For Tasty Mashup Desserts

What did you do during lockdown? A whole lot of people turned to baking in between trips to the store to search for toilet paper and hand sanitizer. Many of them baked bread for some reason, but like us, [Sara Robinson] turned to sweeter stuff to get through it.

The first Cakie ever made. Image via Google Cloud

Her pandemic ponderings wandered into the realm of existential baking questions: what separates baked goods from each other, categorically speaking? What is the science behind the crunchiness of cookies, the sponginess of cake, and the fluffiness of bread?

As a developer advocate for Google Cloud, [Sara] turned to machine learning to figure out why the cookie crumbles. She collected 33 recipes each of cookies, cake, and bread and built a TensorFlow model to analyze them, which outputs a cookie/cake/bread breakdown for each recipe as a set of percentages. Not only was the model able to accurately classify recipes by type, but [Sara] was also able to use it to come up with a 50/50 cookie-cake hybrid recipe. The AI delivered a list of ingredients, to which she added vanilla extract and chocolate chips for flavor. From there, she had to wing it and come up with her own baking directions for the Cakie.
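To get a feel for what such a model is doing, here’s a minimal Keras sketch of that sort of classifier, with made-up ingredient fractions standing in for the real dataset — not [Sara]’s actual code, just the general shape of recipe-in, percentages-out.

```python
# A toy version of the recipe classifier: each recipe is reduced to normalized
# ingredient proportions, and a small network outputs softmax percentages for
# cookie / cake / bread. The ingredient list and data are invented for
# illustration only.
import numpy as np
import tensorflow as tf

INGREDIENTS = ["flour", "sugar", "butter", "egg", "milk", "yeast", "baking_powder"]

# Toy training data: rows are ingredient fractions; labels are 0=cookie, 1=cake, 2=bread.
X = np.array([
    [0.45, 0.25, 0.25, 0.05, 0.00, 0.00, 0.00],  # cookie-ish
    [0.35, 0.25, 0.15, 0.10, 0.12, 0.00, 0.03],  # cake-ish
    [0.70, 0.02, 0.03, 0.00, 0.23, 0.02, 0.00],  # bread-ish
], dtype=np.float32)
y = np.array([0, 1, 2])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(len(INGREDIENTS),)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # % cookie, % cake, % bread
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=200, verbose=0)

# The softmax output doubles as the "lineage": a recipe scoring roughly
# 50% cookie / 50% cake is a candidate Cakie.
print(model.predict(X[:1]))
```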

Continue reading “Machine Learning In The Kitchen Makes For Tasty Mashup Desserts”

Hide And Seek AI Shows Emergent Tool Use

Machine learning has come a long way in the last decade, as it turns out that throwing huge wads of computing power at piles of linear algebra makes creating artificial intelligence relatively easy. OpenAI have been working in the field for a while now, and recently observed some exciting behaviour in a hide-and-seek game they built.

The game itself is simple; two teams of AI bots play a game of hide-and-seek, with the red bots being rewarded for spotting the blue ones, and the blue ones being rewarded for avoiding their gaze. Initially, nothing of note happens, but as the bots randomly run around, they slowly learn. Over millions of trials, the seekers first learn to find the hiders, while the hiders respond by building barriers to hide behind. The seekers then learn to use ramps to vault over the barriers, while the blue bots learn to bend the game’s physics and throw the ramps out of the playfield. It ends with the seekers learning to skate around on blocks and the hiders building tight little barriers. It’s a continual arms race of techniques between the two sides, organically developed as the bots play against each other over time.
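The reward scheme really is that minimal. Here’s a toy sketch of the idea — not OpenAI’s code, and with a placeholder visibility check — just to show how a symmetric, zero-sum reward is enough to set up the arms race.

```python
# Toy illustration of the hide-and-seek reward scheme: at every timestep the
# seeker team is rewarded if any hider is in view, and the hider team gets the
# mirror-image penalty, so improvements by one side automatically raise the
# difficulty for the other.


def line_of_sight(seeker, hider, walls) -> bool:
    # Placeholder visibility test; the real environment does proper occlusion
    # checks against walls, boxes, and ramps.
    return True


def team_rewards(seekers, hiders, walls):
    seen = any(line_of_sight(s, h, walls) for s in seekers for h in hiders)
    seeker_reward = 1.0 if seen else -1.0
    hider_reward = -seeker_reward  # strictly adversarial
    return seeker_reward, hider_reward
```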

It’s a great study, and it’s particularly interesting to note how much longer it takes behaviours to develop when the team switches from a basic fixed scenario to a changeable world with more variables. We’ve seen other interesting gaming efforts with machine learning, too – like teaching an AI to play Trackmania. Video after the break.

Continue reading “Hide And Seek AI Shows Emergent Tool Use”

AI Learns To Drive Trackmania

Machine learning has long been a topic of interest for humanity, but only in recent years have we had broad access to enough computing power for the average person to dive in. [Yosh] recently decided to put an AI to work learning how to race in Trackmania.

After early experiments with supervised learning, [Yosh] decided to implement a genetic algorithm to produce an AI to drive in the game. The AI takes the distances to the track walls as inputs, and outputs steering and accelerator values. Starting with 100 AIs in generation 1, [Yosh] iterated by selecting the AIs that covered the longest distance in 13 seconds. Once the AIs started to get the hang of the first few corners, he changed the training to instead prioritize the lowest time taken to traverse each of the checkpoints along the track.
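For the curious, the training loop boils down to something like the sketch below, where the policy is a single linear layer and run_episode() is a hypothetical hook into the game — a rough illustration of the technique rather than [Yosh]’s actual code.

```python
# A bare-bones genetic algorithm of the kind described above: each "AI" is a
# tiny policy mapping wall distances to steering and throttle, and each
# generation keeps the fittest individuals and refills the population with
# mutated copies of them.
import numpy as np

N_SENSORS = 8      # distances to the track walls
POP_SIZE = 100
KEEP = 10
MUTATION = 0.1


def policy(weights, sensors):
    # One linear layer: outputs (steering in [-1, 1], throttle in [0, 1]).
    steer, throttle = np.tanh(weights @ sensors)
    return steer, (throttle + 1) / 2


def run_episode(weights) -> float:
    # Placeholder fitness: distance covered in 13 seconds of game time.
    # A real version would step the game, calling policy(weights, sensors)
    # each frame, and measure progress along the track.
    return float(np.random.rand())


population = [np.random.randn(2, N_SENSORS) for _ in range(POP_SIZE)]

for generation in range(100):
    ranked = sorted(population, key=run_episode, reverse=True)
    elites = ranked[:KEEP]
    population = [
        e + MUTATION * np.random.randn(*e.shape)   # mutate the best performers
        for e in elites
        for _ in range(POP_SIZE // KEEP)
    ]
```

Switching the fitness function from distance covered to per-checkpoint time, as [Yosh] did, is just a change to what run_episode() returns.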

The AI improved over time, and over 100 generations, got down to a 23.48s time on the test track, versus 19.63s for [Trabadia], a talented human. We’d love to see how much better the AI could do with more training. [Yosh] is trying more experiments, like providing extra feedback in the AI fitness function to keep it from hitting the walls. It’s not the first time we’ve seen a genetic algorithm used to train a racing AI, either. Video after the break.

Continue reading “AI Learns To Drive Trackmania”

OpenCV Spreads Smart Camera Joy To See Ideas Come To Life

Do you have a great application for computer vision, but couldn’t spare the cost of hardware needed to build it? Or perhaps you just need a deadline to pull you away from endless doom scrolling? Either way, the OpenCV team wants you to enter their OpenCV AI Competition 2021 and they’re willing to pitch in hardware to make it happen.

This competition is part of OpenCV’s 20th anniversary celebration, and the field of machine vision has changed a lot in those two decades. OpenCV started within Intel, harnessing the power of their high-end CPUs, but today the excitement is around specialized acceleration hardware for vision processing. Which is why OpenCV lent their name and support to the OpenCV AI Kit (OAK) Kickstarter we covered a few months ago. Since then, the hardware has been produced and is starting to arrive in project backers’ hands. (Barring pandemic-related shipping restrictions…)

This shiny new hardware is the competition’s focus. Phase one solicits team proposals for putting an OAK-D’s power to novel use. University teams may have up to ten members, while general teams are limited to four. Each team’s geographic home will put them in one of six global regions. Proposals must be submitted by January 27th, 2021. By February 11th, judges will select the best twenty-five general and ten university team proposals from each region, and every member of those teams gets an OAK-D unit to turn their idea into reality by the phase two deadline of June 27th. That’s up to 1,200 OAK-D modules available to anyone who can convince the judges they have a great idea and are capable of bringing it to fruition. Is that you? Of course it is!

Teams will also receive additional resources, such as an allotment of cloud compute credits to train their models, and naturally all the tutorials and sample code released as part of the OAK Kickstarter. No explicit resource for project team organization is mentioned, but of course our own Hackaday.io is available to support you. Best of luck to everyone who enters, and we look forward to seeing all the projects this contest will bring to life.

Hands-Free Page Turning

For people who can’t lift a finger to turn the page on their ebooks, a solution is at hand. Seoul-based technology company Visual Camp has adapted their eye tracking algorithms to an ebook reader. (Video, embedded below.) Reportedly this is the first time an ebook reader has been so equipped.

If your eye lingers on the page turn button, it will turn the page. While this particular application seems innocuous, some of the other applications being touted seem a little contrived, if not invasive. For example, by applying gaze analysis while you read a book, they claim to be able to make targeted recommendations for other books.

We’ve discussed eye tracking devices before, but they have relied on dedicated hardware. Visual Camp claims their AI-based technology only requires a color camera and can be integrated into existing camera-equipped devices, such as this ebook reader. They also offer an SDK for developers who want to add eye tracking control to their apps. Eye tracking is hard, though, and the devil is in the details. It’d be neat to see what they’re up to.
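The dwell-to-turn logic itself is easy to picture. Here’s a rough sketch, with gaze_point() and turn_page() as hypothetical stand-ins, since Visual Camp’s actual SDK interface isn’t shown in the write-up.

```python
# Sketch of dwell-based selection: if the gaze point stays inside the
# page-turn button's bounding box long enough, trigger a page turn.
import time

BUTTON = (900, 400, 1000, 600)   # x1, y1, x2, y2 of the page-turn button, in pixels
DWELL_SECONDS = 1.0


def gaze_point():
    # Placeholder: a real version would ask the eye-tracking SDK for the
    # current on-screen gaze coordinates estimated from the front camera.
    return (950, 500)


def turn_page():
    print("page turned")


def inside(point, box):
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2


dwell_start = None
while True:
    if inside(gaze_point(), BUTTON):
        dwell_start = dwell_start or time.monotonic()
        if time.monotonic() - dwell_start >= DWELL_SECONDS:
            turn_page()
            dwell_start = None   # reset so one long stare doesn't flip many pages
    else:
        dwell_start = None
    time.sleep(0.05)             # ~20 Hz polling
```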

Continue reading “Hands-Free Page Turning”

Giving Blind Runners Independence With AI

Being able to see, move, and exercise independently is something most of us take for granted. [Thomas Panek] was an avid runner before losing his sight due to a genetic condition, and had to rely on other humans and guide dogs to run again. After challenging attendees at a Google hackathon, Project Guideline was established to give blind runners (or walkers) independence from a cane, dog, or another human while exercising outdoors. Using a smartphone with line-following AI software and bone conduction headphones, users can be guided along a path with a line painted on it. You need to watch the video below to get a taste of just how incredible it is for the users.

Getting a wheeled robot to follow a line is relatively simple, but a running human is by no means a stable sensor platform. At the previously mentioned hackathon, developers put together a rough proof of concept with a smartphone, using its camera to recognize a painted line on the ground and provide left/right audio cues. As the project developed, the smartphone was attached to a waist belt and bone conduction headphones were used, which don’t affect audio situational awareness as much as normal headphones.
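As a back-of-the-napkin illustration of that guidance loop (Google’s version uses a learned line-detection model, not the simple color threshold used here), this sketch shows how a camera frame can be turned into a left/right cue, with play_cue() as a hypothetical audio hook.

```python
# Simplified stand-in for the guidance loop: find the painted line in the lower
# part of the frame, measure how far its center sits from the image center, and
# turn that offset into a steering cue.
import cv2


def play_cue(direction: str, strength: float):
    # Placeholder: a real version would pan a tone in the bone-conduction
    # headphones, growing stronger the further the runner drifts off the line.
    print(direction, round(strength, 2))


cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[int(h * 0.6):, :]                           # look near the runner's feet
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # yellow-ish painted line
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx = m["m10"] / m["m00"]                            # line center, in pixels
        offset = (cx - w / 2) / (w / 2)                     # -1 (far left) .. +1 (far right)
        # Cue the runner back toward the line.
        play_cue("steer right" if offset > 0 else "steer left", abs(offset))
cap.release()
```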

The shaking and side-to-side movement of running, along with varying light conditions and visual obstructions outdoors, made the problem more difficult to solve, but within a year the developers had completed successful running tests with [Thomas] on a well-lit indoor track and an outdoor pedestrian path with a temporary line. For the first time in 25 years, [Thomas] was able to run independently.

While guide dogs have proven effective for both daily life and running, they cost approximately $60,000 over an average working life of 8 years, putting them out of reach of many sight-impaired people around the world. Project Guideline is still in the early stages, and real-world problems like obstacles and traffic still need to be addressed, but there is massive potential.

Continue reading “Giving Blind Runners Independence With AI”