Twitch And Blink Your Way Through Typing With This Facial Keyboard

For those that haven’t experienced it, the early days of parenthood are challenging, to say the least. Trying to get anything accomplished with a raging case of sleep deprivation is hard enough, but the little bundle of joy who always seems to need to be in physical contact with you makes doing things with your hands nigh impossible. What’s the new parent to do when it comes time to be gainfully employed?

Finding himself in such a boat, [Fletcher]’s solution was to build a face-activated keyboard to work around his offspring’s needs. Before you ask: no, voice recognition software wouldn’t work, at least according to the sleepy little boss who protests noisy awakenings. The solution instead was to first try OpenCV and the dlib facial recognition library to watch [Fletcher] blinking out Morse code. While that sorta-kinda worked, one’s blinkers can’t long endure such a workout, so he moved on to an easier set of gestures. Mouthing Morse code covers most of the keyboard, while a combination of eye, eyebrow, and other facial twitches and tics covers the rest, with MediaPipe’s Face Mesh doing the heavy lifting in terms of landmark detection.
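We haven’t seen the CheekyKeys internals, but the blink-to-Morse idea is easy to sketch. Fair warning: everything below is our own illustration. The six-point eye layout is the classic eye-aspect-ratio (EAR) trick from the dlib landmark world rather than [Fletcher]’s actual MediaPipe pipeline, and the 0.3-second dash threshold is a number we made up.

```python
import math

# Illustrative six-point eye layout: p1/p4 are the corners, p2/p3 the top
# lid, p5/p6 the bottom lid. MediaPipe Face Mesh gives far denser
# landmarks, but the idea is the same.
def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye. Drops toward zero on a blink."""
    p1, p2, p3, p4, p5, p6 = eye
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_gestures(letters, dash_threshold=0.3):
    """letters: one list of closed-eye (or mouthed) durations per letter,
    in seconds. Anything held past dash_threshold counts as a dash."""
    return "".join(
        MORSE.get("".join("-" if t >= dash_threshold else "." for t in word), "?")
        for word in letters
    )
```

In a real loop you’d compute the EAR on every frame, timestamp the transitions below a closed-eye threshold, and feed the resulting durations to the decoder: `decode_gestures([[0.1, 0.1, 0.1]])` yields `"S"`.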

The resulting facial keyboard, aptly dubbed “CheekyKeys,” performed well enough for [Fletcher] to use for a skills test during an interview with a Big Tech Company. Imagining the interviewer on the other end watching him convulse his way through the interview was worth the price of admission, and we don’t even care if it was a put-on. Video after the break.

CheekyKeys is pretty cool, doing something with a webcam and Python that we thought would have needed a dedicated AI depth camera to accomplish. But perhaps the real hack here was how [Fletcher] taught himself Morse in fifteen minutes.



Hackaday Links: March 13, 2022

As Russia’s war on Ukraine drags on, its knock-on effects are being felt far beyond the eastern European theater. And perhaps nowhere is this more acutely felt than in the space launch industry, seeing that at least until recently, Russia was pretty much everyone’s go-to ride to orbit. All that has changed now, at least temporarily, and has expanded to include halting sales of rocket engines used in other nations’ launch vehicles. Specifically, Roscosmos has put an end to exports of the RD-180 engine used in the US Atlas V launch vehicle, along with the RD-181 thrusters found in the Antares rocket. The loss of these engines may be more symbolic than practical, at least for the RD-180 — United Launch Alliance stopped selling Atlas V launches last year, and by that April had already secured the engines needed for the 29 flights it still has booked. Still, there’s some irony that the Atlas V, which started life as an ICBM aimed at the USSR in the 1950s, has lost its Russian-made engines.

Bad news for Jan Mrázek’s popular open-source parametric search utility which made JLCPCB’s component library easier to use. We wrote about it back in 2020, and things seemed to be going fine up until this week, when Jan got a take-down request for his service. When we first heard about this, we checked the application’s web page, which bore a big red banner that included what were apparently unpleasant accusations Jan had received, including the words “reptile” and “parasitic.” The banner is still there, but the text has changed to a more hopeful tone, noting that LCSC, the component supplier for JLC’s assembly service, objected to the way Jan was pulling component data, and that they are now working together on something that everyone can be happy with. Here’s hoping that the service is back in action again soon.

Good news, everyone: Epson is getting into the 3D printer business. Eager to add a dimension to the planar printing world they’ve mostly worked in, they’ve announced that they’ll be launching a direct-extrusion printer sometime soon. Aimed at the industrial market, the printer will use a “flat screw extruder,” which is supposed to be similar to what the company uses on its injection molding machines. We sure didn’t know Epson was in the injection molding market, so it’ll be interesting to see if expertise there results in innovation in 3D printing, especially if it trickles down to the consumer printing market. Just as long as they don’t try to DRM the pellets, of course.

You can’t judge a book by its cover, but it turns out that there’s a lot you can tell about a person’s genetics just by looking at their face. At least that’s according to an AI startup called FDNA, which makes an app called “Face2Gene” that the company claims can identify 300 genetic disorders by analyzing photos of someone’s face. Some genetic disorders, like Down Syndrome, leave easily recognizable facial features, but some changes are far more subtle and hard to recognize. We had heard of cases where photos of toddlers posted on social media were used to diagnose retinoblastoma, a rare cancer of the retina. But this is on another level entirely.

And finally, working in an Amazon warehouse has got to be a tough gig, and if some of the stories are to be believed, it borders on being a horror show. But one Amazonian recently shared a video that showed what it’s like to get trapped by his robotic coworkers. The warehouse employee somehow managed to get stuck in a maze created by Amazon’s pods, which are stacks of shelves that hold merchandise and are moved around the warehouse floor by what amounts to robotic pallet jacks. Apparently, the robots know enough to not collide with their meat-based colleagues, but not enough to not box them in. To be fair, the human eventually found a way out, but it was a long search and it seems like another pod could have moved into position to block the exit at any time. You could see it as a scary example of human-robot interaction gone awry, but we prefer to look at it as the robots giving their friend a little unscheduled break away from the prying eyes of his supervisor.

Facial Recognition For Covid-19 Tracking In Seoul

The city of Bucheon, population 830,000, is a satellite city southwest of Seoul, part of the greater metropolitan area, and the site of a pilot program applying AI facial recognition and tracking technologies to aid Covid-19 epidemiological investigators. South Korea has been generally praised for its rapid response to coronavirus patient tracking since the beginning of the outbreak. People entering public facilities enter their information on a roster or scan a QR code. Epidemiologists tracking outbreaks use a variety of data available to them, including these logs, electronic transaction data, mobile phone location logs, CCTV footage, and interviews. But the workload can be overwhelming, and only a limited number of workers with the required training are available, despite efforts to hire more.

As contact tracing has been done to date, it takes one investigator up to an hour to trace the movements of one patient. When the system goes online in January, it should be able to trace one patient in less than a minute, handling up to ten traces simultaneously. Project officials say there is no plan for this system to expand to the rest of Seoul, nor nationwide. But with the growing virus caseloads and continued difficulties hiring and training investigators, it’s not unexpected that officials will be turning to these technologies more and more to keep up with the increasing workload.

Like the controversy surrounding the recent facial recognition project at Incheon International Airport, people are becoming concerned about the privacy implications and the specter of a Big Brother government that tracks each and every move of its citizens — a valid fear, given the state of technology today. The project planners note that the data is being legally collected and its usage subject to strict rules. Korean privacy law requires consent for the collecting and storage of biometric data. But there are exceptions for situations such as disease control and prevention.

Even if all the privacy concerns are solved, we wonder just how effective these AI systems will be at tracking people wearing masks. This is not an issue unique to South Korea or even Asia. Many countries around the world are turning to such technologies (see this article from the Columbia School of Law) and are having similar struggles striking the balance between privacy and public health requirements.

[Banner image: “facial-recognition-1” by Electronic_Frontier_Foundation. Thanks for all you do!]

Korean Facial Recognition Project Faces Opposition

It was discovered last month that a South Korean government project has been providing millions of facial images taken at Incheon International Airport to private industry without the consent of those photographed. Several civic groups called this a “shocking human rights disaster” in a 9 Nov press conference, and formally requested that the project be cancelled. In response, the government has only promised that “the project would be conducted at a minimum level to ensure personal information is not abused”. These groups are now planning a lawsuit to challenge the project.

Facial information and other biometric data aren’t easily altered and are unique to the individuals concerned. If this data were to be leaked, it would constitute a devastating infringement upon their privacy. It’s unheard of for state organizations — whose duty it is to manage and control facial recognition technology — to hand over biometric information collected for public purposes to a private-sector company for the development of technology.

The program itself wasn’t secret, and had been publicly announced back in 2019. But the project’s scope and implementation weren’t made clear until a lawmaker recently requested documents on the project from the responsible government agencies. The system, called the Artificial Intelligence and Tracking System Construction Project, was a pilot program set to run until 2022. Its goals were to simplify the security and immigration screening of passengers, improve airport security, and to promote the local AI industry in South Korea. If the project proves successful, the plan is to expand it to other airports and ports in the country.

Current systems at the airport do one-to-one facial recognition. For example, they try to determine whether the face of the person presenting a passport matches the photo in the passport. The goal of this new project was to develop one-to-many matching algorithms, which can match one face against the plethora of faces in an airport, track the movement of a face within the airport, and flag “suspicious” activities which could be a security concern.
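The one-to-one versus one-to-many distinction is easy to sketch with face embeddings. To be clear, the helpers and the 0.8 threshold below are our illustrative stand-ins, not anything from the airport’s actual system:

```python
import math

def cosine(a, b):
    """Similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify(probe, reference, threshold=0.8):
    """One-to-one: does this face match this one passport photo?"""
    return cosine(probe, reference) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many: scan the whole gallery of known faces and return the
    best match, or None if nobody clears the threshold."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

The operational difference falls out of the code: `verify` makes one comparison against a photo the traveler handed over, while `identify` quietly compares every camera frame against everyone in the database, which is exactly why the second mode raises the privacy stakes.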

The groups protesting the project note that the collection and sharing of these images without the travelers’ consent is prohibited by the Personal Information Protection Act, the South Korean law which governs such things. Under this act, a project like this would ordinarily require consent of the participants. But the government’s interpretation relies on an exception in the act, specifically, Article 15 Section 3, which states:

A personal information controller may use personal information without the consent of a data subject within the scope reasonably related to the initial purpose of the collection

Basically they are saying that since the images were collected at the security and immigration checkpoints, and the project will use them to improve those same checkpoints, no consent is required. According to the documents obtained, the data set includes:

  • Foreigners: 120 million individuals, face image, nationality, gender, age
  • Korean citizens: 57.6 million individuals, face image, nationality, gender, age
  • Other: unknown number of individuals, images and videos of atypical behavior and travelers in motion

The breakdown of the numbers above reveals that 57 million Korean citizens are in the data set, a bit surprising to many since the collection of biometric data on Korean citizens at immigration is prohibited by law. The project circumvented this by only collecting data from citizens who participate in the automated Smart Entry service, a voluntary program which uses fingerprints and facial recognition. It’s interesting to note that the number of passengers using Incheon airport since May 2019 (the program was announced 30 Apr 2019) is only 62 million, so the average passenger appears approximately three times in the data set.
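That back-of-the-envelope figure is easy to check:

```python
foreign_records = 120_000_000   # foreigners in the data set
citizen_records = 57_600_000    # Korean citizens in the data set
passengers = 62_000_000         # Incheon passengers since May 2019

total = foreign_records + citizen_records
print(round(total / passengers, 1))  # -> 2.9
```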

Are there any similar programs in your region? How do they handle the issue of consent, if at all? Let us know in the comments below.

[Banner image: “Customer uses facial recognition as identification at TSA security checkpoint” by DeltaNewsHub, CC BY 2.0  — Yes, it’s from another country with similar problems, but much less public outcry. Discuss in the comments!]

Adversarial Makeup: Your Contouring Skills Could Defeat Facial Recognition

Facial recognition is everywhere these days. Cloud servers churn through every picture uploaded to social media, phone cameras help put faces to names, and CCTV systems are being used to trace citizens in their day-to-day lives. You might want to dodge this without arousing suspicion, just for a little privacy now and then. As it turns out, common makeup techniques can help you do just that.

In research from a group at the Ben-Gurion University of the Negev, the team trialled whether careful makeup contouring techniques could fool a facial recognition system. There are no wild stripes or dazzle patterns here; these techniques are about natural looks and are used by makeup artists every day.

The trick is to use a surrogate facial recognition system and a picture of the person who intends to evade. Digital techniques are used to alter the person’s appearance until it fools the facial recognition system. This is then used as a guide for a makeup artist to recreate using typical contouring techniques.
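The researchers use gradient-based optimization against a real surrogate network; as a toy stand-in, here is a random hill-climb that nudges a face’s feature vector away from its own reference while clamping every change to a "natural makeup" budget. All function names and numbers here are ours, not the paper’s.

```python
import math
import random

def similarity(a, b):
    """Cosine similarity between two feature vectors (our surrogate's score)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def design_makeup(face, surrogate_ref, budget=0.15, steps=2000, seed=1):
    """Randomly tweak one feature at a time, keeping a tweak only if it
    lowers the surrogate's match score, and clamping the cumulative
    per-feature change to `budget` so the result stays plausibly natural."""
    rng = random.Random(seed)
    adv = list(face)
    best = similarity(adv, surrogate_ref)
    for _ in range(steps):
        i = rng.randrange(len(adv))
        old = adv[i]
        tweak = old + rng.uniform(-0.05, 0.05)
        adv[i] = max(face[i] - budget, min(face[i] + budget, tweak))
        score = similarity(adv, surrogate_ref)
        if score < best:
            best = score      # keep the tweak
        else:
            adv[i] = old      # revert it
    return adv, best
```

The budget clamp is the interesting part: it plays the role of the makeup artist’s constraint that the result still has to look like an ordinary contoured face, not a dazzle pattern.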

The theory was tested with a two-camera system in a corridor. Wearing no makeup, the individual was correctly identified in 47.57% of frames in which a face was detected. With random makeup, this dropped to 33.73%; with the team’s intentionally-designed makeup scheme applied, however, the attacker was identified in just 1.22% of frames. (PDF)

The attack relies on having a good surrogate of the facial recognition system one wishes to fool; otherwise, it’s difficult to design natural-look makeup that reliably defeats the target. Still, it goes to show the power of contouring to completely change one’s look, both to humans and to machines!

Facial recognition remains a controversial issue, but nothing is stopping its rollout across the world. Indeed, your facial profile may already be out there.

OPARP Telepresence Robot

[Erik Knutsson] is stuck inside with a bunch of robot parts, and we know what lies down that path. His Open Personal Assistant Robotic Platform aims to help out around the house with things like filling pet food bowls, but for now, he is taking one step at a time and working out the bugs before adding new features. Wise.

The build started with a narrow base, an underpowered RasPi, and a quiet speaker, but those were upgraded in turn. Right now, it is a personal assistant on wheels. Alexa was the first contender, but Mycroft is in the spotlight because it has more versatility. At first, mobility control was a humble web server with a D-pad, but now it leverages a distance sensor and vision, and the robot can even follow you on voice command.

The screen up top gives it a personable look, but it is slated to become a display for everything you’d want to see on your robot assistant, like weather, recipes, or a video chat that can walk around with you. [Erik] would like to make something that assists the elderly who might need help with chores and help connect people who are stuck inside like him.

Expressive robots have long since captured our attention and we’re nuts for privacy-centric personal assistants.


Quadcopter With Stereo Vision

Flying a quadcopter or other drone can be pretty exciting, especially when using the video signal to do the flying. It’s almost like a video game or flight simulator, except the aircraft is physically real. To bring this experience even closer to the reality of flying, [Kevin] implemented stereo vision on his quadcopter, which also adds an impressive amount of functionality to his drone.

While he doesn’t use this particular setup for drone racing or virtual reality, there are some other interesting things that [Kevin] is able to do with it. The cameras, both ESP32 camera modules, can make use of their combined stereo vision capability to determine distances to objects. By leveraging cloud computing services from Amazon to offload some of the processing demands, the quadcopter is able to recognize faces and keep the drone flying at a fixed distance from that face without needing power-hungry computing onboard.
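Stereo ranging itself boils down to the pinhole-camera relation Z = fB/d. Here is a minimal sketch; the 600-pixel focal length and 10 cm baseline are made-up numbers for illustration, not [Kevin]’s actual calibration:

```python
def depth_from_disparity(focal_px, baseline_m, x_left_px, x_right_px):
    """Pinhole stereo: Z = f * B / d, where d is the horizontal pixel shift
    of the same feature between the left and right images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("need positive disparity (feature shifts left between views)")
    return focal_px * baseline_m / disparity

# With a ~600 px focal length and the two ESP32 cameras 10 cm apart, a face
# found at x=320 in the left frame and x=300 in the right is 3 m away.
print(depth_from_disparity(600, 0.10, 320, 300))  # -> 3.0
```

Holding the drone at a fixed distance from a face then becomes a simple control loop: measure the disparity of the detected face each frame and pitch forward or back until the computed depth matches the setpoint.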

There are a lot of other abilities that this drone unlocks by offloading its resource-hungry tasks to the cloud. It can be flown by using a smartphone or tablet, and has its own web client where its user can observe the facial recognition being performed. Presumably it wouldn’t be too difficult to use this drone for other tasks where having stereoscopic vision is a requirement.

Thanks to [Ilya Mikhelson], a professor at Northwestern University, for this tip about a student’s project.