OpenCV is an open source library of computer vision algorithms whose power and flexibility have made many machine vision projects possible. But even with code highly optimized for maximum performance, we always wish for more, which is why our ears perk up whenever we hear about a hardware accelerated vision module. The latest buzz is coming out of the OpenCV AI Kit (OAK) Kickstarter campaign.
Two vision modules launched with this campaign: the OAK-1, with a single color camera for two-dimensional vision applications, and the OAK-D, which adds stereo cameras for that third dimension. The onboard brain is a Movidius Myriad X processor which, according to team members who have dug through its datasheet, has been massively underutilized in other products. They believe OAK modules will help the chip fulfill its potential for vision applications, delivering high performance at low power in a small form factor. Reading over the spec sheet, we think it’s fair to call these “Ultimate Myriad X Dev Boards”, but we must concede “OpenCV AI Kit” sounds better. They do not provide hardware acceleration for the entire OpenCV library (likely an impossible task), but they do cover the highly demanding subset suited to Myriad X acceleration.
Since the campaign launched a few weeks ago, some additional information has been released to help assure backers that this project has real substance. It turns out OAK is an evolution of a project we covered almost exactly one year ago, which became a real product called DepthAI, so at least this is not their first rodeo. It is also encouraging that their invitation to the open hardware community has already borne fruit. Check out this thread discussing OAK for robot vision, where a question was met with an honest “we don’t have expertise there” from the OAK team, but ArduCam then pitched in with their camera module experience to help.
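Since OAK grew directly out of DepthAI, the existing DepthAI Python library gives a feel for what working with these modules looks like. Here’s a minimal sketch for pulling color preview frames from a module, assuming the depthai package and a connected device; the exact API varies between library versions, so treat it as illustrative rather than gospel:

```python
# Minimal sketch: grab color preview frames from an OAK/DepthAI module.
# Assumes the depthai Python package and a connected device; API details
# are illustrative and may differ between library versions.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# Color camera node with a small preview suitable for display
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stream the preview frames back to the host over XLink
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()  # numpy BGR image
        cv2.imshow("OAK preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```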
We wish them success for their planned December 2020 delivery. They have already far surpassed their funding goals, they’ve shipped hardware before, and we see a good start to a development community. We look forward to the OAK-1 and OAK-D joining the ranks of other hacking friendly vision modules like OpenMV, JeVois, StereoPi, and AIY Vision.
A group of researchers has built an algorithm for finding hidden connections in artwork.
The team, made up of computer scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Microsoft, used paintings from the Metropolitan Museum of Art and Amsterdam’s Rijksmuseum to demonstrate these hidden connections, which link artworks that share similar styles, such as Francisco de Zurbarán’s The Martyrdom of Saint Serapion (above left) and Jan Asselijn’s The Threatened Swan (above right). They were initially inspired by the “Rembrandt and Velazquez” exhibition at the Rijksmuseum, which demonstrated similarities between the two artists’ work despite the former hailing from the Protestant Netherlands and the latter from Catholic Spain.
The algorithm, dubbed “MosAIc”, differs from probabilistic generative adversarial network (GAN)-based projects that generate artwork, since it focuses on image retrieval instead. Rather than considering only obvious factors such as color and style, the algorithm also tries to uncover meaning and theme. It does this by constructing a data structure called a conditional k-nearest neighbor (KNN) tree, in which branches off a central image lead to images similar to it. To answer a query, the algorithm follows these branches until it finds the closest match to the query image in the dataset, and over successive queries it prunes unpromising branches to speed up retrieval.
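The tree construction itself isn’t spelled out here, but the core “conditional” retrieval idea — nearest-neighbor search restricted to items satisfying some condition, such as coming from a different culture — is easy to sketch. A rough illustration, assuming precomputed image feature vectors from a pretrained network; all names are ours rather than MosAIc’s, and the tree-building and pruning that accelerate repeated queries are omitted:

```python
# Rough sketch of conditional KNN retrieval: find the images most similar
# to a query, restricted to items that satisfy a condition (e.g. a
# different culture or medium). Feature vectors are assumed to come from
# a pretrained network; every name here is illustrative, not MosAIc's code.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def conditional_knn(query_vec, features, metadata, condition, k=5):
    """Return global indices of the k nearest items whose metadata passes condition."""
    # Restrict the search set to items matching the condition first,
    # then do an ordinary KNN query within that subset.
    mask = np.array([condition(m) for m in metadata])
    subset = np.flatnonzero(mask)
    knn = NearestNeighbors(n_neighbors=min(k, len(subset)))
    knn.fit(features[subset])
    _, idx = knn.kneighbors(query_vec.reshape(1, -1))
    return subset[idx[0]]  # map subset positions back to global indices

# Example: 1,000 fake 512-dim embeddings tagged with a culture label
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 512))
metadata = [{"culture": rng.choice(["Dutch", "Chinese", "Spanish"])} for _ in range(1000)]

# Nearest matches to item 0 drawn only from *other* cultures
query = features[0]
matches = conditional_knn(query, features, metadata,
                          condition=lambda m: m["culture"] != metadata[0]["culture"])
print(matches)
```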
Running the algorithm against the museum collections surfaced results like the similarity between the Dutch Double Face Banyan and a Chinese ceramic figurine, which the team traced to the flow of porcelain and iconography from China to the Netherlands between the 16th and 20th centuries.
A surprising result of this study was discovering that the approach could also be applied to finding problems with the deep neural networks used for creating deepfakes. While GANs often have blind spots in their models, struggling to recreate certain classes of photos, MosAIc was able to overcome these shortcomings and accurately reproduce realistic images.
While the team admits that their implementation isn’t the most optimized version of KNN, their main objective was to present a broad conditioning scheme that is simple but effective in practice. Their hope is to inspire other researchers to consider multi-disciplinary applications for their algorithms.
[Monica] wanted to try a bit of facial detection with her Raspberry Pi, and she found some pretty handy packages in MATLAB to help her do just that. The packages are based on the Viola-Jones algorithm, the first object detection framework to offer real-time facial detection.
She had to download MATLAB’s Raspbian image to allow the Pi to interpret MATLAB commands over a custom server. The setup is mostly painless, and she does a good job walking you through it on her project page.
With that done, she can control the Pi from MATLAB: configure the camera, toggle GPIO, and so on. The real fun comes with the facial detection program. In addition to opening up a live video feed of the Pi camera, the program outputs pixel data. [Monica] was mostly just testing the stock capabilities, but wants to try detecting other objects next. We’ll see what cool modifications she’s able to come up with.
If MATLAB doesn’t quite fit your taste, we have a slew of facial detection projects on Hackaday.
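For instance, the same Viola-Jones approach ships with OpenCV as Haar cascade classifiers, so a Python take on the experiment fits in a few lines. A minimal sketch, assuming OpenCV is installed and the camera shows up as an ordinary video device:

```python
# Minimal Viola-Jones face detection with OpenCV's bundled Haar cascade.
# Assumes the camera is exposed as video device 0 (e.g. a Pi camera via V4L2).
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # pixel coordinates of each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == ord("q"):
        break
cap.release()
```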
[8BitsAndAByte] are back, and this time they’re taking on the comments section with art. They wondered whether they could take something as dubious as the comments section and redeem it into something more appealing, like art.
They started by using remo.tv, a tool they’ve used in other projects, to read comments from their video live feeds and extract random phrases. The phrases are then run through text-to-speech and handed to a publicly available artificial intelligence algorithm that generates an image from a text description. They can specify art styles like modern, abstract, cubism, etc., to give each image a unique appeal. They then send the image back to the original commenter, crediting them for their comment and ensuring some level of transparency.
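The text-to-image model isn’t named, so purely as an illustration of the “phrase in, styled image out” step, here’s what that stage can look like with the Hugging Face diffusers library standing in for the project’s actual model (our substitution, assuming a machine with a CUDA GPU):

```python
# Illustrative only: the project uses an unnamed public text-to-image
# model; the diffusers library is substituted here to show the shape
# of the "phrase in, styled image out" step.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

phrase = "dog with a funny hat"   # a phrase pulled from the comments
style = "cubism"                  # the kind of style tag the project lets you pick
image = pipe(f"{phrase}, in the style of {style}").images[0]
image.save("comment_art.png")
```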
We were a bit surprised that the phrase “dog with a funny hat” generated an image of a cat, so we think it’s fair to say that their AI engine could use a bit of work. But really, we could probably say that about AI as a whole.
If you live outside the UK you may not be familiar with Argos, but it’s basically what Americans would have if Sears hadn’t become a complete disaster after the Internet became popular. While they operate many brick-and-mortar stores and are a formidable online retailer, they still have a large physical catalog that is surprisingly popular. It’s so large, in fact, that interesting (and creepy) things can be done with it using machine learning.
This project from [Chris Johnson] is called the Book of Horrors and was made by feeding all 16,000 pages of the Argos catalog into a machine learning algorithm. The computer takes all of the pages and generates a model that ties them together into a series of animations, blending the whole thing into one flowing, ever-changing catalog. It borders on creepy, both in the visuals themselves and in the fact that we can’t know exactly what computers are “thinking” when they generate these kinds of images.
The more steps the model was trained on, the creepier the images became, too. To see more of the project you can follow it on Twitter, where new images are released from time to time. It also reminds us a little of some other machine learning projects that have been used recently to create short films with equally mesmerizing imagery.
A new study from the West Virginia University (WVU) Rockefeller Neuroscience Institute (RNI) uses a wearable device and artificial intelligence (AI) to predict COVID-19 up to three days before symptoms occur. The study has been an impressive undertaking, involving over 1,000 healthcare and frontline workers in hospitals across New York, Philadelphia, Nashville, and other critical COVID-19 hotspots.
The digital health platform pairs a custom smartphone application with an Ōura smart ring that monitors biometric signals such as respiration and temperature. The platform also assesses psychological, cognitive, and behavioral data through surveys administered in the same app.
We know that wearables tend to suffer from a lack of accuracy, particularly during activity. However, the Ōura ring takes its measurements while the user is very still, especially during sleep, and the accuracy of wearable devices improves greatly when the user isn’t moving. RNI noted that the Ōura ring has been the most accurate device they have tested.
Given that some of the early warning signs of COVID-19 are fever and respiratory distress, it makes sense that a device able to measure respiration and temperature could serve as an early detector. In fact, we’ve seen a few wearable device companies attempt much of what RNI is doing, along with a few DIY efforts. RNI’s study is probably the most thorough work released so far, but we’re sure many more are coming.
The initial phase of the study was deployed among healthcare and frontline workers but is now open to the general public. Meanwhile, the National Basketball Association (NBA) is coordinating its re-opening efforts using Ōura’s technology.
We hope to see more results emerge from RNI’s very important work. Until then, stay safe, Hackaday.
What do you do when you have to disinfect an entire warehouse? You could send a group of people through the place with UV-C lamps, but that would take a long time, as said humans cannot be in the same area as the UV-C radiation, however much they might like the smell of BBQ chicken. Constantly repositioning the lamps or installing countless fixed lamps would get in the way of normal operation. The answer, according to MIT’s CSAIL, is to strap UV-C lights to a robot and have it ride around the space.
As can be seen in the video (also embedded after the break), a CSAIL group has been working with telepresence robotics company Ava Robotics and the Greater Boston Food Bank (GBFB). Their goal was to create a robotic system that could autonomously disinfect a GBFB warehouse with UV-C without exposing any humans to the harmful radiation. While the robot can be controlled remotely, it can also map the space and navigate between waypoints on its own.
While testing the system, the team used a UV-C dosimeter to confirm the effectiveness of the setup. With the robot driving along at a leisurely 0.22 miles per hour (~0.35 kilometers per hour), it was able to cover approximately 4,000 square feet (~372 square meters) in about half an hour. They estimated that about 90% of viruses like SARS-CoV-2 could be neutralized this way.
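Those figures pass a quick back-of-the-envelope check (our arithmetic, not CSAIL’s): at that speed the robot drives roughly 177 meters in half an hour, which implies the UV-C fixture effectively disinfects a strip a couple of meters wide as it goes.

```python
# Back-of-the-envelope check on the coverage numbers (our arithmetic,
# not CSAIL's): how wide a strip must the UV-C fixture disinfect for
# the reported speed, area, and runtime to line up?
MPH_TO_KMH = 1.60934
SQFT_TO_SQM = 0.092903

speed_kmh = 0.22 * MPH_TO_KMH          # ~0.35 km/h
area_sqm = 4000 * SQFT_TO_SQM          # ~372 square meters
runtime_h = 0.5                        # "about half an hour"

distance_m = speed_kmh * 1000 * runtime_h   # ~177 m driven
swath_m = area_sqm / distance_m             # implied effective width

print(f"distance driven: {distance_m:.0f} m")        # -> ~177 m
print(f"implied UV-C swath width: {swath_m:.1f} m")  # -> ~2.1 m
```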
During trial runs, they discovered the need to have the robot adapt to the constantly changing layout of the warehouse, including figuring out how much UV-C each aisle needs depending on how full it is. Having multiple robots in the same space coordinate with each other would also be a useful addition.