Litter-windrow detections in the Mediterranean Sea. (Credit: ESA)

Mapping Litter In The Oceans From Space With Existing Satellites

Aerial drone image of a litter windrow in the Bay of Biscay, Spain. Windrow width: 1-2 meters. (Credit: ESA)

Recently, ESA published a proof-of-concept study on monitoring marine litter using existing satellites, with promising results for the Mediterranean study area. The study used six years of historical data from Sentinel-2's multispectral imaging cameras, comprising 300,000 images at a resolution of 10 meters. The focus was on litter windrows: accumulations of floating litter such as plastic, wood, and other marine debris that form clearly visible lines on the surface, which can be meters wide and many times as long.

These images were processed as explained in the open-access paper in Nature Communications by [Andrés Cózar] and colleagues. Since marine litter (ML) tends to be overwhelmingly composed of plastic, detection is simplified: any ML visible from space can generally be assumed to be primarily plastic. This assumption was combined with the spectral profile of common plastics, so that other types of floating material (algae, driftwood, sea foam, etc.) could be filtered out, leaving just the litter.
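The paper describes its own detection pipeline, but the general idea behind this kind of spectral filtering can be illustrated with the Floating Debris Index (FDI) from related Sentinel-2 work by [Biermann] and colleagues: floating material shows up as a positive near-infrared anomaly against a water baseline interpolated between the red-edge and shortwave-infrared bands. Below is a minimal Python sketch of that index; the band data, the threshold, and the use of FDI itself are illustrative assumptions, not the exact method of this study.

```python
import numpy as np

# Approximate Sentinel-2 MSI central wavelengths (nm) for the bands involved.
LAMBDA_RED, LAMBDA_NIR, LAMBDA_SWIR1 = 665.0, 833.0, 1610.0

def floating_debris_index(re2, nir, swir1):
    """Per-pixel Floating Debris Index (after Biermann et al., 2020).

    Inputs are reflectance arrays for Sentinel-2 bands B6 (red edge 2),
    B8 (NIR), and B11 (SWIR1). Floating material appears as a positive
    NIR anomaly above the interpolated water baseline.
    """
    nir_baseline = re2 + (swir1 - re2) * 10.0 * (
        (LAMBDA_NIR - LAMBDA_RED) / (LAMBDA_SWIR1 - LAMBDA_RED))
    return nir - nir_baseline

# Hypothetical reflectance tiles; real code would read these from a
# Sentinel-2 product (e.g. with rasterio) and co-register the bands.
rng = np.random.default_rng(42)
b6, b8, b11 = (rng.uniform(0.0, 0.1, (1024, 1024)) for _ in range(3))

fdi = floating_debris_index(b6, b8, b11)
candidates = fdi > 0.01  # threshold is illustrative only
print(f"{candidates.sum()} candidate debris pixels")
```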

This processing revealed many of these short-lived litter windrows, with spot confirmation from ships in the area. Some of the windrows were many kilometers in length, with the average around one kilometer.

Although just a proof of concept, it nevertheless shows that monitoring such plastic debris from space is quite doable, even without dedicated satellites. With tons more plastic making its way into the oceans every day, this at least provides us with the means to keep track of the scope of the problem, even if resolving it, along with the associated microplastics problem, remains a far-off dream.

Human Brains Can Tell Deepfake Voices From Real Ones

Although it’s generally accepted that synthesized voices which mimic real people’s voices (so-called ‘deepfakes’) can be pretty convincing, what does our brain really make of these mimicry attempts? To answer this question, researchers at the University of Zurich put a number of volunteers into fMRI scanners, allowing them to observe how their brains would react to real and synthesized voices. The somewhat surprising finding is that the brain shows differences in two regions depending on whether it’s hearing a real or a fake voice, suggesting that on some level we are aware that we are listening to a deepfake.

The detailed findings by [Claudia Roswandowitz] and colleagues are published in Communications Biology. For the study, 25 volunteers were asked to judge the voice samples they heard as natural or synthesized, as well as to match them to the identity of the supposed speaker. The natural voices came from four male (German) speakers, whose voices were also used to train the synthesis model. Not only did identity-matching performance crater with the synthesized voices, but the fMRI scans also showed very different brain activity depending on whether the voice was natural or synthesized.

One of these regions was the auditory cortex, which clearly indicates that there were acoustic differences between the natural and fake voices; the other was the nucleus accumbens (NAcc). This part of the basal forebrain is involved in the cognitive processing of motivation, reward, and reinforcement learning, which plays a key role in social, maternal, and addictive behavior. Overall, the deepfake voices are characterized by acoustic imperfections and do not elicit the same sense of recognition (and thus reward sensation) that natural voices do.

Until deepfake voices can be made much better, it would appear that we are still safe, at least for now.

Upper stage of a Japanese H-2A rocket which has been in orbit since 2009. It's one of the largest pieces of orbital debris. (Credit: Astroscale)

Astroscale’s ADRAS-J Satellite Takes Up-Close Photo Of Discarded Rocket Stage

Although there is a lot of space in Earth orbit, there are also some seriously big man-made objects in those orbits, some of which have been there for decades. As part of efforts to remove at least some of this debris, Astroscale’s ADRAS-J (“Active Debris Removal by Astroscale-Japan”) satellite has been taking part in JAXA’s Commercial Removal of Debris Demonstration (CRD2). After ADRAS-J was launched on a Rocket Lab Electron rocket on February 18th, it has been moving steadily closer to its target, with June 14th seeing an approach to roughly 50 meters, allowing an unprecedented photo of the H-2A stage to be taken in orbit. This upper stage of a Japanese H-2A rocket originally launched the GOSAT Earth observation satellite back in 2009.

The challenge with this kind of approach is that orbital debris does not actively broadcast its location, so a combination of ground-based and onboard tracking is required to match the target’s orbital trajectory for a safe approach. Here ADRAS-J uses what is called Model Matching Navigation, which compares captured images against known visual information about the target in order to estimate the relative distance to it.
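Astroscale hasn’t published the internals of Model Matching Navigation, but a common way to do model-based relative navigation is to match known 3D points on the target to their detected 2D locations in the camera image, then solve the perspective-n-point (PnP) problem for the relative pose. Here is a minimal sketch using OpenCV; the model points, pixel detections, and camera intrinsics are all hypothetical stand-ins rather than anything from the actual mission.

```python
import numpy as np
import cv2

# Known 3D feature points on the target body, in the target's own
# frame (meters). Hypothetical values; the real system would use a
# detailed visual model of the H-2A upper stage.
model_points = np.array([
    [0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 0.0, 9.0],
    [0.0, 0.0, 9.0], [1.0, 2.0, 4.5], [1.0, -2.0, 4.5],
], dtype=np.float64)

# Matching 2D detections in the chaser's camera image (pixels),
# e.g. from matching features against rendered views of the model.
image_points = np.array([
    [412.0, 520.0], [468.0, 518.0], [471.0, 260.0],
    [415.0, 262.0], [455.0, 390.0], [430.0, 392.0],
], dtype=np.float64)

# Pinhole camera intrinsics (assumed calibrated beforehand).
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
if ok:
    # The translation vector locates the target in the camera frame;
    # its norm is the estimated relative range.
    print(f"Estimated range: {np.linalg.norm(tvec):.1f} m")
```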

Although the goal of ADRAS-J is only to study the target from as close as possible, the next phase of the CRD2 program would involve actively deorbiting this upper stage, and is projected to commence in 2026.

Thanks to [Stephen Walters] for the tip.


Recovering An Agilent 2000A/3000A Oscilloscope With Corrupt Firmware NAND Flash

Everyone knows that you can never purchase enough projects off eBay, lest boredom inadvertently strike. That’s why [Anthony Kouttron] got his mitts on an Agilent DSO-X 2014A digital oscilloscope that was being sold as defective and non-booting, effectively just for parts. When [Anthony] received the unit, this turned out to be very much the case, with the front looking like it had been dragged over tarmac before having the stuffing beaten out of its knobs with a hammer. Fortunately, repairing the broken encoder and the plastic enclosure was easy enough, but the scope still didn’t want to boot when powered on. How bad was the damage?

As [Anthony] describes in the article, issues with this range of Agilent DSOs are well known. For example, the PSU likes to fry its primary side because the soft power button leaves it powered 24/7 with no cooling. The other common failure is corrupted NAND storage, which he confirmed after figuring out the UART interface on the PCB with the ST SPEAr600 ARM-based SoC. Seeing the sad Flash block decompression error from Windows CE said enough.
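For anyone attempting a similar diagnosis, watching a boot console over a USB-UART adapter takes very little code. This pyserial sketch assumes a hypothetical port name and a 115,200 baud setting; check the write-up for the actual header pinout and parameters.

```python
import serial  # pyserial

# Port and baud rate are assumptions for illustration only; the board
# silkscreen or write-up gives the real UART header settings.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as console:
    print("Power on the scope and watch the bootloader output (Ctrl-C to stop)...")
    while True:
        line = console.readline()
        if line:
            print(line.decode("ascii", errors="replace"), end="")
```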

This led him down the rabbit hole of finding the WinCE firmware images for this scope (nuked by Keysight, backed up on his site), along with the InfiniiVision scope application. The former is loaded via the bootloader in binary YMODEM mode, followed by installing InfiniiVision from a USB stick. An alternate method is explained in the SPEAr600 datasheet, in the form of the USB BootROM, which can also be reached via the bootloader with some effort.
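YMODEM itself is a venerable but well-documented protocol: each transfer starts with a “block 0” carrying the filename and size, framed with block numbers and a CRC-16. As a rough illustration of what the bootloader is expecting (not code taken from its actual implementation), here is how such a header block is constructed:

```python
SOH, STX = 0x01, 0x02  # start bytes for 128- and 1024-byte blocks

def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021, init 0), as used by YMODEM."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def ymodem_header_block(filename: str, size: int) -> bytes:
    """Block 0 of a YMODEM transfer: filename and decimal file size,
    padded to 128 bytes, framed as SOH, block number, inverted block
    number, payload, and a big-endian 16-bit CRC."""
    payload = filename.encode("ascii") + b"\x00" + str(size).encode("ascii")
    payload = payload.ljust(128, b"\x00")
    block_num = 0
    frame = bytes([SOH, block_num, 0xFF - block_num]) + payload
    return frame + crc16_xmodem(payload).to_bytes(2, "big")

# Hypothetical firmware image name and size, just to show the framing.
print(ymodem_header_block("firmware.bin", 4194304).hex()[:64], "...")
```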

As for the cause of the NAND corruption, it’s speculated that the scope writes to the same section of NAND Flash on every boot, while the SPEAr600’s Flash controller documentation makes no mention of wear leveling. Whether that’s true or not, at least the corruption can be fixed with some effort, even without replacing the NAND Flash IC.

McDonald’s Terminates Its Drive-Through Ordering AI Assistant

McDonald’s recently announced that it will be scrapping the voice assistant it has installed at over 100 of its drive-throughs after a two-year trial run. In the email sent to franchisees, McDonald’s did say that it is still looking at voice-ordering solutions for automated order taking (AOT), but it appears that for now the test was a disappointment. Judging by the many viral videos of customers struggling to place an order through the AOT system, it’s not hard to see why.

This AOT attempt began in 2019, when McDonald’s acquired AI company Apprente to create its McD Tech Labs, only to sell it again to IBM, which was then contracted to create the technology for McDonald’s fast-food joints. When the system launched in 2021, it was expected that McDonald’s drive-through ordering lanes would eventually all be serviced by AOT, with an experience akin to the Alexa and Siri voice assistants that everyone knows and loves (to yell at).

With the demise of this test at McDonald’s, it would seem that the biggest change is instead likely to come from wider automation of fast-food preparation, with robots doing the burger flipping and freedom frying rather than a human. That said, would you prefer the McD voice assistant over a human voice when going through a Drive-Thru®?

Credit: Xinmei Liu

The US Surgeon General’s Case For A Warning Label On Social Media

The term ‘social media’ may give off a benign vibe, suggesting a friendly place where everyone is welcome to be themselves, yet reality has borne out that it is anything but. This is why US Surgeon General [Dr. Vivek H. Murthy] is pleading for a health warning label on social media platforms. Much like the warnings on tobacco products, such a measure isn’t expected to make social media safe for children and adolescents, but it would remind them and their parents of the risks of these platforms.

While this may sound dire for something that is, at its core, about social interaction, there is a growing body of evidence to support the notion that social media can negatively impact mental health. A 2020 systematic review in Cureus by [Fazida Karim] and colleagues found anxiety and depression to be the most notable negative psychological health outcomes. A 2023 editorial in BMC Psychology by [Ágnes Zsila] and [Marc Eric S. Reyes] concurs with this notion, while contrasting these cons of social media with its pros, such as giving individuals an online community where they feel they belong.

Ultimately, it’s important to realize that social media isn’t the end-all, be-all of online social interaction. There are still many dedicated forums, IRC channels, and newsgroups far away from the prying eyes of social media and its pressure to act out a personality. Having more awareness of how social interactions affect oneself and/or one’s children is definitely essential, even if we’re unlikely to return to the ‘never give out your real name’ days of the pre-2000s Internet.

Reverse-Engineering Makita Batteries To Revive Them

Modern lithium-ion battery packs for cordless power tools contain an incredible amount of energy, which necessitates a range of safeties. Although it’s good when the battery management system (BMS) detects a fault and cuts power to prevent issues, there is always the possibility of false positives. Having an expensive battery pack brick itself for no good reason is rather annoying, as is being unable to reuse a BMS in, for example, a remanufactured pack. This was the reasoning that led [Martin Jansson] down the path of reverse-engineering battery packs, starting with Makita’s.

After that initial reverse-engineering attempt, which involved a firmware dump of the NEC (Renesas) F0513 MCU, [Martin] didn’t get back to the project until recently, when he was contacted by [Romain], who donated a few BMS boards to the cause. One of these features an STM32 MCU, which made the task much easier. Ultimately, [Martin] was able to determine the command set for the Maxim OneWire-based communication protocol, as well as a hidden UART mode.
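For anyone sniffing a similar pack, frames on Maxim’s 1-Wire bus are conventionally protected by the Dallas/Maxim CRC-8. Whether Makita’s protocol uses the stock polynomial is an assumption here, but the check itself is short enough to sketch:

```python
def crc8_maxim(data: bytes) -> int:
    """Dallas/Maxim 1-Wire CRC-8: polynomial x^8 + x^5 + x^4 + 1,
    processed LSB-first (reflected form 0x8C), initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8C if (crc & 1) else crc >> 1
    return crc

# Example: verify a 1-Wire ROM code, whose eighth byte is the CRC of
# the first seven; a valid code makes the overall CRC come out zero.
rom = bytes(8)  # placeholder; a real device supplies this
print("valid" if crc8_maxim(rom) == 0 else "invalid")
```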

Due to the critical timing required, off-the-shelf programmers didn’t work, so an Arduino Uno-based programmer (ArduinoOBI) was created instead. It can be found on GitHub along with the Open Battery Information desktop application, which provides access to these BMS features once connected to the battery pack. Although only Makita is supported right now, [Martin] would like to see support added for other brands as well.