First Hubble Image Taken In New Single Gyro Pointing Mode

After Space Shuttle Atlantis’ drive-by repair of the Hubble Space Telescope (HST) in May of 2009, the end of the Space Shuttle program meant that the space telescope had to fend for itself, with no prospect of further repair missions. The weakest point turned out to be the gyroscopes: of the original six, only three were still functioning until May 24th of 2024, when one of them failed and could no longer be reset. To make the most of the HST’s remaining lifespan, NASA decided to transition to single-gyroscope operation, with the most recent imaging results showing that this mode still enables HST to carry out its science mission.

Although the HST has operated with a reduced number of gyroscopes before, while awaiting its (much delayed) 2009 Servicing Mission 4, this time around it would appear that no such aid is coming. While HST is still very much functional even after recently celebrating its 34th year in space, there is a lot of debate about whether another servicing mission could be organized, or whether HST will simply be deorbited in a number of years. Recently, people like [Jared Isaacman] have suggested ideas for a private servicing mission, with [Jared] even offering to pay for the entire mission out of pocket.

While there is an argument to be made that a Crew Dragon is a poor substitute for a Shuttle with its big cargo bay, airlock and robotic arm, it’s promising to see that, for now at least, HST can do what it does best with few compromises, and we may yet see a Servicing Mission 5 happen before that last gyro kicks the bucket.

Litter-windrow detections in the Mediterranean Sea. (Credit: ESA)

Mapping Litter In The Oceans From Space With Existing Satellites

Aerial drone image of a litter windrow in Bay of Biscay, Spain. Windrow width: 1-2 meters. (Credit: ESA)

Recently, ESA published the results of a proof-of-concept study into monitoring marine litter using existing satellites, with promising results for the Mediterranean study area. The study used six years of historical data from the Sentinel-2 satellites’ multispectral imaging cameras, comprising 300,000 images with a resolution of 10 meters. The focus was on litter windrows: accumulations of floating marine debris such as plastic and wood that get pushed together into clearly visible lines, which can be meters wide and many times as long.

These were processed as explained in the open access paper in Nature Communications by [Andrés Cózar] and colleagues. Since marine litter (ML) tends to be overwhelmingly composed of plastic, any ML that’s visible from space can generally be assumed to be primarily plastic, which eases detection. This assumption was combined with the spectral profile of common plastics, so that other types of floating materials (algae, driftwood, seafoam, etc.) could be filtered out, leaving just the litter.
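To give a rough idea of what such spectral screening looks like, here is a minimal sketch in Python. It is emphatically not the authors’ actual pipeline; it uses the Floating Debris Index (FDI) from related marine-litter remote sensing work, and the thresholds are illustrative assumptions:

```python
# Minimal sketch of spectral screening for floating plastic in Sentinel-2
# imagery. NOT the exact pipeline from the Nature Communications paper;
# it uses the Floating Debris Index (FDI) from related literature to
# separate plastic-like pixels from algae-like pixels and open water.
import numpy as np

def plastic_candidate_mask(b04_red, b06_re2, b08_nir, b11_swir1):
    """All inputs are reflectance arrays resampled to the same 10 m grid."""
    # Central wavelengths (nm) of the Sentinel-2 bands involved.
    l_red, l_nir, l_swir1 = 665.0, 833.0, 1610.5

    # FDI: how far the NIR reflectance pops above a baseline interpolated
    # between the red edge and SWIR; floating material lifts NIR well
    # above the dark open-water background.
    nir_baseline = b06_re2 + (b11_swir1 - b06_re2) * \
        ((l_nir - l_red) / (l_swir1 - l_red)) * 10.0
    fdi = b08_nir - nir_baseline

    # NDVI separates vegetation-like floaters (algae, seagrass) from
    # plastic: both raise the FDI, but plants also show a strong red/NIR
    # contrast that plastics lack.
    ndvi = (b08_nir - b04_red) / (b08_nir + b04_red + 1e-9)

    # Thresholds are purely illustrative; real studies tune them per scene.
    return (fdi > 0.01) & (ndvi < 0.2)
```

Pixels that pass a mask like this can then be grouped into elongated, windrow-like shapes before being counted as detections.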

This revealed many of these short-lived litter windrows, with spot confirmation from ships in the area. Some of the windrows were many kilometers in length, with an average of around 1 km.

Although just a proof of concept, it nevertheless shows that monitoring such plastic debris from space is quite doable, even without dedicated satellites. With tons more plastic making its way into the oceans every day, this at least provides us with the means to keep track of the scope of the problem, even if resolving it, along with the associated microplastics problem, is still a far-off dream.

Human Brains Can Tell Deepfake Voices From Real Ones

Although it’s generally accepted that synthesized voices which mimic real people’s voices (so-called ‘deepfakes’) can be pretty convincing, what does our brain really make of these mimicry attempts? To answer this question, researchers at the University of Zurich put a number of volunteers into fMRI scanners, allowing them to observe how the subjects’ brains would react to real and synthesized voices. The perhaps somewhat surprising finding is that the human brain responds differently in two regions depending on whether it’s hearing a real or a fake voice, meaning that on some level we are aware that we are listening to a deepfake.

The detailed findings by [Claudia Roswandowitz] and colleagues are published in Communications Biology. For the study, 25 volunteers were asked to accept or reject the voice samples they heard as being natural or synthesized, as well as to match each voice to the supposed speaker. The natural voices came from four male (German) speakers, whose voices were also used to train the synthesis model. Not only did identity matching performance crater with the synthesized voices, but the resulting fMRI scans also showed very different brain activity depending on whether the voice was natural or synthesized.

One of these regions was the auditory cortex, which clearly indicates that there were acoustic differences between the natural and fake voices; the other was the nucleus accumbens (NAcc). This part of the basal forebrain is involved in the cognitive processing of motivation, reward and reinforcement learning, and plays a key role in social, maternal and addictive behavior. Overall, the deepfake voices are characterized by acoustic imperfections, and do not elicit the same sense of recognition (and thus reward sensation) that natural voices do.

Until deepfake voices can be made much better, it would appear that we are still safe, for now.

Upper stage of a Japanese H-2A rocket which has been in orbit since 2009. It's one of the largest pieces of orbital debris. (Credit: Astroscale)

Astroscale’s ADRAS-J Satellite Takes Up-Close Photo Of Discarded Rocket Stage

Although there is a lot of space in Earth orbit, there are also some seriously big man-made objects in those orbits, some of which have been there for decades. As part of efforts to remove at least some of this debris from orbit, Astroscale’s ADRAS-J (“Active Debris Removal by Astroscale-Japan”) satellite has been taking part in JAXA’s Commercial Removal of Space Debris Demonstration (CRD2). After ADRAS-J was launched on a Rocket Lab Electron rocket on February 18th, it has been moving steadily closer to its target, with June 14th seeing an approach to within roughly 50 meters, allowing an unprecedented photo to be taken of the H-2A stage in orbit. This upper stage of a Japanese H-2A rocket originally launched the GOSAT Earth observation satellite into orbit back in 2009.

The challenge with this kind of approach is that orbital debris does not actively broadcast its location, meaning that a combination of ground-based and on-satellite tracking is required to match the orbital trajectory for a safe approach. Here, ADRAS-J uses what is called Model Matching Navigation, which compares captured images against known visual information about the target in order to estimate the relative distance to it.
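As a rough illustration of the principle behind such model matching (and not Astroscale’s actual flight software), the classic computer vision approach is to match known 3D feature locations from the target’s model against their detected 2D positions in a camera frame, then solve for the relative pose. A minimal sketch in Python with OpenCV, with all coordinates made up for illustration:

```python
# Hypothetical sketch of model-matching navigation: given the known 3D
# coordinates of distinctive features on the target (from its model) and
# their detected 2D positions in a camera frame, solvePnP recovers the
# camera pose relative to the target. Generic computer vision, not
# Astroscale's actual implementation.
import numpy as np
import cv2

# 3D feature points on the rocket stage, in the stage's body frame
# (meters). Purely illustrative values.
model_points = np.array([
    [ 0.0,  0.0,  0.0],   # engine nozzle rim
    [ 0.0,  0.0, 11.0],   # forward end of the stage
    [ 2.0,  0.0,  5.5],   # fitting on the tank wall
    [-2.0,  0.0,  5.5],   # fitting on the opposite side
    [ 0.0,  2.0,  3.0],   # umbilical connector
    [ 0.0, -2.0,  8.0],   # another distinctive marking
], dtype=np.float64)

# Matching 2D detections (pixels), e.g. from comparing the camera image
# against rendered views of the model. Also illustrative values.
image_points = np.array([
    [640.0, 620.0], [650.0, 180.0], [780.0, 400.0],
    [510.0, 402.0], [648.0, 500.0], [655.0, 295.0],
], dtype=np.float64)

# Simple pinhole camera intrinsics: focal length and principal point.
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
if ok:
    # The norm of the translation vector is the range to the target.
    print(f"Estimated relative distance: {np.linalg.norm(tvec):.1f} m")
```

In practice, finding those feature correspondences reliably under harsh and rapidly changing orbital lighting is the hard part.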

Although the goal of ADRAS-J is only to study the target from as close as possible, the next phase in the CRD2 program would involve actively deorbiting this upper stage, and is projected to commence in 2026.

Thanks to [Stephen Walters] for the tip.


Recovering An Agilent 2000a/3000a Oscilloscope With Corrupt Firmware NAND Flash

Everyone knows that you can never purchase enough projects off eBay, lest boredom inadvertently strike. That’s why [Anthony Kouttron] got his mitts on an Agilent DSO-X 2014A digital oscilloscope that was being sold as defective and non-booting, effectively just for parts. When [Anthony] received the unit, this turned out to be very much the case, with the front looking like it had been dragged over tarmac before having the stuffing beaten out of its knobs with a hammer. Fortunately, repairing the broken encoder and the plastic enclosure was easy enough, but the scope didn’t want to boot when powered on. How bad was the damage?

As [Anthony] describes in the article, issues with this range of Agilent DSOs are well-known, with the PSU for example liking to fry its primary side, as the soft power button leaves it powered 24/7 with no cooling. The other common issue is corrupted NAND storage, which he confirmed after figuring out the UART interface on the PCB with the ST SPEAr600 ARM-based SoC: seeing the sad Flash block decompression error from Windows CE said enough.

This led him down the rabbit hole of finding the WinCE firmware images for this scope (nuked by Keysight, but backed up on his site), along with the InfiniiVision scope application. The former is loaded via the bootloader in binary YMODEM mode, followed by installing InfiniiVision from a USB stick. An alternate method is explained in the SPEAr600 datasheet, in the form of the USB BootROM, which can also be reached via the bootloader with some effort.
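For those unfamiliar with this kind of recovery, the general shape of a YMODEM upload over a UART console looks something like the following Python sketch. The serial device, baud rate, bootloader command and file name are all assumptions for illustration; the write-up documents the exact sequence for this scope:

```python
# Hypothetical sketch: put a bootloader into YMODEM receive mode over its
# UART console, then send a firmware image with 'sb' from the common
# lrzsz package. Port, baud rate, command and file name are assumptions.
import subprocess

PORT = "/dev/ttyUSB0"

# Configure the serial line as raw 115200 8N1 (Linux-style stty).
subprocess.run(["stty", "-F", PORT, "115200", "raw", "-echo"], check=True)

# Ask the bootloader to start a YMODEM download; 'loady' is the U-Boot
# convention and purely illustrative here.
with open(PORT, "wb", buffering=0) as tty:
    tty.write(b"loady\r\n")

# 'sb' speaks YMODEM over stdin/stdout, so wire both to the serial device.
with open(PORT, "rb+", buffering=0) as tty:
    subprocess.run(["sb", "firmware.nb0"], stdin=tty, stdout=tty, check=True)
```

Terminal programs like minicom or TeraTerm wrap the same transfer in a menu, which is usually the more comfortable route.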

As for the cause of the NAND corruption, it’s speculated that the scope writes to the same section of NAND Flash on every boot, with the SPEAr600’s Flash controller documentation making no mention of wear leveling. Whether that’s true or not, at least the corruption can be fixed with some effort, even without replacing the NAND Flash IC.

McDonald’s Terminates Its Drive-Through Ordering AI Assistant

McDonald’s recently announced that it will be scrapping the voice assistant it has installed at over 100 of its drive-throughs, after a two-year trial run. In the email that was sent to franchises, McDonald’s did say that it is still looking at voice ordering solutions for automated order taking (AOT), but it appears that for now the test was a disappointment. Judging by the many viral videos of customers struggling to place an order through the AOT system, it’s not hard to see why.

This AOT attempt began in 2019, when McDonald’s acquired AI company Apprente to create its McD Tech Labs, only to sell it on to IBM, which was then contracted to create the technology for McDonald’s fast-food joints. When the trial launched in 2021, it was expected that McDonald’s drive-through ordering lanes would eventually all be serviced by AOT, with an experience akin to the Alexa and Siri voice assistants that everyone knows and loves (to yell at).

With the demise of this test at McDonald’s, it would seem that the biggest change is more likely to come from the wider automation of preparing fast food instead, with robots doing the burger flipping and freedom frying rather than a human. That said, would you prefer the McD voice assistant over a human voice when going through a Drive-Thru®?

Credit: Xinmei Liu

The US Surgeon General’s Case For A Warning Label On Social Media

The term ‘Social Media’ may give off a benign vibe, suggesting a friendly place where everyone is welcome to be themselves, yet reality has borne out that it is anything but. This is why the US Surgeon General [Dr. Vivek H. Murthy] is pleading for a health warning label on social media platforms. Much like the warnings on tobacco products, such a measure isn’t expected to make social media safe for children and adolescents, but it would remind them and their parents about the risks of these platforms.

While this may sound dire for something that is, at its core, about social interaction, there is a growing body of evidence to support the notion that social media can negatively impact mental health. A 2020 systematic review article in Cureus by [Fazida Karim] and colleagues found anxiety and depression to be the most notable negative psychological health outcomes. A 2023 editorial in BMC Psychology by [Ágnes Zsila] and [Marc Eric S. Reyes] concurs with this notion, while contrasting these cons of social media with its pros, such as giving individuals an online community where they feel they belong.

Ultimately, it’s important to realize that social media isn’t the be-all and end-all of online social interaction. There are still many dedicated forums, IRC channels and newsgroups far away from social media’s prying eyes and its pressure to act out a personality. Having more awareness of how social interactions affect oneself and/or one’s children is definitely essential, even if we’re unlikely to return to the ‘never give out your real name’ days of the pre-2000s Internet.