Lacking a DVD drive, [jg] was watching a TV series in the form of a bunch of .avi video files. Of course, when every episode contains a full intro, it is only a matter of time before that gets too annoying to sit through.
The usual method of skipping the intro on a plain video file is a simple one:
Manually drag the playback forward past the intro.
Oops, that’s too far, bring it back.
Ugh, reversed it too much, nudge it forward.
Okay, that’s good.
[jg] was certain there was a better way, and the solution was using audio fingerprinting to insert chapter breaks. The plain video files now have chapter breaks around the intro, allowing for easy skipping straight to content. The reason behind selecting this method is simple: the show intro is always 52 seconds long, but it isn’t always in the same place. The intro plays somewhere within the first two to five minutes of an episode, so just skipping to a specific timestamp won’t do the trick.
The first job is to extract the audio of an intro sequence so that it can be used for fingerprinting. Exporting the first 15 minutes of audio with ffmpeg easily creates a WAV file that can be trimmed down in the audio editor of choice. That clip gets fed into the open-source SoundFingerprinting library as a signature; then each video has its audio track exported, and the signature is located within it. In this way SoundFingerprinting detects, down to the second, where the intro sits within each video file.
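The extraction step can be sketched as an ffmpeg invocation. The 15-minute window comes from the write-up; the filenames and exact flags below are assumptions for illustration, not [jg]'s actual command:

```python
# Build (not run) an ffmpeg command that dumps the first N minutes of an
# episode's audio as uncompressed WAV, ready for trimming and fingerprinting.
def extract_audio_cmd(video, wav_out, minutes=15):
    """Return an argv list for extracting the leading audio as WAV."""
    return [
        "ffmpeg",
        "-i", video,                # input episode
        "-t", str(minutes * 60),    # stop after N minutes
        "-vn",                      # drop the video stream
        "-acodec", "pcm_s16le",     # 16-bit PCM, a plain WAV payload
        "-ar", "44100",             # 44.1 kHz sample rate
        wav_out,
    ]

cmd = extract_audio_cmd("episode01.avi", "episode01.wav")
print(" ".join(cmd))
```

Passing the list to `subprocess.run()` would execute it; building the argv first keeps the quoting safe and the command easy to inspect.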
Marking out chapter breaks using that information is conceptually simple, but ends up being a bit roundabout because .avi files don’t have a simple way to encode chapters. However, .mkv files are another matter. To get around this, [jg] first converts each .avi to .mkv with ffmpeg, then splices in the chapter breaks with mkvmerge. One important detail is that the remux from .avi to .mkv is done without re-encoding the video itself, so it’s a quick process. The result is a bunch of .mkv files with chapter breaks around the intro, wherever it may be!
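Turning a detected intro offset into something mkvmerge understands is straightforward with its "simple" chapter file format (`CHAPTERxx=` / `CHAPTERxxNAME=` pairs). The 52-second intro length is from the article; the chapter names and the detected offset below are invented for illustration:

```python
# Generate an mkvmerge simple-format chapter file from a detected intro
# start time, e.g. for: mkvmerge -o out.mkv --chapters chapters.txt in.mkv
# (the remux itself is just: ffmpeg -i ep.avi -c copy ep.mkv)
def chapters_for_intro(intro_start_s, intro_len_s=52):
    def ts(seconds):
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}.000"

    marks = [
        (0, "Opening"),
        (intro_start_s, "Intro"),
        (intro_start_s + intro_len_s, "Episode"),
    ]
    lines = []
    for i, (t, name) in enumerate(marks, start=1):
        lines.append(f"CHAPTER{i:02d}={ts(t)}")
        lines.append(f"CHAPTER{i:02d}NAME={name}")
    return "\n".join(lines)

chapter_file = chapters_for_intro(137)  # intro detected at 2:17
print(chapter_file)
```

With the intro bracketed by its own chapter, a player's "next chapter" button skips it in one press.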
Like any other video call, if you had the link you could enter the meeting. So when Netherlands Defence Minister Ank Bijleveld Tweeted a photo of a video call last Friday, the address bar of the browser gave away the secret to anyone with a keen eye. Dutch journalist Daniël Verlaan working for the broadcaster RTL saw the URL on the screen and deduced the login credentials for the meeting.
We say “deduced”, but in fact five of the six digits of the PIN were in the clear in the URL, leaving him with the difficult task of performing a one-digit brute-force attack and joining with the username “admin”. He joined and revealed his presence, then was admonished for committing a criminal offence before he left.
On one level it’s an opportunity for a good laugh at the expense of the defence ministers, and we certainly wouldn’t want to be Ank Bijleveld, or more likely the EU’s online security people, once the inevitable investigation gets under way. It seems scarcely credible that the secrecy of such a high-security meeting could have rested on such a shaky foundation without, for example, some form of two-factor authentication using the kind of hardware available only to governments.
EU policy is decided not by individual ministries but by delicate round-table summits of all 27 countries. In a pandemic these have shifted to being half-online and half in-real-life, so this EU defence ministers’ meeting had the usual mosaic video feed of politicians and national flags. And one Zoom-bombing journalist.
[Jesse]’s modification doesn’t affect the laser beam itself; it is an improvement on the air assist, which is the name for a constant stream of air that blows away smoke and debris as the laser burns and vaporizes material. An efficient air assist is one of the keys to getting nice clean laser cuts, but [Jesse] points out that a good quality air assist isn’t just about how hard the air blows, it’s also about how smoothly it does so. A turbulent air assist can make scorch marks worse, not better.
As an experiment to improve the quality of the air flowing out the laser nozzle, [Jesse] researched ways to avoid turbulence by creating laminar flow. Laminar flow is the quality of a fluid having layers flowing past one another with little or no mixing. One way to achieve it is to force the fluid through individual, parallel channels as it progresses towards a sharply-defined exit nozzle. While [Jesse] found no reference designs of laminar flow nozzles for air assists, there were definitely resources on making laminar flow nozzles for water. It turns out that interest in such a nozzle exists mainly as a means of modifying Lonnie Johnson’s brilliant invention, the Super Soaker.
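Whether a flow stays laminar or trips into turbulence is governed by the Reynolds number, Re = ρvD/μ; for pipe-like flow, values below roughly 2300 are laminar. The nozzle diameter and air speed below are invented numbers, just to show the back-of-envelope calculation:

```python
# Estimate the Reynolds number for air moving through a narrow channel.
# Defaults are for air at ~20 degC: density in kg/m^3, dynamic viscosity
# in Pa*s. Re < ~2300 suggests the flow can remain laminar.
def reynolds_number(velocity_m_s, diameter_m, density=1.204, viscosity=1.825e-5):
    return density * velocity_m_s * diameter_m / viscosity

re = reynolds_number(velocity_m_s=5.0, diameter_m=0.002)  # 5 m/s through 2 mm
print(f"Re = {re:.0f} -> {'laminar' if re < 2300 else 'turbulent'}")
```

This is why laminar nozzles split the flow into many small parallel channels: shrinking the characteristic diameter drives Re down for the same delivered airflow.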
Working from such a design, [Jesse] created a custom nozzle to help promote laminar flow. Sadly, a laser cutter head carries design constraints that make some compromises unavoidable; one is limited space, and another is the need to keep the laser’s path unobstructed. Still, after 3D printing it in rigid heat-resistant resin, [Jesse] found a dramatic improvement in the feel of the air exiting the nozzle. Some test cuts confirmed a difference in performance, resulting in a noticeably cleaner kerf without scorching around the edges.
One of the things [Nervous System] does is make their own custom puzzles, so any improvement to laser cutting helps reliability and quality. When production is involved, just about everything matters; a lesson [Nervous System] shared when they discussed making the best plywood for creating their puzzles.
While SpaceX’s constellation of Starlink satellites is nowhere near its projected final size, the company has enough of the birds zipping around in low Earth orbit to start a limited testing period they call the Better Than Nothing Beta. If you’re lucky enough to get selected, you have to cough up $500 for the hardware and another $100 a month for the service. Despite the fairly high bar for getting your hands on one, [Kenneth Keiter] decided to sacrifice his Starlink dish to the teardown Gods.
We say sacrifice because [Kenneth] had to literally destroy the dish to get a look inside. It doesn’t appear that you can realistically get into the exceptionally thin antenna array without pulling it all apart, thanks in part to the preposterous amount of adhesive that holds the structural back plate onto the PCB. The sky-facing side of the phased array, the key element that allows the antenna to track the rapidly moving Starlink satellites as they pass overhead, is also laminated to a stack-up comprised of plastic hexagonal mesh layers, passive antenna elements, and the outer fiberglass skin. In short, there are definitely no user-serviceable parts inside.
Beyond attempting to analyze the RF magic that’s happening inside the antenna, [Kenneth] also takes viewers through a tour of some of the more recognizable components of the PCB, picking out things like the Power over Ethernet magnetics, a GPS receiver, some flash storage, and the H-Bridge drivers used to control the pan and tilt motors in the base of the dish.
It also appears that the antenna is a self-contained computer of sorts, complete with ARM processor and RAM to run the software that aims the phased array. Speaking of which, it should come as no surprise to find that not only are the ICs that drive the dizzying array of antenna elements the most numerous components on the PCB, but that they appear to be some kind of custom silicon designed specifically for SpaceX.
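The principle those beamforming ICs implement is simple to state: to steer the beam by an angle θ, each element in a row is driven with an extra phase delay of 2πd·sin(θ)/λ, where d is the element spacing and λ the wavelength. The spacing and frequency below are rough guesses for a Ku-band array, not measured Starlink values:

```python
import math

# Per-element phase offsets for steering a uniform linear phased array.
# Element i gets phase i * 2*pi*d*sin(theta)/lambda, wrapped to [0, 2*pi).
def element_phases(n_elements, spacing_m, steer_deg, freq_hz=12e9):
    c = 3e8                              # speed of light, m/s
    wavelength = c / freq_hz             # ~25 mm at 12 GHz
    theta = math.radians(steer_deg)
    dphi = 2 * math.pi * spacing_m * math.sin(theta) / wavelength
    return [(i * dphi) % (2 * math.pi) for i in range(n_elements)]

# Half-wavelength spacing, beam steered 30 degrees off boresight
phases = element_phases(n_elements=8, spacing_m=0.0125, steer_deg=30)
```

Update those phases a few times per second and the beam tracks a satellite across the sky with no moving parts, which is exactly what lets the dish follow fast-moving LEO birds.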
The build starts with an old CRT, which [Vivian] promptly gutted to make room for her head. In place of the original tube, a thin polycarbonate sheet was installed with window tint applied. Behind this, rows of WS2812B LEDs are set up in a grid, spaced apart just enough to allow the wearer to see through. The setup is controlled by a Circuit Playground Express. A small PS/2 keyboard is used to control the light show, and the onboard accelerometer can be used for gravity-reactive animations.
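A gravity-reactive effect boils down to mapping the accelerometer vector onto the LED grid. Here is a plain-Python sketch of one such mapping, where lit columns "pool" toward the downhill side as the head tilts; the grid size and scaling are invented, and [Vivian]'s actual CircuitPython code may differ:

```python
# Map a sideways acceleration reading (m/s^2) onto on/off states for a
# row of LED columns: level -> half lit, full tilt -> all or none lit.
def lit_columns(accel_x, n_cols=10, full_scale=9.8):
    tilt = max(-1.0, min(1.0, accel_x / full_scale))  # clamp to -1 .. 1
    lit = round((tilt + 1) / 2 * n_cols)              # 0 .. n_cols columns
    return [i < lit for i in range(n_cols)]

print(lit_columns(0.0))   # level: half the columns lit
```

On the Circuit Playground Express the `accel_x` input would come from the board's onboard accelerometer each frame, and the boolean list would index into the WS2812B grid.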
It was a trope all too familiar in the 1990s — law enforcement in movies and TV taking a pixelated, blurry image, and hitting the magic “enhance” button to reveal suspects to be brought to justice. Creating data where there simply was none before was a great way to ruin immersion for anyone with a modicum of technical expertise, and spoiled many movies and TV shows.
Of course, technology marches on and what was once an utter impossibility often becomes trivial in due time. These days, it’s expected that a sub-$100 computer can easily differentiate between a banana, a dog, and a human, something that was unfathomable at the dawn of the microcomputer era. This capability is rooted in the technology of neural networks, which can be trained to do all manner of tasks formerly considered difficult for computers.
With neural networks and plenty of processing power at hand, there have been a flood of projects aiming to “enhance” everything from low-resolution human faces to old film footage, increasing resolution and filling in for the data that simply isn’t there. But what’s really going on behind the scenes, and is this technology really capable of accurately enhancing anything?
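The distinction is worth making concrete. Classical upscaling is pure interpolation: every new sample is a weighted average of existing neighbours, so no information is created. A neural super-resolution model instead fills in plausible detail learned from its training set, which is why its output can look sharp yet be wrong. A minimal 1-D linear-interpolation upscaler shows the classical case:

```python
# Double the sample rate of a 1-D signal by linear interpolation.
# Every output value is a blend of two original samples and nothing else,
# illustrating that classical "enhance" invents no new data.
def upscale_linear(samples, factor=2):
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for k in range(factor):
            t = k / factor
            out.append(a * (1 - t) + b * t)   # weighted average only
    out.append(samples[-1])                   # keep the final sample
    return out

print(upscale_linear([0.0, 10.0, 20.0]))  # -> [0.0, 5.0, 10.0, 15.0, 20.0]
```

A learned model replaces that weighted average with a prediction, and the question the article asks is how far such predictions can be trusted.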
While dreams are generally thought of as the unconscious wanderings of the mind, that’s not the full story. Lucid dreams are ones in which the individual is conscious or semi-conscious in the dream state, and may be able to control the dream environment. Over the years, various devices have been used to generate these dream states more reliably. [Ben] decided to have a go at building his own, inspired by designs from the 1990s.
To induce lucid dreaming, the aim is to first detect that the mask wearer is in REM sleep. This is commonly done with an infrared eye tracker, which detects the rapid twitching of the eye. [Ben] used the onboard IR proximity sensor on the Adafruit Circuit Playground Express to pull this off. The accelerometer hardware was then used to detect if the wearer was still, indicating they are indeed fully asleep. Once the user is in the correct state, the mask then flashes LEDs which are intended to be visible to the wearer while dreaming. This allows them to realize they are dreaming, and thus enter a conscious, lucid state.
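The detection logic described above can be sketched as a simple AND of two conditions: the body must be still, and the IR reading must be fluctuating the way a twitching eye would make it. The thresholds below are invented placeholders; [Ben]'s actual firmware will differ:

```python
# Decide whether to flash the LED cue for one window of sensor samples.
# ir_samples: raw IR proximity readings bounced off the eyelid.
# accel_magnitudes: movement magnitudes from the accelerometer (g).
def should_flash(ir_samples, accel_magnitudes,
                 rem_threshold=50, still_threshold=0.3):
    still = all(m < still_threshold for m in accel_magnitudes)
    # REM shows up as large swings in IR reflectance as the eye darts about
    rem = (max(ir_samples) - min(ir_samples)) > rem_threshold
    return still and rem

# Quiet body, twitching eyes -> cue the dreamer
print(should_flash([100, 180, 90, 170], [0.1, 0.05, 0.2]))
```

In a real mask this would run over a sliding window of readings, with the thresholds tuned per wearer to avoid waking them with false triggers.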
[Ben] doesn’t report the success rate at using the mask, but we’d love to know more about how well the mask works. We’ve seen others do similar work before, and even a recent Hackaday Prize entry. Video after the break.