Chinese Subs May Be Propelled Silently By Lasers

If sharks with lasers on their heads weren’t bad enough, now China is working on submarines with lasers on their butts. At least, that’s what this report in the South China Morning Post claims, anyway.

According to the report, two-megawatt lasers are directed through fiber-optic cables on the surface of the submarine, vaporizing seawater and creating super-cavitation bubbles that reduce drag on the submarine. The report describes it as an “underwater fiber laser-induced plasma detonation wave propulsion” system and claims the system could generate up to 70,000 newtons of thrust. That’s impressive for a drive with no propeller, though not quite jet-engine territory: a single turbofan on a 747 delivers well over 200,000 newtons.
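Setting aside how that thrust would be produced, the headline numbers invite a quick sanity check. Here’s the arithmetic, with every figure either taken from the report or flagged as our own assumption:

```python
# Back-of-envelope check: propulsive power = thrust x speed.
laser_power_w = 2e6    # claimed laser input, 2 MW (from the report)
thrust_n = 70_000      # claimed thrust (from the report)
speed_ms = 10.0        # assumed cruise speed, roughly 20 knots (our guess)

propulsive_power_w = thrust_n * speed_ms          # 700 kW
efficiency = propulsive_power_w / laser_power_w   # 0.35

print(f"Propulsive power: {propulsive_power_w / 1e3:.0f} kW")
print(f"Implied light-to-thrust efficiency: {efficiency:.0%}")
```

A 35% conversion from laser light to propulsive power would be extraordinary for any ablation-based scheme, which is one more reason to wait for the full paper.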

The report (this proxy can get around the paywall) claims that the key to the system is the tiny metal spheres that direct the force of the cavitation implosion to propel the submarine. As with a magnetohydrodynamic drive (MHD), there are no moving parts to make noise. Such a technology has the potential to make China’s submarines far harder to detect.

Looking for more details, we traced the report back to the original paper written by several people at Harbin Engineering University, entitled “Study on nanosecond pulse laser propulsion microspheres based on a tapered optical fiber in water environment”, but it’s still a pre-print. If you can get access to the full paper, feel free to chime in — we’d love to know if this seems like a real prospect or just exaggerated reporting by the local propaganda media.

[Image via Wikimedia Commons]

The Aimbot V3 Aims To Track & Terminate You

Some projects we cover are simple, while some descend into the sort of obsessive, rabbit-hole-digging-into-wonderland madness that hackers everywhere will recognize. That’s precisely where [Excessive Overload] has gone with the AimBot V3, a target-tracking BB gun that uses three cameras, two industrial servos, and an indeterminate amount of computing power to track objects and fire up to 40 BBs a second at them.

The whole project is overkill, made of CNC-machined metal, epoxy-cast gears, and a chain-driven pan-tilt system that looks like it would take off a finger or two before you even get to the shooty bit. The aiming is driven by three cameras: a wide-angle one that finds the target, plus a stereo pair that zooms in and determines its distance from the gun, chewing through several hundred frames of video per second. That data is used to aim the gun itself, a Polarstar mechanism firing up to 40 pellets a second, fed by a customized spring-wire feeder.
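For the curious, the ranging half of that is classic stereo geometry: the farther the target’s apparent position shifts between the two cameras, the closer it is. Here’s a minimal sketch with made-up numbers, not [Excessive Overload]’s actual code:

```python
# A minimal stereo-ranging sketch: distance falls out of the pixel
# disparity between two calibrated cameras a known distance apart.
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to the target from its pixel shift between the cameras."""
    if disparity_px <= 0:
        raise ValueError("target must be visible in both cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1200 px focal length, 10 cm camera baseline,
# and a target shifted 24 px between the left and right frames.
print(f"{stereo_depth_m(1200, 0.10, 24):.1f} m")  # -> 5.0 m
```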

The whole thing comes together to form a huge gun that automatically tracks its target. It even uses motion tracking to distinguish between a static object, like a person, and a dart fired by a toy gun, picking the dart out of the air at least some of the time.

The downside is that it only works on targets sporting a retroreflective patch: a 15-watt IR LED on the front of the gun floods the scene, the cameras pick out the bright reflection, and the software tracks it. So all you have to do to avoid this particular Terminator is make sure you aren’t wearing anything too shiny.
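The detection side of a retroreflector tracker can be refreshingly dumb, which is part of why it can run so fast. A toy version in OpenCV might look like this; the threshold value and the 8-bit grayscale frame are our assumptions, not details from the build:

```python
# Toy bright-spot tracker: threshold an IR-lit frame and take the
# centroid of whatever saturates the sensor.
import cv2

def find_retroreflector(gray_frame, threshold=240):
    """Return the (x, y) centroid of the bright blob, or None if absent."""
    _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # nothing bright enough in view
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```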


WSPR To The Wind With A Pi Pico High Altitude Balloon

They say that if you love something, you should set it free. That doesn’t mean you should spend any more on it than you have to, though, which is why [EngineerGuy314] put together this Raspberry Pi Pico high-altitude balloon tracker that should only set you back about $12 to build.

This simplified package turns a Pico into a tracking beacon — connect a cheap GPS module and solar panel, and the system will transmit the GPS location, system temperature, and other telemetry on the 20-meter band using the Weak Signal Propagation Reporter (WSPR) protocol. Do it right, and you can track your balloon as it goes around the world.
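WSPR suits this job because it trades speed for sensitivity: transmissions are slow, narrow, and decodable well below the noise floor. The constants below come from the published WSPR spec; the dial frequency and audio offset are typical values rather than this project’s exact settings:

```python
# WSPR by the numbers: 162 four-level FSK symbols, with tone spacing
# equal to the symbol rate of 12000/8192 baud.
SYMBOL_RATE_HZ = 12000 / 8192     # ~1.4648 baud
TONE_SPACING_HZ = 12000 / 8192    # ~1.4648 Hz between adjacent tones
N_SYMBOLS = 162

tx_seconds = N_SYMBOLS / SYMBOL_RATE_HZ   # ~110.6 s per transmission
bandwidth_hz = 4 * TONE_SPACING_HZ        # ~6 Hz occupied bandwidth

dial_hz = 14_095_600      # 20-meter WSPR dial frequency
audio_offset_hz = 1500    # typical offset into the audio passband
print(f"TX length {tx_seconds:.1f} s, bandwidth {bandwidth_hz:.1f} Hz")
print(f"RF near {dial_hz + audio_offset_hz:,} Hz")
```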

The project is based in part on the work of [Roman Piksayin] in his Pico-WSPR-TX package (which we covered before), which uses the Pico’s outputs to create the transmitted signal directly, no external radio needed. [EngineerGuy314] took this a step further by slowing down the Pico and making some clever power tweaks so it can run reliably straight from the solar panel.
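Making RF straight from a GPIO pin sounds impossible when the chip only has integer clock dividers, but the usual trick is to dither between two neighboring divisor values so that the average frequency lands on target. This is our sketch of that general idea, not [Roman]’s implementation:

```python
# Fractional frequency synthesis by divider dithering (illustrative).
f_sys = 125_000_000      # assumed RP2040 system clock
f_target = 14_097_100    # 20-meter WSPR transmit frequency

div = f_sys / f_target               # ~8.867 clocks per RF cycle
div_lo, div_hi = int(div), int(div) + 1
frac_hi = div - div_lo               # fraction of cycles using div_hi

avg_freq = f_sys / (div_lo * (1 - frac_hi) + div_hi * frac_hi)
print(f"{avg_freq:,.1f} Hz")         # averages back to ~14,097,100 Hz
```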

The system can be a bit fussy about power at startup: if the voltage from the solar panel ramps up too slowly, the Pico can crash as it and the GPS chip both try to start when the sun rises. So a voltage divider tied to the Pico’s RUN pin keeps it from booting until the panel voltage is high enough, and a single transistor holds the GPS off until the Pico signals it to go.
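To put rough numbers on that hold-off trick, here’s the divider math with purely illustrative values; the real schematic may use different parts:

```python
# RUN-pin hold-off sketch: the Pico stays in reset until the divided
# panel voltage crosses the pin's logic-high level.
r_top, r_bottom = 100e3, 47e3    # hypothetical divider resistors
v_high = 2.0                     # approximate logic-high level at RUN

v_panel_needed = v_high * (r_top + r_bottom) / r_bottom
print(f"Pico held in reset until the panel reaches ~{v_panel_needed:.1f} V")
```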

It’s a neat hack that seems to work well: [EngineerGuy314] has launched three prototypes so far, the last of which traveled over 62,000 km (about 38,000 miles).

Reverse Engineering The Behringer Ultranet Protocol

Ultranet is a protocol created by audio manufacturer Behringer to transmit up to 16 channels of 24-bit sound over a Cat-5 cable. It’s not an open standard, though: Behringer doesn’t offer an API or protocol description for building your own Ultranet devices. But that didn’t stop [Christian Nödig]: thanks to a defective mixer, he poked into the signals and built his own Ultranet receiver.

Ultranet runs over Cat-5 Ethernet cables but isn’t an Ethernet-based protocol. Electrically, its signals look just like Ethernet’s, but the signaling on the wire is entirely different, so the two only share Layer 1, the physical layer. You can use any Cat-5 cable for Ultranet, but you can’t just plug an Ultranet device into an Ethernet one. Or rather, you can (and neither device should explode), but you won’t get anything out of it.

Instead, [Christian]’s exploration revealed that Ultranet is based on another standard: AES/EBU, the bigger professional brother of the S/PDIF socket on HiFi systems. AES/EBU was designed to carry digital audio over an XLR cable, and Behringer has tweaked it to run over a single twisted pair. Each of the two twisted pairs used inside the Cat-5 cable carries an AES/EBU-style stream at a 192 kHz frame rate, four times the usual 48 kHz, which packs eight channels onto each pair for sixteen channels of 24-bit audio in total.
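For the protocol-curious: AES/EBU bits are biphase-mark coded, so every bit cell starts with a transition and a one adds a second transition in mid-cell, keeping the stream self-clocking and polarity-insensitive. A toy decoder over edge timings, our illustration rather than [Christian]’s FPGA core, might look like this:

```python
# Toy biphase-mark decoder: classify the gaps between transitions.
def decode_bmc(edge_intervals, cell_time):
    """edge_intervals: times between successive transitions on the wire."""
    bits, i = [], 0
    while i < len(edge_intervals):
        if edge_intervals[i] < 0.75 * cell_time:
            bits.append(1)   # two half-cell gaps encode a '1'
            i += 2
        else:
            bits.append(0)   # one full-cell gap encodes a '0'
            i += 1
    return bits

# With a bit cell of 1.0: full-cell gaps are zeros, half-cell pairs are ones.
print(decode_bmc([1.0, 0.5, 0.5, 1.0, 0.5, 0.5], 1.0))  # [0, 1, 0, 1]
```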

That’s a bit fast for a microcontroller to decode reliably, so [Christian] uses the FPGA in an Arduino MKR Vidor 4000 in his receiver, with an open-source AES decoder core, to receive and decode the Ultranet signal into individual channels, which are then passed to a DAC for analog output.

In effect, [Christian] has built a 16-channel mixer, although the mixing aspect is too primitive for actual use. It would be great for monitoring, though, and it’s a beautiful description of how to dig into protocols like Ultranet that look locked up but are based on other, more open standards.


Countdown To A Spaceship Simulator

[Jon Petter Skagmo] claims that the spaceship simulator he’s working on is for his daughter, but we think there’s an excellent chance he’s looking to fulfill a few childhood dreams of his own. But no matter what generation ends up getting the most enjoyment out of it, there’s no question it’s an impressive build so far, complete with a very realistic-looking instrument display and joystick.

This is only the first in a series of builds, but our inner child is already intensely jealous. So far, [Jon] has built the instrument panel and the controller that lights all the buttons and runs the displays, which show telemetry from a Falcon 9 launch. The video below goes into a lot of detail about how he built this SPI-driven instrument panel and why he made the whole thing modular, so it can be easily expanded without turning into a spaghetti-like mess.
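One common way to build that kind of modular, SPI-driven panel is a daisy chain of shift registers: each module adds one byte of LED state to the chain, and everything latches at once. Here’s a generic sketch of the frame-packing half of that pattern, our guess at the approach rather than [Jon]’s firmware:

```python
# Daisy-chained panel frame: one LED byte per module, shifted out
# last-module-first so each byte lands in the right register.
def build_frame(led_states: list[int]) -> bytes:
    """Pack per-module LED bytes into a single SPI transfer."""
    return bytes(reversed([b & 0xFF for b in led_states]))

# Three hypothetical modules; hand the result to your SPI driver
# (e.g. spidev's xfer2) and pulse the shared latch line.
frame = build_frame([0b10100001, 0b00000000, 0b11111111])
print(frame.hex())  # ff00a1
```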

It’s a great intro to thinking before you build, showing how he planned and built the system for maximum expandability and flexibility. Before long, we wouldn’t be surprised if he’s got quite a Kerbal Space Program controller on his hands for when the kid goes to bed.


AI Binoculars Know More About Birds Than You

2024 is the year of adding Artificial Intelligence to everything. Now, even a pleasant walk in the woods is getting a dose of AI: optics manufacturer Swarovski has announced the AX Visio, a binocular set with an AI bird identification feature. Not sure if that is a lesser or greater scaup on your pond? These binoculars will tell you, for the low, low price of $4799.

While digital cameras built into binoculars have been around for a while, adding AI is new. That’s a cool thing, but a bit of digging into the specs reveals that there is a much cheaper way to do it.

  1. Buy a cheap digital camera, like the Kodak Pixpro AZ255, which has a higher resolution and longer zoom than these binoculars.
  2. Transfer the image to your cell phone with an $11 memory card reader.
  3. Run the free Cornell Merlin ID app to identify the bird.
  4. Send the $4500 you just saved to us, or your favorite charity.

These ludicrously overpriced binoculars use the same Cornell Merlin ID system that you can use for free as an app, which has the added advantage of being able to ID birds from their songs. That’s helpful, because birds are tricky creatures that will try to hide from the hideously overpriced gadget you just bought.

[Via DigitalCameraWorld]

Video Feedback Machine Creates Analog Fractals

One of the first things everyone does when they get a video camera is to point it at the screen displaying its image, creating video feedback. It’s a fascinating process in which the delay from image capture to display establishes a feedback loop that amplifies image noise into fractal patterns. This sculpture, modestly called The God Machine II, takes it to the next level, though.
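You can get a feel for the process without any hardware at all. This little simulation, our toy and nothing to do with [Dave Blair]’s analog rig, re-captures a frame on each pass with a slight rotation, zoom, and gain, and faint noise blooms into structure:

```python
# Toy video-feedback loop: transform, amplify, add noise, repeat.
import numpy as np
from scipy.ndimage import rotate, zoom

def feedback_step(frame, angle_deg=2.0, scale=1.02, gain=1.05):
    f = rotate(frame, angle_deg, reshape=False, mode="reflect")
    f = zoom(f, scale)
    dy = (f.shape[0] - frame.shape[0]) // 2   # crop back to size, centered
    dx = (f.shape[1] - frame.shape[1]) // 2
    f = f[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    noise = np.random.normal(0.0, 0.01, frame.shape)  # "camera" noise
    return np.clip(f * gain + noise, 0.0, 1.0)

frame = np.random.rand(256, 256) * 0.01   # start from almost nothing
for _ in range(100):
    frame = feedback_step(frame)          # structure emerges pass by pass
```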

We covered the first version of this machine in a previous post, but creator [Dave Blair] has done a huge amount of work on the device since then, allowing him to tweak and customize the output it produces. The new version is quite remarkable, letting him create intricate fractals that writhe and change like living things.

The God Machine II is a sophisticated build with three cameras, five HD monitors, three Roland video switchers, two viewing monitors, two sheets of beam splitter glass, and a video input. This setup means it can take an external video input, capture it, and use it as the source for video feedback, then tweak the evolution of the resulting fractal image, repeatedly feeding it back into itself. The system can also control the settings for the monitor, which further changes the feedback as it evolves. [Blair] refers to this as “trapping the images.”
