Welcome To The Year Of The Diagonal Linux Desktop

Sometimes you come across one of those ideas that at first appear to be some kind of elaborate joke, but as you dig deeper, it begins to make a disturbing kind of sense. This is where the idea of diagonally-oriented displays comes to the fore. Although not a feature that operating systems generally support, [xssfox] used the xrandr (X resize and rotate) function in the Xorg display server to find the perfect diagonal display orientation, striking a happy balance between the pros and cons of horizontal and vertical display orientations.

As displays have grown wider and wider over the past decades, some people rotate theirs 90 degrees to get more height instead. That orientation is great for reading documents, yet terrible for watching most video content (barring vertical videos), so you either need more than one display, keep rotating the one you have, or settle on an optimal intermediate compromise. Interestingly, that compromise wasn’t found at a straight 45°, but at 22° of rotation for [xssfox]’s 21:9 ‘ultra-wide’ display. The xrandr settings for other display ratios can be calculated easily using the provided formula and the associated JS-based tool.
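If you want to try this at home, a back-of-the-envelope Python sketch of generating the transform might look like the following. To be clear, this is our own illustration, not [xssfox]’s tool: the output name and mode are placeholders for your own setup, and a real setup also needs the framebuffer sized to fit the rotated rectangle, which is exactly what the JS tool works out for you.

```python
import math

# Build an xrandr --transform command for an arbitrary rotation angle.
# Assumptions: output DP-1 and a 3440x1440 mode stand in for your own
# hardware; the 22 degrees is the angle from the article.
def xrandr_rotate(output="DP-1", mode="3440x1440", angle_deg=22.0):
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    w, h = (int(v) for v in mode.split("x"))
    # Homogeneous 2D rotation, shifted right by h*sin(angle) so the
    # rotated image stays in the positive quadrant.
    matrix = [c, -s, h * s,
              s,  c, 0.0,
              0.0, 0.0, 1.0]
    values = ",".join(f"{v:.4f}" for v in matrix)
    return f"xrandr --output {output} --mode {mode} --transform {values}"

print(xrandr_rotate())
```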

So what are the advantages here? You get to keep long line lengths in IDEs while gaining more vertical pixels in some areas. As for disadvantages, it only works with Xorg at this time, it’s a terrible setup for people prone to vertigo, and it’s decidedly hostile towards top-of-display mounted webcams. Yet with others picking up on this new trend, Linux might just corner the market on the diagonal desktop.

Fast Paper Tape For The Nuclear Family

We’ve enjoyed several videos from [Chornobyl Family] about the computers that controlled the ill-fated nuclear reactor in Chornobyl (or Chernobyl, as it was spelled at the time of the accident). This time (see the video below) they are looking at a high-speed data storage device. You don’t normally think of high-speed and paper tape as going together, but this paper tape reader runs at an astonishing 1,500 data units per second. OK, so that’s not especially fast by today’s standards, but an ASR33, for example, managed about 10 characters per second.

An IBM 2400 tape drive, for reference, could transfer at least ten times that much data per second, and a 3400 could do even better. But this is paper tape. Magnetic tape had much higher density and used special mechanical tricks, like vacuum columns, to reach higher speeds. It was still a pretty good trick to move 4 meters of paper tape per second through the machine.
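A quick back-of-the-envelope check shows how those numbers hang together, assuming the standard 0.1-inch spacing between character rows on the tape:

```python
# Standard paper tape advances 0.1 inch (2.54 mm) per character row.
CHAR_PITCH_MM = 2.54   # assumption: standard row pitch
RATE_CPS = 1500        # reader speed quoted in the video

speed = RATE_CPS * CHAR_PITCH_MM / 1000  # metres of tape per second
print(f"{speed:.2f} m/s")                # ~3.81 m/s, i.e. about 4 m/s
```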

Continue reading “Fast Paper Tape For The Nuclear Family”

Another OmniBot 2000 Upgrade

There were many toy robots back in the 80s that were — frankly — underwhelming by today’s standards. Back then, any old thing that rolled around with some blinking lights would impress, but the bar is higher today. Then again, some of the basic components won’t really change. You still need wheels, motors, batteries, and all that. But the computers we can bring to bear today are much better. Maybe that’s why so many people, including [mcvella], decide to give venerable toys like the OmniBot 2000 a facelift or, maybe a better analogy, a brain transplant.

In this particular case, the brain in question is a Raspberry Pi. The robot also sports new sensors, motor controllers, a webcam, and a new battery pack. The project doesn’t cover working with the single powered gripper arm, and the left arm isn’t motorized. There is also a cassette tape deck you could probably coax into doing something interesting. Of course, with a Raspberry Pi you get wireless control, and the project uses Viam to define and control the robot’s motion.
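As a rough sketch of what that looks like (this is not [mcvella]’s actual code, and the address, API key, and component name are placeholders), driving the base from Viam’s Python SDK goes something like this:

```python
import asyncio

from viam.robot.client import RobotClient
from viam.components.base import Base

async def main():
    # Placeholder credentials; real values come from your machine's
    # configuration in the Viam app.
    opts = RobotClient.Options.with_api_key(
        api_key="YOUR_API_KEY", api_key_id="YOUR_API_KEY_ID")
    robot = await RobotClient.at_address("omnibot.example.viam.cloud", opts)

    base = Base.from_robot(robot, "base")  # assumed component name
    await base.move_straight(distance=300, velocity=100)  # mm, mm/s
    await base.spin(angle=90, velocity=45)                # degrees, deg/s

    await robot.close()

asyncio.run(main())
```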

There is some retro cool factor to using a robot like the OmniBot. However, we might be more tempted to just build our own. With a 3D printer, a laser cutter, and a few motors, you could make something about equivalent or better with little effort.

We have seen OmniBot conversions before, particularly over on Hackaday.io. Maybe someone will convert one over to steam power.


Hackaday Links: November 19, 2023

Two RUDs are better than one, right? That might be the line on Saturday morning’s briefly spectacular second attempt by SpaceX to launch their Starship vehicle atop a Super Heavy booster, which ended with the “rapid unscheduled disassembly” of both vehicles. The first attempt, back in April, had trouble from the get-go, including the rapid unscheduled partial disassembly of their Stage Zero launch pad, followed by the rapid but completely predictable disassembly of a lot of camera gear and an unfortunate minivan, thanks to flying chunks of concrete.

Starship’s first “hot” separation

Engineering changes helped keep Stage Zero more or less intact this time, and the Super Heavy booster performed flawlessly — for about three minutes. It was at that point, right after the start of the new “hot staging” process, where Starship’s six engines light before the booster actually drops away, that the problems started. The booster made a rapid flip maneuver to get into the correct attitude for burn-back and landing before disappearing in a massive ball of flame.

Reports are that the flight termination system did the deed, but it’s not yet exactly clear why. Ditto the Starship, which was also snuffed by the FTS after continuing to fly for about another five minutes. Still and all, the SpaceX crew seem ecstatic about the results, which is understandable for a company with a “move fast, break things” culture. Nailed it.

Continue reading “Hackaday Links: November 19, 2023”

D-POINT: A Digital Pen With Optical-Inertial Tracking

[Jcparkyn] clearly had an interesting topic for their thesis project, and was conscientious enough to write up a chunk of it and release it into the wild. The project in question is a digital pen that uses some neat sensor fusion to combine the inputs from a pen-mounted gyro/accelerometer with data from an optical tracking system built around an off-the-shelf webcam.

The result is a six degrees of freedom (6DOF) tracking system, with the pen-mounted hardware tracking orientation and the webcam tracking the 3D position. The pen itself is quite neat, with an ALPS/Alpine HSFPAR003A load sensor measuring the contact pressure transmitted to it from the stylus tip. A Seeed Studio XIAO nRF52840 Sense is on duty for Bluetooth and hosts the needed IMU. This handy little module deals with all the details needed for such a high-integration project and even manages the charging of a single 10440 lithium cell via a USB-C connector.

Positional tracking uses Visual Pose Estimation (VPE) assisted by ArUco markers mounted on the end of the stylus. A consumer-grade (i.e. uncalibrated) webcam is all that is required on the hardware side. The software uses the familiar OpenCV stack to undo the effects of the webcam’s rolling shutter, followed by Perspective-n-Point (PnP) to estimate the pose from the corrected image stream. Finally, a coordinate space conversion is performed to determine the stylus tip position relative to the drawing surface.
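To give a flavor of the pose step, here’s a minimal OpenCV sketch of the marker detection and PnP solve. It’s our own illustration rather than [Jcparkyn]’s code: it skips the rolling shutter correction, and the marker dictionary, marker size, and intrinsics (camera_matrix, dist_coeffs) are assumptions you’d replace with your own calibrated values.

```python
import cv2
import numpy as np

# Assumed marker dictionary; the real project defines its own markers.
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def marker_pose(frame, camera_matrix, dist_coeffs, marker_mm=20.0):
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    s = marker_mm / 2
    # Marker corners in its own frame, in OpenCV's detection order:
    # top-left, top-right, bottom-right, bottom-left.
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]],
                   dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0][0],
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None  # marker pose in the camera frame
```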

The sensor fusion is taken care of with a Kalman filter, smoothed with the typical Rauch-Tung-Striebel (RTS) algorithm before being passed on to the final application. This process runs in Python using the NumPy module, as you would expect, but accelerated with the Numba JIT compiler.
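For the curious, the filter-then-smooth pattern looks roughly like this in plain NumPy. This is a deliberately simplified 1D constant-velocity model for illustration; the real filter fuses IMU orientation with camera pose and is rather more involved.

```python
import numpy as np

def kalman_rts(zs, dt=0.01, q=1.0, r=0.05):
    """Kalman-filter noisy 1D positions zs, then RTS-smooth the result."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state: position, velocity
    H = np.array([[1.0, 0.0]])              # we only observe position
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    xs, Ps, x_pred, P_pred = [], [], [], []
    for z in zs:                             # forward (filter) pass
        xp, Pp = F @ x, F @ P @ F.T + Q
        x_pred.append(xp); P_pred.append(Pp)
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P)
    for k in range(len(zs) - 2, -1, -1):     # backward (RTS) pass
        C = Ps[k] @ F.T @ np.linalg.inv(P_pred[k + 1])
        xs[k] = xs[k] + C @ (xs[k + 1] - x_pred[k + 1])
        Ps[k] = Ps[k] + C @ (Ps[k + 1] - P_pred[k + 1]) @ C.T
    return np.array(xs)[:, 0]                # smoothed positions
```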

Motion tracking is not news to us; we’ve seen many an implementation over the years, such as this one. But digital input pens? Why aren’t they more of a thing?

Thanks to [Oliver] for the tip!

Full Self-Driving, On A Budget

Self-driving is currently the Holy Grail of the automotive world, with a number of companies racing to build general-purpose autonomous vehicles that can get from point A to point B with no user input. While no one has brought one to market yet, at least one company has promised this feature and taken customers’ money for it, only to continually move the goalposts for delivery because of how challenging the problem has turned out to be. But it doesn’t need to be that hard or expensive to solve, at least in some situations.

The situation in question is driving on a single stretch of highway, and the system focuses only on steering, so it doesn’t handle the accelerator or brake pedals. The highway is driven normally, with a webcam taking images of the route and an Arduino capturing data about the steering angle. The idea is that with enough training, the system could eventually steer the car itself. But first, some math needs to happen on the training data: the steering wheel spends almost all of its time pointed straight ahead, so the rare actual steering events have to be rebalanced or the model would learn to treat them as statistical anomalies. After training, the system does a surprisingly good job of “driving” based on this data, and does it on a budget not much larger than a laptop, a microcontroller, and a webcam.
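As a sketch of that rebalancing step (the function name, threshold, and keep fraction here are our own inventions for illustration, not the project’s), you might simply drop most of the near-zero samples before training:

```python
import numpy as np

def rebalance(images, angles, straight_thresh=2.0, keep_frac=0.1, seed=0):
    """Drop most 'driving straight' frames so steering events matter.

    straight_thresh is in degrees; keep_frac is the fraction of
    near-zero-angle samples retained. Both are illustrative guesses.
    """
    rng = np.random.default_rng(seed)
    straight = np.abs(angles) < straight_thresh
    keep = ~straight | (rng.random(len(angles)) < keep_frac)
    return images[keep], angles[keep]
```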

Admittedly, this project was a proof of concept to investigate machine learning, neural networks, and other statistical algorithms used in these sorts of systems, and it doesn’t actually drive any cars on any roadways. Even the creator says he wouldn’t trust it himself, but he was pleasantly surprised by the results of such a simple system. It could also be expanded to handle the brake and accelerator pedals with separate neural networks. It’s not our first budget-friendly self-driving system, either. This one makes it happen with the enormous computing resources of a single Android smartphone.

Continue reading “Full Self-Driving, On A Budget”

Bone-Shaking Haunted Mirror Uses Stable Diffusion

We once thought that the best houses on Halloween were the ones that gave out full-size candy bars. While that’s still true, these days we’d rather see a cool display of some kind on the porch. Although some might consider this a trick, gaze into [Tim]’s mirror and you’ll be treated to a spooky version of yourself.

Here’s how it works: at the heart of this build are a webcam, OpenCV, and a computer running the Stable Diffusion AI image generator. The image is shown on a monitor that sits behind two-way mirrored glass.
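For a taste of how the software side might fit together (this is our own sketch using the Hugging Face diffusers library, not [Tim]’s code, and the model, prompt, and strength are all assumptions), a single capture-and-spookify pass could look like this:

```python
import cv2
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Assumed model checkpoint; any SD 1.x img2img-capable model works.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16).to("cuda")

cap = cv2.VideoCapture(0)          # the webcam behind the mirror
ok, frame = cap.read()
if ok:
    face = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    spooky = pipe(prompt="ghostly haunted portrait, pale, eerie, decayed",
                  image=face.resize((512, 512)),
                  strength=0.55,   # how far to drift from the input image
                  guidance_scale=7.5).images[0]
    spooky.save("mirror.png")      # shown on the monitor behind the glass
cap.release()
```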

We really like the frame that [Tim] built for this. Unable to find something both suitable and affordable, they built one out of wood molding and aged it appropriately.

We also like the ping pong ball vanity globe lights and the lighting effect itself. Not only is it spooky, it lets the viewer know that something is happening in the background. All the code and the schematic are available if you’d like to give this a go.

There are many takes on the spooky mirror out there. Here’s one that uses a terrifying 3D print.