
Hackaday Links: June 18, 2023

Will it or won’t it? That’s the question much on the minds of astronomers, astrophysicists, and the astro-adjacent this week as Betelgeuse continued its pattern of mysterious behavior that might portend a supernova sometime soon. You’ll recall that the red giant star in the constellation Orion went through a “great dimming” event back in 2019, when its brightness dipped by some 60% from its normal intensity. That was taken as a sign that perhaps the star was getting ready to explode — or rather, that the light from whatever happened to the star 548 years ago had finally reached us — and was much anticipated by skywatchers, yours truly included. As it turned out, the dimming was likely caused by Betelgeuse belching forth an immense plume of dust, temporarily obscuring our view of its light. Disappointing.

Those who gave up on the hope of seeing a supernova might have done so too fast, though, because now the star seems to be swinging the other way and brightening. It briefly became the brightest star in Orion, though still well short of Sirius, the brightest star in the night sky. So what does all this on-again, off-again business mean? According to Dr. Becky, a new study — not yet peer-reviewed, so proceed with caution — suggests that the star could go supernova in the next few decades. The evidence comes not from the great dimming event but from an analysis of the star’s long history of variable brightness. The data suggest that Betelgeuse has entered the carbon fusion phase of its life, a period that lasts only on the order of a hundred years for a star that size. So we could be in for the ultimate fireworks show, which would leave us with a star brighter than the full moon, visible even in daylight. And who doesn’t want to see something like that?


Better Antennas Via Annealing (Simulated)

If you want to simulate a tic-tac-toe game, that’s easy. You can evaluate every possible move in a reasonable amount of time. Simulating antennas, however, is much harder. [Rosrislav] has been experimenting with using simulated annealing to iterate antenna designs, and he shares his progress in a recent blog post.

For many problems, it simply isn’t possible to try every possible input to determine which gives the “best” result. Instead of exhaustively testing every input or set of inputs, you can try random ones and discard all but the best guesses. Then you make small changes and try again. The only problem is that the algorithm may lock onto a “local maximum” — a relatively high value that forms a peak, but not the highest peak in the search space. Or, if you are looking for a minimum, you may lock onto a local minimum — same problem.

To combat that, simulated annealing works like annealing a metal. The simulation employs a temperature that cools over time. The higher the temperature, the larger the changes to the input and the more likely the algorithm is to accept a change that makes the result worse, which is what lets it escape a local peak; as the temperature falls, the search settles into refining the best solution it has found.

The Python program uses the PyNEC package to provide simulation. The program sets up random antenna lengths and finds the projected gain, attempting to optimize for maximum gain.
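Here’s a minimal sketch of that loop in Python, with the actual antenna evaluation reduced to a placeholder `evaluate_gain()` function. The placeholder is our own stand-in; the real program scores each candidate with a PyNEC simulation instead.

```python
import math
import random

def evaluate_gain(lengths):
    """Placeholder objective: in the real project this would run a PyNEC
    simulation of an antenna with these element lengths and return its gain."""
    return -sum((l - 0.5) ** 2 for l in lengths)   # toy function with a known peak

def anneal(initial, steps=10_000, t_start=1.0, t_end=0.001):
    current, best = list(initial), list(initial)
    current_score = best_score = evaluate_gain(current)
    for step in range(steps):
        # Exponential cooling schedule: temperature falls from t_start to t_end
        t = t_start * (t_end / t_start) ** (step / steps)
        # Perturb one element; a hotter temperature means bigger tweaks
        candidate = list(current)
        i = random.randrange(len(candidate))
        candidate[i] += random.gauss(0, 0.1) * t
        score = evaluate_gain(candidate)
        # Always accept improvements; accept worse moves with probability exp(delta / t)
        if score > current_score or random.random() < math.exp((score - current_score) / t):
            current, current_score = candidate, score
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

print(anneal([random.random() for _ in range(4)]))
```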

The post is long on code and short on details, so you will probably want to read the Python source to see exactly what it is doing. But it could serve as a template for other simulated annealing runs — for other antennas, or for anything else you have a simulation engine to evaluate.

Several techniques allow you to optimize things that are too hard to search exhaustively, and we’ve talked about simulated annealing and genetic algorithms before. However, lately, we’ve been more interested in annealing 3D prints.

A Simple Guide To Bit Banged I2C On The 6502

We covered [Anders Nielsen]’s 65uino project a short while ago, and now he’s back with an update video showing some more details of bit-banging I2C using plain old 6502 assembly language.

Obviously, with such a simple system, there is no dedicated I2C interface hardware, so the programmer must take care of all the details of the I2C protocol in software, bit-banging it out to the peripheral and reading back the response one bit at a time.

The first detail to concern us is the I2C addresses of the devices on the bus and how low-level bit manipulation turns the 7-bit I2C address into the byte that actually gets bit-banged out. As [Anders] shows, setting a bit is simply a logical OR, and clearing a bit is a logical AND with the one’s complement (inversion) of the bit, which forms the bitmask. As many will already know, this is how the read/write bit is set or cleared in the address byte to select an I2C read or write operation. A further detail is that I2C uses an open-collector connection scheme, which means that no device on the bus may actively drive the bus high; instead, a device releases the line by going to a high-impedance state and lets an external pull-up resistor pull the bus high. The 6532 RIOT chip (used for I/O on the 65uino) has no tristate control, but its data direction register (DDR) can turn a pin into an input, which does the job just fine, albeit with slightly odd-looking code until you know what’s going on.
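As a concrete illustration of the bit-twiddling (in Python rather than 6502 assembly, using the SSD1306’s usual 7-bit address of 0x3C as an example), here is how the address becomes the first byte on the bus and how OR and AND-with-complement handle the read/write flag:

```python
# Illustrative sketch in Python; the real project does this in 6502 assembly.
# 0x3C is the SSD1306's usual 7-bit address; values here are just examples.
ADDR_7BIT = 0x3C

# The first byte on the bus is the 7-bit address shifted left once,
# with the read/write flag in bit 0 (0 = write, 1 = read).
addr_byte = ADDR_7BIT << 1                  # 0x78: address with R/W = 0 (write)

READ_BIT = 0x01
read_byte  = addr_byte | READ_BIT           # set bit 0 with a logical OR -> read
write_byte = read_byte & ~READ_BIT & 0xFF   # clear it by ANDing with the one's complement

print(hex(addr_byte), hex(read_byte), hex(write_byte))   # 0x78 0x79 0x78
```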

From there, it’s a straightforward matter to write subroutines that generate the I2C start, stop, and ACK/NACK conditions needed to write to the SSD1306-based OLED and get it to do something we can observe. From these basic building blocks, higher-level subroutines can be layered up into a complete OLED library in assembly. We shall sit tight and wait to see where [Anders] goes next with this!
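To make the sequencing concrete, here is a rough sketch in Python; `sda()`, `scl()`, and `read_sda()` are hypothetical stand-ins for the 65uino’s DDR trick (drive the line low, or release it and let the pull-up take it high), not anything lifted from [Anders]’s code.

```python
# Hypothetical pin helpers standing in for the 65uino's DDR trick: a line is either
# driven low or switched to an input so the external pull-up takes it high.
bus = {"SDA": 1, "SCL": 1}                  # both lines idle high via pull-ups

def sda(level): bus["SDA"] = level
def scl(level): bus["SCL"] = level
def read_sda(): return bus["SDA"]           # on real hardware this samples the released line

def i2c_start():
    sda(1); scl(1)
    sda(0)                                  # SDA falls while SCL is high
    scl(0)

def i2c_stop():
    sda(0); scl(1)
    sda(1)                                  # SDA rises while SCL is high

def i2c_write_byte(byte):
    for i in range(7, -1, -1):              # shift out MSB first
        sda((byte >> i) & 1)
        scl(1); scl(0)                      # clock the bit out
    sda(1)                                  # release SDA for the ninth clock
    scl(1)
    ack = (read_sda() == 0)                 # device ACKs by pulling SDA low
    scl(0)
    return ack

i2c_start()
i2c_write_byte(0x3C << 1)                   # address the OLED for a write
i2c_stop()
```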

We see I2C-connected things all the time, like this neat ATtiny85-based I2C peripheral, and whilst we’re talking about the SSD1306 OLED display controller, here’s a hack that shows just how much you can push your luck with the I2C spec and get some crazy frame rates.


Detecting Meteors With SDR

The simplest way to look for meteors is to go outside at night and look up — but it’s not terribly effective. Fortunately, there’s a better way: radio. With a software-defined radio and a little know-how from [Tech Minds], you can easily find them, as you can see in the video below.

This uses the UK meteor beacon we’ve looked at before. The beacon pushes an RF signal out so you can read the reflections from meteors. If you are too far from the beacon, you may need a special antenna or you might have to find another beacon altogether. We know of the Graves radar in France and we have to wonder if you couldn’t use some commercial transmitter with a little experimentation.
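For a feel of what “reading the reflections” amounts to on the receiving end, here is a rough, hypothetical sketch (our own simplification, not the method from the video): tune an SDR to the beacon frequency, build a spectrogram from the IQ samples, and flag short-lived power spikes (meteor pings) that rise well above the noise floor.

```python
import numpy as np

def meteor_pings(iq, fs, fft_size=4096, threshold_db=12):
    """Very rough ping detector: spectrogram plus a threshold above the median noise floor."""
    n_frames = len(iq) // fft_size
    frames = iq[:n_frames * fft_size].reshape(n_frames, fft_size)
    spectra = np.abs(np.fft.fftshift(np.fft.fft(frames * np.hanning(fft_size), axis=1), axes=1)) ** 2
    power_db = 10 * np.log10(spectra + 1e-12)
    noise_floor = np.median(power_db)                      # crude noise estimate
    hits = np.argwhere(power_db > noise_floor + threshold_db)
    # Convert frame/bin indices to time (seconds) and frequency offset (Hz)
    return [(frame * fft_size / fs, (bin_ - fft_size // 2) * fs / fft_size) for frame, bin_ in hits]

# Example usage with a hypothetical capture file of complex float samples:
# iq = np.fromfile("beacon_capture.cf32", dtype=np.complex64)
# print(meteor_pings(iq, fs=2.4e6))
```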

[Tech Minds] has some practical tips to share if you want to try doing it yourself. If you want to see what a detected meteor looks like, you can visit the UK beacon’s gallery page.

We saw another presentation on the UK beacon earlier this year. Using commercial transmitters sounds like it might be easy, but apparently, it isn’t.


Learning X86_64 Assembly By Building A GUI From Scratch

Some professional coders are absolutely adamant that learning to program in assembly language in these modern times is simply a waste of time, and this post is not for them. This is for the rest of us, who still think there is value in knowing what is going on at a low level, and that a deeper appreciation can be developed from that knowledge. [Philippe Gaultier] was certainly in this latter camp and figured the best way to learn was to work on a substantial project.

Now, there are some valid reasons to write directly in assembler; for example, hand-crafting the unusual code sequences needed for software exploits, something an optimising compiler would only get in the way of. Creating code optimised for speed and size is unlikely to be among those reasons, as doing a better job than a modern compiler would be a considerable challenge. Some folks would follow the tried and trusted route and work towards getting a “hello world!” output to the console or a serial port, but not [Philippe]. This project aimed to get a full-custom GUI application running as a client of the X11 server on Linux, but the theory should hold for any *nix OS.

Hello World! In X11!

The first part of the process was to create a valid ELF executable that Linux would work with. Using nasm to assemble and the standard linker, only a few X86_64 instructions are needed to create a tiny executable that just exits cleanly. Next, we learn how to manipulate the stack in order to set up a non-trivial system call that sends some text to the system STDOUT.

To perform any GUI actions, we must remember that X11 is a network-orientated system, where our executable will be a client connected via a socket. In the simple case, we just connect the locally created socket to the server path for the local X server, which is just a matter of populating the sockaddr_un data structure on the stack and calling the connect() system call.
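In a higher-level language that step collapses to a couple of lines. This Python sketch (assuming the default display’s socket lives at /tmp/.X11-unix/X0, the usual location) shows what populating sockaddr_un and calling connect() boils down to:

```python
import socket

# Connect to the local X server over its Unix-domain socket -- the same thing the
# assembly version does by filling in a sockaddr_un structure and calling connect().
# Display :0 conventionally lives at /tmp/.X11-unix/X0.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/tmp/.X11-unix/X0")

# From here the client speaks the raw X11 wire protocol, starting with the
# connection setup request; [Philippe]'s post builds those messages by hand.
```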

Now that the connection is made, we can follow the usual X11 route of creating client IDs and then allocating resources with them. Next, a font is opened and a graphics context is created to use it, and then a window is created and mapped; only after mapping is there something visible to draw into with subsequent commands. X11 is a low-level GUI system, so there are quite a few steps to create even the simplest drawable object, but that also makes it quite simple to understand and thus well suited to such a project.

We applaud [Philippe] for the fabulous documentation of this learning hack and can’t wait to see what’s next in store!

Not too long ago, we covered Snowdrop OS, which is written entirely in X86 assembly, and we also found out a thing or two about some oddball X86 instructions. We’ve also done our own Linux assembly primer.

If Not Ethernet…

It is hard to imagine today, but there was a time when several network technologies competed for the job. There was Ethernet, of course, but you could also find token ring, DECnet, Econet, and ARCNet. If you’ve never dug into ARCNet, [Retrobytes] has a comprehensive history you can watch that will explain it all.

Like token ring, ARCNet used a token-passing scheme that let each station on the network take turns sending data. ARCNet hardware, though, was much less expensive than either token ring or Ethernet gear. Along the way, you get a brief history of the Intel 8008 CPU, which, arguably, started the personal computer revolution.
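Purely as a toy illustration of the token-passing idea (our own simplification, not the actual ARCNet frame exchange), each station stays silent until the token reaches its node ID, transmits whatever it has queued, and then hands the token to the next-highest ID:

```python
# Toy token-passing loop: the token ("invitation to transmit") circulates
# through node IDs in ascending order, and only the holder may transmit.
stations = {1: ["hello from node 1"], 5: [], 23: ["status update"], 42: []}

def next_station(current, ids):
    """Token goes to the next-highest node ID, wrapping around to the lowest."""
    higher = [i for i in ids if i > current]
    return min(higher) if higher else min(ids)

token_holder = min(stations)
for _ in range(8):                          # a few trips around the logical ring
    queue = stations[token_holder]
    if queue:
        print(f"node {token_holder} sends: {queue.pop(0)}")
    token_holder = next_station(token_holder, stations.keys())
```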

Like most networking products of the day, ARCNet was proprietary. By the late 1980s, though, open standards were all the rage, and Ethernet took advantage. Until Ethernet was able to ride on twisted pair, however, it was more expensive and less flexible than ARCNet.

The standard used RG-62/U coax and either passive or active hubs in a star configuration. Coax runs could be up to 2,000 feet long, so very large networks were feasible. It was also possible to share the coax with analog videoconferencing.

Looking back, ARCNet had a lot to recommend it, but we know Ethernet won the day; [Retrobytes] explains what happened and why.

If you missed “old-style Ethernet,” we can show you how it worked. Or, check out Econet, which was popular in British schools.