This Az-El Mount Is Worth Following

Communication with satellites often involves the use of high-gain directional antennas coupled with careful positioning to find and track the target. With a geostationary satellite the mount is either fixed or a single-axis polar mount, but when the craft is moving in a different orbit it becomes more of a challenge to stay locked on. An azimuth-elevation mount is needed to cover the whole sky, and [Ham Radio Passion] has one as a work in progress. It’s 3D printed and looks straightforward, making it a project to watch.

An az-el mount has two parts, the first being a turntable to set the azimuth, and the second a horizontal rotating axis to set the elevation. He’s mounting the antenna to a piece of aluminium extrusion and driving it through a set of 3D printed gears turned by a 360-degree servo via a worm drive. He explains why the servo makes more sense to him here.
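For anyone wanting to try the tracking side at home, the pointing angles such a mount has to follow are easy enough to compute with off-the-shelf tools. Here’s a minimal sketch of our own, not from the project, assuming the Skyfield Python library is installed and a Celestrak TLE feed is reachable; the station location is hypothetical.

```python
# Minimal az-el pointing sketch (ours, not from the project), assuming the
# Skyfield library is installed and the Celestrak TLE feed is reachable.
from skyfield.api import load, wgs84

ts = load.timescale()
stations = load.tle_file(
    "https://celestrak.org/NORAD/elements/gp.php?GROUP=stations&FORMAT=tle"
)
sat = {s.name: s for s in stations}["ISS (ZARYA)"]

observer = wgs84.latlon(52.0, -1.5)  # hypothetical station location
alt, az, distance = (sat - observer).at(ts.now()).altaz()

# These two angles are what the azimuth turntable and elevation axis
# would be driven to.
print(f"Azimuth {az.degrees:.1f} deg, elevation {alt.degrees:.1f} deg")
```

In practice you’d recompute those angles every second or so during a pass and nudge the mount towards them.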

The result is not yet a finished project, but it shows enough promise to make it worth keeping an eye on. It’s by no means big enough for a huge antenna array, but we can imagine antennas for higher frequencies would be well within its capabilities. Meanwhile it’s certainly not the first az-el mount we’ve seen.

Continue reading “This Az-El Mount Is Worth Following”

How To Better Enjoy VR On Linux

Linux folks are used to rolling many of their own solutions, and a more usable Linux desktop inside a headset is a goal of the WayVR project, which aims to provide desktop control and app launching from within a VR session.

VR applications can already stream from Linux to standalone headsets with projects like WiVRn, but what WayVR does is let one launch programs and access desktop screens within VR. Put another way, instead of the headset being limited to acting as a pseudo-monitor that only receives the output of an already-running VR application, the headset and controllers can now be used to interact with one’s computer as if one were physically sitting at it. Controls and user interface are highly flexible and help users to do anything they need — including clicking, typing, and launching applications. It’s a considerable step forward for convenience and general usability.

Naturally, when it comes to using a computer from within VR there is plenty of unexplored territory regarding user interfaces. It’s fertile ground for experimentation in everything from DIY headsets to ways to input text without a keyboard, so if you enjoy working on the frontiers of such things, it’s a good scene to dive into.

Learn Programming Without A Computer

Presumably aimed at children, NHK World’s Texico program teaches the main ideas about programming without actually using a computer. Instead, it uses items like a toy train, playing cards, and other gadgets to teach concepts such as analysis, combination, simulation, abstraction, and more.

There are ten episodes in English and French. Some of them are more about critical thinking, which, admittedly, is important for solving problems in general, with or without a computer. For example, a “magic” trick relies on the observation that tearing a sheet of paper into nine rectangular pieces leaves every piece with at least one perfectly straight original edge, except for the center piece.
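To see why, picture the torn sheet as a three-by-three grid and count how many of each piece’s edges came from the original, factory-straight border. A quick sketch of our own makes the pattern clear:

```python
# Count, for each piece of a 3x3 tear, how many of its four edges come
# from the original straight border of the sheet rather than from a tear.
def straight_edges(row, col, rows=3, cols=3):
    edges = 0
    if row == 0:
        edges += 1  # piece touches the top border
    if row == rows - 1:
        edges += 1  # bottom border
    if col == 0:
        edges += 1  # left border
    if col == cols - 1:
        edges += 1  # right border
    return edges

for r in range(3):
    print([straight_edges(r, c) for c in range(3)])
# Prints [2, 1, 2], [1, 0, 1], [2, 1, 2]: only the center piece has
# no straight edge at all, which is what gives the trick away.
```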

Continue reading “Learn Programming Without A Computer”

A 1947 Radio Gets A Face Lift

We’ve all done it. We spy an old radio at a garage sale or resale shop. We know someone should bring it back to life, but it looks like a project, so we pass it by. Not [Ken] from [Ken’s Shop]. He found an Arvin 664A AM radio from 1947 in what appears to be a home-built cabinet and decided to bring it back to life.

From what we could find, the original case was white plastic, not the wood box it is in today. So the first challenge was simply getting inside to see what was going on.

Continue reading “A 1947 Radio Gets A Face Lift”

Direct FDM Printing With Granules

The idea of FDM 3D printing using granules rather than filament is an appealing one: rather than having to wrangle spools of filament that need to adhere to strict dimensions and cannot be too flexible, you can instead just keep topping up a big hopper with fresh granules. This is what [HomoFaciens] has been tinkering with for a while now, with their Direct Granules Extruder V7.0 showing significant improvements.

There’s also an accompanying article, with previous granule extruder attempts documented on the same site. Many of the improvements here focus on making sure the granules melt properly before they reach the end of the extruder, with the auger screw helping to push things along. While this seems straightforward, there are many details to get right; the previous v6.2 version had issues such as hot plastic backing up into the cold section and clogging things up.

For the test bench a Prusa Mk4 FDM printer is used, with the standard extruder swapped for the experimental one. The top of the extruder is water cooled to keep the cold section cold, with each turn of the wood-screw-turned-auger providing the right extrusion speed. As can be seen from the print tests, the results look pretty good despite the extruder not yet having been tuned.
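For a feel for what “the right extrusion speed” means in volume terms, here’s a back-of-envelope sketch of our own, using entirely made-up dimensions rather than anything measured from the actual extruder:

```python
# Back-of-envelope estimate (illustrative numbers only, not measured from
# the real extruder): how much plastic one auger turn might push, compared
# with an equivalent length of 1.75 mm filament.
import math

barrel_bore = 8.0   # mm, inner diameter of the barrel (hypothetical)
auger_root  = 5.0   # mm, root diameter of the wood-screw auger (hypothetical)
pitch       = 4.0   # mm advanced per revolution (hypothetical)
fill_factor = 0.5   # granules never pack solid, so assume ~50 % fill

channel_area = math.pi / 4 * (barrel_bore**2 - auger_root**2)  # mm^2
volume_per_rev = channel_area * pitch * fill_factor            # mm^3 per turn

filament_area = math.pi / 4 * 1.75**2                          # mm^2
equivalent_mm = volume_per_rev / filament_area

print(f"{volume_per_rev:.1f} mm^3 per turn, "
      f"about {equivalent_mm:.1f} mm of 1.75 mm filament")
```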

If you want to give it a shot yourself, the article page provides files for download.

Continue reading “Direct FDM Printing With Granules”

Taking Polyphony To A New Level

There are all manner of musical synthesis techniques, from the early electromechanical instruments through analogue tape synthesis, the all-electronic waveform synthesisers of the 1960s onwards, and Yamaha’s FM synthesis of the 1980s, to name but a few. One of the attributes of such a machine lies in how many voices it has, or in simple terms, how many notes it can play simultaneously. Electronic complexity limited those early synths, but what happens on an FPGA, where vast numbers of circuits can be made with little extra cost? [Tsuneo.Ohnaka] is pushing the envelope a little, by cramming 10240 individually controllable oscillators onto a Terasic DE10-nano FPGA board.

While this thing can in theory generate 10240 different notes at once, in practice that doesn’t mean it has 10240 voices. Instead he calls it a spectrum engine: with such a large number of oscillators, each with individually controllable frequency, phase, and amplitude, he’s implemented in hardware the part of the Fourier maths where all the different frequencies are summed back together. It’s as though you had a sound card whose output wasn’t a DAC fed with samples, but one fed with all the spectrum points you’d derive from a Fourier transform. Because it’s a massively parallel array of real oscillators it all happens concurrently, in real time, and is not held back by the processing constraints of a microprocessor. Think of it as something akin to a software defined radio transmitter, but for the world of audio synthesis.
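What the FPGA does in hardware is, in software terms, additive synthesis on a massive scale. A minimal sketch of the same idea, ours and with only a handful of illustrative oscillators rather than 10240, looks like this:

```python
# Additive synthesis in software: sum many sine oscillators, each with its
# own frequency, amplitude and phase. The FPGA does this for thousands of
# oscillators in parallel, every sample; here it's a serial loop.
import numpy as np

SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of sample times

# (frequency in Hz, amplitude, phase in radians) -- illustrative values only
oscillators = [
    (220.0, 0.50, 0.0),
    (440.0, 0.25, 0.3),
    (660.0, 0.12, 1.1),
    (880.0, 0.06, 2.0),
]

signal = np.zeros_like(t)
for freq, amp, phase in oscillators:
    signal += amp * np.sin(2 * np.pi * freq * t + phase)

# 'signal' is what would be handed to the DAC; with enough oscillators this
# amounts to building the output directly from its spectrum.
```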

In that light, it can emulate all those other forms of audio synthesis driven by software, but without the software overhead of generating the waveforms. It’s certainly a different approach to generating audio from a computer, and he’s posted a suitably cacophonous demo video of it below, acting as an 80-voice polyphonic synthesiser. We like it.

Continue reading “Taking Polyphony To A New Level”

AI On Every Machine: The LLM You Probably Didn’t Want

If you follow the kind of news channels a Hackaday scribe does, you’ll have seen the story over the last week or so that Google have quietly installed an LLM as part of the Chrome browser. Reports vary as to when this happened, because there’s a lot of confusion online with the cloud-based Gemini features also present in the browser, but it seems Chrome users are noticing its effect through slower performance and hefty disk access. Given that Chrome is by far the most popular web browser, this means that billions of users will have downloaded the four gigabyte Gemini Nano model, and now have an LLM they didn’t know about. It will be used to provide advanced auto-correct and other text suggestion features that would presumably overburden their online version of Gemini, and since it’s available through a set of in-browser APIs we expect it will find its way into a lot of websites, online applications, and plugins.

It’s caused a bit of a fuss in some circles, and we think with some justification. When billions of computers unwittingly install an extremely energy-intensive software component, the effect on global power consumption will be significant, with a consequent uptick in the carbon footprint of computing. It’s not a phenomenon restricted to Chrome; Siri, for example, has used a local LLM on Apple devices for a while now. We’ve seen rumblings of discontent and talk of getting European climate regulators involved, but perhaps instead it’s time to have a conversation about local AI models. The key question is not whether they are a good thing to have, but when and how they operate.

While many of us are sick to death of AI slop and have not been lured into AI psychosis by an over-reinforcing chatbot, the fact remains that LLMs can do some useful things, they’re here to stay whether we like it or not, and having one under your control on your own computer doesn’t have to be a bad thing. Install Llama.cpp on your machine, and you’ve got an LLM of your very own, one that won’t sell your usage data, and won’t feed your content back into the finest plagiarism device the world has ever seen.
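To give a flavour of how little is involved, here’s a minimal sketch, assuming the llama-cpp-python bindings and a GGUF model file you’ve downloaded yourself; the model path is just a placeholder.

```python
# Minimal local-LLM sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a GGUF model downloaded separately.
from llama_cpp import Llama

llm = Llama(model_path="models/your-model.gguf", n_ctx=2048)  # placeholder path

output = llm(
    "Q: Why might someone want to run a language model locally? A:",
    max_tokens=128,
    stop=["Q:"],
)

# Everything runs on your own hardware; nothing leaves the machine.
print(output["choices"][0]["text"])
```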

Opt-In and Opt-Out

The concerning development with the Chrome LLM is that not only has it been installed without the user’s consent, it runs without their consent too, and they can’t use it for anything except what Google Chrome wants it to be used for. Unlike the Llama.cpp mentioned above, it’s not under their control; instead it’s a compute-hungry monster ultimately controlled by Google. The prospect of a future in which multiple pieces of everyday software install their own similarly out-of-control multi-gigabyte CPU-munchers is a worrying one. Anyone who remembers Microsoft’s Clippy grabbing all the resources of a 1990s desktop as its stuttering animation played out will know where this is going.

If local LLMs are an inevitability, what’s needed is a way to make them like any other application, one that the user chooses and installs themselves. Such an LLM could make its services available to applications such as a web browser if the user allows it to, but not run unless asked. It’s fairly obvious that installing Llama.cpp or similar is beyond many users, but it shouldn’t lie beyond the bounds of possibility to package something like it as an application they can install.

We know that the previous paragraph is pie-in-the-sky wishful thinking, and that, as the person who knows computers in your family, your next few Christmases will be spent wrestling with six different LLMs running on some elderly family member’s PC. But perhaps in Clippy lies the answer. If the consumer can learn to associate built-in AI features with their computer grinding to a halt, just as they did with an office assistant thirty years ago, then perhaps they’ll demand change. We can hope.