Pi Pico Enhances RadioShack Computer Kit

While most of us now remember Radio Shack as a store that tried to force us to buy batteries and cell phones whenever we went in for a few transistors and other circuit components, for a time it was an innovative and valuable store for electronics enthusiasts before it began its long demise. Among its electronics and radio parts and kits were even a few DIY microcomputers, and although this vintage Radio Shack microcomputer kit from the mid-80s is a bit of an antique now, a Raspberry Pi Pico is just the thing to modernize it.

The microcomputer kit itself is built around the 4-bit Texas Instruments TMS1100, one of the first mass-produced microcontrollers. The kit makes the processor’s functionality more readily available to the user, with a keypad and various switches for programming and a number of status LEDs for monitoring its state. The Pi Pico comes into the equation programmed as a digital clock with an LED display, and it drives the antique computer as well: the Pi sends a switching pulse through a relay to the microcomputer, which is programmed as a binary counter.
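
To get a feel for how little the Pico has to do here, the sketch below shows the general idea of pulsing a relay as a very slow clock source in MicroPython. It’s only an illustration of the concept rather than the actual firmware from the build, and the pin number and timing are assumptions.

```python
# Illustrative MicroPython sketch: pulse a relay module at roughly 1 Hz
# as a slow clock for an external circuit. Pin choice and timing are
# assumptions, not taken from the actual build.
from machine import Pin
import time

relay = Pin(15, Pin.OUT)   # GPIO wired to the relay module's input

while True:
    relay.value(1)         # energize the relay: one clock edge for the counter
    time.sleep_ms(100)     # give the contacts time to settle
    relay.value(0)         # release
    time.sleep_ms(900)     # about one count per second
```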

While the microcomputer isn’t going to win any speed or processing power contests anytime soon, especially with its clock signal coming from a slow relay module, the computer is still fulfilling its purpose as an educational tool despite being nearly four decades old. With such a slow clock it’s much more intuitive to see how the computer steps through its tasks, and the modern Pi Pico helps it along quite well. Relays on their own can substitute for the entire microcontroller as well, like this computer which makes a satisfying mechanical noise when it’s running a program.

Continue reading “Pi Pico Enhances RadioShack Computer Kit”

Measuring Trees Via Satellite Actually Takes A Great Deal Of Field Work

Figuring out what the Earth’s climate is going to do at any given point is a difficult task. To know how it will react to given events, you need to know what you’re working with. This requires an accurate model of everything from ocean currents to atmospheric heat absorption, along with the chemical and physical behavior of everything from cattle to humans to trees.

In the latter regard, scientists need to know how many trees we have to properly model the climate. This is key, as trees play a major role in the carbon cycle by turning carbon dioxide into oxygen plus wood. But how do you count trees at a continental scale? You’ll probably want to get yourself a nice satellite to do the job.

Continue reading “Measuring Trees Via Satellite Actually Takes A Great Deal Of Field Work”

A Straightforward AI Voice Assistant, On A Pi

With AI being all the rage at the moment, it’s been somewhat annoying that using a large language model (LLM) without significant amounts of computing power meant surrendering to an online service run by a large company. But as happens with every technological innovation, the state of the art has moved on, now to such an extent that a computer as small as a Raspberry Pi can join the fun. [Nick Bild] has one running on a Pi 4, and he’s gone further than just a chatbot by making it into a voice assistant.

The brains of the operation is a TinyLlama LLM, packaged as a llamafile, which is to say an executable that provides about the easiest one-step access to a local LLM it’s currently possible to get. The Whisper speech recognition system provides a text transcript of the input prompt, while the eSpeak speech synthesizer creates a voice output for the result. There’s a brief demo video we’ve placed below the break, which shows it working, albeit slowly.
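
The whole chain is simple enough that a rough sketch of it fits in a few lines of Python. This isn’t [Nick Bild]’s code; it assumes the llamafile is already running as a local server on its default port, with the openai-whisper package and eSpeak installed.

```python
# Rough sketch of the transcribe -> prompt -> speak chain. The file name,
# model size, and server address are assumptions, not the project's setup.
import subprocess
import requests
import whisper  # openai-whisper package

# 1. Transcribe a recorded prompt (e.g. captured with arecord) to text.
stt = whisper.load_model("tiny")
prompt = stt.transcribe("prompt.wav")["text"]

# 2. Ask the TinyLlama llamafile, which exposes an OpenAI-style chat
#    endpoint when run as a local server.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": prompt}]},
    timeout=300,
)
answer = resp.json()["choices"][0]["message"]["content"]

# 3. Speak the reply aloud with eSpeak.
subprocess.run(["espeak", answer])
```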

Perhaps the most important part of this project is that it’s easy to install, and he’s provided full instructions in a GitHub repository. We know that the quality and speed of these models on commodity single-board computers will only increase with time, so we’d rate this as an important step towards really good and cheap local LLMs. It may however be a while before it can help you make breakfast.

Continue reading “A Straightforward AI Voice Assistant, On A Pi”

Gold Recovery From E-Waste With Food-Waste Amyloid Aerogels

A big part of the recycling of electronic equipment is the recovery of metals such as gold. Usually the printed circuit boards and other components are shredded, sorted, and then separated. But efficiently filtering out specific metals remains tricky and adds to the cost of recycling. A possible way to optimize the recovery of precious metals like gold could be through the use of aerogels composed of protein amyloids to which one type of metal preferentially adsorbs. According to a recent research article in Advanced Materials by [Mohammad Peydayesh] and colleagues, such aerogels could be created from protein waste from the food industry.

The adsorption relies on the protein amyloids acting as chelants: structures that can effectively bond to metal ions. Chelants are usually organic compounds, and are used in certain medical treatments where heavy metal poisoning is involved (chelation therapy). By arranging these protein amyloids in an aerogel structure, the surface area available for adsorption is maximized; the research article reports an efficiency of 93.3% for gold recovery, while the other metals in the aqua regia solution (nitric and hydrochloric acid) are left mostly untouched.

Of note here is that although the article takes the food-waste protein angle, the experiment used whey protein. Whey is also one of the most popular food supplements in the world, to the point that microbial production of it is a thing now. Although this doesn’t invalidate the aerogel chelation approach to e-waste recycling, it’s a curious point that the article does not appear to address.

Making The Commodore SX-64 Mini

When you find a portable TV from the 1980s, and it reminds you of the portable Commodore 64, there’s only one thing to be done. [Aaron Newcomb] brings us the story of taking an Emerson PC-6 and mating it to the guts of his THEC64 Mini. It’s a bit of a journey, as the process includes modding the TV to add a composite input and trimming some unused PCB off the TV’s mainboard. Then some USB ports and a three-and-a-half-inch floppy drive were shoehorned into the chassis, with the rear battery compartment holding the parts from THEC64 Mini.

The build was not entirely without issue. It turns out the degaussing coil connector can plug perfectly into the service port, and Murphy’s law proved itself true again. But no harm was done, and the error was quickly discovered. All that was left was to button the chassis back up and add some paint and 3D-printed trim details. The build looks great! Come back after the break to watch the video from the [Retro Hack Shack] for yourself.

Continue reading “Making The Commodore SX-64 Mini”

ForceGen: Using A Diffusion Model To Help Design Novel Proteins

Although proteins are composed of only a small number of distinct amino acids, this deceptive simplicity quickly vanishes when considering the many possible sequences across a protein, not to mention the many ways in which a single 1D protein sequence can fold into a 3D protein shape with a specific functionality. Although natural evolution has done much of the legwork here already, figuring out new sequences and their functionality is a daunting task to which deep learning algorithms are increasingly being applied. As [Bo Ni] and colleagues report in a research article in Science Advances, the hardest challenge is the inverse problem: designing a protein sequence from a desired functionality. They then demonstrate a way to use a generative model to speed up this process.

They set out to design proteins with specific mechanical properties, for which they used the known unfolding characteristics of various protein sequences to train a diffusion model. This approach is thus more akin to the technology behind image generation algorithms like DALL-E than to LLMs. Using the trained diffusion model, it was then possible to generate candidate sequences whose properties could then be simulated, with favorable results.
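
The overall shape of that generation loop is perhaps easier to see in code than in prose. The toy sketch below is a generic denoising loop over a sequence representation with a stand-in denoiser, purely to illustrate the idea; it is not the paper’s model or training setup.

```python
# Toy illustration of diffusion-style sequence generation: start from
# noise and repeatedly denoise, conditioned on a target property. The
# "denoiser" here is a stand-in; in the real system it is a trained network.
import numpy as np

SEQ_LEN, N_AMINO, STEPS = 64, 20, 100
rng = np.random.default_rng(0)

def denoiser(x, t, target_property):
    # Placeholder for a trained network that predicts a cleaner sequence
    # representation from the noisy one, the timestep, and the desired
    # mechanical property (e.g. an unfolding force profile).
    return x * (1.0 - 1.0 / (t + 1))

x = rng.normal(size=(SEQ_LEN, N_AMINO))        # pure noise to start
for t in reversed(range(1, STEPS)):
    x = denoiser(x, t, target_property=0.8)    # conditioned denoising step
    if t > 1:
        x += rng.normal(size=x.shape) / STEPS  # re-inject a little noise

sequence = x.argmax(axis=1)  # most likely amino acid index per position
print(sequence[:10])
```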

Trained on a suitably large data set, such a diffusion model could conceivably be very useful in fields well beyond protein synthesis, automating tedious design tasks and speeding up discoveries.

Watch The OpenScan DIY 3D Scanner In Action

[TeachingTech] has a video covering the OpenScan Mini that does a great job of showing the workflow, hardware, and processing method for turning small objects into high-quality 3D models. If you’re at all interested but unsure where or how to start, the video makes an excellent guide.

We’ve covered the OpenScan project in the past, and the project has progressed quite a bit since then. [TeachingTech] demonstrates scanning a number of small and intricate objects, including a key, to create 3D models with excellent dimensional accuracy.

At its heart, [Thomas Megel]’s OpenScan project is an automated camera rig that takes a series of highly controlled photographs. Those photographs are then used in a process called photogrammetry to generate a 3D model from the source images. Since the quality of the source images is absolutely critical to getting good results, the OpenScan hardware platform plays a pivotal role.
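
In spirit, the capture side boils down to “rotate a little, take a photo, repeat”. The sketch below shows that loop for a single-axis turntable using a Raspberry Pi camera and a generic stepper driver; it is only a simplified illustration, not OpenScan’s firmware, and the pin numbers, step counts, and delays are assumptions.

```python
# Sketch of the "rotate, photograph, repeat" idea behind a photogrammetry
# rig; not the OpenScan firmware. Pin numbers, step counts, and settling
# delays are assumptions.
from time import sleep
from gpiozero import DigitalOutputDevice
from picamera2 import Picamera2

STEP_PIN, DIR_PIN = 20, 21        # stepper driver inputs (assumed wiring)
PHOTOS_PER_REV = 36               # one photo every 10 degrees
STEPS_PER_PHOTO = 200 * 16 // PHOTOS_PER_REV  # 200-step motor, 1/16 microstepping

step = DigitalOutputDevice(STEP_PIN)
direction = DigitalOutputDevice(DIR_PIN, initial_value=True)

camera = Picamera2()
camera.start()

for i in range(PHOTOS_PER_REV):
    for _ in range(STEPS_PER_PHOTO):  # advance the turntable one increment
        step.on(); sleep(0.001)
        step.off(); sleep(0.001)
    sleep(0.5)                        # let vibration die down before shooting
    camera.capture_file(f"scan_{i:03d}.jpg")
```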

Once one has good-quality images, the photogrammetry processing itself can be done in any number of ways. One can feed images from OpenScan into a program like Meshroom, or one can use the optional cloud service that OpenScan offers (originally created as an internal tool, it is now made available as a convenient processing option).
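
For the local route, Meshroom ships with a command-line entry point that can chew through a folder of images unattended. The call below (wrapped in Python for convenience) reflects recent Meshroom releases; the binary name and flags may differ on your install, and the folder names are placeholders.

```python
# One way to run the photogrammetry step locally: call Meshroom's batch
# CLI on the folder of captured photos. Folder names are placeholders.
import subprocess

subprocess.run(
    [
        "meshroom_batch",
        "--input", "openscan_photos/",   # folder of captured images
        "--output", "openscan_model/",   # where the textured mesh ends up
    ],
    check=True,
)
```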

It’s really nice to have a video showing how the whole workflow works, highlighting the quality of the results, and contrasting them with other 3D scanning methods. We’ve previously talked about 3D scanning and what it does (and doesn’t) do well, and the results from the OpenScan Mini are fantastic. It might be limited to small objects, but it does a wonderful job on them. See it all for yourself in the video below.

Continue reading “Watch The OpenScan DIY 3D Scanner In Action”