One of the biggest issues facing the solid-state lithium-based batteries we all depend upon is the performance of the anode: lithium-ion transport and the minimization of dendrite formation are critical problems, and they govern charge/discharge rates and cell longevity. A team of researchers at Harvard has demonstrated a method of using a so-called constriction-susceptible structure on a silicon anode material to promote direct deposition of lithium metal, as opposed to the predominant alloying reaction. After the initial silicon-lithium alloy layer is formed, subsequent layers are pure lithium. Micrometre-scale silicon particles at the anode constrain the lithiation process during charging, as free lithium ions are driven by the charge current towards the anode. Because the silicon particles are so small, there is limited surface area for alloying to occur, so direct plating of lithium metal is preferred. Crucially, this plating happens in a very uniform manner, and so does not tend to promote the formation of damaging metal dendrites.
I found myself in Milton Keynes, UK, a little while ago, with a few hours to spare. What could I do but rock over to the National Museum of Computing and make a nuisance of myself? I have visited many times, but this time I was armed with a voice recorder and a mission to talk to everybody who didn't run away fast enough. There is so much to see and do that what follows is a somewhat truncated whistle-stop tour, to give you, dear readers, a flavour of what other exhibits you can find once you've taken in the usual sights of Colossus and the other famous early machines.
We expect you've heard of the classic text adventure game Zork. Well before that, there was the ingeniously titled "Adventure", reported to be the first 'interactive fiction' text adventure game. It was created by [Will Crowther], who at the time was a keen cave explorer and D&D player, as well as the guy responsible for the firmware of the original ARPANET routers, and the game contains details of the real cave systems of Mammoth and Flint Ridge in Kentucky.
The first version was a text-based simulation of moving around the cave system, and a while after its release onto the fledgling Internet, it was picked up and extended by [Don Woods], and the rest is history. If you want to read more, the excellent site by [Rick Adams] is a great resource, and it even lets you play along in your browser. Just watch out for the dwarfs. (Editor's note: "plugh".) During my visit, I believe the software was running on the room-sized ICL 2966 via a VT01 terminal, but feel free to correct me if I'm wrong, as I can't find any information either way.
A little further around the same room as the ICL system, there is a real rarity: a Marconi TAC, or Transistorised Automatic Computer. This four-cabinet minicomputer was designed in the late 1950s as a 'fast real-time computer', and is one of only five made. This example, along with a second unit, was installed at Wylfa nuclear power station in Anglesey as a monitoring and alarm system controller; the two machines were spare units from the batch of three built for the Swedish air defence system, and were no longer required for that purpose. Commissioned in 1968, this TAC ran continuously until 2004, which could make it one of the longest continuously running computers in the world. The TAC has 4 kwords of 20-bit core memory, a paper tape reader for program loading, and magnetic drum storage. Unusually for this period, the TAC has a microcoded CISC architecture, utilising a whole cabinet's worth of diode-matrix ROM boards to encode the instruction set, which made the instruction set customizable. As standard, the TAC shipped with trigonometric and other transcendental functions as individual instructions, a strategy which minimized program size and allowed more complex programs to fit in the limited memory.
The motto of Sun Microsystems back in the day was "The Network Is The Computer", which made a kind of sense when CPUs were slow, single-core affairs; these days, to get a faster compile, you'd simply throw more cores and memory at the problem. The thing is, most of us don't do huge compilations all that often; we can't remember the last time we even attempted a Linux kernel build. However, if you do find yourself with a sudden need to do so, and have access to a pile of machines hooked to a network, then why not check out distcc: the fast distributed C/C++ compiler? We've seen a few mentions in comments and a HaD links article referencing it, but never explicitly covered the tool, so here we go.
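If you've not met it before, the gist is that distcc wraps your usual compiler and farms the compilation of individual files out to daemons running on other machines. A minimal setup looks something like this, though the addresses and job count here are ours, so adjust to suit your own network:

```sh
# On each helper machine, start the distcc daemon and allow your LAN to connect
distccd --daemon --allow 192.168.1.0/24

# On the machine driving the build, list the available helpers
export DISTCC_HOSTS="192.168.1.10 192.168.1.11 localhost"

# Hand the compiler over to distcc, with enough jobs to keep everyone busy
make -j12 CC="distcc gcc" CXX="distcc g++"
```

A reasonable rule of thumb is to pass -j with roughly the total number of cores across all the machines, since jobs blocked on network transfer cost little locally.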
Back on the theme of learning to program by taking on a meaningful project, we have another raytracing demo, this time using Rust on the Raspberry Pi. [Unfastener] saw our previous article about writing a simple raytracer in Spectrum BASIC and got inspired to try something similar. The plan was to recreate the famous juggler 3D demo from the early days of 3D rendering on the Amiga.
The juggler story starts with an Amiga programmer called [Eric Graham], who created ssg, the first ray tracer application on a personal computer. A demo was shown to Commodore, who didn't believe it was done on their platform, but a quick follow-up with the actual software used soon quelled their doubts. Once convinced, they purchased the rights to the demo for a couple of thousand dollars (in 1986 money, mind you) to use in promotional materials. [Eric] developed ssg into the popular Sculpt 3D, which also became available on Mac and Windows platforms, and kick-started a whole industry of personal 3D modelling and ray tracing.
Anyway, back to the point. [Unfastener] needed to climb the considerable Rust learning curve, and the best way to do that is to let someone else take care of the awkward details of the GUI and concentrate on the application. To that end, they used the softbuffer and winit Rust crates, which deal with the (important, yet frankly uninteresting) details of building frame buffers and pushing the pixels out to the window manager in a cross-platform way. Vecmath takes care of, you guessed it, the vector math; there's no point reinventing that wheel either. Whilst [Unfastener] mentions that the original Amiga demo took about an hour per frame to render, this implementation runs in real-time. To achieve that, the code performs a timed pre-render to determine the highest resolution that still gives an acceptable frame rate, achieving a respectable 30 or so frames per second on a Pi 5, with the older Pis needing to drop the resolution a little. This goes to show how efficient Rust code can be, and how capable the new Pi is. How far we have come.
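We haven't picked through [Unfastener]'s actual code, but the timed pre-render idea is simple enough to sketch. Something like the following (the function names and candidate resolutions are our own invention) times a single throwaway frame at each resolution and keeps the largest one that still meets the frame-time budget:

```rust
use std::time::Instant;

// Stand-in for the real renderer: assume render_frame(w, h) traces a ray
// for every pixel and returns the pixel buffer. (Illustrative stub only.)
fn render_frame(width: usize, height: usize) -> Vec<u32> {
    vec![0u32; width * height]
}

/// Time one throwaway frame at each candidate resolution, highest first,
/// and return the first one that fits within the target frame time.
fn pick_resolution(target_fps: f64) -> (usize, usize) {
    let candidates = [(1280, 960), (800, 600), (640, 480), (320, 240)];
    for &(w, h) in &candidates {
        let start = Instant::now();
        let _ = render_frame(w, h);
        if start.elapsed().as_secs_f64() <= 1.0 / target_fps {
            return (w, h);
        }
    }
    (160, 120) // fall back to something tiny on very slow hardware
}

fn main() {
    let (w, h) = pick_resolution(30.0);
    println!("rendering at {w}x{h}");
}
```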
Soldering is one of those jobs that are conceptually simple enough, but there's quite a bit of devil in the detail, and having precisely the right tool for the job in hand is essential for both speed and quality of results. Higher-quality soldering stations have many options for the hot end, but switching from a simple pencil to hot tweezers often means unplugging one and attaching the other, then hoping the station recognises the change and does the right thing. [Lajt] had three soldering options and a single-output station. Their solution was a custom-built three-way front-end box that provides push-button selection of which tool is connected to the station sitting on top.
[Lajt] shows in the blog post how each of their target hot ends is wired, and what connectivity the control station expects to see in order to determine what is plugged in. Getting this wrong would hurt: drive a 50 W heating element as if the smaller 25 W unit were still connected, and you get a huge amount of lag, with the temperature of the hot end failing to keep up with the thermal load during use. It is also important to give the unit sufficient time to detect a change in output and configure itself appropriately. An Arduino Pro Mini handles the selection between outputs, driving a bank of relays with the appropriate timing, as sketched below. An interesting detail here is what [Lajt] calls a 'sacrificial relay' in the common ground path, which has a higher contact rating than the others and acts as a secondary switch, saving wear on the other relay contacts, which would otherwise be hot-switched. All in all, a nicely executed project, which should offer years of service.
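The firmware itself is an Arduino sketch, but the switching order is the clever bit, so here is that break-before-make sequence captured as a desktop-runnable simulation in Rust; every function name and delay value here is our stand-in, not [Lajt]'s code:

```rust
use std::thread::sleep;
use std::time::Duration;

// Stand-in relay drivers: in the real build these would be Arduino
// digital-output writes, so we just print what each relay is doing.
fn set_sacrificial_relay(closed: bool) {
    println!("sacrificial relay: {}", if closed { "closed" } else { "open" });
}
fn select_tool_relay(tool: u8) {
    println!("selection relays -> tool {tool}");
}

/// Break-before-make: the beefier "sacrificial" relay interrupts the load
/// current first, so the smaller selection relays always switch dry.
fn switch_to_tool(tool: u8) {
    set_sacrificial_relay(false); // break the circuit under load
    sleep(Duration::from_millis(50)); // let the contacts settle
    select_tool_relay(tool); // dry-switch the selection relays
    sleep(Duration::from_millis(50));
    set_sacrificial_relay(true); // remake the circuit
    // Give the station time to re-detect the connected element before use.
    sleep(Duration::from_millis(500));
}

fn main() {
    switch_to_tool(2);
}
```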
Pinokio is billed as an autonomous virtual computer, which could mean anything really, but don't click away just yet, because this is one heck of a project. AI enthusiast [cocktail peanut] (and other undisclosed contributors) has created a browser-style application that embeds a virtual Unix-like environment, regardless of the host architecture. A discover page loads registered applications from GitHub, allowing a one-click install process, where the installer is 'simply' a JSON file describing the dependencies and execution flow. The idea is that, rather than manually running commands and satisfying dependencies yourself, it's all wrapped up for you: a single click downloads and installs everything needed to run the application.
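Working from memory of the Pinokio docs, an install script is little more than an ordered list of steps for the host to run. The flavour is something along these lines, where the repository URL and commands are purely illustrative placeholders, so check the official documentation for the exact schema:

```json
{
  "run": [
    {
      "method": "shell.run",
      "params": { "message": "git clone https://github.com/example/some-ai-app app" }
    },
    {
      "method": "shell.run",
      "params": { "path": "app", "message": "pip install -r requirements.txt" }
    }
  ]
}
```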
But what applications? we hear you ask. AI ones. Lots of them. The main driver seems to be using the Pinokio hosting environment to enable easy deployment of AI applications directly onto your machine. One click to install the app, then another to download the models and whatever else is needed, from the likes of Hugging Face and friends. A final click launches the app, and a browser window opens, giving you a web UI to control the locally running AI backend. Continue reading "One-Click Install Local AI Applications Using Pinokio" →
When working with an FDM 3D printer, your first prints are likely trinkets, where strength is less relevant than surface quality. Later on, when attempting more structural prints, the settings become very important and, quite frankly, rather bewildering. A few attempts have been made over the years to determine, in quantifiable terms, how these settings affect results, and here is another such experiment, this time from YouTuber [3DPrinterAcademy], looking specifically at the effect of wall count, infill density, and infill pattern upon the strength of a simple beam subjected to a midpoint load.
When setting up a print, many people stick to the same few profiles, with a little variety in wall count and infill density, but generally keep things consistent. This works well up to a point, and that point is when you want to print something significantly different in size, structure, or function. The slicer software is usually very helpful in explaining how tweaking the numbers affects how the print is formed, but not so great at explaining the real-world consequences, since it can't know your application. As far as the slicer is concerned, your object is a shape to be turned into slices, internal spaces, outlines, and support structures. It doesn't know whether you're making a keyfob or a bearing holder, and cannot help you get the settings right for each application. Perhaps upcoming AI applications will be trained upon all these experimental results and fed back into the slicing software, but for now, we'll just have to go with experience and experiment. Continue reading "One Object To Print, But So Many Settings!" →