Today, we take ice for granted. But having ice made in your own home is a relatively modern luxury. As early as 1750 BC, ancient people would harvest ice from mountains and other cold areas. They’d store it, often underground, with as much insulation as their technology allowed.
A yakhchāl in Yazd province (by [Pastaitkaen], CC BY-SA 3.0). By 500 BC, people in Egypt and what is now India would place water in porous clay pots on beds of straw when the night was cold and dry. Even if the air temperature never dropped below freezing, the combination of evaporation and radiative cooling could produce some ice. However, this was elevated to a high art around 400 BC by the Persians, who clearly had a better understanding of physics and thermodynamics than you’d think.
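As a back-of-envelope check on how water can freeze under a clear, dry sky even when the air stays above 0 °C, we can estimate the net radiative loss from the water surface with the Stefan–Boltzmann law. The emissivity and effective sky temperature below are illustrative assumptions, not measured values:

```python
# Back-of-envelope estimate of net radiative cooling from a water surface
# to a clear night sky. All parameters are illustrative assumptions.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiative_loss(t_water_k, t_sky_k, emissivity=0.95):
    """Net power radiated per square meter of water surface, W/m^2."""
    return emissivity * SIGMA * (t_water_k**4 - t_sky_k**4)

# Water near freezing (273 K) under a dry desert sky whose effective
# radiative temperature might be around 230 K in clear, low-humidity air.
loss = net_radiative_loss(273.0, 230.0)
print(f"Net radiative loss: {loss:.0f} W/m^2")
```

Something on the order of 150 watts per square meter leaving the pot all night, with evaporation carrying away still more heat, goes a long way toward explaining how a thin layer of water could freeze.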
The key to Persian icemaking was the yakhchāl. Not all of them were the same, but they typically consisted of an underground pit beneath a conical chimney structure. In addition, they often had shade walls and ice pits, as well as access to a water supply.
Solar Chimney
The conical shape optimizes the solar chimney effect, where the sun heats air, which then rises. The top was typically not open, although there is some thought that translucent marble may have plugged it to admit light while blocking airflow. The solar chimney produces an updraft that tends to cool the interior. The underground portion of the yakhchāl holds the coldest air, since any warm air rises above the surface.
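The draft such a chimney can develop follows the standard stack-effect relation: the driving pressure is proportional to the chimney height and the density difference between the sun-warmed air in the stack and the cooler air below. A minimal sketch, where the height and temperatures are illustrative guesses rather than measurements of any real yakhchāl:

```python
# Stack-effect (solar chimney) draft estimate. The chimney height and
# temperatures below are illustrative guesses, not yakhchal measurements.
G = 9.81          # gravitational acceleration, m/s^2
P_ATM = 101325.0  # ambient pressure, Pa
R_AIR = 287.05    # specific gas constant of dry air, J/(kg K)

def air_density(temp_k, pressure=P_ATM):
    """Ideal-gas density of dry air, kg/m^3."""
    return pressure / (R_AIR * temp_k)

def stack_draft(height_m, t_inside_k, t_outside_k):
    """Driving pressure difference across the chimney, Pa."""
    return G * height_m * (air_density(t_outside_k) - air_density(t_inside_k))

# A 10 m chimney with sun-warmed interior air at 40 C, ambient at 25 C.
dp = stack_draft(10.0, 313.15, 298.15)
print(f"Draft pressure: {dp:.2f} Pa")
```

Even a few pascals of sustained draft keeps air slowly circulating out the top while the densest, coldest air stays pooled in the pit.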
Imagine a line of affordable toys controlled by the player’s brainwaves. By interpreting biosignals picked up by the dry electroencephalogram (EEG) electrodes in an included headset, the game could infer the wearer’s level of concentration, through which it would be possible to move physical objects or interact with virtual characters. You might naturally assume such devices would be on the cutting-edge of modern technology, perhaps even a spin-off from one of the startups currently investigating brain-computer interfaces (BCIs).
But the toys in question weren’t the talk of 2025’s Consumer Electronics Show, nor 2024’s, or even 2020’s. In fact, the earliest model is now nearly as old as the original iPhone. Such is the fascinating story of a line of high-tech toys based on the neural sensor technology developed by a company called Neurosky, the first of which was released all the way back in 2009.
Yet despite considerable interest leading up to their release — fueled at least in part by the fact that one of the models featured Star Wars branding and gave players the illusion of Force powers — the devices failed to make any lasting impact, and have today largely fallen into obscurity. The last toy based on Neurosky’s technology was released in 2015, and disappeared from the market only a few years later.
I had all but forgotten about them myself until I recently came across a complete Mattel Mindflex at a thrift store for $8.99. It seemed a perfect opportunity not only to examine the nearly 20-year-old toy, but to take a look at the origins of the product and find out what ultimately became of Neurosky’s EEG technology. Was the concept simply ahead of its time? In an era when most people still had flip phones, perhaps consumers simply weren’t ready for this type of BCI. Or was the real problem that the technology simply didn’t work as advertised?
D-engine of the Claymills Pumping Station. (Credit: John M)
Although infrastructure like a 19th-century pumping station tends to be quietly decommissioned and demolished, sometimes enough people look at such an object and wonder whether it might be worth preserving. Such was the case with the Claymills Pumping Station in Staffordshire, England. After starting operations in the late 19th century, the pumping station remained in active use until 1971. A recent documentary by the Claymills Pumping Station Trust, which kicks off their YouTube channel, covers the derelict state of the station at the time, as well as its long and arduous restoration since the Trust acquired the site in 1993.
After its decommissioning, the station was eventually scheduled for demolition. Many parts had by then been removed for display elsewhere, discarded, or outright stolen for their copper and brass. Of the four Woolf compound rotative beam engines, units A and B had been shut down first and cannibalized for spare parts to keep the remaining units going. With groundwater intrusion and a decaying roof on top of decades of neglect, the station was in a sorry state, and restoring it was a monumental task.
We live in an age where engineering marvels are commonplace: airplanes crisscross the sky, skyscrapers grow like weeds, and spacecraft reach for the stars. But every so often, we see something unusual that makes us take a second look. The Falkirk Wheel is a great example, and, even better, it is functional art, as well.
The Wheel links two canals in Scotland. Before you click away, here’s the kicker: one canal is 35 meters higher than the other. Before 1933, the canals were connected by 11 locks, and it took nearly a day to work a boat through them from one canal to the other. By the 1930s, there wasn’t enough traffic to justify maintaining the locks, and they were torn out.
Fast Forward
In the 1990s, a team of architects led by [Tony Kettle] proposed building a wheel to transfer boats between the two canals. The original model was made from [Tony’s] daughter’s Lego bricks.
The idea is simple. Build a 35-meter wheel with two caissons, 180 degrees apart. Each caisson can hold 250,000 liters of water, so the pair is filled with a combined 500 tonnes. To move a boat, you let it into one of the caissons; the boat’s weight displaces an equal amount of water, so both caissons stay at the same weight.
Once you have a balanced system, you just spin the wheel through a half turn. Ten motors drive the wheel, together drawing 22.5 kilowatts, and each half-turn consumes only about 1.5 kilowatt-hours.
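Those figures are easy to sanity-check: water is one kilogram per liter, so each full caisson is 250 tonnes and the pair totals 500; by Archimedes’ principle a floating boat displaces its own mass of water, so the balance holds with or without a boat; and 1.5 kilowatt-hours at 22.5 kilowatts works out to about four minutes per half-turn. A quick sketch, where the 30-tonne boat is a hypothetical example:

```python
# Sanity-check the Falkirk Wheel figures quoted above. The 30-tonne boat
# is a hypothetical example; the other numbers come from the article.
CAISSON_VOLUME_L = 250_000   # liters of water per caisson
WATER_KG_PER_L = 1.0         # density of fresh water, kg per liter

caisson_tonnes = CAISSON_VOLUME_L * WATER_KG_PER_L / 1000
total_tonnes = 2 * caisson_tonnes

# Archimedes: a floating boat displaces its own mass of water, so letting a
# 30-tonne boat in pushes 30 tonnes of water out and the mass is unchanged.
boat_tonnes = 30.0
caisson_with_boat = caisson_tonnes - boat_tonnes + boat_tonnes

# Time for one half-turn if the motors run at full power throughout.
power_kw = 22.5
energy_kwh = 1.5
minutes_per_half_turn = energy_kwh / power_kw * 60

print(f"Each caisson: {caisson_tonnes:.0f} t; both: {total_tonnes:.0f} t")
print(f"A half-turn at full power takes about {minutes_per_half_turn:.0f} minutes")
```

That also explains why the energy per half-turn is so small: with the caissons balanced, the motors only have to overcome friction, not lift the 500-tonne load.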
Today, we take office software suites for granted. But in the 1970s, you were lucky to have a typewriter and access to a photocopier. Then, in the early 1980s, IBM rolled out PROFS, the Professional Office System, in a bid to revolutionize the office. It was an offshoot of an earlier internal system, and while it would hardly qualify as an office suite today, for the time it was very advanced.
The key component was an editor you could use to compose notes and e-mail messages. PROFS also kept your calendar and could provide databases like phonebooks. Several features of PROFS would make it hard to recognize as productivity software today. For one thing, IBM terminals were screen-oriented: the central computer would load a form into your terminal, you’d fill it out, and then you’d press send to transmit it back to the mainframe. That makes text editing a very different proposition, since you work on a full screen of data at a time. In addition, while you could coordinate calendars and send e-mail, you could only do so with certain people.
A PROFS message from your inbox
In general, PROFS connected everyone using your mainframe or, perhaps, a group of mainframes. In some cases, there might be gateways to other systems, but it wasn’t universal. However, it did have most of the major functions you’d expect from an e-mail system that was text-only, as you can see in the screenshot from a 1986 manual. PF keys, by the way, are what we would now call function keys.
The calendar was good, too. You could grant different users different access to your calendar. It was possible to just let people see when you were busy or mark events as confidential or personal.
You could actually operate PROFS using a command-line interface, and the PF keys were simply shorthand. That was a good thing, too. If you wanted to erase a file named Hackaday, for example, you had to type: ERASE Hackaday AUT$PROF.
Styles
PROFS messages were short and essentially ephemeral, like chat messages. Of course, because of the block-mode terminals, you could only receive messages after you sent something to the mainframe or while you were idle in a menu. A note was different. Notes were what we would now call e-mail: they went into your inbox, and you could file them in “logs”, which were similar to folders.
When looking back on classic gaming, there’s plenty of room for debate. What was the best Atari game? Which was the superior 16-bit console, the Genesis or the Super NES? Would the N64 have been more commercially successful if it had used CDs over cartridges? It goes on and on. Many of these questions are subjective, and have no definitive answer.
But even with so many opinions swirling around, there’s at least one point that anyone with even a passing knowledge of gaming history will agree with — the Virtual Boy is unquestionably the worst gaming system Nintendo ever produced. Which is what makes its return in 2026 all the more unexpected.
Released in Japan and North America in 1995, the Virtual Boy was touted as a revolution in gaming. It was the first mainstream consumer device capable of showing stereoscopic 3D imagery, powered by a 20 MHz 32-bit RISC CPU and a custom graphics processor developed by Nintendo to meet the unique challenges of rendering gameplay from two different perspectives simultaneously.
In many ways it’s the forebear of modern virtual reality (VR) headsets, but its high cost, small library of games, and the technical limitations of its unique display technology ultimately led to it being pulled from shelves after less than a year on the market.
Now, 30 years after its disappointing debut, this groundbreaking system is getting a second chance. Later this month, Nintendo will be releasing a replica of the Virtual Boy into which players can insert their Switch or Switch 2 console. The device essentially works like Google Cardboard, and with the release of an official emulator, users will be able to play Virtual Boy games complete with the 3D effect the system was known for.
This is an exciting opportunity for those with an interest in classic gaming, as the relative rarity of the Virtual Boy has made it difficult to experience these games in the way they were meant to be played. It’s also reviving interest in this unique piece of hardware, and although we can’t turn back the clock on the financial failure of the Virtual Boy, perhaps a new generation can at least appreciate the engineering that made it possible.
Thomas Edison is well known for his inventions (even if you don’t agree he invented all of them). However, he also occasionally invented things he didn’t understand, so they had to be reinvented later. The latest example comes from researchers at Rice University: while building a replica light bulb, they found that Edison may have accidentally created graphene while testing the original design.
Today, we know that applying a voltage to a carbon-based resistor and heating it up to over 2,000 °C can create turbostratic graphene. Edison used a carbon-based filament and could heat it to over 2,000 °C.
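A filament’s steady-state temperature is set by the balance between the electrical power going in and (mostly) radiative loss, so we can sketch why a thin carbon filament plausibly reaches that regime. The power, dimensions, and emissivity below are illustrative guesses, not Edison’s actual figures:

```python
# Estimate the equilibrium temperature of a resistively heated filament by
# balancing input power against Stefan-Boltzmann radiation. All filament
# parameters are illustrative assumptions, not historical values.
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def filament_temp_k(power_w, length_m, diameter_m, emissivity):
    """Radiative-equilibrium temperature of a cylindrical filament, K."""
    area = math.pi * diameter_m * length_m  # lateral surface area, m^2
    return (power_w / (emissivity * SIGMA * area)) ** 0.25

# A hypothetical 100 W carbon filament, 0.4 m long and 0.05 mm in diameter.
t = filament_temp_k(100.0, 0.4, 0.05e-3, emissivity=0.85)
print(f"Filament temperature: {t:.0f} K ({t - 273.15:.0f} C)")
```

In a real bulb, conduction through the lead wires would pull the number down somewhat, so treat this as an order-of-magnitude estimate rather than a prediction.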
This reminds us of how, in the 1880s, Edison observed current flowing in one direction through a test light bulb that included a plate. However, he thought it was just a curiosity. It would be up to Fleming, in 1904, to figure it out and understand what could be done with it.
Naturally, Edison wouldn’t have known to look for graphene, how to look for it, or what to do with it if he found it. But it does boggle the mind to think about graphene appearing many decades earlier. Or maybe it would still be looking for a killer use. Certainly, as the Rice researchers note, this is one of the easier ways to make graphene.