Glass is one of humanity’s oldest materials, and it is still used widely for everything from drinking vessels and packaging to optics and communications. Unfortunately, the methods for working with glass are stuck in the past: most require temperatures in the range of 1500 °C to 2000 °C, and all are limited in the complexity of shapes they can produce.
As far as making shapes goes, glass can be blown, and molten glass pressed into molds. Glass can also be ground, etched, or cast in a kiln. Glass would be fantastic for many applications if it weren’t for the whole limited geometry thing. Because of these forming limitations, some optical lenses are made with polymers, even though glass has better optical characteristics.
Ideally, glass could be injection molded like plastic. The benefits of this would be twofold: more intricate shapes would be possible, and they would have a much faster manufacturing time. Well, the wait is over. Researchers at Germany’s University of Freiburg have figured out a way to apply injection molding to glass. And it’s not just any glass — they’ve made high-quality, transparent fused quartz glass, and they did it at lower temperatures than traditional methods. The team used X-ray diffraction to verify that the glass is amorphous and free of crystals, and confirmed its optical transparency three ways: light microscopy, UV-visible, and infrared measurements. The only flaw revealed was a tiny bit of dust, which is to be expected outside of a clean room.
As nation states grapple with the spectre of environmental and economic losses due to climate change, we’ve seen an ever greater push towards renewable energy sources to replace heavier polluters like coal and natural gas. One key drawback of these sources has always been their intermittent availability, spurring interest in energy storage technologies that can operate at the grid level.
With the rise in distributed energy generation with options like home solar power, there’s been similar interest in the idea of distributed home battery storage. However, homeowners can be reluctant to make investments in expensive batteries that take years to pay themselves off in energy savings. But what if they had a giant battery already, just sitting outside in the driveway? Could electric vehicles become a useful source of grid power storage? As it turns out, Ford wants to make their electric trucks double as grid storage batteries for your home.
We’ve had nuclear fission reactors in operation all over the world for ages, but nuclear fusion always seems to be a decade or two away. While no one can predict when we’ll reach the goal of sustained nuclear fusion, the cutting edge in test hardware is advancing at a rapid pace that makes us optimistic. With experiments beginning as soon as this month and extending over the next few years, we’re living through a very exciting time for nuclear fusion and plasma physics.
The Mega Ampere Spherical Tokamak (MAST) got a big upgrade to test a new cooled divertor design. JET (Joint European Torus) will be testing the deuterium-tritium fuel mixture that will power ITER (the research project whose name began as an acronym for International Thermonuclear Experimental Reactor but has since been changed to simply ITER). And the Wendelstein 7-X stellarator is coming back online with upgraded cooled divertors by next year.
So far, the MAST Upgrade’s Super-X divertors have shown a ten-fold decrease in the temperature to which the divertor is exposed while carrying thermal energy out of the tokamak reactor. This means a divertor design — and ultimately a fusion reactor — that will last longer between maintenance sessions. On the stellarator side of things, Wendelstein 7-X’s new divertors may allow it to demonstrate the first continuous operation of a stellarator fusion reactor. Meanwhile, JET’s fuel experiments should allow us to test the deuterium-tritium fuel mix while ITER works towards first plasma by 2025.
To hear founder Richard Branson tell it, the first operational flight of Virgin Galactic’s SpaceShipTwo has been 18 months out since at least 2008. But a series of delays, technical glitches, and several tragic accidents have continually pushed the date back, to the point that many have wondered if it will ever happen at all. The company’s glacial pace has only been made more obvious when compared with rivals in the commercial spaceflight field such as SpaceX and Blue Origin, which have made incredible leaps and bounds in the last decade.
But now, at long last, it seems like Branson’s suborbital spaceplane might finally start generating some income for the fledgling company. Their recent successful test flight, while technically the company’s third to reach space, represents an important milestone on the road to commercial service. Not only did it prove that changes made to Virgin Space Ship (VSS) Unity in response to issues identified during last year’s aborted flight were successful, but it was the first full duration mission to fly from Spaceport America, the company’s new operational base in New Mexico.
The data collected from this flight, which took pilots Frederick “CJ” Sturckow and Dave Mackay to an altitude of 89.23 kilometers (55.45 miles), will be thoroughly reviewed by the Federal Aviation Administration as part of the process to get the vehicle licensed for commercial service. The next flight will have four Virgin Galactic employees join the pilots, to test the craft’s performance when loaded with passengers. Finally, Branson himself will ride to the edge of space on Unity’s final test flight as a public demonstration of his faith in the vehicle.
If all goes according to plan, the whole process should be wrapped up before the end of the year. At that point, between the government contracts Virgin Galactic has secured for testing equipment and training astronauts in a weightless environment, and the backlog of more than 600 paying passengers, the company should be bringing in millions of dollars in revenue with each flight.
Some friends of mine are designing a new board around the STM32F103 microcontroller, the commodity ARM chip that you’ll find in numerous projects and on plenty of development boards. When the time came to order the parts for the prototype, they were surprised to find that the usual stockists don’t have any of these chips in stock, and more surprisingly, even the Chinese pin-compatible clones couldn’t be found. The astute among you may by now have guessed that the culprit behind such a commodity part’s curious lack of availability lies in the global semiconductor shortage.
The fall-out from all this drama in the world’s car factories has filtered down through all levels that depend upon semiconductors; as the carmakers bag every scrap of chip fab capacity that they can, so in turn have other chip customers scrambled to keep their own supply lines in place. A quick scan for microcontrollers through distributors like Mouser or Digi-Key finds pages and pages of lines on back-order or out of stock, with those lines still available being largely either for niche applications, unusual package options, or from extremely outdated product lines. The chances of scoring your chosen chip seem remote and most designers would probably baulk at trying to redesign around an ancient 8-bit part from the 1990s, so what’s to be done?
Such things typically involve commercially sensitive information so we understand not all readers will be able to respond, but we’d like to ask the question: how has the semiconductor shortage affected you? We’ve heard tales of unusual choices being made to ship a product with any microcontroller that works, of hugely overpowered chips replacing commodity devices, and even of specialist systems-on-chip being drafted in to fill the gap. In a few years maybe we’ll feature a teardown whose author wonders why a Bluetooth SoC is present without using the radio functions and with a 50R resistor replacing the antenna, and we’ll recognise it as a desperate measure from an engineer caught up in 2021’s chip shortage.
So tell us your tales from the coalface in the comments below. Are you that desperate engineer scouring the distributors’ stock lists for any microcontroller you can find, or has your chosen device remained in production? Whatever your experience we’d like to know what the real state of the semiconductor market is, so over to you!
It’s no secret that Internet Relay Chat (IRC) has lost some of its appeal in recent years. These days there are plenty of free chat platforms boasting slick web interfaces and smartphone push notifications, to say nothing of social networks like Facebook and Twitter. The ability to communicate with like-minded individuals from all over the planet in real-time is now something we take for granted, so it’s little surprise that newer and flashier protocols and services have steadily eroded the IRC user base.
But there’s often a hidden cost to using these more modern communication platforms. A lack of operational transparency naturally leads to concerns over monitoring and censorship, which makes such services a poor match for the free and open source community. As such, many open projects have eschewed these newer and more popular services for IRC networks that were developed and maintained by the community itself. Among these, the best-known and most respected is Freenode. Originally started as a Linux support channel in 1995, Freenode grew to become the de facto communication and support tool for free and open source projects of all shapes and sizes, and by 2013 had officially become the largest and most active IRC network in the world.
Unfortunately, the incredible legacy of Freenode is now being jeopardized by what former staff members are describing as nothing short of a hostile takeover. Through a complex series of events which actually started several years ago, control of Freenode has been taken from the community and put into the hands of an enigmatic and wealthy entrepreneur who claims his ultimate goal is to revolutionize IRC and return it to the forefront of online communication. Here’s where it gets weird.
Starting an open source project is easy: write some code, pick a compatible license, and push it up to GitHub. Extra points awarded if you came up with a clever logo and remembered to actually document what the project is supposed to do. But maintaining a large open source project and keeping its community happy while continuing to evolve and stay on the cutting edge is another story entirely.
Just ask the maintainers of Audacity. The GPLv2 licensed multi-platform audio editor has been providing a powerful and easy to use set of tools for amateurs and professionals alike since 1999, and is used daily by…well, it’s hard to say. Millions, tens of millions? Nobody really knows how many people are using this particular tool and on what platforms, so it’s not hard to see why a pull request was recently proposed which would bake analytics into the software in an effort to start answering some of these core questions.
Now, the sort of folks who believe that software should be free as in speech tend to be a prickly bunch. They hold privacy in high regard, and any talk of monitoring their activity is always going to be met with strong resistance. Sure enough, the comments for this particular pull request went south quickly. The accusations started flying, and it didn’t take long before the F-word started getting bandied around: fork. If Audacity was going to start snooping on its users, they argued, then it was time to take the source and spin it off into a new project free of such monitoring.
The situation may sound dire, but truth be told, it’s a common enough occurrence in the world of free and open source software (FOSS) development. You’d be hard pressed to find any large FOSS project that hasn’t been threatened with a fork or two when a subset of its users didn’t like the direction they felt things were moving in, and arguably, that’s exactly how the system is supposed to work. Under normal circumstances, you could just chalk this one up to Raymond’s Bazaar at work.
But this time, things were a bit more complicated. Proposing such large and sweeping changes with no warning showed a troubling lack of transparency, and some of the decisions on how to implement this new telemetry system were downright concerning. Combined with the fact that the pull request was made just days after it was announced that Audacity was to be brought under new management, there was plenty of reason to sound the alarm.