Leaked Internal Google Document Claims Open Source AI Will Outcompete Google And OpenAI

In the world of large language models (LLMs), the focus has for the longest time been on proprietary technologies from companies such as OpenAI (GPT-3 & 4, ChatGPT, etc.), and increasingly on offerings from Google, Meta, and Microsoft. What has remained underexposed in this whole discussion about which LLM will do more things better are the efforts by hobbyists, unaffiliated researchers, and everyone else you may find in open source LLM projects. According to a leaked document from a researcher at Google (anonymous, but apparently verified), Google is very worried that open source LLMs will wipe the floor with both Google’s and OpenAI’s efforts.

According to the document, after the open source community got their hands on the leaked LLaMA foundation model, motivated and highly knowledgeable individuals set to work taking a fairly basic model to new levels where it could begin to compete with the offerings from OpenAI and Google. Major innovations address the scaling issues, allowing these LLMs to run on far less powerful systems (like a laptop or even a smartphone).

An important factor here is Low-Rank Adaptation (LoRA), which massively cuts down the effort and resources required to fine-tune a model. Ultimately, as the document phrases it, Google and by extension OpenAI have no ‘secret sauce’ that makes their approaches better than anything the wider community can come up with. The document also notes that Meta has essentially won out here by having its LLM leak, as it has meant that the OSS community has been improving on Meta’s foundation, allowing Meta to benefit from those improvements in its own products.
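To give a feel for why LoRA matters so much: instead of updating every weight in a model during fine-tuning, it freezes the pretrained weights and trains only two small low-rank matrices per adapted layer. A minimal sketch in Python with NumPy (the layer sizes are illustrative, not tied to any particular model):

```python
import numpy as np

# LoRA in a nutshell: for a frozen d_out x d_in weight matrix W, train only
# two small factors A (r x d_in) and B (d_out x r) with rank r << d.
# The effective weight becomes W + (alpha / r) * B @ A.

d_in, d_out, r, alpha = 4096, 4096, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-initialized: no change at start

def lora_forward(x):
    """Forward pass through the adapted layer."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Parameter savings: only A and B are trained.
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"full fine-tune: {full_params:,} params, LoRA: {lora_params:,} params")

# With B zero-initialized, the adapted layer initially matches the original.
x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)
```

For this layer, LoRA trains roughly 65 thousand parameters instead of almost 17 million, which is why fine-tuning suddenly fits on consumer hardware.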

The dire prediction is thus that in the end the proprietary LLMs by Google, OpenAI, and others will cease to be relevant, as the open source community will have steamrolled them into fine, digital dust. Whether things will indeed work out this way remains to be seen, but at the moment things are not looking up for proprietary LLMs.

(Thanks to [Mike Szczys] for the tip)

Metallurgist working by the blast furnaces in Třinec Iron and Steel Works. (Credit: Třinecké železárny)

We Already Live In A Hydrogen Economy: Steel Production, Generator Cooling, And Welding Gas

Although hydrogen is generally only mentioned in the context of transportation and energy storage, by far its most important applications are industrial, including in the chemical industry and in the production of steel, methanol, and fertilizer. This is illustrated by the fact that most of the hydrogen produced today goes to these industrial applications, as well as to uses such as cooling turbo generators, with demand for hydrogen in these sectors rapidly increasing.

Virtually all hydrogen produced today comes from natural gas via steam methane reformation (SMR), with methane pyrolysis potentially turning natural gas-derived hydrogen into a low-carbon source. The remainder comes from coal gasification and a small fraction from the electrolysis of water. The hydrogen is often produced on-site, especially at industrial plants and thermal power plants. So aside from any decarbonization efforts, there are many uses for hydrogen of which the public appears to be generally unaware.
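For reference, SMR reacts methane with high-temperature steam over a catalyst, and is typically followed by the water-gas shift reaction to extract additional hydrogen from the resulting carbon monoxide:

```latex
\begin{aligned}
\mathrm{CH_4 + H_2O} &\rightarrow \mathrm{CO + 3\,H_2} && \text{(steam reforming)} \\
\mathrm{CO + H_2O} &\rightarrow \mathrm{CO_2 + H_2} && \text{(water-gas shift)}
\end{aligned}
```

The CO₂ released in the shift step is exactly why SMR-derived hydrogen is not a low-carbon source without carbon capture or a switch to pyrolysis.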

This leads us to the somewhat controversial hydrogen ladder.

Continue reading “We Already Live In A Hydrogen Economy: Steel Production, Generator Cooling, And Welding Gas”

How To Install Mac OS On The Nintendo Wii

What if you could run Mac OS on a Nintendo Wii game console? That’s probably not a thought that has occurred to many Wii owners or Mac OS users, but that is no excuse not to give it a try, as [Michael] handily demonstrates in a recent video by running Mac OS 9 on Nintendo’s legendary console. The first major issue is one that anyone who has ever tried to put a Hackintosh together knows: just because a target system has the same CPU architecture doesn’t mean you can simply install Mac OS (or OS X) for Intel x86 on any Intel x86 system. The same is true for the Wii, with its PowerPC CPU, and Mac OS 9 for PowerPC.

To make this work, [Michael] employed a workaround using the fossilized Mac-on-Linux project to run PowerPC Mac OS on top of Linux on the Wii. Mac-on-Linux is a kernel module that allows Mac OS to run at essentially native speed on Linux, but being a Linux kernel module, it meant that [Michael] had to hunt down the correct kernel to go with it. After creating an SD card with a functioning bootloader, he was able to boot into Wii Linux with MoL enabled and try to install Mac OS.

OS X didn’t work for some reason, but Mac OS 9 did, albeit with severe font rendering and audio glitches. All of which is to say that while it is possible to get Mac OS running on the Wii, doing so is definitely more about the challenge and the experience. By the way, if all this sounds a bit familiar, it’s because [Michael] referenced the Mac-on-Wii work that [Dandu] did last year to make this latest iteration happen.

Continue reading “How To Install Mac OS On The Nintendo Wii”

NASA’s Voyager Space Probe’s Reserve Power, And The Intricacies Of RTG-Based Power Systems

Launched in 1977, the Voyager 1 and 2 space probes have been operating non-stop for over 45 years, making their way from Earth to our solar system’s outer planets and beyond. Courtesy of the radioisotope thermoelectric generators (RTGs) that provided 470 W at launch, they are able to function in the darkness of deep space as well as they did within the confines of our Sun-lit solar system. Yet as nothing in the Universe is truly infinite, so too do these RTGs wear out over time, both from the natural decay of their radioactive source and from the degradation of their thermocouples.
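To get a rough sense of the fuel-decay side of that equation: the plutonium-238 fueling the Voyagers’ RTGs has a half-life of 87.7 years, so its heat output follows a simple exponential. A back-of-the-envelope sketch in Python (the real electrical output falls faster than this, since the thermocouples degrade on top of the fuel decay, so treat it as an upper bound rather than a mission figure):

```python
import math

# Pu-238 thermal output decays exponentially with a half-life of 87.7 years.
# Scaling Voyager's ~470 W electrical output at launch by that decay gives
# an optimistic estimate that ignores thermocouple degradation.

HALF_LIFE_PU238 = 87.7  # years
P_LAUNCH = 470.0        # watts (electrical) at launch in 1977

def decay_only_power(years_since_launch):
    """Power if only fuel decay mattered (no thermocouple wear)."""
    return P_LAUNCH * 0.5 ** (years_since_launch / HALF_LIFE_PU238)

for year in (0, 10, 25, 45):
    print(f"t = {year:2d} y: {decay_only_power(year):6.1f} W")
```

Even after 45 years the fuel alone would still support roughly 70% of the launch power; the shortfall beyond that comes from the aging thermocouples, which is why every watt freed up elsewhere counts.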

Despite this gradual drop in power, NASA recently announced that Voyager 2 has a hitherto untapped source of reserve power that will postpone the shutdown of more science instruments for a few more years. The change essentially bypasses a voltage regulator circuit and its associated backup power system, freeing up the power these consumed for the science instruments, which would otherwise have begun shutting down years sooner.

While this is good news in itself, it’s also noteworthy because the Voyagers’ 45+ year old Multi-Hundred Watt (MHW) RTGs are the predecessors of the RTGs that are still powering the New Horizons probe after 17 years and the Mars Science Laboratory (Curiosity) after more than 10 years, showing the value of RTGs in long-term exploration missions.

Although the basic principle behind an RTG is quite simple, their design has changed significantly since the US put a SNAP-3 RTG on the Transit 4B satellite in 1961.

Continue reading “NASA’s Voyager Space Probe’s Reserve Power, And The Intricacies Of RTG-Based Power Systems”

China's Chang'e-4 mission made the first-ever landing on the far side of the Moon in 2019. (Credit: Xinhua/Alamy)

Moon Mission Failures, Or Why Are Lunar Landings So Hard?

Given the number of spacecraft (both crewed and uncrewed) that touched down on the Moon during the Space Race it’s sometimes hard to imagine why today, with all our modern technology, our remotely operated vehicles seem to have so much trouble not smashing themselves to bits on the regolith surface.

This is the focus of a recent article in Nature that explores what still makes soft landings on our closest celestial body so much harder than the tragic lithobraking most recently demonstrated by ispace’s M1 lander.

So far only three entities have successfully landed a craft on the Moon’s surface: the government-funded space agencies of the US, the USSR, and China. Of these, only China managed to do so on its first try, in 2013 (Chang’e-3), and again in 2019 on the far side of the Moon (Chang’e-4). The toughest part of a Moon landing is not getting near the Moon, but getting close to the surface without losing track of one’s position. Since there are no navigation satellites beyond any you put up before the landing, and plenty of Moon dust will be kicked up by the landing rocket engines, it can be tough to gauge one’s exact location and distance to the surface.

In the case of the ispace lander, it would appear that it tragically ran out of propellant before it could safely touch down, which is another major concern. Both the US and the USSR smashed a series of Moon landers into the surface before the first successful soft landing in 1966, which makes the manned touchdown by Apollo 11 in 1969 all the more impressive.

NASA’s Curiosity Mars Rover Gets A Major Software Upgrade

Although the Curiosity rover has been well out of the reach of human hands since it touched down on Mars’ surface in 2012, this doesn’t mean that it isn’t getting constant upgrades. Via its communication link with Earth it receives regular firmware updates, with the most recent one being the largest since 2016. In addition to code clean-up and small tweaks to message formats, this new release should make Curiosity smarter and help its wheels last longer.

The former helps to avoid the long idle times between drives, as unlike its younger sibling, Curiosity does not have a dedicated navigation computer for more autonomous driving. Although the update won’t make the 11-year-old rover as nimble as its sibling, it should shorten these pauses and allow for more driving and science to be done. Finally, the change to reduce wear on the wheels is fairly simple but should be rather effective: it reduces the amount of steering that Curiosity needs to do while driving in an arc.

With these changes in place, Curiosity should be all ready to receive its newest sibling as it arrives in a few years along with even more Mars helicopters.

A Microneedle Vaccine Patch Printer For Thermostable MRNA Vaccines

What if you could get vaccinated with the ease of putting on an adhesive bandage? This is the promise of microneedle patches (MNPs), which are essentially what they sound like. They could also have uses in diagnostics that might one day obviate the need for drawing blood. The one major issue with MNPs is their manufacturing, which has been a laborious and highly manual process. In a recent paper in Nature Biotechnology, researchers detail the construction and testing of an MNP printer, or microneedle vaccine printer (MVP), that can print dissolving polymer patches containing stabilized mRNA vaccine.

As usual, the mRNA strands are encapsulated in lipid nanoparticles, which are mixed with the soluble, biocompatible polymer. This mixture is then added to a mold and dried, after which it retains the microneedle structure of the mold. In tests on pig skin, the MNPs were capable of penetrating the skin and delivering the vaccine contained in the needles. The printed patches were shown to be shelf-stable for at least six months, which would make them ideal for vaccine distribution in areas where refrigeration and the like are problematic.

Using MNPs to deliver vaccines has previously been researched for, e.g., rotavirus and poliovirus vaccines. A 2021 study in Nature Biomedical Engineering also looked at the viability of using MNPs to rapidly sample protein biomarkers in interstitial fluid, which could make diagnostics for certain biomarkers as uncomplicated as putting on a patch, removing it, and examining it, eliminating the need for drawing blood or sampling large amounts of interstitial fluid for external analysis.

If the concept of the MVP and similar MNP printers can be commercialized, it might make it possible to significantly shorten the supply chain for vaccines in less developed regions, while also enabling diagnostics that today are very costly and cumbersome.