Natural Language AI In Your Next Project? It’s Easier Than You Think

Want your next project to trash talk? Dynamically rewrite boring log messages as sci-fi technobabble? Happily (or grudgingly) answer questions? That sort of thing and more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably a lot easier to use than you might think.

In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this ability to your next project. It isn’t free in the long run (although initial use is free on signup), but for personal projects the costs will be very small.

Basic Concepts

OpenAI has an API that provides access to GPT-3, a machine learning model with the ability to perform just about any task that involves understanding or generating natural-sounding language.

OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that is done, the doors are open.

Creating an account also gives one a number of free credits that can be used to experiment with ideas. Once the free trial is used up or expires, using the API will cost money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing is from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credits, of which I have so far barely managed to spend 5%.
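To give a sense of just how simple the plumbing is, below is a minimal sketch in Python that calls the completions endpoint directly with the requests library. The API key, model name, and prompt are placeholders rather than recommendations; check OpenAI’s documentation for the models available on your account.

```python
# A minimal sketch of a GPT-3 completion request using only the requests
# library. The API key, model name, and prompt are placeholders; consult
# OpenAI's documentation for the models available to your account.
import requests

API_KEY = "sk-your-key-here"  # issued when you create an account

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "text-davinci-002",  # placeholder model name
        "prompt": "Rewrite this log line as sci-fi technobabble: 'disk full'",
        "max_tokens": 60,
        "temperature": 0.8,  # higher values give more inventive output
    },
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

# The generated text, plus the token count you'll actually be billed for.
print(data["choices"][0]["text"].strip())
print("tokens used:", data["usage"]["total_tokens"])
```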

Let’s take a closer look at how it works, and what can be done with it!

Continue reading “Natural Language AI In Your Next Project? It’s Easier Than You Think”

When Hams Helped Polar Researchers Come In From The Cold

We always enjoy [The History Guy]’s videos, although many of them aren’t much about technology. However, when he does cover tech topics, he does it well, and his recent video on how ham radio operators assisted in Operation Deep Freeze is a great example. You can watch the video below.

The backdrop is the International Geophysical Year (IGY), in which many nations cooperated to learn more about the Earth. In particular, from 1957 to 1958 there was a push to learn more about the last unexplored corner of our planet: Antarctica. Several of the permanent bases on the icy continent today were started during the IGY.

It’s hard for modern audiences to appreciate the state of personal communication in 1957. There were no cell phones, and if you are thinking about satellites, don’t forget that Sputnik didn’t launch until late 1957, so that wasn’t going to happen either.

Operation Deep Freeze had ten U.S. Navy vessels that brought scientists, planes, and Seabees (slang for members of the Naval Construction Battalion) — about 1,800 people in all over several years, culminating in the IGY. Of course, the Navy had radio capabilities, but it wasn’t like the Navy to let you just call home to chat. Not to mention that once the Navy ships went home, a little more than 100 people were left behind for each winter. That’s where ham radio operators came in.

Hams would provide what is called a phone patch (connecting a radio contact into the telephone network) for the people stationed in Antarctica. Some hams also sent radiograms to and from the crew’s families. One teen named Jules was especially dedicated to making connections to Antarctica. We can’t verify it, but one commenter says that Jules was so instrumental in connecting the commenter’s father in Antarctica with his fiancée that when the couple married, Jules was their best man.

Jules and his brother dedicated themselves to keeping a morale pipeline open from New Jersey to the frozen stations. He figures prominently in many of the written accounts from people who wintered at the nascent bases. Apparently, many of the men even traveled to New Jersey later to visit Jules. What happened to him? Watch the end of the video and you’ll find out.

While being a ham today doesn’t offer this kind of excitement, hams still contribute to science. Want to get in on the action? [Dan Maloney] can tell you how to get started on the cheap.

Continue reading “When Hams Helped Polar Researchers Come In From The Cold”

Kindle, EPUB, And Amazon’s Love Of Reinventing Wheels

Late last month, a post from the relatively obscure Good e-Reader claimed that Amazon would finally allow the Kindle to read EPUB files. The story was picked up by all the major tech sites, and for a time, there was much rejoicing. After all, it’s a feature owners have been asking for since the Kindle was first released in 2007. But rather than supporting the open eBook format, Amazon has always insisted on coming up with their own proprietary formats to use on their readers. Accordingly, many users have turned to third-party programs which can reliably convert their personal libraries over to whatever Amazon format their particular Kindle is most compatible with.
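As an illustration, here’s a rough sketch of what that sort of bulk conversion can look like in Python, shelling out to Calibre’s ebook-convert tool. The library path is hypothetical, and Calibre must already be installed and on your PATH.

```python
# A minimal sketch: batch-converting an EPUB library to AZW3 by shelling
# out to Calibre's ebook-convert tool. Assumes Calibre is installed and on
# the PATH; the library directory here is a made-up example.
import subprocess
from pathlib import Path

library = Path("~/ebooks").expanduser()  # hypothetical library location

for epub in sorted(library.glob("*.epub")):
    azw3 = epub.with_suffix(".azw3")
    if azw3.exists():
        continue  # skip books converted on a previous run
    # ebook-convert infers the output format from the file extension
    subprocess.run(["ebook-convert", str(epub), str(azw3)], check=True)
    print(f"converted {epub.name} -> {azw3.name}")
```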

Native support for EPUB would make using the Kindle a lot less of a hassle for many folks, but alas, it was not to be. It wasn’t long before the original post was updated to clarify that Amazon had simply added EPUB support to their Send to Kindle service. Granted, this is still an improvement, as it represents a relatively low-effort way to get the open format files onto your personal device; but files sent through the service are converted to Amazon’s KF8/AZW3 format, and the result may not always be what you expect. At the same time, the Send to Kindle documentation noted that support for AZW and MOBI files would be removed later this year, as the older formats aren’t compatible with all the features of the latest Kindle models.

If you think this is a lot of unnecessary confusion just to get plain-text files to display on the world’s most popular ereader, you aren’t alone. Users shouldn’t have to wade through an alphabet soup of oddball file formats when there’s already an accepted industry standard in EPUB. But given that it’s the reality when using one of Amazon’s readers, this seems as good a time as any for a brief rundown of the different ebook formats, and a look at how we got into this mess in the first place.

Continue reading “Kindle, EPUB, And Amazon’s Love Of Reinventing Wheels”

Things Are Getting Rusty In Kernel Land

There is gathering momentum around the idea of adding Rust to the Linux kernel. Why exactly is that a big deal, and what does it mean for the rest of us? The Linux kernel has been just C and assembly for its entire lifetime. A big project like the kernel has a great deal of shared tooling built around making its languages work, so adding another one is quite an undertaking. There’s also the project culture that has developed around the language choice. So why exactly are the grey-beards of kernel development even entertaining the idea of adding Rust? To answer in a single line: it’s because C was designed in 1971 to run on the minicomputers at Bell Labs. If you want to shoot yourself in the foot, C will hand you the loaded firearm.

On the other hand, if you want to write a kernel, C is a great language for doing low-level coding. Direct memory access? Yep. Inline assembly? Sure. Runs directly on the metal, with no garbage collection or virtual machines in the way? Absolutely. But all the things that make C great for kernel programming also make C dangerous for kernel programming.

Now I hear your collective keyboards clacking in consternation: “It’s possible to write safe C code!” Yes, it is possible. It’s just very easy to mess up, and when you mess up in a kernel, you have security vulnerabilities. There are also some things that are objectively terrible about C, like undefined behavior. C compilers do their best to do the right thing with cursed code like i++ + i++; or a[i] = i++;. But that code is almost certainly not going to do what you want, and even worse, it may sometimes do the right thing, hiding the bug until it bites.

Rust seems to be gaining popularity. There are some ambitious projects out there, like rewriting coreutils in Rust. Many other standard applications are getting a Rust rewrite. It was fairly inevitable that Rust developers would start to ask: could we invade the kernel next? The idea was pitched for a Linux Plumbers Conference, and the mailing list response was cautiously optimistic. If Rust could be added without breaking things, and without losing the very things that make Rust useful, then yes, it would be interesting. Continue reading “Things Are Getting Rusty In Kernel Land”

Silence Of The IPods: Reflecting On The Ever-Shifting Landscape Of Personal Media Consumption

On October 23rd of 2001, the first Apple iPod was launched. It wasn’t the first Personal Media Player (PMP), but as with many things Apple, the iPod would go on to set the benchmark for what a PMP should do, as well as what one should look like. While few today remember PMP trailblazers like Diamond’s Rio devices, it’s hard to find anyone who doesn’t know what an ‘iPod’ is.

Even as Microsoft, Sony, and others tried to steal the PMP crown, the iPod remained the undisputed market leader, all the while gaining more and more features such as video playback and a touch display. Yet despite this success, in 2017 Apple discontinued its audio-only iPods (the Nano and Shuffle), and as of May 10th, 2022, the Apple iPod Touch was discontinued as well. This marks the end of Apple’s foray into the PMP market, and makes one wonder whether the PMP market of the late 90s is gone, or has merely transformed into something else.

After all, with everyone and their pet hamster having a smartphone nowadays, what need is there for a portable device that can ‘only’ play back audio and perhaps video?

Continue reading “Silence Of The IPods: Reflecting On The Ever-Shifting Landscape Of Personal Media Consumption”

NVIDIA Releases Drivers With Openness Flavor

This year, we’ve already seen sizeable leaks of NVIDIA source code, and a release of open-source drivers for NVIDIA Tegra. Now it seems NVIDIA has decided to amp things up, and has just released open-source GPU kernel modules for Linux. The GitHub repository, named open-gpu-kernel-modules, has people rejoicing, and we are already testing the code out, making memes, and speculating about the future. The driver is currently described as experimental and only “production-ready” for datacenter cards – but you can already try it out!

The Driver’s Present State

Of course, there’s nuance. This is new code, unrelated to the well-known proprietary driver, and it will only work on cards starting from the RTX 2000 and Quadro RTX series (aka Turing and onward). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project is that a good portion of the features that AMD and Intel drivers implement in the Linux kernel are instead provided by a binary blob running inside the GPU. This blob runs on the GSP, a RISC-V core that’s only present on Turing GPUs and newer – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty!

That aside, this driver already provides more coherent integration into the Linux kernel, with massive benefits that will only increase going forward. Not everything’s open yet – NVIDIA’s userspace libraries and its OpenGL, Vulkan, OpenCL, and CUDA drivers remain closed for now. The same goes for the old NVIDIA proprietary driver, which, I’d guess, will be left to rot – fitting, as “leaving to rot” is what that driver has previously done to generations of old but perfectly usable cards. Continue reading “NVIDIA Releases Drivers With Openness Flavor”

With Rocket Lab’s Daring Midair Catch, Reusable Rockets Go Mainstream

We’ve all marveled at the videos of SpaceX rockets returning to their point of origin and landing on their spindly deployable legs, looking for all the world like something pulled from a 1950s science fiction film. On countless occasions founder Elon Musk and president Gwynne Shotwell have extolled the virtues of reusable rockets, such as lower operating cost and the higher reliability that comes with each booster having a flight heritage. At this point, even NASA feels confident enough to fly their missions and astronauts on reused SpaceX hardware.

Even so, SpaceX’s reusability program has remained an outlier, as all other launch providers have stayed the course and continue to offer only expendable booster rockets. Competitors such as United Launch Alliance and Blue Origin have teased varying degrees of reusability for their future vehicles, but to date have nothing to show for it beyond some flashy computer-generated imagery. All the while SpaceX continues to streamline their process, reducing turnaround time and refurbishment costs with each successful reuse of a Falcon 9 booster.

But that changed earlier this month, when a helicopter successfully caught one of Rocket Lab’s Electron boosters in midair as it fell back to Earth under a parachute. While calling the two companies outright competitors might be a stretch given the relative sizes and capabilities of their boosters, SpaceX finally has a sparring partner when it comes to the science of reusability. The Falcon 9 has already smashed the Space Shuttle’s record turnaround time, but perhaps Rocket Lab will be the first to achieve Elon Musk’s stated goal of re-flying a rocket within 24 hours of its recovery.

Continue reading “With Rocket Lab’s Daring Midair Catch, Reusable Rockets Go Mainstream”