Quantum Computing: The First Taste Is Free

There are a few ways to access real quantum computers — often for free — over the Internet. However, most of these are previous-generation machines with limited capabilities: great for learning, perhaps, but not something you could do much practical work with. Xanadu, on the other hand, has announced what it claims is a computer capable of reaching quantum advantage, free for anyone to use within limits. Borealis — the computer in question — uses photonic states and can work with up to 216 squeezed-state qubits.

The company is selling time on the computer, but the free tier includes 5 million shots on Borealis and 10 million shots on an earlier series of quantum computers. You can also buy pay-as-you-go service for about $100 per million shots on Borealis.

While a few million shots may sound like a lot, we noticed that the quickstart demo consumes 10,000 shots, and that’s presumably something simple. Still, that’s about 500 runs of it on Borealis — not bad for free time on a state-of-the-art quantum computer. You’ll want to debug with a simulator, though.

We presume the developers are Beatles fans, given that you use software called PennyLane and Strawberry Fields to access the machines. Jobs are controlled from Python, and there is a cloud simulator to save your shots.
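To give a flavor of the workflow, here is a minimal sketch of a continuous-variable circuit run on PennyLane’s built-in Gaussian simulator rather than on Borealis itself. The gates and parameters are placeholders of our own choosing, not a real Borealis job, and running it costs no cloud shots.

```python
import numpy as np
import pennylane as qml

# Local Gaussian simulator: debug here, spend your Borealis shots later.
dev = qml.device("default.gaussian", wires=2)

@qml.qnode(dev)
def circuit(r):
    qml.Squeezing(r, 0.0, wires=0)                  # squeezed-state input
    qml.Beamsplitter(np.pi / 4, 0.0, wires=[0, 1])  # mix the two modes
    return qml.expval(qml.NumberOperator(0))        # mean photon number in mode 0

print(circuit(0.5))
```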

We won’t pretend to understand everything about squeezed-light qubits and the Borealis architecture. But you can get some general practice from our series on quantum computing, and there are a few lectures around, including one aimed at different levels of experience.

Continue reading “Quantum Computing: The First Taste Is Free”

Optimizing Linux Pipes

In CPU design, there is Amdahl’s law. Simply put, it means that if some process contributes 10% of your execution time, optimizing it can’t improve things by more than 10%. Common sense, really, but it illustrates the importance of knowing how fast or slow the various parts of your system are. So how fast are Linux pipes? That’s a good question, and one that [Mazzo] sets out to answer.
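In code, that bound works out as follows; this is just the textbook formula with illustrative numbers, not anything from [Mazzo]’s write-up.

```python
# Amdahl's law: speeding up a fraction p of the runtime by a factor s
# bounds the overall speedup at 1 / ((1 - p) + p / s).
def amdahl_speedup(p, s):
    return 1 / ((1 - p) + p / s)

# Even an infinitely fast version of a 10% slice only gets you ~1.11x overall,
# i.e. you can never claw back more than that 10% of the runtime.
print(amdahl_speedup(0.10, 1e9))
```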

The inspiration was a highly-optimized fizzbuzz program that clocked in at over 36 GB/s on his laptop. Is that a common speed? Nope. A simple program using pipes on the same machine turned in not quite 4 GB/s. What accounts for the difference?
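If you want a rough feel for the numbers on your own machine, a few lines of Python can shove bytes through a pipe and time it. This is a crude micro-benchmark of our own, not [Mazzo]’s methodology, and the result will swing wildly with buffer size, kernel version, and hardware.

```python
import os
import time

CHUNK = 1 << 20      # 1 MiB per write
ITERS = 4096         # ~4 GiB total

r, w = os.pipe()
if os.fork() == 0:                    # child: the write side
    os.close(r)
    buf = bytes(CHUNK)
    for _ in range(ITERS):
        os.write(w, buf)
    os._exit(0)

os.close(w)                           # parent: the read side
start = time.perf_counter()
total = 0
while chunk := os.read(r, CHUNK):     # EOF once the child exits
    total += len(chunk)
os.wait()
print(f"{total / (time.perf_counter() - start) / 1e9:.2f} GB/s")
```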

Continue reading “Optimizing Linux Pipes”

Building Faster Rsync From Scratch In Go

For a quick file transfer between two computers, SCP is a fine program to use. For more complex, large, or regular backups, however, the go-to tool is rsync. It’s faster, more efficient, and usable in a wider range of circumstances. For all its perks, [Michael Stapelberg] felt that it had one major weakness: it is a tool written in C. [Michael] is philosophically opposed to programs written in C, so he set out to implement rsync from scratch in Go instead.

[Michael]’s path to deciding to tackle this project is a complicated one. His ISP upgraded his internet connection to 25 Gbit/s recently, which means that his custom router was the bottleneck in his network. To solve that problem he migrated his router to a PC with several 25 Gbit/s network cards. To take full advantage of the speed now theoretically available, he began using a tool called gokrazy, which turns applications written in Go into their own appliance. That means that instead of installing a full Linux distribution to handle specific tasks (like a router, for example), the only thing loaded on the computer is essentially the Linux kernel, the Go compiler and libraries, and then the Go application itself.

With new router hardware capable of supporting these speeds, and only software written in Go running on it, the last step was to build rsync to handle the backup tasks on his network. That meant rsync itself needed to be written from scratch in Go. Once [Michael] completed this final task, he found that his implementation is actually much faster than the version built in C, thanks to the modern features of the Go language and the fact that his router isn’t running all of the cruft associated with a standard Linux distribution.

For a software project of this scope, we find [Michael]’s step-by-step process worth taking note of for any problem any of us attempts to tackle. Not only that, reimplementing a foundational tool like rsync is an involved task on its own, let alone doing it simply to push network speeds beyond what most of us would already consider blazingly fast. We’re leaving out a ton of details on this build, so we definitely recommend checking out his talk in the video below.

Thanks to [sarinkhan] for the tip!

Continue reading “Building Faster Rsync From Scratch In Go”

Making The Case For COBOL

Perhaps rather unexpectedly, on the 14th of March this year the GCC mailing list received an announcement regarding the release of the first-ever COBOL front-end for the GCC compiler. For the uninitiated, COBOL saw its first release in 1959, which at 63 years makes it one of the oldest programming languages still in regular use. Its persistence is mostly due to its focus, from the very beginning, on being a transaction-oriented, domain-specific language (DSL).

Its acronym stands for Common Business-Oriented Language, which clearly references the domain it targets. Even in the current COBOL 2014 standard it is still essentially the same, primarily transaction-oriented language, albeit with added support for structured, procedural, and object-oriented programming styles. Deriving most of its core from Admiral Grace Hopper’s FLOW-MATIC language, it allows the business logic one would encounter at financial institutions or other businesses to be described efficiently, in clear English.

Unlike the older GnuCOBOL project – which translates COBOL to C – the new GCC-COBOL front-end does away with that intermediate step and compiles COBOL source code directly into binary code. All of which may raise the question of why an entire man-year was invested in this effort for a language which has been declared ‘dead’ for probably at least half of its 63-year existence.

Does it make sense to learn or even use COBOL today? Do we need a new COBOL compiler?

Continue reading “Making The Case For COBOL”

AI Attempts Converting Python Code To C++

[Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and used it to play with automagically converting Python into C++ instead. The tool isn’t really intended to produce robust code conversions, but as far as experiments go, it’s pretty neat.

The program works by reading a Python script as an input file, setting up a few parameters, then making a request to OpenAI’s Codex API for the conversion. It then attempts to compile the result. If compilation is successful, then hopefully the resulting executable actually works the same way the input file did. If not? Well, learning is fun, too. If you give it a shot, maybe start simple and don’t throw it too many curveballs.
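In outline, the flow looks something like the sketch below. This is a hedged approximation rather than [Alexander]’s actual code: it assumes the legacy openai Completion API that Codex shipped with, a Codex-style model name, an OPENAI_API_KEY in your environment, and g++ on your path.

```python
import subprocess
import openai  # legacy Completion API; reads OPENAI_API_KEY from the environment

with open("example.py") as f:
    python_src = f.read()

# Ask the model to continue a "here's Python, now C++" style prompt.
response = openai.Completion.create(
    model="code-davinci-002",          # model name is an assumption
    prompt=f"# Python\n{python_src}\n\n# Equivalent C++\n",
    max_tokens=1024,
    temperature=0,
    stop=["# Python"],
)
cpp_src = response.choices[0].text

with open("example.cpp", "w") as f:
    f.write(cpp_src)

# The acid test: does the result even compile?
result = subprocess.run(["g++", "-std=c++17", "example.cpp", "-o", "example"])
print("compiled OK" if result.returncode == 0 else "compilation failed")
```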

Codex is an interesting idea, and this isn’t the first experiment we’ve seen that plays with the concept of using machine learning in this way. We’ve seen a project that generates Linux commands based on a verbal description, and our own [Maya Posch] took a close look at GitHub Copilot, a project high on promise and concept, but — at least at the time — considerably less so when it came to actual practicality or usefulness.

Linux And C In The Browser

There was a time when trying to learn to write low-level driver or kernel code was hard. You really needed two machines: one to work with, and one to screw up over and over again until you got it right. These days you can just spin up a virtual machine and roll it back every time you totally screw up. Much easier! We don’t think it is all that practical, but [nsommer] has an interesting post about loading up a C compiler and compiling Linux for a virtual machine. What’s different? Oh, the virtual machine is in your browser.

The v86 CPU emulator runs in the browser and looks like a Pentium III computer with the usual hardware. You might think it would be slow, and it certainly isn’t going to be as fast as a rocket, but it does translate machine code into WebAssembly, so performance isn’t as bad as you might expect.

The post goes into detail about how to build a simple web page that hosts v86. Once you cross-compile the kernel, you can boot the machine up virtually. The other interesting part is the addition of tcc, a pretty capable C compiler that is much smaller and faster than the very traditional gcc.

The tcc build is tricky because the normal build process compiles the compiler and then uses that same compiler to build the default libraries. When cross-compiling, this doesn’t work well, because the library you want for the host compile is different from the library you want to target in the second pass. You’ll see how to work around that in the post. The post goes on to cover remote debugging and even gets QEMU into the mix, though debugging inside v86 itself doesn’t seem to work so far. More posts on the topic are promised.

Honestly, this is one of those things like teaching a chicken to play checkers. It can be done, there’s little practical value, but it is still something to see. On the other hand, if you spend the weekend working through this, your next Linux porting project ought to seem easy by comparison.

Amazing what you can pull off with WebAssembly. If you need a quick introduction, check this one out from [Ben James].

Why Get Dressed When There Are Software Pants?

With so many of us working from home over the last two years, it’s really become apparent that people generally dislike sitting all day with pants on. Until such a utopian time when all clothing is considered unisex, and just as many men as women are kicking it in loose, flowing skirts and dresses, you may want to remember to actually wear something on your lower half, uncomfortable though pants may be. But there is another way — you could build [Everything Is Hacked]’s pants filter and continue to be a chaos agent. Check out the video after the break.

These pants go as wide as you please.

That’s right, whether you forgo pants or just forget to dress yourself below the equator, the pants filter has you covered. It works like you might expect — machine learning tracks body landmarks and posture to figure out where your NSFW region is and keep it under wraps.

By default, it blurs everything below the belt, or you can draw on pants if you’re inclined to be in revealing tighty-whities and prefer more coverage. You can adjust the width of the pants to cover the covid-19 you may have put on since 2020, and even change the pants to match your shirt.
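Something in the same spirit can be thrown together with off-the-shelf pose estimation. The sketch below is our own rough approximation, not [Everything Is Hacked]’s code: it uses MediaPipe to find the hips in a webcam frame and blurs everything below them.

```python
import cv2
import mediapipe as mp

# Rough approximation: find the hips with MediaPipe Pose and blur below them.
mp_pose = mp.solutions.pose
pose = mp_pose.Pose()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        h = frame.shape[0]
        lm = results.pose_landmarks.landmark
        waist = min(lm[mp_pose.PoseLandmark.LEFT_HIP].y,
                    lm[mp_pose.PoseLandmark.RIGHT_HIP].y)
        y = max(0, min(h - 1, int(waist * h)))     # clamp to the frame
        frame[y:] = cv2.GaussianBlur(frame[y:], (51, 51), 0)
    cv2.imshow("software pants", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```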

We love that [Everything Is Hacked] had the, um, gumption to test the pants filter in public at what appears to be a local taco joint. After the first few rounds of weird looks, he switched to a pants moustache to save face.

Want to add even more fun to those boring video calls? Try connecting up some vintage hardware, or install a pull chain to end those sessions with a gesture that won’t get you fired.

Continue reading “Why Get Dressed When There Are Software Pants?”