Motion Canvas Helps Get Your Point Across

Generating videos for projects can be difficult. Not only do you have to create the thing, but you also have to film the process and cut it together into a story a viewer can follow. Explaining complex topics to the viewer often involves a whiteboard of some sort, but as we all know, it’s not always a perfect solution. While working on a video game and making videos to document the progress, [Jacob] built a tool called Motion Canvas to help visualize topics like custom shaders. A few months ago, he decided to release it as an open source project.

Since then, it has seen quite a few GitHub forks, with a lively showcase on the community Discord. Looking at the docs, it is pretty easy to see why. The API lets you write procedural animations using the async semantics of TypeScript while still offering the GUI we expect from our video editors. In particular, the signal system allows dependencies to be defined between values. The system runs in Node, and the GUI runs locally in your browser while you edit the files in your terminal/notepad/IDE. CSS and Flexbox are available, as the video is rendered to a web canvas and then compiled into a video via FFmpeg. The documentation is quite extensive, and it’s a great example of a tool someone built to fit their own need going on to become something a little more fantastic.

This isn’t the first time we’ve discussed how to share your projects with the world, and we’ll freely admit we have a bit of bias toward encouraging folks to document their projects.

Continue reading “Motion Canvas Helps Get Your Point Across”

C++17’s Useful Features For Embedded Systems

Although the world of embedded software development languages seems to span everything from ASM and C89 all the way to MicroPython, there is a lot to be said for a happy medium between ease of development and features that make the software more robust without adding overhead or bloat to the final firmware image.

This is where C++ has many objective advantages over even C99, and as [Çağlayan Dökme] argues in a recent blog post, C++17 adds many developer creature comforts on top of C++98 and the more recent C++11 and C++14 standards.

First stepping back a generation (technically two, with C++20 also being a thing already), the addition of binary literals (e.g. 0b1010'1100) in C++14 and the expanded use of constexpr is addressed, with the latter foreshadowing C++17’s increased focus on compile-time optimizations. A new attribute in C++17 that is part of this is [[nodiscard]], which, when added before the return type of a function or method, requires the return value to be used in some manner, much like with functions in Ada (contrasted with procedures).
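
As a minimal sketch of the two features together (the status bits and register layout below are invented purely for illustration):

#include <cstdint>

// Hypothetical status bits written as C++14 binary literals with digit
// separators; the register layout here is invented purely for illustration.
constexpr std::uint8_t STATUS_READY = 0b0000'0001;
constexpr std::uint8_t STATUS_ERROR = 0b1000'0000;

// [[nodiscard]] makes the compiler warn if a caller silently drops the
// result, which is exactly the sort of bug that sneaks into error handling.
[[nodiscard]] constexpr bool is_ready(std::uint8_t status)
{
    return (status & STATUS_READY) != 0;
}

int main()
{
    std::uint8_t status = STATUS_READY | STATUS_ERROR;

    is_ready(status);                   // warning: return value discarded
    return is_ready(status) ? 0 : 1;    // fine: the result is used
}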

As [Çağlayan] notes, the biggest strength of compile-time checks is that they can save a lot of deploy-test-fix round-trips, with the total number of issues caught after deployment that could have been caught during compilation ideally being zero. Here C++17 streamlines the static_assert() mechanism and simplifies the use of if constexpr to instantiate code depending on compile-time conditions. Beyond compile-time optimizations there are a few other niceties, such as C++17 guaranteeing copy elision (return value optimization) when an object is returned directly, which is a welcome feature in hard real-time environments.
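
A quick sketch of those three items in practice, with the message-less static_assert(), an if constexpr branch selection, and a return that benefits from guaranteed copy elision (the types and sizes below are arbitrary, not anything from the original post):

#include <cstdint>
#include <cstring>

// C++17 allows static_assert without a message string.
static_assert(sizeof(int) >= 2);

struct Packet { std::uint8_t payload[64]; };

// C++17 guarantees the Packet is constructed directly in the caller's
// storage here (no copy), which used to be merely an optional optimization.
Packet make_packet() { return Packet{}; }

// A toy serializer: if constexpr discards the branch that does not apply
// to the type at compile time, so the unused path never reaches the binary.
template <typename T>
void serialize(const T& value, std::uint8_t* out)
{
    if constexpr (sizeof(T) == 1) {
        *out = static_cast<std::uint8_t>(value);     // single byte fast path
    } else {
        std::memcpy(out, &value, sizeof(T));         // generic path
    }
}

int main()
{
    Packet p = make_packet();
    serialize(std::uint8_t{0x42}, p.payload);        // single byte branch
    serialize(std::uint32_t{0x12345678}, p.payload); // memcpy branch
    return 0;
}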

With even today’s MCUs having enough grunt to run multi-threaded applications and potentially firmware compiled from a many-thousand-LoC codebase, picking a programming language that assists the developer with such an arduous task is very important. Ada remains the primary choice for high-reliability embedded platforms, but C++ along with C enjoys the most widespread (free) compiler support. Even if C++ isn’t supported on every single MCU out there (notably 8051-based and most PIC MCUs), whenever it is an option, it’s a pretty solid choice, especially with knowledge of these new language features.

The Glitch That Brought Down Japan’s Lunar Lander

When a computer crashes, it usually doesn’t leave debris. But when a computer happens to be descending towards the lunar surface and glitches out, that’s a very different story. Turns out that’s what happened on April 26th, as the Japanese Hakuto-R lunar lander made its mark on the Moon…by crashing into it. [Scott Manley] dove in to try and understand the software bug that caused an otherwise flawless mission to go splat.

The lander began the descent sequence as expected at 100 km above the surface. However, as it descended, the software’s altitude estimate drifted much lower than the real value, to the point that the lander believed it was at zero altitude while it was still about 5 km above the surface. Confused by the fact it hadn’t yet detected physical contact with the surface, the craft continued to slowly descend until it ran out of fuel and plunged to the surface.

Ultimately, it all came down to sensor fusion. The lander merges several noisy sensors, such as accelerometers, gyroscopes, and radar, into one cohesive source of truth. The craft passed over a particularly large cliff that caused the radar altimeter reading to suddenly spike up by 3 km. Like good filtering software should, the craft reasoned that the sensor must be getting spurious data and filtered it out. From that point on, it was estimating its altitude from its acceleration alone. As anyone who has tried to track an object through space using just gyros and accelerometers can attest, errors accumulate, and suddenly you’re not where you think you are.
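
We obviously don’t have the actual flight software, but a toy estimator like the sketch below shows how the trap works: once a reading falls outside the rejection gate, the filter coasts on integrated acceleration alone, and any bias compounds. All of the numbers and the gating logic here are invented for illustration.

#include <cmath>
#include <cstdio>

// Toy altitude estimator, not the real flight code: integrate vertical
// acceleration, and only accept radar readings that fall close to the
// current estimate. A cliff-sized jump in the radar return gets rejected
// as "spurious", after which the estimate is pure dead reckoning.
struct AltitudeEstimator {
    double altitude_m;     // current best guess
    double velocity_mps;   // vertical velocity (negative is descending)
    double gate_m;         // reject radar readings further off than this

    void predict(double accel_mps2, double dt_s) {
        velocity_mps += accel_mps2 * dt_s;
        altitude_m   += velocity_mps * dt_s;
    }

    void update_radar(double measured_m) {
        if (std::fabs(measured_m - altitude_m) > gate_m)
            return;                                        // looks bogus, ignore it
        altitude_m = 0.7 * altitude_m + 0.3 * measured_m;  // simple blend
    }
};

int main() {
    AltitudeEstimator est{10000.0, -20.0, 500.0};

    // Past the crater rim the radar suddenly reports about 3 km more
    // clearance than the estimator expects, so every reading is thrown
    // away and the estimate free-runs on the (slightly biased) IMU alone.
    for (int t = 0; t < 60; ++t) {
        est.predict(-0.05, 1.0);                 // unmodeled accelerometer bias
        est.update_radar(est.altitude_m + 3000.0);
        std::printf("t=%2d s  estimated altitude = %7.1f m\n", t, est.altitude_m);
    }
    return 0;
}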

We know what you’re thinking: surely they would have run landing simulations to catch errors like these? Ironically, they did; it’s just that after the simulations were run, the landing site for Hakuto-R was changed. Unfortunately, nobody thought to re-run the simulations, and now the Moon has a new lawn ornament.

We’ve previously written about why lunar landings are so hard. While knowing what led to the crash will hopefully prevent a similar fate for future missions, the reality is that remotely landing a robot on a dusty world without the help of GPS is fiendishly difficult and likely will be for some time.

Continue reading “The Glitch That Brought Down Japan’s Lunar Lander”

Simple Cubes Show Off AI-Driven Runtime Changes In VR

AR and VR developer [Skarredghost] got pretty excited about a virtual blue cube, and for a very good reason. It marked a successful prototype of an augmented reality experience in which the logic underlying the cube as a virtual object was changed by AI in response to verbal direction by the user. Saying “make it blue” did indeed turn the cube blue! (After a little thinking time, of course.)

It didn’t stop there, though, and the blue cube proof of concept led to a number of simple demos. The first shows off a row of cubes changing color from red to green in response to musical volume, then a bundle of cubes changes size in response to microphone volume, and cubes even start moving around in space.

The program accepts spoken input from the user, converts it to text, and sends it to a natural language AI model, which then creates the necessary modifications and loads them into the environment to make runtime changes in Unity. The workflow is a bit cumbersome and highlights many of the challenges involved, but it works, and that’s pretty nifty.
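
The project itself lives in Unity, but the shape of the pipeline is easy to sketch in any language. The stubs below are hypothetical stand-ins for the speech-to-text, language-model, and scene-update stages (nothing here mirrors the project’s actual code); they only make the data flow explicit.

#include <iostream>
#include <string>

// Hypothetical stand-ins for the real stages; none of this mirrors the
// project's actual Unity/C# code, it only shows how the data flows.
std::string speech_to_text(const std::string& audio_clip) {
    return "make it blue";                  // pretend transcription
}

std::string ask_language_model(const std::string& request) {
    return "cube.material.color = blue";    // pretend generated change
}

void apply_to_scene(const std::string& change) {
    std::cout << "applying at runtime: " << change << '\n';
}

int main() {
    // spoken input -> text -> language model -> runtime change in the scene
    apply_to_scene(ask_language_model(speech_to_text("mic_capture.wav")));
    return 0;
}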

The GitHub repository is here and a good demonstration video is embedded just under the page break. There’s also a video with a much more in-depth discussion of what’s going on and a frank exploration of the technical challenges.

If you’re interested in this direction, it seems [Skarredghost] has rounded up the relevant details. And if you have a prototype idea that isn’t necessarily AR or VR but would benefit from locally-run, AI-assisted speech recognition, this project has what you need.

Continue reading “Simple Cubes Show Off AI-Driven Runtime Changes In VR”

Network Programming

If you want a book on network programming, there are a few classic choices. [Comer’s] TCP/IP books are a great reference but are sometimes too low level. "Unix Network Programming" by [Stevens] is the usual choice, but it is getting a little long in the tooth as well. Now we have "Beej’s Guide to Network Programming Using Internet Sockets." While the title doesn’t exactly roll off the tongue, the content is right on and fresh. Best part? You can read it now in your browser or in PDF format.

All the topics you’d expect are there in ten chapters. Of course, there’s the obligatory description of what a socket is and the types of sockets you commonly encounter. Then there’s coverage of addressing and portability. There’s even a section on IPv6.
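
A few lines of the classic getaddrinfo()/socket()/connect() dance give a good feel for the territory the early chapters cover. Here is a minimal TCP client sketch in that spirit, with the host and port as placeholders and error handling pared down:

#include <cstdio>
#include <cstring>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

// A minimal TCP client: getaddrinfo() handles addressing (IPv4 or IPv6
// alike), then it is just socket(), connect(), send() and recv().
int main()
{
    addrinfo hints{};
    hints.ai_family   = AF_UNSPEC;      // let the resolver pick v4 or v6
    hints.ai_socktype = SOCK_STREAM;    // TCP

    addrinfo* res = nullptr;
    int rc = getaddrinfo("example.com", "80", &hints, &res);
    if (rc != 0) {
        std::fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        std::perror("socket/connect");
        freeaddrinfo(res);
        return 1;
    }
    freeaddrinfo(res);

    const char request[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    send(fd, request, std::strlen(request), 0);

    char buf[512];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        std::printf("%s", buf);
    }

    close(fd);
    return 0;
}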

Continue reading “Network Programming”

A Literate Assembly Language

A recent edition of [Babbage’s] The Chip Letter discusses the obscurity of assembly language. He points out, and I think correctly, that assembly language is more often read than written, yet nearly every assembly language is hampered by obscurity left over from the days when punched cards had 80 columns and a six-letter symbol was all you could manage in the limited memory space of the computer. For example, without looking it up, what does the ARM instruction FJCVTZS do? The instruction’s full name is Floating-point Javascript Convert to Signed Fixed-point Rounding Towards Zero. Not super helpful.

But it did occur to me that nothing is stopping you from writing a literate assembler that is made to be easier to read. First, most C compilers will accept some sort of asm statement, and you could probably manage that with compile-time string construction and macros. However, I think there is a better possibility.
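
To make the idea concrete, here is a rough sketch of the macro approach using GCC-style extended inline assembly on x86-64. The deliberately long-winded macro names are the point; the details would of course change per architecture and compiler.

#include <cstdint>
#include <cstdio>

// A "literate" wrapper around GCC-style inline assembly (x86-64 here).
// The call site reads like prose even though each macro expands to a
// single instruction.
#define ADD_64BIT_REGISTERS(dest, src) \
    asm("addq %[s], %[d]" : [d] "+r"(dest) : [s] "r"(src))

// Undefined result if value is zero, just like the raw BSF instruction.
#define COUNT_TRAILING_ZERO_BITS(result, value) \
    asm("bsfq %[v], %[r]" : [r] "=r"(result) : [v] "r"(value))

int main()
{
    std::uint64_t a = 40, b = 2;
    ADD_64BIT_REGISTERS(a, b);              // a += b, the readable way

    std::uint64_t zeros = 0, x = 0b1000;
    COUNT_TRAILING_ZERO_BITS(zeros, x);     // 3 trailing zero bits

    std::printf("a = %llu, trailing zeros = %llu\n",
                (unsigned long long)a, (unsigned long long)zeros);
    return 0;
}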

Reuse, Recycle

Since I sometimes develop new CPU architectures, I have a universal cross assembler that is, honestly, an ugly hack, but it works quite well. I’ve talked about it before, but if you don’t want to read the whole post about it, it uses some simple tricks to convert standard-looking assembly language formats into C code that is then compiled. Executing the resulting program outputs the desired machine language into a desired file format. It is very easy to set up, and in the middle, there’s a nice C program that emits machine code. It is not much more readable than the raw assembly, but you shouldn’t have to see it. But what if we started the process there and made the format readable?

At the heart of the system is a C program that lives in soloasm.c. It handles command line options and output file generation. It calls an external function, genasm(), with a single integer argument. When that argument is set to 1, it indicates the assembler is in its first pass, and you only need to fill in label values with real numbers. When it is 2, the function should actually fill in the array that holds the code.
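
Just to make that two-pass contract concrete, here is a purely illustrative sketch of a hand-written genasm() for an imaginary machine. The array and helper names are invented (the real plumbing lives in soloasm.c and soloasm.h), and the little main() only stands in for the driving that soloasm.c normally does.

#include <cstdint>
#include <cstdio>

// Purely illustrative: the array and helper names are invented, not the
// real ones from soloasm.c / soloasm.h. The only part taken from the text
// above is the contract: genasm(1) resolves labels, genasm(2) fills in code.
static std::uint16_t code[256];   // stand-in for the output memory image
static unsigned pc;               // current output address
static unsigned done_label;       // a forward-referenced label

static void emit(std::uint16_t word, int pass)
{
    if (pass == 2)
        code[pc] = word;          // pass 1 only advances the location counter
    ++pc;
}

extern "C" void genasm(int pass)
{
    pc = 0;
    emit(0x100A, pass);                        // LOAD r0, #10 (made-up encoding)
    emit(0x4000 | (done_label & 0xFF), pass);  // JZ done: a forward reference,
                                               // stale on pass 1, right on pass 2
    emit(0x3001, pass);                        // SUB r0, r1
    if (pass == 1) done_label = pc;            // "done:" resolves here on pass 1
    emit(0x0000, pass);                        // HALT
}

int main()   // in the real system soloasm.c does this driving and file output
{
    genasm(1);
    genasm(2);
    for (unsigned i = 0; i < pc; ++i)
        std::printf("%04X: %04X\n", i, (unsigned)code[i]);
    return 0;
}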

That array is defined in the __solo_info structure (soloasm.h). It includes the size of the memory, a pointer to the code, the processor’s word size, the beginning and end addresses, and an error flag. Normally, the system converts your assembly language input into a bunch of function calls it writes inside the genasm function. But in this case, I want to reuse soloasm.c to create a literate assembly language.

Continue reading “A Literate Assembly Language”

Leaked Internal Google Document Claims Open Source AI Will Outcompete Google And OpenAI

In the world of large language models (LLMs), the focus has for the longest time been on proprietary technologies from companies such as OpenAI (GPT-3 & 4, ChatGPT, etc.), as well as, increasingly, everyone from Google to Meta and Microsoft. What has remained underexposed in this whole discussion about which LLM does what better are the efforts of hobbyists, unaffiliated researchers, and everyone else you may find in open source LLM projects. According to a leaked document from a researcher at Google (anonymous, but apparently verified), Google is very worried that open source LLMs will wipe the floor with both Google’s and OpenAI’s efforts.

According to the document, after the open source community got their hands on the leaked LLaMA foundation model, motivated and highly knowledgeable individuals set to work to take a fairly basic model to new levels where it could begin to compete with the offerings from OpenAI and Google. The major innovations concern scaling, allowing these LLMs to run on far less powerful systems (like a laptop or even a smartphone).

An important factor here is low-rank adaptation (LoRA), which massively cuts down the effort and resources required to train a model. Ultimately, as the document phrases it, Google and, by extension, OpenAI have no ‘secret sauce’ that makes their approaches better than anything the wider community can come up with. The document also notes that Meta has essentially won out here by having its LLM leak, as the open source community has been improving on Meta’s foundation model, allowing Meta to benefit from those improvements in its own products.

The dire prediction is thus that in the end the proprietary LLMs from Google, OpenAI, and others will cease to be relevant, as the open source community will have steamrolled them into fine, digital dust. Whether it will indeed play out this way remains to be seen, but things are not looking up for proprietary LLMs.

(Thanks to [Mike Szczys] for the tip)