Too Much Git? Try Gitless

Git has been a powerful tool for software development and version control since the mid ’00s, and it has only grown in popularity since then. Originally built by none other than Linus Torvalds to handle Linux kernel development, it has branched out for use with all kinds of other projects. That said, it is not the easiest thing to learn, with tons of options, abstract ideas, and non-linear workflows to keep track of. So if you’re new to the system or don’t need its vast swath of features, you might want to try out an alternative like Gitless.

Since the original Git is open source, it’s free to modify and use as any user sees fit, and there are plenty of derivatives available. This one aims to simplify many of the features found in the original Git, implementing a file-tracking system which somewhat automates commits. It also includes a simplified branching system, making it easier to switch between branches and keep better track of everything happening in a project. The command-line interface is simplified as well, and the entire system is backwards-compatible with Git, which means that if you find yourself needing some of the more advanced tools, it’s possible to switch between the two with relative ease.

For those of us keeping track of our own software projects, who don’t necessarily need the full feature set that the original Git has to offer, this could be a powerful tool that eases the steep learning curve Git is known for. It’s definitely a system worth diving into, though, regardless of which implementation you choose. It’s an effective tool for everything from complex, professional projects to small hobby projects on an Arduino.

The First Search Engines, Built By Librarians

Before the Internet became the advertisement generator we know and love today, interspersed with interesting information here and there, it was a network of computers spread largely among various universities. This was even before the World Wide Web and HTML, which means that the people using these proto-networks, mostly researchers and other academics, had to build things we might take for granted from the ground up. Among them was one of the first search engines, built by the librarians who were cataloging all of the research in their universities and using their relatively primitive computer networks to store and retrieve all of this information.

This search engine was called SUPARS, the Syracuse University Psychological Abstracts Retrieval Service. It was originally built for psychology research papers, and perhaps unsurprisingly, the psychologists at the university also used this new system as the basis for understanding how humans would interact with computers. This was the 1970s, after all, and most people had never used a computer, so documenting how they used the search engine led to some important breakthroughs in the way we think about the best ways of designing systems like these.

The search engine was technically revolutionary for its time as well. It was among the first to allow text within documents to be searched, and it saved previous searches for users and researchers to access and learn from. The experiment was driven by the need to support researchers in a future where reference librarians would have to cope with ever more information in their libraries, and it highlighted the challenges of vocabulary control in free-text searching.

The visionaries behind SUPARS recognized the changing landscape of research and designed for a future that would rely on networked computer systems. Their contributions expanded our understanding of how technology could shape human communication and effectiveness, and while they might not have imagined the world we live in now, they certainly paved the way for the advances that led to widespread adoption of such systems even outside a university setting. There were some false starts along that path, though.

What Do You Want In A Programming Assistant?

The Propellerheads released a song in 1998 entitled “History Repeating.” If you don’t know it, the lyrics include: “They say the next big thing is here. That the revolution’s near. But to me, it seems quite clear. That it’s all just a little bit of history repeating.” The next big thing today seems to be the AI chatbots. We’ve heard every opinion from the “revolutionize everything” camp to the “destroy everything” camp. But, really, isn’t it a bit of history repeating itself? We get new tech. Some oversell it. Some fear it. Then, in the end, it becomes part of the ordinary landscape and seems unremarkable in the light of the new next big thing. Dynamite, the steam engine, cars, TV, and the Internet were all predicted to “ruin everything” at some point in the past.

History really does repeat itself. After all, when X-rays were discovered, they were claimed to cure pneumonia and other infections, among other supposed miracles. Those claims didn’t pan out, but we still use X-rays for the things they are good at. Calculators were going to ruin math classes. There are plenty of other examples.

This came to mind because a recent post from the ACM takes the contrary view that chatbots aren’t able to help real programmers. We’ve also seen that, in limited ways, maybe they can. We suspect it is like getting a new, larger monitor. At first, it seems huge. But in a week, it is just the normal monitor, and your old one, which had been perfectly adequate, seems tiny.

But we think there’s a larger point here. Maybe the chatbots will help programmers. Maybe they won’t. But clearly, programmers want some kind of help. We just aren’t sure what kind of help it is. Do we really want Copilot to write our code for us? Do we want to ask Bard or ChatGPT/Bing for the best way to balance a B-tree? Asking AI to do static code analysis seems to work pretty well.

So maybe your path to fame, and perhaps even riches, is to figure out (AI-based or not) what people actually want in an automated programming assistant and build that. The home computer idea languished until someone figured out what people wanted to do with them. Video cassettes didn’t make it into the home until companies figured out what people most wanted to watch on them.

How much and what kind of help do you want when you program? Or design a circuit or PCB? Or even a 3D model? Maybe AI isn’t going to take your job; it will just make it easier. We doubt, though, that it can much improve on Dame Shirley Bassey’s history lesson.

Contrary View: Chatbots Don’t Help Programmers

[Bertrand Meyer] is a decided contrarian in his views on AI and programming. In a recent Communications of the ACM blog post, he reveals that — unlike many others — he thinks AI in its current state isn’t very useful for practical programming. He was responding, in part, to another article from the ACM entitled “The End of Programming,” which, like many other articles, is claiming that, soon, no one will write software the way we do and have done for the last few decades. You can see [Matt Welsh] describe his thoughts on this in the video below. But [Bertrand] disagrees.

As we have also noted, [Bertrand] says:

“AI in its modern form, however, does not generate correct programs: it generates programs inferred from many earlier programs it has seen. These programs look correct but have no guarantee of correctness.”

Continue reading “Contrary View: Chatbots Don’t Help Programmers”

Motion Canvas Helps Get Your Point Across

Generating videos for projects can be difficult. Not only do you have to create the thing, but you also have to film the process and cut it together into a story a viewer can follow. Explaining complex topics to the viewer often involves a whiteboard of some sort, but as we all know, that’s not always a perfect solution. [Jacob] was working on a video game and making videos to document his progress, so he built a tool called Motion Canvas to help visualize topics like custom shaders. A few months ago, he decided to release it as an open source project.

Since then, it has seen quite a few GitHub forks, with a lively showcase on the community Discord. Looking at the docs, it is pretty easy to see why. It allows you to write procedural animations using the async semantics of TypeScript while still offering the GUI we expect from our video editors. In particular, the signal system allows dependencies to be defined between values. The system runs in Node, and the GUI runs in your browser locally while you edit the files in your terminal/notepad/IDE. CSS and Flexbox are available, as the video is rendered to a web canvas and then compiled into a video via FFmpeg. The documentation is quite extensive, and it’s a great example of a tool someone built to fit a need of their own that went on to become something a little more fantastic.

This isn’t the first time we’ve discussed how to share your projects with the world, and we’ll freely admit we have a bit of bias toward encouraging folks to document their projects.

Continue reading “Motion Canvas Helps Get Your Point Across”

C++17’s Useful Features For Embedded Systems

Although the world of embedded software development languages seems to span everywhere from ASM and C89 all the way to MicroPython, there is a lot to be said for a happy medium between ease of development and features that make the software more robust without adding overhead or bloat to the final firmware image.

This is where C++ has objectively many advantages over even C99, and as [Çağlayan Dökme] argues in a recent blog post, C++17 adds many developer creature comforts on top of C++98 and the more recent C++11 and C++14 standards.

First stepping back a generation (technically two, with C++20 also being a thing already), the addition of binary literals (e.g. 0b1010'1100) in C++14 and the expanded use of constexpr is addressed, with the latter foreshadowing C++17’s increased focus on compile-time optimizations. A new C++17 attribute that is part of this is [[nodiscard]], which, when added before the return type of a function or method, requires the return value to be used in some manner, much like with functions in Ada (as contrasted with procedures).
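To put those two features in context, here is a minimal sketch of our own (not taken from [Çağlayan]’s post; the write_register function and Status enum are invented purely for illustration), showing a binary literal used as a register mask and [[nodiscard]] flagging an ignored status code:

```cpp
#include <cstdint>

// Binary literals with digit separators (C++14) keep register masks readable.
constexpr std::uint8_t kEnableMask = 0b1010'1100;

// [[nodiscard]] (C++17) asks the compiler to warn whenever a caller silently
// drops the returned value, which is handy for status codes in embedded code.
enum class [[nodiscard]] Status { Ok, Timeout };

[[nodiscard]] Status write_register(std::uint8_t value) {
    // Stand-in for poking a real peripheral register.
    return (value & kEnableMask) ? Status::Ok : Status::Timeout;
}

int main() {
    // write_register(0x0F);          // would draw an "ignoring return value" warning
    Status s = write_register(0xAC);  // handling the result explicitly is fine
    return s == Status::Ok ? 0 : 1;
}
```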

As [Çağlayan] notes, the biggest strength of compile-time checks is that they can save a lot of deploy-test-fix round-trips, with the total number of issues caught after deployment that could have been caught during compilation ideally being zero. Here C++17 streamlines the static_assert() mechanism and simplifies using if constexpr to instantiate code depending on compile-time conditions. Beyond compile-time optimizations, there are a few other niceties, such as C++17 guaranteeing copy elision (return value optimization) when an object is returned directly, which is a welcome feature in hard real-time environments.
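The post goes into more depth, but a short sketch of our own (the UartConfig and saturating_abs names are made up for illustration and do not come from the article) shows the message-less static_assert, if constexpr, and guaranteed copy elision side by side:

```cpp
#include <cstdint>
#include <limits>
#include <type_traits>

// C++17 allows static_assert without a message string.
static_assert(sizeof(std::uint32_t) == 4);

// 'if constexpr' discards the branch that does not apply at compile time, so
// unsigned instantiations never even contain the signed-only code path.
template <typename T>
T saturating_abs(T v) {
    static_assert(std::is_integral_v<T>, "saturating_abs expects an integer type");
    if constexpr (std::is_signed_v<T>) {
        if (v == std::numeric_limits<T>::min()) return std::numeric_limits<T>::max();
        return v < 0 ? static_cast<T>(-v) : v;
    } else {
        return v;  // unsigned values are already non-negative
    }
}

// Guaranteed copy elision (C++17): the object is constructed directly in the
// caller's storage, so even a type with deleted copy/move can be returned by value.
struct UartConfig {
    std::uint32_t baud;
    bool parity;
    UartConfig(std::uint32_t b, bool p) : baud(b), parity(p) {}
    UartConfig(const UartConfig&) = delete;
    UartConfig(UartConfig&&) = delete;
};

UartConfig default_config() {
    return UartConfig{115200, false};  // no copy or move happens here
}

int main() {
    UartConfig cfg = default_config();  // also elided, constructed in place
    return saturating_abs<std::int8_t>(-128) == 127 && cfg.baud == 115200 ? 0 : 1;
}
```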

With even today’s MCUs having enough grunt to run multi-threaded applications, and firmware potentially compiled from a many-thousand-line codebase, picking a programming language that assists the developer with such an arduous task is very important. Ada remains the primary choice for high-reliability embedded platforms, but C++ and C enjoy the most widespread (free) compiler support. Even if C++ isn’t supported on every single MCU out there (mostly 8051-based parts and most PIC MCUs), whenever it is an option, it’s a pretty solid choice, especially with knowledge of these new language features.

The Glitch That Brought Down Japan’s Lunar Lander

When a computer crashes, it usually doesn’t leave debris. But when a computer happens to be descending towards the lunar surface and glitches out, that’s a very different story. Turns out that’s what happened on April 26th, as the Japanese Hakuto-R lunar lander made its mark on the Moon…by crashing into it. [Scott Manley] dove in to try to understand the software bug that caused an otherwise flawless mission to go splat.

The lander began the descent sequence as expected at 100 km above the surface. However, as it descended, its altitude estimate drifted much lower than the real altitude: the craft believed it was at zero altitude while it was still about 5 km above the surface. Confused by the fact that it hadn’t yet detected physical contact with the surface, the craft continued to slowly descend until it ran out of fuel and plunged to the surface.

Ultimately it all came down to sensor fusion. The lander merges several noisy sensors, such as accelerometers, gyroscopes, and radar, into one cohesive source of truth. The craft passed over a particularly large cliff that caused the radar altimeter reading to suddenly spike up by 3 km. Like good filtering software, the craft reasoned that the sensor must be producing spurious data and filtered it out. From then on it was estimating its altitude from its acceleration alone. As anyone who has tried to track an object through space using just gyros and accelerometers can attest, errors accumulate, and suddenly you’re not where you think you are.
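To make that failure mode concrete, here is a toy single-axis estimator of our own. It is not the lander’s actual flight software, and every number, name, and threshold in it is invented purely for illustration:

```cpp
#include <cmath>
#include <cstdio>

// A toy 1-D altitude estimator, only meant to illustrate the failure mode
// described above; all gains and thresholds are made up.
struct AltitudeEstimator {
    double altitude_m;        // current best estimate
    double velocity_mps;      // vertical velocity estimate
    double reject_threshold;  // how far radar may disagree before being ignored

    // Prediction step: dead-reckon from the accelerometer alone.
    void predict(double accel_mps2, double dt) {
        altitude_m += velocity_mps * dt + 0.5 * accel_mps2 * dt * dt;
        velocity_mps += accel_mps2 * dt;
    }

    // Correction step: blend in the radar altimeter, unless it looks "spurious".
    void correct(double radar_altitude_m) {
        const double innovation = radar_altitude_m - altitude_m;
        if (std::fabs(innovation) > reject_threshold) {
            // The measurement disagrees too much with the prediction, so it is
            // discarded. If the radar was actually right (say, the terrain
            // really did drop away by 3 km), the estimate now drifts on
            // accelerometer data alone and the error only grows.
            return;
        }
        altitude_m += 0.2 * innovation;  // simple fixed-gain blend
    }
};

int main() {
    AltitudeEstimator est{5000.0, -20.0, 1000.0};
    // The craft passes over a crater rim: the radar reports a sudden ~3 km
    // jump, and the filter throws the reading away as an outlier.
    est.predict(0.0, 1.0);
    est.correct(8000.0);  // rejected: differs from the estimate by ~3 km
    std::printf("estimated altitude: %.0f m\n", est.altitude_m);
    return 0;
}
```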

We know what you’re thinking: surely they would have run landing simulations to catch errors like these? Ironically, they did; it’s just that after the simulations were run, the landing site for Hakuto-R was changed. Unfortunately, nobody thought to re-run the simulations, and now the Moon has a new lawn ornament.

We’ve previously written about why lunar landings are so hard. While knowing what led to the crash will hopefully prevent a similar fate for future missions, the reality is that remotely landing a robot on a dusty world without the help of GPS is fiendishly difficult and likely will be for some time.

Continue reading “The Glitch That Brought Down Japan’s Lunar Lander”