What’s The Deal With Snap Packages?

Who would have thought that software packaging software would cause such a hubbub? But such is the case with snap. Developed by Canonical as a faster and easier way to get the latest versions of software installed on Ubuntu systems, it has ended up starting a fiery debate in the larger Linux community. For the more casual user, snap is just a way to get the software they want as quickly as possible. But for users concerned with the ideology of free and open source software, it’s seen as a dangerous step towards the kind of proprietary “walled garden” that may have driven them to Linux in the first place.

Perhaps the most vocal opponent of snap, and certainly the one that’s gotten the most media attention, is Linux Mint. In a June 1st post on the distribution’s official blog, Mint founder Clement Lefebvre made it very clear that the Ubuntu spin-off does not approve of the new package format and won’t include it in base installs. Further, he announced that Mint 20 would actively block users from installing the snap framework through the package manager. It can still be installed manually, but the move is meant to keep it from being added to the system without the user’s explicit consent.

The short version of Clement’s complaint is that snap packages install from a single proprietary, Canonical-controlled source. If you want to distribute snaps, you have to set up an account with Canonical and host your packages there. While the underlying software is still open source, snap breaks with the long tradition of the distribution of the software itself also being open and free. This undoubtedly makes installs simpler for less experienced users, and maintenance easier for Canonical, but it also takes away freedom of choice and the diversity of package sources.
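As an aside, blocking a package through apt generally comes down to a pin with negative priority, and Mint 20’s block reportedly boils down to a preferences file along these lines (the file name and exact priority value here are illustrative):

# /etc/apt/preferences.d/nosnap.pref (name and values illustrative)
Package: snapd
Pin: release a=*
Pin-Priority: -10

With a priority below zero, apt refuses to install snapd even as a dependency of another package; deleting the file lifts the block.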

Continue reading “What’s The Deal With Snap Packages?”

New Silq Programming Language Aims To Make Quantum Programming Easier

Fresh from ETH Zurich comes the new Silq programming language. The team has also submitted a paper to the PLDI 2020 conference on why they feel it is the best quantum programming language so far. Although it may not be common knowledge, the lack of usable general-purpose quantum computers has not kept multiple teams from developing programming languages for such systems.

Microsoft’s Q# is a strong contender in this space, along with the older QCL language. The Silq team’s claims about exactly why their language is better appear to come down to it being ‘more high-level’ and to its support for automatic (and safe) uncomputation. While the ‘high-level’ aspect is suspect, since Q# is most decidedly a high-level programming language, the uncomputation claim does at least have some merit.

[Image: quantum algorithm with an uncompute step]

Uncomputation is a concept in quantum programming where one occasionally has to remove intermediate values from the current state, because they may cause quantum interference that would affect the resulting output. Normally, one would save the intermediate result to a register, then reset the state and continue. Which parts of the state to keep and which to uncompute is, however, not easily determined, as a quick glance at related answers over at the Quantum Computing and Theoretical Computer Science Stack Exchanges might reveal.
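To make the pattern concrete, here is a minimal sketch of the classic compute/copy/uncompute dance. We won’t vouch for Silq syntax, so this uses Qiskit, where you still write the uncompute step by hand:

from qiskit import QuantumCircuit

# q0, q1: inputs; q2: ancilla for the intermediate value; q3: output
qc = QuantumCircuit(4)

# Compute the intermediate value: ancilla = q0 AND q1 (Toffoli gate)
qc.ccx(0, 1, 2)

# Copy the result we care about onto the output qubit
qc.cx(2, 3)

# Uncompute: the second Toffoli returns the ancilla to |0>, so the
# scratch value can no longer interfere with the rest of the circuit
qc.ccx(0, 1, 2)

Silq’s pitch is that the compiler inserts that final step for you, and refuses to when it wouldn’t be safe.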

The main question thus appears to be the validity of the claim that Silq can automatically determine what ‘garbage’ can be safely uncomputed and what should remain part of the quantum interference. We have all seen with languages like Java and C# how, even in traditional computing, something as simple as garbage collection can go horribly wrong. Maybe we shouldn’t count our quantum chickens until this particular waveform has fully collapsed.

(Thanks to Qes)

Simulate Your World With Hash.ai

We will admit that we often throw together software simulations of real-world things, but we’ll also admit they are usually quick and dirty, dumping out text that we might graph in a spreadsheet or with GNUPlot. With Hash.ai, though, you can generate simulations of just about anything quickly and easily, complete with beautiful visualizations and graphs. The tool works with JavaScript or Python, and you don’t have to waste your time writing the parts that don’t change.

The web-based tool works on the idea of agents. Each agent has one or more behaviors that run each time step. In the example simulation, which models wildfires in a forest, the agent is named forest, although it really models one virtual tree. There’s also a behavior called forest, which controls the tree’s rate of growth and its chance of burning based on nearby trees and lightning. Other behaviors simulate a burning tree and what happens to a tree after it burns — an ember — which may or may not grow back.
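As a rough sketch of what such a behavior might look like in Python, consider the snippet below. The behavior(state, context) signature follows the platform’s general pattern, but the field names, the neighbors() usage, and the probabilities are our own illustrative assumptions:

import random

LIGHTNING_CHANCE = 0.001  # chance of spontaneous ignition (assumed value)
SPREAD_CHANCE = 0.1       # extra chance per burning neighbor (assumed value)

def behavior(state, context):
    # Each time step, the tree grows a little
    state["height"] = state.get("height", 0) + 1

    # Count burning neighbors; more of them means more risk
    burning = sum(1 for n in context.neighbors()
                  if n.get("state") == "burning")

    # Lightning or a neighboring fire may ignite this tree
    if random.random() < LIGHTNING_CHANCE + SPREAD_CHANCE * burning:
        state["state"] = "burning"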

Continue reading “Simulate Your World With Hash.ai”

Learn Quantum Computing With Spaced Repetition

Everyone learns differently, but cognitive research shows that you tend to remember things better if you use spaced repetition. That is, you learn something, then after a period you are tested. If you still remember it, you get tested again later, with a longer interval between tests. If you get it wrong, you get tested again sooner. That’s the idea behind [Andy Matuschak]’s and [Michael Nielsen’s] quantum computing tutorial. You answer the questions embedded in the text to yourself, so there’s no scoring. However, once you click to reveal the answer, you report whether you got it right, and the system schedules your retest based on that report.
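The scheduling idea itself fits in a few lines. This is a generic sketch of interval doubling, not necessarily the tutorial’s actual algorithm:

from datetime import date, timedelta

def next_review(interval_days, remembered):
    # A remembered answer doubles the gap before the next test;
    # a miss resets the question to a short interval.
    interval_days = interval_days * 2 if remembered else 1
    return date.today() + timedelta(days=interval_days), interval_days

# e.g. a question last seen after a 4-day gap, answered correctly:
due_date, new_interval = next_review(4, True)  # next test in 8 days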

Does it work? We don’t know, but we have heard that spaced repetition is good for learning languages, among other things. We suspect that like most learning methods, it works better for some people than others.

Continue reading “Learn Quantum Computing With Spaced Repetition”

Ask Hackaday: Are 80 Characters Per Line Still Reasonable In 2020?

Software developers won’t ever run out of subjects to argue and fight about. Some are fundamental to a project — like the choice of language or programming paradigm. Others seem more like personal preference at first, but can end up equally fundamental on a bigger scale — like which character to use for indentation, where to place the curly braces, or how to handle line breaks. At the latest when more than one developer is collaborating, it’s time to find a common agreement in the form of a coding style guide, which may of course require a bit of compromise.

Regardless of taste, the worst decision is no decision, and even if you don’t agree with a specific detail, it’s usually best to make peace with it for the benefit of uniformly formatted code. In a professional environment, a style guide is ideally worked out collaboratively within or between teams, taking the input and opinions of everyone involved into consideration — and if your company doesn’t have one to begin with, the best step to take is probably one towards the exit.

The situation can get a bit more complex in open source projects, though, depending on a project’s structure and size. If no official style guide exists, the graceful thing to do is simply to adopt the current style of the code base when contributing. But larger projects accustomed to a multitude of random contributors will typically have one defined, worked out either by the core developers or declared by the project’s benevolent dictator for life.

In the case of the Linux kernel, that’s of course [Linus Torvalds], who recently shook up the community with a mailing list response declaring a widespread, often even unwritten rule of code formatting essentially obsolete: the 80-character line limit. Considering the notoriety of his rants and their crudeness, his response, prompted by a line-break change in the submitted patch, seems downright diplomatic this time.

[Linus]’ reasoning against continued enforcement of the 80-character limit is primarily that screens today are simply big enough to comfortably fit longer lines, even with multiple terminals (or windows) side by side. As he puts it, the only reason to stick to the limit is using an actual VT100, which won’t be of much use in kernel development anyway.

Allowing longer lines, on the other hand, would encourage more verbose variable names and whitespace, which in turn would actually increase readability. All to a certain extent, of course, and [Linus] obviously doesn’t call for abolishing line breaks altogether. But he has a point: does it really make sense to stick to a decades-old, nowadays rather arbitrary-seeming limitation in 2020?

Continue reading “Ask Hackaday: Are 80 Characters Per Line Still Reasonable In 2020?”

Samsung’s Leap Month Bug Teaches Not To Skimp On Testing

Date and time handling is hard, that’s an ugly truth about software development we’ll all learn the hard way one day. Sure, it might seem like some trivial everyday thing that you can easily implement yourself without relying on a third-party library. I mean, it’s basically just adding seconds on top of one another, roll them over to minutes, and from there keep rolling to hours, days, months, up until you hit the years. Throw in the occasional extra day every fourth February, and you’re good to go, right?

Well, obviously not. Assuming you thought about leap years in the first place — which sadly isn’t a given — there are a few exceptions: the years 1900 and 2100, for instance, are regular years, while the year 2000 was still a leap year. And then there are leap seconds, which occur irregularly. But there are still more gotchas lying in wait. Case in point: back in May, faulty handling of the lunar leap month in the Chinese calendar turned Samsung phones all over China into bricks. And while you may never plan to add support for non-Gregorian calendars to your own project, it’s one more example of unanticipated peculiarities running wild. Except, Samsung did everything right here.
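The Gregorian rule alone shows how easy it is to get this wrong. A naive year % 4 check fails on exactly the century years mentioned above; the full rule, in a few lines of Python, reads:

def is_leap_year(year):
    # Divisible by 4, except century years, unless divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_year(2000)      # divisible by 400: still a leap year
assert not is_leap_year(1900)  # century year: not a leap year
assert not is_leap_year(2100)  # the same trap, one century ahead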

So what happened?

Continue reading “Samsung’s Leap Month Bug Teaches Not To Skimp On Testing”

Programming In Plain English

Star Trek had really smart computers that you could simply tell what you wanted, and they would do it. The [Rzeppa] family has started building a plain English compiler. It runs under Windows and appears to be fairly capable.

Plain language programming isn’t exactly a new idea. COBOL was supposed to mimic natural language with statements like:

MULTIPLY HOURS BY RATE GIVING PAYAMOUNT
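For comparison, the same statement in a modern language is terser and, with descriptive names, arguably just as readable:

# The COBOL statement above, as it might look in Python
pay_amount = hours * rate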

You could argue this didn’t go over very well, but there is still a whole lot of COBOL doing a whole lot of things in the business world. Today’s computers have more memory and speed, so programmers have been getting more and more verbose for decades. No more variable names like X1 and fprdx. Maybe this time it will catch on.

Continue reading “Programming In Plain English”