In 2020, our digital world and the software we use to create it form a towering structure, built upon countless layers of abstraction and building blocks; just think of all the translations and interactions involved in loading a webpage. Whilst abstraction is undoubtedly a great thing, it only works if we’re building on solid ground, with lower levels that are stable and fast. What does that mean in practice? It means low-level, compiled languages, which can be heavily optimised to make the most of computer hardware. One of the giants in this area was Frances Allen, who passed away in early August. Described by IBM as “a pioneer in compiler organization and optimization algorithms,” she made numerous significant contributions to the field.
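Allen’s pioneering work included cataloguing exactly these kinds of transformations. As a purely illustrative sketch, here is loop-invariant code motion applied by hand, the sort of rewrite an optimising compiler performs automatically. It’s written in Python for readability, even though real optimisers work on compiled code, and the function names and numbers are invented:

```python
# Loop-invariant code motion, applied by hand for illustration.
# An optimising compiler notices that `scale` never changes inside
# the loop and hoists it out, computing it once instead of n times.

def before(values, factor):
    total = 0
    for v in values:
        scale = factor * factor + 1  # recomputed on every iteration
        total += v * scale
    return total

def after(values, factor):
    scale = factor * factor + 1      # hoisted: computed exactly once
    total = 0
    for v in values:
        total += v * scale
    return total

# Same result, less work per iteration.
assert before([1, 2, 3], 2) == after([1, 2, 3], 2) == 30
```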
If you’ve logged onto GitHub recently and you’re an active user, you might have noticed a new badge on your profile: “Arctic Code Vault Contributor”. Sounds pretty awesome, right? But whose code got archived in this vault, how is it being stored, and what’s the point?
In June 1995, Rasmus Lerdorf made an announcement on a Usenet group. You can still read it.

“Announcing the Personal Home Page Tools (PHP Tools) version 1.0. These tools are a set of small tight cgi binaries written in C.”

Today, twenty-five years on, PHP is about as ubiquitous as it could possibly have become. I’d be willing to bet that for the majority of readers of this article, their first forays into web programming involved PHP.
But however rich its history and wide its userbase, that alone doesn’t justify PHP’s use in a landscape that is rapidly evolving. Whilst PHP will inevitably be around for years to come in existing applications, does it have a future in new sites?
Microsoft’s open-source shopping spree has claimed another victim: npm. [Nat Friedman], CEO of GitHub (owned by Microsoft), announced the move recently on the GitHub blog.
So what motivated the acquisition, and what changes are we likely to see as a result of it? There are some obvious upsides and integrations, but these will be accompanied by the usual dose of skepticism from the open-source community. The company history and working culture of npm have also had their moments in the news, which may well have contributed to the current situation. This post aims to explore some of the rationale behind the acquisition, and what it’s likely to mean for developers in the future.
If you write software, chances are you’ve come across Continuous Integration, or CI. Perhaps you’ve never heard of it, but wonder what all the ticks, badges, and mysterious status icons are on the open-source repositories you find online. You might hear friends waxing lyrical about the merits of CI, or grumbling about how their pipeline has broken again.
Want to know what all the fuss is about? This article will explain the basic concepts of CI, but will focus on an example, since that’s the best way to understand it. Let’s dive in.
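As a small taste ahead of the full example: at its heart, CI is just a script of checks that runs automatically on every push. Here’s a minimal, hypothetical illustration in Python; the function and its tests are invented for this sketch:

```python
# ci_check.py: the kind of tiny test a CI pipeline might run on every push.
# If any assertion fails, the script exits non-zero and the pipeline marks
# the commit with a red cross instead of a green tick.

def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Continuous Integration") == "continuous-integration"
    assert slugify("Why You Need It") == "why-you-need-it"

if __name__ == "__main__":
    test_slugify()
    print("All checks passed")
```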
With an ever-growing range of smart-home products available, all with their own hubs, protocols, and APIs, we see a lot of DIY projects (and commercial offerings too) which aim to provide a “single universal interface” to different devices and services. Usually, these projects allow you to control your home using a list of devices, or sometimes a 2D floor plan. [Wassim]’s project aims to take the first steps in providing a 3D interface, by creating an interactive smart-home controller in the browser.
To be clear, this isn’t just a static render of a 3D scene; it’s an interactive 3D model which can be orbited and inspected, showing information on lights, heaters, and windows. The project is well documented, and the code can be found on GitHub. The tech works by taking 3D models and animations made in Blender, exporting them using the glTF format, then visualising them in the browser using three.js. This can then talk to Hue bulbs, power meters, or whatever other devices are required. The technical notes on this project may well be useful for others wanting to use the Blender to three.js/browser workflow, and include a number of interesting demos isolating key concepts of the project.
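For a flavour of the Blender side of that workflow, here’s a minimal, hypothetical export script using Blender’s bundled Python API (bpy). The file path and export options are illustrative assumptions, not taken from [Wassim]’s project:

```python
# Minimal sketch, assuming Blender 2.8+ with its bundled glTF exporter.
# Exports the scene's meshes to a single binary .glb file that
# three.js's GLTFLoader can consume in the browser.
import bpy

# Select only the mesh objects we want in the web viewer.
bpy.ops.object.select_all(action='DESELECT')
for obj in bpy.data.objects:
    if obj.type == 'MESH':
        obj.select_set(True)

# Export the selection (path and settings are made up for illustration).
bpy.ops.export_scene.gltf(
    filepath='/tmp/home.glb',
    export_format='GLB',
    use_selection=True,
)
```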
We notice that all the meshes created in Blender are very low-poly; would it be easy to add subdivision surface modifiers, or is the vertex count deliberately kept low for performance reasons?
This isn’t our first unique home automation interface; we’ve previously written about shAIdes, a pair of AI-enabled glasses that allow you to control your devices just by looking at them. And if you want to roll your own home automation setup, we have plenty of resources: the Hack My House series contains valuable information on using Raspberry Pis in this context, we’ve got information on picking the right sensors, and you can even enlist old routers for the cause.
When building robots, or indeed other complex mechanical systems, it’s often the case that more and more limit switches, light gates, and sensors are amassed as the project evolves. Each addition brings more I/O pin usage, cost, potentially new interfacing requirements, and accompanying microcontrollers or ADCs. If you don’t have much electronics experience, that’s not ideal. With this in mind, [rand3289] is working on FiberGrid, a Hackaday Prize entry offering a clever shortcut for interfacing multiple sensors without complex hardware. It doesn’t completely solve the problems above, but it aims to be a cheap, foolproof way to add sensors with minimal extra hardware.
The idea is simple: make your sensors from light gates using fiber optics, feed the ends of the plastic fibers into a grid, then film the grid with a camera. After a calibration step in the software, which is built with OpenCV, you can “sample” the sensors through a neat abstraction layer. This approach is easier and cheaper than you might think, and makes it very easy to add new sensors.
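In rough outline, the sampling side could look like the sketch below: a minimal Python/OpenCV loop that grabs a frame and thresholds the brightness at each calibrated fiber position. The coordinates, threshold, and camera index are invented placeholders, not from [rand3289]’s code, which has its own calibration step:

```python
import cv2

# Hypothetical calibration output: pixel coordinates of each fiber end.
FIBER_POSITIONS = [(120, 80), (160, 80), (200, 80)]  # (x, y) placeholders
THRESHOLD = 128  # brightness above this counts as "light gate open"

cap = cv2.VideoCapture(0)  # camera pointed at the fiber grid

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # "Sample" each sensor by reading the brightness at its calibrated spot.
    states = [gray[y, x] > THRESHOLD for (x, y) in FIBER_POSITIONS]
    print(states)
    cv2.imshow("fiber grid", gray)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```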
Naturally, it’s not fantastic for sample rates unless you splash out on a fancy high-framerate camera, and even then you’re relying on the OS to process the frames in time; a standard 60 fps camera, for instance, can’t reliably catch events shorter than about 17 ms. It’s also not very compact, but fortunately you can connect quite a few sensors to one camera: up to 216 in [rand3289]’s prototype.
There are many novel uses for this kind of setup: rotation sensors made with polarising filters, for example. We’ve even written about optical flex sensors before.