The gameplay is a little nontraditional as well. To play, you send the bot a tweet with specific instructions. The bot plays the game according to those instructions and tweets back a video of the result. By responding to that tweet with more instructions, the player continues the game tweet-by-tweet. While slightly cumbersome, this has the advantage of letting a player resume any game simply by replying to the tweet where they’d like to pick back up. What goes on behind the scenes of the DOOM-playing Twitter bot is interesting as well, and the code is available on the project’s GitHub page.
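If you’re curious how a loop like that might hang together, here’s a minimal sketch in Python, assuming the tweepy library for the Twitter API; play_doom() is a hypothetical stand-in for the emulator and video-rendering stages, not the bot’s actual code:

```python
# Sketch of a reply-driven game loop (assumptions noted inline).
import tweepy

def play_doom(instructions: str, resume_from) -> str:
    """Hypothetical: resume the game identified by resume_from, apply the
    player's instructions, and return the path to a rendered video clip."""
    raise NotImplementedError

def handle_instructions(client: tweepy.Client, api: tweepy.API,
                        tweet_id: int, text: str) -> None:
    # The tweet being replied to identifies both the saved game to resume
    # and where to post the result, which is what makes any point in the
    # thread a valid restart point.
    video = play_doom(text, resume_from=tweet_id)
    media = api.media_upload(video)  # media uploads still use the v1.1 API
    client.create_tweet(text="Your move!",
                        media_ids=[media.media_id],
                        in_reply_to_tweet_id=tweet_id)
```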
While we’ve seen plenty of DOOM instances on all kinds of hardware, it’s safe to say we’ve never really seen a gameplay experience quite like this one. It may remain a curiosity, but DOOM porters are always looking for something else to run this classic game on, so it may eventually branch out or develop into something more user-friendly, like this cloud-based Atari 2600.
For readers who might not spend their free time watching spools of PLA slowly unwind, The Spaghetti Detective (TSD) is an open source project that aims to use computer vision and machine learning to identify when a 3D print has failed and resulted in a pile of plastic “spaghetti” on the build plate. Once users have installed the OctoPrint plugin, they need to point it at either a self-hosted server running on a relatively powerful machine, or TSD’s paid cloud service, which handles all the AI heavy lifting for a monthly fee.
Unfortunately, 73 of those cloud customers ended up getting a bit more than they bargained for when a configuration flub allowed strangers to take control of their printers. In a frank blog post, TSD founder Kenneth Jiang owns up to the August 19th mistake and explains exactly what happened, who was impacted, and how changes to the server-side code should prevent similar issues going forward.
For the record, it appears no permanent damage was done, and everyone who was potentially impacted by this issue has been notified. There was a fairly narrow window of opportunity for anyone to stumble upon the issue in the first place, meaning any bad actors would have had to be particularly quick on their keyboards to come up with some nefarious plot to sabotage any printers connected to TSD. That said, one user took to Reddit to show off the physical warning their printer spit out; the apparent handiwork of a fellow customer who discovered the glitch on their own.
According to Jiang, the issue stemmed from how TSD associates printers and users. When the server sees multiple connections coming from the same public IP, it’s assumed they’re physically connected to the same local network. This allows the server to link the OctoPrint plugin running on a Raspberry Pi to the user’s phone or computer. But on the night in question, an incorrectly configured load-balancing system stopped passing the source IP addresses to the server. This made TSD believe all of the printers and users who connected during this time period were on the same LAN, allowing anyone to connect with whatever machine they wished.
The mix-up only lasted about six hours, and so far, only the one user has actually reported their printer being remotely controlled by an outside party. After fixing the load-balancing configuration, the team also pushed an update to the TSD code which puts a cap on how many printers the server will associate with a given IP address. This seems like a reasonable enough precaution, though it’s not immediately obvious how this change would impact users who wish to add multiple printers to their account at the same time, such as in the case of a print farm.
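A rough sketch of that logic (ours, not TSD’s actual code) shows both the original assumption and the new guard; the header name and cap value here are illustrative:

```python
# Rough sketch of IP-based printer/user linking (not TSD's actual code).
# Behind a load balancer, the real client address arrives in a forwarding
# header; when the misconfigured balancer stopped sending it, every
# connection appeared to come from the balancer itself, merging all users
# and printers into one giant apparent "LAN".
MAX_DEVICES_PER_IP = 10  # illustrative value for the new per-IP cap

printers_by_ip: dict[str, list[str]] = {}

def client_ip(headers: dict, socket_ip: str) -> str:
    # Without X-Forwarded-For, all we see is the load balancer's address.
    return headers.get("X-Forwarded-For", socket_ip)

def printers_for_user(headers: dict, socket_ip: str) -> list[str]:
    candidates = printers_by_ip.get(client_ip(headers, socket_ip), [])
    # The post-incident fix: refuse to auto-link implausibly large "LANs".
    if len(candidates) > MAX_DEVICES_PER_IP:
        raise RuntimeError("too many devices behind one IP; not auto-linking")
    return candidates
```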
While no doubt an embarrassing misstep for the team at The Spaghetti Detective, we can at least appreciate how swiftly they dealt with the issue and how transparent they were in bringing the flaw to light. This is also an excellent example of how open source allows the community to independently evaluate the fixes a developer applies in response to a discovered flaw. Jiang says the team will be launching a full security audit of their own as well, so expect more changes getting pushed to the repository in the near future.
We were impressed with TSD when we first covered it back in 2019, and glad to see the project has flourished since we last checked in. Trust is difficult to gain and easy to lose, but we hope the team’s handling of this issue shows they’re on top of things and willing to do right by their community even if it means getting some egg on their face from time to time.
While Google Stadia may be the latest and greatest in the realm of cloud gaming, there are plenty of other ways to experience this new style of gameplay, especially if you’re willing to go a little retro. This project, for example, takes the Atari 2600 into the cloud for a nearly complete gaming experience that is fully hosted on a server, including the video rendering.
[Michael Kohn] created this project mostly as a way to get more familiar with Kubernetes, the open-source software that automates the deployment of container-based applications. The setup runs on two Raspberry Pi 4s, which can be accessed by pointing a browser at the correct IP address on his network or by connecting to them via VNC. From there, the emulator runs a specific game called Space Revenge, chosen for its memory requirements and its lack of copyright encumbrance. There are some limitations: the emulator he’s using doesn’t implement all of the Atari controls, and sound isn’t available through the remote desktop setup, but it’s impressive nonetheless.
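For a taste of what that orchestration looks like, a Kubernetes Deployment and Service along these lines would run the emulator behind a cluster-internal VNC port; the image name and port are placeholders rather than [Michael]’s actual manifests, which live on his GitHub page:

```yaml
# Hypothetical manifest in the spirit of the project: one emulator
# container, exposed inside the cluster on the standard VNC port.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atari-emulator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atari-emulator
  template:
    metadata:
      labels:
        app: atari-emulator
    spec:
      containers:
      - name: emulator
        image: registry.local/atari2600:latest  # placeholder image
        ports:
        - containerPort: 5900  # VNC
---
apiVersion: v1
kind: Service
metadata:
  name: atari-emulator
spec:
  selector:
    app: atari-emulator
  ports:
  - port: 5900
    targetPort: 5900
```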
[Michael] also glosses over this part, but he wrote the Atari emulator himself “as quickly as possible” so he could focus on the Kubernetes setup. That’s impressive in its own right, and he goes further, showing exactly how to set up the cloud-based system on his GitHub page. He also thinks there’s potential for a system like this to run an NES setup. If you’re looking for something a little more modern, though, it is possible to set up a cloud-based gaming system with a Nintendo Switch as well.
The electric power grid, as it exists today, was designed about a century ago to accommodate large, dispersed power plants owned and controlled by the utilities themselves. At the time this seemed like a great idea, but as technology and society have progressed, the power grid remains stubbornly rooted in its past. Efforts to accommodate solar and wind farms, electric cars, and other modern technology must work around the ancient grid setup, often requiring intricate modeling like this visual power grid emulator.
The model is known as LEGOS, the Lite Emulator of Grid Operations, and comes from researchers at RWTH Aachen University. Its goal is to simulate a modern power grid with various generation sources and loads such as homes, offices, or hospitals. It uses a DC circuit to simulate power flow, which is visualized with LEDs. The entire model is modular, so components can be added or subtracted easily to quickly show how the power flow changes as a result of modifications to the grid. There is also a robust automation layer to the entire project, allowing real-time data acquisition of the model to be gathered and analyzed using an open source cloud service called FIWARE.
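The “DC circuit” approach boils down to solving a linear system: treat lines as conductances, treat generators and loads as current injections and draws, and solve for the node voltages. A toy sketch of the idea in Python (ours, with made-up values, not the LEGOS code):

```python
# Toy DC power-flow sketch: lines become conductances in a matrix G,
# generators and loads become current injections I, and solving G*V = I
# gives every node voltage. Adding or removing a module just edits G and I.
import numpy as np

# Three nodes: generator (0), home (1), hospital (2). Node 0 carries an
# extra 0.5 S shunt to ground so G is non-singular (a bare network
# Laplacian has no voltage reference).
G = np.array([[ 2.5, -1.0, -1.0],
              [-1.0,  1.5, -0.5],
              [-1.0, -0.5,  1.5]])
I = np.array([5.0, -2.0, -3.0])  # positive injects power, negative draws it

V = np.linalg.solve(G, I)
flow_0_to_1 = (V[0] - V[1]) * 1.0  # current over the 1.0 S link from 0 to 1
print("node voltages:", V)
print("current from generator to home:", flow_0_to_1)
```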
In order to modernize the grid, simulations like these are needed to make sure there are no knock-on effects when adding to or changing such a complex system in ways it was never intended to handle. Researchers in Europe like the ones developing LEGOS are ahead of the curve, as smart grid technology continues to filter into all areas of modern electrical infrastructure. It could also find use modeling power grids in areas where the grid can change rapidly as a result of natural disasters.
Companies like Google and Microsoft have been investing heavily in the concept of cloud gaming, where a player uses their computer or a mobile device to stream the video feed of a game that’s running on a powerful machine tucked away in a data center somewhere. With this technology you can play the latest and greatest titles, even if the device you’re using doesn’t have the processing power to run them locally.
Considering the Switch is already a portable system, it’s not too surprising Nintendo doesn’t seem interested in the technology. But that didn’t stop [Stan Dmitriev] from doing a bit of experimentation on his own. With little more than a Raspberry Pi 4 and Trinket M0, he’s demonstrated that users can remotely interact with the Switch well enough to play games in real time.
The setup is fairly straightforward. A cheap HDMI capture device is used to grab the video from the Nintendo Switch dock, which is then streamed out to the web with the help of the Pi’s hardware video encoder. Input from the user is sent over the Pi’s UART to the Trinket, which itself is running a firmware specifically developed for mimicking Nintendo Switch controllers. With so many elements involved, naturally some latency comes into play. The roughly 100 millisecond delay [Stan] is reporting isn’t exactly ideal for fast-paced gaming, but is certainly adequate for more relaxed titles.
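The input half of that chain is simple enough to sketch. Assuming pyserial on the Pi, forwarding a button event to the Trinket could look like the snippet below; the two-byte frame format and button map are our invention, since the real protocol is defined by the controller firmware:

```python
# Sketch of the input path: controller events from the web client are
# framed and written to the Pi's UART, where the Trinket M0 firmware
# replays them as Switch controller input.
import serial

BUTTONS = {"A": 0x01, "B": 0x02, "X": 0x04, "Y": 0x08}  # hypothetical map

# /dev/serial0 is the Raspberry Pi's primary UART alias.
uart = serial.Serial("/dev/serial0", baudrate=115200, timeout=0.1)

def send_button(name: str, pressed: bool) -> None:
    # One hypothetical frame: [button bitmask, press/release flag]
    uart.write(bytes([BUTTONS[name], 1 if pressed else 0]))

send_button("A", True)   # press A...
send_button("A", False)  # ...and release it
```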
On the software side of things, the project is using an SDK developed by [Stan]’s employer SurrogateTV. Right now you need to apply if you want to get your game or other interactive gadget up on the service, though he says it will be opened to the public next year. But even without all the details, we’ve got a clear idea of how both the video capture and user input sides of the equation are being handled. For personal use, all you’d really need to do is put together a simple web interface to tie it all together.
Storing data “in the cloud” — even if it is your own server — is all the rage. But many cloud solutions require you to access your files in a clumsy way using a web browser. One day, operating systems will incorporate generic cloud storage just like any other file system. But by using two tools, rclone and sshfs, you can nearly accomplish this today with a little one-time setup. There are a few limitations, but, generally, it works quite well.
It is a story as old as computing. There’s something new. Using it is exotic and requires special techniques. Then it becomes just another part of the operating system. If you go back far enough, programmers had to pull specific records from mass storage like tapes, drums, or disks and deblock data. Now you just open a file or a database. Cameras, printers, audio, and even networking once were special devices that are now commonplace. If you use Windows, for example, OneDrive is well-supported. But if you use another service, you may or may not have an easy option to just access your files as a first-class file system.
The rclone program is the Swiss Army knife of cloud storage services. Despite its name, it doesn’t have to synchronize a local file store to a remote service, although it can do that. The program works with a dizzying array of cloud storage providers and it can do simple operations like listing and copying files. It can also synchronize, as you’d expect. However, it also has an experimental FUSE filesystem that lets you mount a remote service — with varying degrees of success.
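In practice that all comes down to a handful of commands. After an interactive rclone config to define a remote (here called mycloud, a placeholder), you can do things like:

```sh
rclone ls mycloud:                  # list files on the remote
rclone copy ~/docs mycloud:docs     # simple one-way copy
rclone sync ~/docs mycloud:docs     # make the remote match local
rclone mount mycloud: ~/cloud --vfs-cache-mode writes &  # FUSE mount
fusermount -u ~/cloud               # unmount when you're done
```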
What’s Supported?
If you don’t like using someone like Google or Amazon, you can host your own cloud. In that case, you can probably use sshfs to mount a remote directory over ssh, although rclone can also do that. There are also cloud services you can self-host, like OwnCloud and NextCloud; a Raspberry Pi running Docker can stand one of these up in a few minutes, and rclone can handle these, too.
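Both routes are only a couple of commands; the paths and host names below are examples:

```sh
# Mount a directory from your own server over ssh:
sshfs user@myserver.example.com:/home/user/files ~/remote
fusermount -u ~/remote              # unmount

# Or stand up a self-hosted cloud in Docker with the official
# Nextcloud image, then point rclone's WebDAV backend at it:
docker run -d -p 8080:80 --name nextcloud nextcloud
```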
GitHub has enabled free code analysis on public repositories. This is the fruit of the purchase of Semmle, almost exactly one year ago. Anyone with write permissions to a repository can go into the settings and enable scanning. Beyond the obvious use case of finding vulnerabilities, an exciting option is to automatically analyze pull requests and flag potential security problems. I definitely look forward to seeing this tool in action.
The Code Scanning option is under the Security tab, and the process to enable it only takes a few seconds. I flipped the switch on one of my repos, and it found a handful of issues that are worth looking into. An important note: anyone can run the tool on a forked repo and see the results. If CodeQL finds an issue, it’s essentially publicly available for anyone who cares to look for it.
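Under the hood, flipping that switch boils down to committing a workflow file that runs the CodeQL analysis on each push. A minimal version looks roughly like this; trim the languages list to match your repository:

```yaml
# Roughly the workflow GitHub generates when you enable code scanning.
name: "CodeQL"
on: [push, pull_request]

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: github/codeql-action/init@v1
        with:
          languages: javascript, python
      - uses: github/codeql-action/analyze@v1
```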
Simpler Code Scanning
At the other extreme, [Will Butler] wrote a guide to searching for exploits using nothing but grep. A simple example: if raw shows up in code, it often signals an unsafe operation. The terms fixme or todo, often in comments, can signal a known security problem that has yet to be fixed. Another example is unsafe, which is an actual keyword in some languages, like Rust. If a Rust project is going to have vulnerabilities, they will likely be in an unsafe block. There are some other language-dependent pointers and other good tips, so check it out.
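A few greps in that spirit, with patterns adjusted to taste:

```sh
grep -rnw "unsafe" --include="*.rs" .   # Rust: audit every unsafe block
grep -rni "fixme\|todo" src/            # known problems nobody fixed yet
grep -rnw "raw" src/                    # raw pointers, raw sockets, etc.
```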