Counter-Strike At 20: Two Hackers Upend The Gaming Industry

Choices matter. You’ve only got one shot to fulfill the objective. A single coordinated effort is required to defuse the bomb, release the hostages, or outlast the opposition. Fail, and there’s no telling when you’ll get your next shot. This is the world that Counter-Strike presented to PC players in 1999, and the paradigm shift it represented was greater than its deceptively simple namesake would suggest.

The reckless push-forward mantra of Unreal Tournament coupled with the unrelenting speed of Quake dominated the PC FPS mindshare back then. Deathmatch with a side of CTF (capture the flag) was all anyone really played. With blazing-fast respawns and rocket launchers featured as standard kit, there was little thought put towards conservative play tactics. The same sumo clash of combatants over the ever-so-inconveniently placed power weapon played out time and again, while frag counts came in mega/ultra/monster-sized stacks. It was all easy come, easy go.

Counter-Strike didn’t follow the quick frag, wipe, repeat model. Counter-Strike wasn’t concerned with creating fantastical weaponry from the future. Counter-Strike was grounded in reality. Military counter-terrorist forces seek to undermine an opposing terrorist team. Each side has its own objectives and weapon sets, and the in-game economy can swing the battle wildly at the start of each new round. What began as a fun project for a couple of college kids went on to become one of the most influential multiplayer games ever, and after twenty years it’s still leaving the competition in the de_dust(2).

Even if you’ve never camped with an AWP, the story of Counter-Strike is a story of an open platform that invited creative modifications and community-driven development. Not only is Counter-Strike an amazing game, it’s an amazing story.

Continue reading “Counter-Strike At 20: Two Hackers Upend The Gaming Industry”

Exploring The Raspberry Pi 4 USB-C Issue In-Depth

It would be fair to say that the Raspberry Pi team hasn’t been without its share of hardware issues: the Raspberry Pi 2 was camera shy, the Raspberry Pi PoE HAT suffered from a rather embarrassing USB power issue, and now the all-new Raspberry Pi 4, the first to have USB-C power delivery, doesn’t do USB-C very well unless you go for a ‘dumb’ cable.

Join me below for a brief recap of those previous issues, and an in-depth summary of USB-C, the differences between regular and electronically marked (e-marked) cables, and why detection logic might be making your brand-new Raspberry Pi 4 look like an analogue set of headphones to the power delivery hardware.
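If you’ve never looked at how Type-C attach detection works: a USB-C power source decides what is on the other end of the cable by checking the termination it sees on its two CC pins. Below is a rough, hand-rolled Python sketch of that classification logic — not code from the article, with approximate thresholds and invented function names — showing how a shared CC pull-down like the Pi 4’s can make an e-marked cable read as an audio adapter accessory.

```python
# A rough, illustrative model of USB Type-C attach detection on the source
# (charger) side. Thresholds are approximate figures from the Type-C spec;
# the function names are invented for this sketch.

RA_MAX = 1.2e3                 # Ra (e-marker / accessory pull-down): ~0.8-1.2 kOhm
RD_MIN, RD_MAX = 4.6e3, 5.6e3  # Rd (sink pull-down): nominally 5.1 kOhm

def classify_cc(ohms):
    """Classify the termination the source sees on a single CC pin."""
    if ohms < RA_MAX:
        return "Ra"
    if RD_MIN <= ohms <= RD_MAX:
        return "Rd"
    return "Open"

def classify_port(cc1_ohms, cc2_ohms):
    """Decide what is attached, based on both CC pins (a subset of the spec's table)."""
    pins = {classify_cc(cc1_ohms), classify_cc(cc2_ohms)}
    if pins == {"Rd", "Open"}:
        return "Sink attached (non-marked cable): supply VBUS"
    if pins == {"Rd", "Ra"}:
        return "Sink attached (e-marked cable): supply VBUS, Vconn on the Ra pin"
    if pins == {"Ra"}:
        return "Audio adapter accessory: no VBUS"
    return "Nothing attached / unsupported combination"

# A spec-compliant sink: Rd on the CC wire, the cable e-marker's Ra on Vconn.
print(classify_port(5.1e3, 1.0e3))

# The Pi 4 ties both CC pins to a single shared 5.1 kOhm pull-down, so the
# e-marker's Ra lands in parallel with it: 5.1k || 1k is roughly 840 Ohms,
# which falls in the Ra range. The charger then sees "Ra" on both pins and
# concludes an audio adapter accessory is attached, so it never powers up.
shared = (5.1e3 * 1.0e3) / (5.1e3 + 1.0e3)
print(classify_port(shared, 1.0e3))
```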

Continue reading “Exploring The Raspberry Pi 4 USB-C Issue In-Depth”

New Space Abort Systems Go Back To The Future

Throughout the history of America’s human spaceflight program, there’s been an alternating pattern with regard to abort systems. From Alan Shepard’s first flight in 1961 on, every Mercury capsule was equipped with a Launch Escape System (LES) tower that could pull the spacecraft away from a malfunctioning rocket. But by the first operational flight of the Gemini program in 1965, the LES tower had been deleted in favor of ejection seats. Just three years later, the LES tower returned for the first manned flight of the Apollo program.

Mercury LES Tower

With the Space Shuttle, things got more complicated. There was no safe way to separate the Orbiter from the rest of the stack, so when Columbia made its first test flight in 1981, NASA returned again to ejection seats, this time pulled from an SR-71 Blackbird. But once flight tests were complete, the ejection seats were removed, leaving Columbia and all subsequent Orbiters without any form of LES. At the time, NASA believed the Space Shuttle was so reliable that there was no need for an emergency escape system.

It took the loss of Challenger and her crew in 1986 to prove NASA had made a grave error in judgment, but by then, it was too late. Changes were made to the Shuttle in the wake of the accident investigation, but escape during powered flight was still impossible. While a LES would not have saved the crew of Columbia in 2003, another seven lives lost aboard the fundamentally flawed Orbiter played a large part in President George W. Bush’s decision to begin winding down the Shuttle program.

In the post-Shuttle era, NASA has made it clear that maintaining abort capability from liftoff to orbital insertion is a critical requirement. Their own Orion spacecraft has this ability, and they demand the same from commercial partners such as SpaceX and Boeing. But while all three vehicles are absolutely bristling with high-tech wizardry, their abort systems are not far removed from what we were using in the 1960s.

Let’s take a look at the Launch Escape Systems for America’s next three capsules, and see where historical experience helped guide the design of these state-of-the-art spacecraft.

Continue reading “New Space Abort Systems Go Back To The Future”

Robotic Dishwashers And Dishwashing As A Service

There’s a story that goes back to the 1980s or so about an engineering professor who laid down a challenge to the students of his automation class: design a robot to perform the most mundane of household tasks — washing the dishes. The students divided up into groups, batted ideas around, and presented their designs. Every group came up with something impressive, all variations on a theme with cameras and sensors and articulated arms to move the plates around. The professor watched the presentations respectfully, and when they were done he got up and said, “Nice work. But didn’t any of you idiots realize you can buy a robot that does dishes for $300 from any Sears in the country?”

The story may be apocryphal, but it’s certainly plausible, and it’s definitely instructive. The cultural impression of robotics as a field has a lot of ballast on it, thanks to decades of training that leads us to believe that robots will always be at least partially anthropomorphic. At first it was science fiction giving us Robby the Robot and C-3PO; now that we’re living in the future, Boston Dynamics and the like are doing their best to give us an updated view of what robots must be.

But all this training to expect bots built in the image of humans or animals only covers a narrow range of use cases, and leaves behind the hundreds or thousands of other applications that could prove just as interesting. One use case that appears to be coming to market hearkens back to that professor’s dishwashing throwdown, and if manufacturers have their way, robotic dishwashers might well be a thing in the near future.

Continue reading “Robotic Dishwashers And Dishwashing As A Service”

Raspberry Pi 4 Benchmarks: Processor And Network Performance Makes It A Real Desktop Contender

The new Raspberry Pi 4 is out, and slowly they’re working their way from Micro Centers and Amazon distribution sites to desktops and workbenches around the world. Before you whip out a fancy new USB-C cable and plug those Pis in, it’s worthwhile to know what you’re getting into. The newest Raspberry Pi is blazing fast. Not only that, but because of the new System on Chip, it’s now a viable platform for a cheap homebrew NAS, a streaming server, or anything else that requires a massive amount of bandwidth. This is the Pi of the future.

The Raspberry Pi 4 features a BCM2711B0 System on Chip, a quad-core Cortex-A72 processor clocked at up to 1.5 GHz, with up to 4 GB of RAM (with hints about an upcoming 8 GB version). The previous incarnation of the Pi, the Model 3 B+, used a BCM2837B0 SoC, a quad-core Cortex-A53 clocked at 1.4 GHz. Compared to the 3 B+, the Pi 4 isn’t using an ‘efficiency’ core any more; we’re deep into ‘performance’ territory, with a larger cache. But what do these figures mean in real-world terms? That’s what we’re here to find out.
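While the proper benchmarks are in the full article, you can get a rough feel for the single-core uplift yourself with nothing more than a stopwatch loop. The snippet below is a throwaway Python micro-benchmark written for this summary — it has nothing to do with the benchmark suite used for the actual measurements — so run it unchanged on a Pi 3 B+ and a Pi 4 and simply compare the reported rates.

```python
# Throwaway single-core micro-benchmark: time a fixed integer workload and
# report how many loop iterations per second the CPU manages. Run the same
# script on a Pi 3 B+ and a Pi 4 for a rough single-thread comparison.
import time

ITERATIONS = 5_000_000

def workload(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
workload(ITERATIONS)
elapsed = time.perf_counter() - start
print(f"{ITERATIONS / elapsed:,.0f} iterations per second ({elapsed:.2f} s total)")
```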

Continue reading “Raspberry Pi 4 Benchmarks: Processor And Network Performance Makes It A Real Desktop Contender”

The Saga Of 32-Bit Linux: Why Going 64-Bit Raises Concerns Over Multilib

The story of Linux so far, as short as it may be in the grand scheme of things, is one of constant forward momentum. There’s always another feature to implement, an optimization to make, and of course, another device to support. With developers’ eyes always on the horizon ahead of them, it should come as no surprise to find that support for older hardware or protocols occasionally falls by the wayside. When maintaining antiquated code monopolizes developer time, or even directly conflicts with new code, a difficult decision needs to be made.

Of course, some decisions are easier to make than others. Back in 2012 when Linus Torvalds officially ended kernel support for legacy 386 processors, he famously closed the commit message with “Good riddance.” Maintaining support for such old hardware had been complicating things behind the scenes for years while offering very little practical benefit, so removing all that legacy code was like taking a weight off the developers’ shoulders.

The rationale was the same a few years ago when distributions like Arch Linux decided to drop support for 32-bit hardware entirely. Maintainers had noticed the drop-off in downloads for the 32-bit versions of their distributions and decided it didn’t make sense to keep producing them. In an era where even budget smartphones are shipping with 64-bit processors, many Linux distributions have at this point decided 32-bit CPUs aren’t worth their time.

Given this trend, you’d think Ubuntu announcing last month that they’d no longer be providing 32-bit versions of packages in their repository would hardly be newsworthy. But as it turns out, the threat of ending 32-bit packages caused the sort of uproar that we don’t traditionally see in the Linux community. But why?

Continue reading “The Saga Of 32-Bit Linux: Why Going 64-Bit Raises Concerns Over Multilib”

Five Years Of The Raspberry Pi Model B+ Form Factor, What Has It Taught Us?

With all the hoopla surrounding the recent launch of the new Raspberry Pi 4, it’s easy to overlook another event in the Pi calendar. July will see the fifth anniversary of the launch of the Raspberry Pi Model B+, which ushered in a revised form factor. It’s familiar to us now, but at the time it was a huge change: a 40-pin expansion connector, four mounting holes, no composite video socket, and more carefully arranged interface connectors.

As the Pi 4, with its dual micro-HDMI connectors and reversed Ethernet and USB positions, marks the first significant deviation from the standard set by the B+ and its successors, it’s worth taking a look at the success of the form factor and its wider impact. Is it still something that the Raspberry Pi designers can take in a new direction, or, like so many standards before it, has it passed from its originator to the collective ownership of the community of manufacturers that support it?

Continue reading “Five Years Of The Raspberry Pi Model B+ Form Factor, What Has It Taught Us?”