Retrogadgets: The Ageia PhysX Card

Old computers meant for big jobs often had an external unit to crunch data in specific ways. A computer doing weather prediction, for example, might have a SIMD (single instruction, multiple data) vector unit that could multiply a bunch of numbers by a constant in one swoop. These days, there are many computers crunching physics equations so you can play your favorite high-end computer game. Instead of vector processors, we have video cards. These cards have many processing units that can execute “kernels,” small programs that run on large groups of data at once.
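To make the “kernel” idea concrete, here is a minimal, hypothetical sketch in modern CUDA (not anything from the Ageia-era tooling): a tiny kernel that multiplies every element of an array by a constant, the same job the old vector units did in one sweep, except here thousands of GPU threads each handle one element.

    // Minimal sketch, not from any Ageia/PhysX SDK: a CUDA kernel that
    // scales a large array by a constant, one thread per element.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
        if (i < n) data[i] *= k;                        // each thread touches one element
    }

    int main() {
        const int n = 1 << 20;                       // about a million values
        float *d = nullptr;
        cudaMallocManaged(&d, n * sizeof(float));    // unified memory, visible to CPU and GPU
        for (int i = 0; i < n; ++i) d[i] = 1.0f;

        scale<<<(n + 255) / 256, 256>>>(d, 3.0f, n); // launch many copies of the kernel at once
        cudaDeviceSynchronize();                     // wait for the GPU to finish

        printf("d[0] = %f\n", d[0]);                 // prints 3.000000
        cudaFree(d);
        return 0;
    }

Compile it with nvcc and it runs on any CUDA-capable card; the point is simply that one small program gets launched across thousands of data elements at once, which is the model that eventually absorbed dedicated physics hardware.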

Awkward Years

However, there was that awkward in-between stage when personal computers needed fast physics simulation, but it wasn’t feasible to put array processing and video graphics on the same board. Around 2006, a company called Ageia produced the PhysX card, which promised to give PCs the ability to do sophisticated physics simulations without relying on a video card.

Keep in mind that when this was built, multi-core CPUs were an expensive oddity, and games were struggling to manage everything they needed to do with limited memory and compute resources. The PhysX card was a “PPU,” or Physics Processing Unit, and used the PCI bus. Like many companies, Ageia made the chips and expected other companies, notably Asus, to make the actual board you’d plug into your computer.

The Technology

The chip had 125 million transistors on a 0.13 micron process. With 128 megabytes of 733 MHz GDDR3 memory on board, the card drew about 20 watts and needed an extra power connector. The price was around $300, quite a bit for a card that did absolutely nothing without specialized software.

There was a physics engine, NovodeX, that developers could use to handle game physics on either the chip or a pure software stack, so we presume that’s what most game makers would use.

Of course, today, a 20 watt GPU with an extra power connector isn’t enough to make you look up from your screen. But times were different then. According to contemporary reports, the chip had two terabits per second of memory bandwidth. Watch the demo video below. It won’t knock your socks off, but for a computer system from nearly twenty years ago, it was pretty good.

Aftermath

So what happened? Well, the company caused quite a stir, although it isn’t clear how many people ponied up to get better performance on a handful of games. The boards were a thing for only about two years. Ultimately, though, NVidia would buy Ageia and adapt its technology to run on NVidia hardware, and so some part of it lives on today as software, and you might find some games that still boast extra PhysX features.

If you want to see a direct comparison of before and after hardware acceleration, check out the video below. Don’t forget to note the frame rates in the bottom right corner.

These days, you are more likely to get heavy processing via CUDA or OpenCL. While GPU architectures vary, they will all outperform this early entry into hardware acceleration.

27 thoughts on “Retrogadgets: The Ageia PhysX Card”

    1. The two aren’t mutually exclusive. Machine learning can accelerate physics calculations such as fluid dynamics. There are examples of fluid simulations that greatly benefit from machine learning instead of just brute forcing it. For real-time physics you need such tricks. It can be quite realistic. Enough for video games.

  1. Those ‘additional processing cards’ always seem to pop up whenever computer technology moves into a new area. In the case of PhysX, it was the need to do physics calculations in games without the rest of the hardware being powerful enough. Another good example is the MPEG accelerator cards of the 1990s, which allowed for movie encoding and playback; that has moved entirely into graphics cards. Now that AI seems to be catching on big time, I do wonder if we’ll see AI accelerator cards before that task gets moved into the graphics card, or somewhere else, too.

    1. TPUs/NPUs for accelerating AI are already out there, either as a separate card or on-die with the CPU.

      I expect the next “additional processing card” of the day might be an FPGA that can be reconfigured into application-specific hardware by whatever application you’re running. For example, John the Ripper can already do this to accelerate hash brute-forcing.

    2. Around 2017, a third x86 company made a dedicated AI-accelerated CPU. By the time they could come anywhere near a product, GPU AI had already taken over. There is a lot of information out there as to why GPUs were already better equipped due to their architecture. They have since been purchased and all but gutted; see LTT for a look at it.

      PhysX cards were super cool. Yes, they didn’t catch on, but they pushed both companies into figuring out physics on their own platforms as time went on. It took from Half-Life 2 until well after the PhysX cards came out for them to actually achieve it, though.

      To this day, I’m still not really impressed with physics in gaming. Yes, it’s better these days, but I don’t see anything that really adds much to the experience. Physics needs to be something you discover in a game, where making changes leads to something different happening; however, it’s mostly so simple that after the initial discovery, there’s not much worth playing with. It’s kind of sad that CS2 has the most functional example of physics in a game with its new smoke grenade (unless I’ve overlooked some other example here).

      The reason “AI” architectures are being shoveled into other SoCs is that it’s cheaper to piggyback them rather than make a dedicated chip, it forces consumers to pay for it, and there is a profit motive, considering that there is no way for the end user to completely control it at this juncture. Not impressed, don’t care, don’t want it in my system, as it’s functionally useless.

      I use “AI” in quotes because it’s a term that’s thrown around for complex algos rather than true AI. True AI doesn’t really exist. In its current form, it’s being used as basically a buzz/marketing word. I do not accept this form of misinformation/compelled speech.

  2. I remember the game Bet on Soldier: Blood Sport came out with an expansion pack around 2005-7 that included a liquid/fire-based weapon specifically to utilise a PhysX card.

    It looked amazing in the videos, with the liquids functioning like actual liquids rather than just coloured patches on the floor.

    I couldn’t afford the PhysX card at the time, being 15, and they vanished shortly afterwards as Nvidia snapped them up in ’08, and the rest is history.

  3. The hardware may have disappeared but PhysX is licensed to many computer games to this day. Look for the splash screen on start-up. On consoles the runtime is typically tailored to the specifics of that console.

  4. You forgot the part about the PhysX card being a scam :) The actual ASIC product was slower than the CPU.

    https://www.anandtech.com/show/2001/4

    Faster in the provided test, slower in a real game. AFAIR, even the “PPU-only” CellFactor demo was later cracked to run CPU-only faster than with the card.

    This, of course, was ideal for Nvidia; they revel in gimmicks as long as they can hamper the competition. The PhysX library was famously compiled to use the x87 FPU when every CPU on the market had vectorized floating-point units (SSE): https://arstechnica.com/gaming/2010/07/did-nvidia-cripple-its-cpu-gaming-physics-library-to-spite-intel/

    1. Whenever a game entered software mode with the early libraries, it often would both:
      A) Significantly reduce the quantity of simultaneous physics entities on screen at any time (fewer particles, fewer points on clothing, more rigid items, no deformation), AND
      B) Greatly simplify calculations at the cost of accuracy for CPU processing.

      It genuinely wasn’t a scam. It was more like ray tracing is today: something you used not to accelerate performance, as the silly PhysX demo showed, but to amp up fidelity, as the games showed, albeit at some cost.

      And yeah, the original codebase was written before x87 deprecation, the full propagation of SSE2 (AMD was rather slow to adopt it), and the rise of dual-core processors.

    2. I don’t remember those specific claims/details, but I do remember NSHITVIYA specifically crippling their own GPU drivers if they were installed as a secondary GPU next to an ATI one (for those specific gamers who wanted to benefit from their newer ATI card while offloading PhysX onto an older NSHITVIYA card).

      Personally, I just remember Metro 2033’s performance dropping drastically with PhysX enabled but without NSHITVIYA/PhysX hardware, even with more than two CPU cores (PhysX CPU mode or something?).
      Not entirely sure even that ^^ is correct, but “I think” that’s how “I remember it”…

  5. I remember the discrete PhysX cards as fondly as I remember ATI’s Truform tech from about 5 years before. They’re both great ideas that were meant to solve problems that wouldn’t be problems for long. Truform was made to artificially increase the complexity of low/mid-poly geometry in hardware while rendering (without having to re-compile or re-design the meshes) all without slowing down the card. In reality, it usually just made circular objects more circular by subdividing geometry and averaging the new faces with surrounding faces (rounding objects out a bit). It wasn’t necessary once graphics reached the PS2 era, and character models went from MGS1 Snake to MGS2 Snake. PhysX didn’t become obsolete like Truform, but it did end up being easier to get the GPU to pull double duty by allocating 30-40 cores to doing all the physics calculations.

  6. Hm. Personally, as an observer, I don’t think there has been much progress since the late 2000s.

    To me, as an unbiased observer, it’s the same old technology, still. Just more and more transistors and shader units, and even more power consumption.

    One notable evolution was the introduction of Tessellation in Direct3D 11, maybe.
    Or the introduction of Mantle, Vulkan, and Metal.
    But otherwise, it was pretty bland.

    By contrast, the 90s were full of diversity and competing technologies.
    Back then, you were still looking forward to progress, despite the ugly 3D graphics (hi, N64 and PlayStation).
    It wasn’t only about ever-increasing resolution and photorealism.
    It was about fun and fresh new concepts.

    1. Vulkan as used by Proton is very interesting. It seems to keep frame pacing better and is even more compatible running old PC games than actual Windows is.

  7. I had a Sherlock Holmes game for my laptop and the game required PhysX. Fortunately, my new laptop had it, and I was able to load the game.

    My father was 70 years old and a HUGE Sherlock Holmes fan when we started. I handled the “tech” side (“Do we have enough RAM?”, “What’s this PhysX nonsense?”, and so on and so forth). But we got it all set up just fine.

    So Dad and I were working for a company and were stuck out on Interstate 4, near Disney Springs. Lots of people coming and going, but all Dad and I had to do was watch a pipe up the road that had gotten washed out and could have been a problem if people tripped or fell. But fortunately, NO DAMAGES REPORTED!!!

    So for the rest of the time, we played this Sherlock Holmes game (with PhysX!). My 70-year-old dad sat with me each night in the van to watch the gameplay on the laptop screen while looking up every so often to see if the ditch was still there. (It was. Surprisingly, we never lost a ditch.)

    All in all, it wasn’t much, but it was fun. As the last thing I’d get to do with him before he passed two weeks later, it was special. I got to see his love for Sherlock Holmes stories. He got to see that I knew how to build and tweak PCs (which was my job). And he got to share his love for stories he’d read when he was 12 to 14.

    This is probably not the type of story expected, but I hope it’s accepted. 😏

  8. Ah, I had one of these little fellas: the Asus PhysX P1. It was an impressive little card that had a massive impact on the games that supported it.

    But then NVidia happened, swallowed Ageia up, and turned PhysX from a generally system-agnostic AIB into a proprietary component of CUDA that only their GPUs could run. And they didn’t do it that well, as the PhysX code cannibalized rasterizer performance, to which Nvidia’s main suggestion was to just go SLI…

    Physics took the back seat and never really went as far as it did back then. Now it’s either ray tracing or “AI” that gets pushed.

    1. > massive impact on the games that supported it.

      Garbage. The impact of PhysX is actual _garbage_ ;-) It’s only good for stuff that has no influence over gameplay, so visual clutter like garbage cans, dust particles, water ripples, etc. It doesn’t even sync in multiplayer games because that would be too much data to pump through, so other players see different garbage to your garbage.

  9. The part at 1:35 is particularly bad for a product that claims to simulate physics. The barrels start falling gently, and then suddenly one speeds up and flies into orbit. Where did that energy come from? It was a common problem in old games (and still is, sometimes; looking at you, Goat Simulator).

    I too was hoping we’d get FPGAs in consumer PCs at some point, especially when Xilinx and Altera were taken over by AMD and Intel.

  10. Physics is non-value-added for game companies. Despite processing power having increased, there is simply no demand for real-time physics processing. So both hardware and software (like the DMM engine from Pixelux) simply aren’t needed in the industry.

  11. Sometime in the mid-2000s, I remember stumbling across “Novodex Rocket,” a physics tech demo pack from the company that Ageia eventually bought before creating PhysX. I remember being blown away even then by some of the stuff on display, though my PC could barely run a lot of it. It was a free download at one point, but all the links to it died long ago. I still look every once in a while, hoping someone’s found and uploaded it to archive.org or the like.
