[Linus Tech Tips] undertook a fun experiment a few years back. By running multiple virtual machines on a single tower PC with tons of RAM and GPUs, it was possible to let seven gamers play on a single rig at once. [CelGenStudios] found the idea intriguing, and has theorised that the same feat could be possible on mid-1990s Silicon Graphics hardware.
The idea is to use the Origin 2000 server as the base. These didn’t ship with any form of video output, or even a keyboard and mouse interface. However, by substituting in the IO6G module from the Onyx2 machine, and SI graphics cards from the Octane, it’s possible to get graphics and input up and running. With multiple graphics cards and a few CAD DUO boards installed via a PCI adapter called the “shoebox”, there are provisions for up to four separate monitors, keyboards, and mice. With all this hardware, it’s theoretically possible for four users to log in to the X server running under IRIX on the Origin 2000 machine. Then it’s a simple matter of firing up four instances of Quake and a dedicated server, and you’re up and gaming.
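The software side, at least conceptually, is little more than process juggling: one dedicated server, plus one Quake client per seat pointed at its own X display. The sketch below (in Python, purely for illustration) shows the general shape of the idea; the "quake" binary name, its flags, and the display numbering are placeholders rather than the actual IRIX port's command line.

```python
#!/usr/bin/env python3
"""Sketch of the four-seat Quake launch idea. The "quake" binary name,
its flags, and the display numbering are placeholders, not the actual
IRIX port's command line."""
import os
import subprocess

SEATS = 4
SERVER_ADDR = "127.0.0.1"  # the dedicated server lives on the same box

# One dedicated server for everyone to join (hypothetical flag).
procs = [subprocess.Popen(["quake", "-dedicated", str(SEATS)])]

# One client per seat, each rendering on its own screen of the X server.
for seat in range(SEATS):
    env = os.environ.copy()
    env["DISPLAY"] = f":0.{seat}"  # screen 0..3 of display :0
    procs.append(subprocess.Popen(["quake", "+connect", SERVER_ADDR], env=env))

# Keep running until every process exits.
for p in procs:
    p.wait()
```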
[CelGenStudios] goes so far as to explore the limits of the supercomputer-grade hardware, suggesting that seven players or more could be possible. Unfortunately, SGI hardware isn’t easy to come by, nor is it cheap, even decades after release, so thus far the concept remains untested. We’d dearly love to see such a setup happen at QuakeCon or a hacker con, though, so if you pull it off, you know who to call. We note there are a few Origin 2000s at the Jim Austin Computer Collection, so perhaps they might be the ones to achieve the feat.
In the meantime, check out a practical exploration of the concept on modern hardware with the original [Linus Tech Tips] project. The basic theory is simple: create a hugely powerful PC with a beefy CPU, plenty of RAM, and one graphics card for each of the seven players. They ran multiple virtual machines and managed to deliver a full seven-player experience running off just one CPU.
Multi-head Linux was available over 10 years ago, so what’s the big deal? HP even sold PCs pre-configured with multi-head support, but only to 3rd-world countries, i.e. they refused to sell into first-world markets, and the likely reason is Microsoft contracts preventing them from selling anything running Linux.
Note: I saw it mentioned years ago that HP created a Linux-powered PDA running their Java Chai software on top of a Linux kernel (years before Android), but the project was terminated internally because of the revenue stream they’d lose from Microsoft if they shipped it.
http://www.linuxtoys.org/multiseat/multiseat.html
Or because it had a very niche use case of “can afford several monitors and keyboards, but only one computer”
Also, “While the system worked very well, it was extremely unstable. In particular, we got a kernel oops fairly often when we logged out.” As usual for a Linux solution, it was possible, but generally not very functional or complete.
Right, because computers were very cheap so it made no sense to just buy 4 or 8 or 16 computers instead of one.
If only Linux were more stable we’d probably see more companies running their products on it, like routers, smart TVs, or even services like Google, Facebook etc. And if it were any good they’d probably have it running robots on Mars by now. Maybe some day.
As I said, the reason why it wasn’t sold elsewhere is probably because the savings were marginal and it only introduced problems – not that Microsoft somehow prevented it.
When you buy the “infrastructure” for let’s say 8 seats in an internet cafe, you’re paying for a lot more than just the computers. The actual desks and seats (!) cost hundreds of dollars; the lighting, the AC/heating, the cleaning services, security, insurance, maintenance and upkeep, rent… everything around the setup costs money, and the one-time cost of the actual computers is basically insignificant. The whole multi-head configuration with Xorg was a hack in the first place, and as they pointed out, even Linux software didn’t support it well, so you also have to figure in the cost of competent administration and development support to deal with all these issues, and explain to the customers why they don’t have audio with their YouTube video (only one sound card for 8 seats), or why they can’t play/burn a CD (because it’s a dumb terminal)… etc.
Then two years later the internet cafe died because everyone got smartphones and laptops.
They might have tried to sell it in Africa or India to schools or other venues that have to pinch every penny, but then again it was such a tiny, shifting market segment, already filled by charities dumping second-hand PCs on anyone, so there was really no place for such systems in 3rd-world countries either.
Can’t tell if sarcasm or if you are unaware that most of those things do run Linux…
Are you unaware that most of computing, phones, TV boxes, TVs, cars, the internet, Christ, almost everything is running on Linux?
woosh!
A monitor is a ‘long term use’ device. Good monitors actually last a decade or two, and that is far longer than the PC that came with them.
You got the power circuits and budget to buy 10 Silicon Graphics Origin2000s?
By the way, this computer is not 10 years old, but 25!
These machines were usually not used in a multi-head configuration, but rather as a computing workhorse in the data centre, accessed remotely. Hence why they had to make a “bitsa” computer by grabbing bits from an Onyx2.
One of the Gentoo developers had one, used to use it for doing the big-endian builds of Gentoo stages since it had enough RAM to just mount some tmpfs somewhere to point catalyst at and do the entire stage build out of RAM. It was also available for testing, but special permission was required as the machine was expensive to run.
Yep, indeed. I used to operate and then decommissioned an Onyx2 at VRLab in Lausanne (EPFL, prof. Thalmann if anyone wants to google it). That machine was a relatively “small” config, with only 2 graphics pipes (I believe it could support up to 4 if you had two racks – one had the CPUs + HDD and one pipe, then the second cabinet held two more pipes), each pipe allowing for 2 monitors. We used this for real-time animation, running Maya (we are talking 2001 or so); many places used these to run CAVEs and virtual reality setups with multiple projectors.
The Origin 2000 series is the same hardware but has no “graphics card” (or rather cards in this case – each pipe was a set of large 19″ modules cabled into the rack).
And re power and costs – yep, not cheap to either own or operate. That was the main reason we decommissioned it in 2004 or so. It was still running fine but was getting long in the tooth (SGI had introduced the Onyx3/Origin 3000 series in the meantime, then pivoted completely to supercomputing with the Altix series based on Intel CPUs) and the operating costs were getting unreasonable at a time when you could start getting reasonable accelerated OpenGL graphics on desktop machines, not only SGI workstations.
The whole thing requires 380 V three-phase power (it has one of those huge plugs), and the central part just under the LCD, about where the grille is, hides an absolutely enormous horizontal fan that blows air through the rack vertically. It makes noise like a small jet engine while running; definitely not designed to be standing next to your desk.
Above and below the central bit with the console and fan (which are permanently fixed in the rack) are two identical spaces in each cabinet which can host various modules (“bricks”) – CPUs, graphic pipes, storage, etc. I believe the largest configurations used up to 4 full 19″ cabinets, connected by high speed cable harnesses.
Operating costs also included about 20k CHF in annual maintenance subscription to SGI, if I remember correctly. You couldn’t really run the machine (or anything SGI, for that matter) without it, because if anything broke, you were screwed. Unlike a PC where you can dash out to the nearest store to pick up a new disk drive or a stick of RAM, here everything was proprietary and SGI was the only source.
The same for IRIX updates (no, these large SGI MIPS systems never ran Linux to any reasonable degree). That said, SGI’s support and maintenance were first class, nothing like the common tech support gauntlet you get with more “pedestrian” PC hardware vendors.
BTW, the LCD – that was a status display and console, showing, apart from a fancy animated SGI logo, things like the load of the machine, temperature, etc., and it let you power the machine up and down.
I’ve done something similar with the diskless graphics terminal project http://www.ltsp.org back in the day. I remember we bought used Pentium I PCs with ATI graphics cards and a puny 16 megabytes of system RAM. Once we finished setting up the terminals, to my surprise the OpenGL game bzflag ran accelerated on them, I suppose due to the well-supported ATI video card. I haven’t tried it with more than 2 terminals but I’m positive it would work.
If I remember correctly, there was a GLX(?) extension that allowed OpenGL use over the network for X11. That was circa 2005-2008 or so. That use case stopped working around the time shaders started being actively used, because they didn’t transfer over the network well, and there was also an architecture overhaul.
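For what it’s worth, that mechanism is GLX indirect rendering: the OpenGL calls get serialized over the X11 wire to whatever server DISPLAY points at. A minimal sketch of poking at it today, assuming a Mesa-based client, might look like this (the host name is made up; LIBGL_ALWAYS_INDIRECT is the Mesa switch that forces the over-the-wire path):

```python
#!/usr/bin/env python3
"""Sketch of OpenGL-over-the-network via GLX indirect rendering.
"sgi-box" is a made-up host name; any GLX demo program will do."""
import os
import subprocess

env = os.environ.copy()
env["DISPLAY"] = "sgi-box:0"            # render on a remote X server
env["LIBGL_ALWAYS_INDIRECT"] = "1"      # Mesa: force the indirect (wire) path

# glxgears is the classic GLX smoke test; its GL commands travel over X11.
subprocess.run(["glxgears"], env=env)
```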
I really miss the time when enterprise hardware spent effort on design. SGI, Sun… Even HP to some extent. Now it’s just boring grey boxes.
I loved when I got a chance to look inside those systems and see the quality of design and airflow management. But the Wintel economy drove the public into believing crap was good enough. And when you start someone on crap, they only know crap and think that crap doesn’t stink because it’s what they know and use.
Funny how it was Apple which rose up with a user base able and willing to pay for quality hardware and software which busted the Wintel mindset with a little MP3 player and its companion software.
Would Windows even exist in the server space today had virtual machine technology not allowed redundancy in a single box for ever-crashing Windows software? Two 9’s are good enough though. Right?
” But the Wintel economy drove the public into believing crap was good enough.”
Affordable enough. Doesn’t matter how wunderkind your hardware is if only a minority can afford it.
Now now, don’t ruin his sense of elitism. For some people it’s eternal August 31, 1993 and that’s the way they like it.
Sorry dude, you have no clue what you are talking about. We are talking about high-end datacenter hardware, not consumer smartphones or game PCs.
Have you checked how much a commodity Wintel server with comparable capability cost back then (if one even existed to begin with)? I guess not.
Apple wastes time, money, and effort making pretty parts no user will ever see, while using the same electronic components every other company does. That’s why 1980’s through early 2000’s Macs suffer from the same plague of bad capacitors as the PCs. Then there are all the electrical design gaffes Apple has made – and described in detail by Louis Rossmann while he repairs and often improves on them.
Nonsense. SGI hardware was expensive but it was not significantly more expensive than comparable machines from e.g. Sun, Compaq, HP or others.
But if you have seen the construction of e.g. SGI or Sun server or workstation, where everything is neatly fixed in the rack, there are NO cables inside at all (everything plugs into backplanes), everything is built like a brick, with a lot of metal both for shielding and durability and at the same time easily accessible for maintenance and replacement – and then compared that to an equally expensive Compaq or Dell server where dangling power and IDE cables (no SATA back then) were the norm, where getting access to many things required disassembling half of the machine, lot of cheap plastic that cracked and broke over time, no proper locking connectors …
Those SI cards don’t support texture mapping, so you won’t be running GLQuake. It’ll be all software rendering.
You can add a texture module to SI/SE to make Sx-T, and that will run glquake.
Man I wish someone would have told me that old sgi junk would be worth something some day. When pee cees took over for most professional engineering tasks and the software house I worked for went to Windows only, we had a ton of old unix stuff that we could not get rid of. I have a few pieces but for the most part no one wanted that stuff.
Ok, then I hereby give you fair notice – EVERYTHING you own will *eventually* be worth more than it is now. You have been told.
The “eventually” is the trick – you have to keep your JUNK until it becomes “living history”. Go have a chat with the retro gamers and vintage computer folks to keep you company. :-)
I’d love to believe that is true (and to behave as if it were true), but once you factor in the cost of storing said junk until it does become treasure, it tends to be worth less than the money (purchase cost + storage cost) you’ve put into it.
Also, the value goes up as quantities go down.
There were a few hundred Apple Is, and I gather a good portion were returned to Apple (and presumably scrapped) when owners upgraded to IIs. That leaves few, and thus they are expensive.
Something that was common would take so much longer to become rare by attrition, and may never reach that point. I don’t keep my KIM-1, OSI Superboard, and Radio Shack Color Computers because I might someday make some money. I keep them for me.
Anyone want to buy a pet rock? :-D
You just might have to wait a thousand years, so it becomes valuable to future archeologists.
Although they will not have any trouble finding the same stuff sealed up in landfills.
I made good profit on an old Dell workstation with the fastest P4 Extreme Edition CPU. I swapped in an AGP FX5200 card I had and maxed out the RAM. Then I installed XP Pro. Why? Because I found a guy who wanted a super duper XP box to play some old wargame that ran barely tolerable on whatever contemporary hardware the Joe Gamer could afford when the game was new. The P4 Extreme lit a fire under the game.
IIRC the workstation was a freebie and I paid probably $10 to have the CPU voltage regulator caps replaced. Sold it for $125.
Does (or did?) anyone actually *like* the SGI machines? We had one in the biochemistry lab I worked in that was so user-unfriendly it basically sat unused except for the single piece of software we bought it for. That was 3d visualization software for proteins, BTW. It had, basically, a single peripheral port of some kind so the keyboard would work, and a network / LAN port. That was it. If you wanted anything more, it was breathtakingly expensive and any software had to be loaded over the network, there was no floppy, CD or any other way to get software on there. In the weird in-between era of the late 90’s /early 2000’s, the internet wasn’t as easy to use so, at least for us, remote install of software was impossible. But it was late enough that a CD drive would be common too.
I realize for people on this site, those things may be trivial but man, what a pain.
Also it was physically heavy AF.
No wonder they didn’t sell many.
Sounds like you had an Octane, as they didn’t have a floppy or CD drive. They did have a SCSI port so you could daisy-chain hard drives or CD drives. SGI sold tons of systems to companies like Pixar. When the Octane was released you could put 8 gigs of RAM in the machine. Most PCs would top out at 256 MB.
Thank you for your thoughtful and insightful comment. Not sarcastic. I mean it (especially compared to the responses below).
SGI machines were costly, and needed to be chosen correctly. Many places would buy the machines believing they could use them for other things, as normal PCs, and later get angry when discovering it was not that easy.
I still need to find some use for an O2 (the machine has some style), but I have a couple of old Workstation 320/540 machines that I don’t feel that emotional about. Finicky memory, and the thing runs Windows on Pentium 3 processors, so it doesn’t have that “different” look.
Pity I didn’t have hobby money left when the recycling shops around here were selling old SGI hardware. Could have gotten me some Octanes or Fire… Saw a guy dismantling one with a hammer to sell the metal parts as scrap.
Yep, exactly this.
I used to run a lab that had both the Onyx2 mentioned in the article and a smattering of O2s, Indigos, Indigo2s, some Octanes and Octane2s, and I believe we even had an old Iris stashed somewhere.
The O2 had quite a unique architecture internally; if you needed something for real-time video manipulation, that was the box to use.
And yes, if someone wanted to use an SGI as a PC for office work beyond checking e-mail that was a poor idea. As with most Unix workstations of the era.
They sold a couple billion dollars of hardware a year in the 90s. I’m not even sure where to start with your comment…
It’s like listening to a zoomer complain about having a Ferrari in their garage because no one knows how to shift it and it doesn’t have cup holders.
No one wants to read a point-by-point response to everything you said, but basically you and your school just didn’t know how to use the hardware or the software. Literally everything you’re typing is incorrect. It has a very nice GUI anyone’s mom could get around, and was far more user friendly than any contemporaneous OS with the exception of MacOS.
The professionals that used the systems still like SGI to this day. It’s just not a club you’re in.
>your school just didn’t know how to use the hardware or the software
>It has a very nice GUI anyones mom could get around
You’re arguing it both ways.
It’s clear you and your school had no clue how to use SGIs. When I was in military aerospace flight test we had entire mission control rooms populated with SGI machines and the engineers loved them. Slick GUI, excellent tool set for real-time data analysis, etc. There was nothing out there that compared to them. Linux was still only fit for cellar dwellers and Windows was a joke.
Then there were the Onyx server farms; again, no complaints from the people using them for real work.
Yes, there is certainly a difference between research biochemistry lab and military aerospace flight testing! Haha.
I wonder what you would use that CD drive or floppy for, given that there was no software for SGI that you could just load from floppies or CDs (apart from the OS). Everything else was easy to copy over using network shares or FTP.
It wasn’t a PC, it wouldn’t run common office software (like MS Office …).
As with all Unix workstations of the era, it was designed for being a part of a network and intended to be managed by a sysadmin who prepared the system for the users – loaded software, created accounts, set up network shares or whatever else was necessary. Completely different mindset than the “DIY” mentality of having a PC with administrator access on one’s desk, as was common.
So no wonder you had issues if someone had bought one of these machines for your lab thinking it could be used as just another PC.
And yes, they were expensive and really heavy – there was a ton of metal inside all of SGI hardware. They were built like tanks.
It never even occurred to me that the software wasn’t available on physical media. I was so used to it coming that way for Mac and PC. Makes sense I guess, thank you for the comment. We were (are) biochemists and bought it for the single purpose that it was actually very good for. Well, until not that many years later when my Mac mini could use academically obtained software to do basically the same thing, all for $1000 or so.
cheers
–craig
Then your IT department didn’t know what it was doing. Most users loved the SGI systems, unless you were dealing with the VisualPC, which was a dog of a system that should never have been made. But IRIX can still do things modern PCs can’t, and the machines were very expandable.
Why go to all this trouble when you could run multiplayer quake on a properly configured SGI Onyx2?
If anyone wants to pay shipping from PA for an SGI Altix 350, it’s all yours. I’m quite sure it works, but I’ve never been able to find anything particularly useful to do with an Itanium-based system, especially considering the power budget on those things.
I did this back in 2000 with a PC running Debian.
AMD K6-2 500MHz
1.5GB PC-133 RAM
3DFX Voodoo 5 5500 64MB AGP
three ATI Rage XL 8MB PCI
A cheap BestBuy brand USB 2.0 four port PCI Card
I had been using this rig for flight simulation, but for fun I did a multi-seat configuration in Debian and had four X sessions going. I was able to log into another machine that was hosting a Quake server and we had a 2-on-2 match going with some friends; they brought their USB keyboards and mice. It worked very well too, except there was no audio, but we had some speed metal going on the stereo in the background.
The hard part was getting X11 to play nice with a multi-seat setup. After all, Quake doesn’t require much hardware and is doable with software rendering on an SVGA card with 2 MB of VRAM.
So if four sessions can be done on a single-core x86, it shouldn’t be a problem on a dual-CPU 195 MHz R10000.
Today I run a Dell R410 with Proxmox and have four HP multi-seat clients connected, so I have HID and video/audio for four VM clients. It came in handy when the kids had to stay home during the COVID thing, and it’s awesome for retro gaming. A little config tool called EasySeats makes it much easier to set up a multi-seat environment.
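For the X11 part, the trick (then with XFree86, and still now with Xorg) boils down to running one X server per seat, each on its own display and virtual terminal, with a config file that binds it to one GPU, keyboard, and mouse. A very rough sketch of the launcher side, with made-up config paths and modern Xorg flags (the XFree86 4.x spellings differed):

```python
#!/usr/bin/env python3
"""Rough sketch of a multi-seat launcher: one X server per seat.
The per-seat config files are hypothetical; they are where each server
gets tied to one GPU and one set of input devices. Needs root."""
import subprocess

SEATS = 4
servers = []
for seat in range(SEATS):
    servers.append(subprocess.Popen([
        "X", f":{seat}",                              # one display per seat
        f"vt{7 + seat}",                              # one virtual terminal per seat
        "-config", f"/etc/X11/xorg.conf.seat{seat}",  # hypothetical per-seat config
        "-novtswitch",                                # keep servers from fighting over VTs
    ]))

# Keep running until every seat's X server exits.
for s in servers:
    s.wait()
```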
It’s no surprise, OpenGL was designed with networking in mind.
Ex-SGI engineer here. This is all from memory going back two decades, so corrections invited.
The multi-head config was heavily (but not exclusively) driven by the needs of flight simulators, which in that era regularly incorporated SGI Onyx systems. They would use the multiple channels to generate the surround projection views which were shown to the pilots. Fun fact: they also had support for “calligraphic lighting”, an overlay projected for things like high-brightness runway lights. Typically they used a library called Iris Performer, which was custom designed for the needs of real time simulation, and abstracted the various SGI graphics pipelines away so the end user didn’t have to worry about them.
BTW, the Origin 2000 was a CC:NUMA system. This means that while each processor had a local memory, it was really a “cache” into a huge, flat virtual memory space shared between all processors. When a particular processor needed a particular span of memory, the data would be transparently migrated to the local memory of the processor which wanted it. Memory replication would happen if multiple processors wanted read-only access to a particular location. This was over a proprietary interconnect called LegoNet, which SGI marketing ended up selling as CrayLink, pissing the SGI engineers off as Cray had nothing to do with it. The acquisition of Cray by SGI was badly managed by SGI management, and there was a lot of animosity between the two orgs.
This system could go up to 128 processors. Its follow-up, the Origin 3000, scaled to thousands of processors and terabytes of physical RAM, although most of those super-huge configs were for orgs which didn’t want them publicized.
I’m not vaguely surprised the Origin 2000 could do this.
Hi Ian, nice to see an ex-SGI engineer here. I have a question I hope you may be able to answer, at least roughly. These days we have a small but active SGI community (sgi.sh) which maintains new software development for SGI (called Sgug-RSE; search on Google to find the GitHub), providing an entire build toolchain with GCC etc., and people have been busy porting thousands of modern Unix software packages to IRIX, for fun!
We currently struggle with a bug when it comes to SGI’s rather unique version of X and the way GL works within it. When we try to compile software that requires GL hardware acceleration, SDL doesn’t seem to be able to open a hardware-accelerated window. This bug is really holding us back and we could really do with some advice from someone like yourself! Would you, or perhaps one of your old colleagues, be willing to help us out a little? Maybe this is something that might ignite a bit of nostalgia and fun for you! :)
Anyhow, yes please feel free to join sgi.sh forum and perhaps enter some dialogue about contributing to sgug-rse :)
Thank you!