The best gaming platform is a cloud server with a $4,000 graphics card you can rent when you need it.
[Larry] has done this sort of thing before with Amazon’s EC2, but recently Microsoft has been offering beta access to some of NVIDIA’s Tesla M60 graphics cards. As long as you have a fairly beefy connection that can support 30 Mbps of streaming data, you can play just about any imaginable game at 60 fps on maximum settings.
It takes a bit of configuration magic and quite a few different utilities to get it all going, but in the end [Larry] is able to play Overwatch on max settings at a smooth 60 fps for $1.56 an hour. Considering that buying the graphics card outright would set you back the equivalent of roughly 2,500 hours of play time, this is a great deal for the casual gamer.
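If you want to sanity-check the break-even point yourself, the arithmetic is short. Here is a rough sketch using the figures above (not exact Azure pricing, and ignoring bandwidth costs):

```python
# Rough break-even between renting a cloud GPU and buying the card outright,
# using the figures quoted above.
CARD_PRICE = 4000.00   # approximate price of a Tesla M60-class card
HOURLY_RATE = 1.56     # beta rate [Larry] paid per hour

break_even_hours = CARD_PRICE / HOURLY_RATE
print(f"Break-even: {break_even_hours:.0f} hours of play")            # ~2,560 hours
print(f"At 2 h/day: {break_even_hours / 2:.0f} days to break even")   # roughly 3.5 years
```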
It’s interesting to see computers start to become a rentable resource. People have been attempting to stream computers for a while now, but this one is seriously impressive. With such a powerful graphics card you could use this for anything intensive. Need a super high-powered video editing station for a day or two? A CAD station to make anyone jealous? Just pay a few dollars of cloud time and get to it!
My friend (on a system I built) plays this just fine on a Skylake i3 with a GTX 750 Ti and no SSD – a sub-$500 system. That’s 320 hours of rental – something a large number of users have already beaten in Overwatch alone (not to mention other games).
If you have a PCPartPicker list of parts, I’d love to see it. A young friend may be getting a small computer (cancer diagnosis, I’m helping him raise funds for something he wants but hasn’t picked what yet) and I’d love to see other folks’ functional low-price PCs. I’ve seen some, but my tendency towards uATX loaded to the gills (6700K, aiming for 4×32 RAM, etc.) means my knowledge is in the other direction in price.
Just to be on the safe side, aim for an Intel i5-6500. Some games these days are asking for an i5 as the recommended *minimum* hardware. Depending on the size of the games he wants to play, a small low-cost 120~240 GB SSD would be a better idea than a bigger HDD, cutting down on loading times and increasing gaming time.
To lower the costs a bit, you could also go with an Intel i5-4460. That could also save some money on the motherboard and the RAM compared to a Skylake/DDR4 setup.
Lol – in Australia a serious gaming computer would cost less than the monthly fee for a 30 Mbps data connection – and that’s assuming you could actually get such a connection.
Lol quite true.
I am in US (rural) and I only get 1Mbps down, and 100kbps up. :(
You need some friends in the city with a pigeon loft for big downloads then…
I can confirm this.
Try explaining to migrant parents that in 2016 you don’t run an internet connection through a phone line.
This was the “next big thing” in 2009 and still hasn’t gone anywhere. Renting time on a computer you can’t afford has been around for as long as prohibitively expensive hardware has existed.
This is neat and all, but today must be a slow news day.
And it was the next big thing fresh from 1974 then.
A little short-sighted don’t you think? Some things just take longer than others and it’s not ready for the masses yet.
VR has been around since the 90’s. Though limited and mostly made up of proprietary expensive hardware. Now look where we are.
Oh, I’m not passing judgement, I’m just saying it hasn’t gone anywhere yet.
The reason it hasn’t gone anywhere is because there’s still a huge network lag between the terminal and the server, caused by the de/encoding of the video as well as the network layer. Then there’s also the fact that the image quality tends to be shit because of the heavy lossy compression.
That’s why dumb terminals have always been absolute shit for any sort of interactivity. If the server is right next door and you’ve got a 14+ Gbps pipe to it (HDMI bandwidth), then yeah – it can work adequately. The problem is you can’t place the servers geographically close enough to all the customers without running the costs up.
Even on a local computer, it takes 80-100 ms for an input action to turn into a display output. If you add 100-200 ms of network and off-site processing lag to that, all games play like you have a TV with bad input lag.
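To put rough numbers on that (the per-stage figures are just the estimates from the comment above, not measurements):

```python
# Rough motion-to-photon budget for local vs. cloud-rendered play,
# using the ballpark figures from the comment above.
local_ms = (80, 100)          # input polling + game loop + render + display
cloud_extra_ms = (100, 200)   # encode + network round trip + decode

cloud_ms = tuple(l + e for l, e in zip(local_ms, cloud_extra_ms))
print(f"Local play: {local_ms[0]}-{local_ms[1]} ms input-to-photon")
print(f"Cloud play: {cloud_ms[0]}-{cloud_ms[1]} ms input-to-photon")
# 180-300 ms is firmly in "TV with bad input lag" territory.
```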
Yeah, we’re about to get real VR, and hydrogen cars, and fusion power… tomorrow.
You forgot about the hover boards!
VR VR VR VR – Virtual Reality – sounds like an ethereal Jerry Springer show. I recall a word that was floating around in the mid 1990s which would be far more appropriate given the amount of net porn that lingers out there: Teledildonics. But then the lazy computer nerds who prefer acronyms would eventually factor it down to TD, or just T.
No, the problem was that Microsoft and other companies pimping the “personal computer” paradigm set back cloud technology a decade or more. Personal computers were the antithesis of the cloud. We’d all probably be running thin clients if the PC hadn’t caught on (not saying that’s a good or a bad thing; I’ve got no dog in that fight).
I’ve played on the NVIDIA Shield on a good connection and it’s not good enough for anything fast-paced.
Err, isn’t round-trip time also pretty important? I don’t care how pretty the screen looks if it takes 250 ms for my mouse input to result in changes on the screen.
If it’s in the same data center as the game server, it’s probably bearable.
Why bother with the data center? If you’re going to move the person, skip the network-based rendering and put the expensive computer in an Internet café.
No, if the person stays home and picks a game on the cloud rig that has a server at the same location, or hosted by the same provider, then it’s essentially one hop over a ludicrous-speed backbone. That cuts the client-to-server lag more than for other players whose locally hosted clients are a dozen hops away in ISP land, which balances out the lag between this gamer and his cloud-hosted game client. If you pick a gaming location, cloud-hosted client, and game server on three different continents, then of course your mileage will vary by a parsec or two.
Interesting. It’s true the total network latency is unchanged, but as others have said, a lot of work goes into hiding this with client-side smarts.
Perhaps the best approach would be a compromise between server-rendered and client-rendered graphics. First, the server renders everything more than a couple of meters away from the player onto flat panels approximating a sphere, and streams these along with details of very close and very fast objects to the client. The client’s more modest GPU renders only that subset of objects, relying on the pre-rendered panels for elements which aren’t as latency-sensitive.
That said, I know little about realtime 3D rendering; this may not take into account optimizations already being done to avoid recomputing slow-changing features for each frame.
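Structurally, that idea is close to streaming a slowly-updating far-field backdrop while the client renders only the nearby, latency-sensitive objects. A very loose sketch of what the client’s compositing step might look like, with purely hypothetical interfaces (`far_panels`, `local_gpu`, and friends are made up for illustration, not any real engine API):

```python
# Hypothetical client-side compositing loop for the hybrid idea above.
# The server streams pre-rendered "far field" panels plus a short list of
# nearby/fast objects; the client only renders those objects itself.

def compose_frame(far_panels, near_objects, camera, local_gpu):
    # Sample the server-rendered backdrop for the current view direction.
    # It updates slowly, so a slightly stale panel is tolerable.
    background = far_panels.sample(camera.orientation)

    # Render the latency-sensitive objects locally, every frame.
    foreground = local_gpu.render(near_objects, camera)

    # Depth-aware composite so near objects correctly occlude the backdrop.
    return local_gpu.composite(background, foreground)
```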
If you have a sub-30 ms ping to the datacenter and a monitor with no more than one frame of buffering, then maybe you wouldn’t notice it; otherwise most people will. And although most would probably be able to get used to it over time, it will in all likelihood hurt your performance in real-time games like Overwatch. 50-100 ms of network latency in a networked game is an entirely different beast than 50-100 ms of visual input lag. FPS games use a lot of tricks to make network latency appear not to exist; in the case of Overwatch, just search for “overwatch netcode” for details. But a setup like this prevents all of those systems from working correctly, and you’d probably get slaughtered by everyone else who *does* have the advantage of client-side prediction, server rewind, etc.
Games can do movement prediction to hide the latency, which is more difficult when you’re just receiving video.
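For anyone who hasn’t run into it, client-side prediction boils down to something like the following (a bare-bones sketch of the general technique, not Overwatch’s actual netcode):

```python
# Minimal client-side prediction with reconciliation: apply inputs locally
# right away, then replay unacknowledged inputs on top of the server state.

pending_inputs = []   # inputs sent to the server but not yet acknowledged
predicted_pos = 0.0   # position we actually draw this frame

def on_local_input(move, seq):
    global predicted_pos
    pending_inputs.append((seq, move))
    predicted_pos += move   # react immediately, no round trip

def on_server_update(server_pos, last_acked_seq):
    global predicted_pos, pending_inputs
    # Drop inputs the server has already processed...
    pending_inputs = [(s, m) for s, m in pending_inputs if s > last_acked_seq]
    # ...and replay the rest on top of the authoritative position.
    predicted_pos = server_pos + sum(m for _, m in pending_inputs)

on_local_input(1.0, seq=1)                           # shown instantly
on_server_update(server_pos=0.9, last_acked_seq=1)   # gently corrected later
```

With a pure video stream the client has no game state to predict against, so every input waits a full round trip before anything moves on screen.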
I imagine borrowing the asynchronous timewarp technology from the VR side of 3D rendering would be applicable here. It would be interesting to see someone set that up.
was about to say this.
Even a latency in excess of around 50 ms is going to cause huge problems.
It depends on the netcode. Playing shooters at 300 ms isn’t a problem; people are quite good at adjusting to a delay if it is “clean.”
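Coming back to the asynchronous timewarp suggestion above: the idea is to re-project the most recently received frame using the newest look direction just before display. A toy version of that reprojection step (a flat horizontal shift only; real implementations warp with depth information and handle the revealed edges):

```python
import numpy as np

def timewarp(frame, rendered_yaw_deg, current_yaw_deg, horizontal_fov_deg=90.0):
    """Crudely shift a rendered frame (H x W x C numpy array) sideways to
    account for a small yaw change that happened after the frame was rendered."""
    height, width = frame.shape[:2]
    pixels_per_degree = width / horizontal_fov_deg
    shift = int(round((current_yaw_deg - rendered_yaw_deg) * pixels_per_degree))
    # np.roll wraps around; a real warp would instead use a guard band
    # rendered slightly wider than the visible frame.
    return np.roll(frame, -shift, axis=1)
```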
It’s bad enough when multiplayer games get laggy. I’d rather not have my single-player games turn into slideshows because one of my housemates turned on bittorrent.
We have had the same sort of “thin client” idea pop up in the enterprise IT world about once every five years or so; it never ends up having usable qualities.
I would argue the browser has become the thin client.
fair point.
But for the processor-heavy, and sometimes even graphics-heavy, applications seen across a number of enterprise environments, it just isn’t possible to run them in a web browser.
Sure, but there is plenty of room for both. The chromebook (which is a thin client) has been very successful.
The browser was a thin client ten years ago, now it’s arguably an OS in its own right.
I don’t need the video capabilities, but I wouldn’t mind renting the GPU itself for $1.65 per hour….
Why not rent a GPU from LiquidSky for 50 cents?
Never heard of it (LiquidSky)before. Thanks for the info!
On the other hand, I do have the corporate AWS accounts at my disposal, and the mandate to move my crap there if possible.
I don’t believe in any subscription services.
Also, at my university we had thin clients and virtualisation software through VMware. It was a pain in the ass to use.
Well you must believe in subscription services to the extent that you’re using someone else’s subscription to internet services, hence your ability to read and comment here.
I use a lot of tools across VNC (TigerVNC is my preferred client) but I can’t seem to get games that use mouse grab and mouselook, such as Minecraft and Minetest, to work. Blender3D works great though!
Input latency already makes me feel uneasy using cloud CAD/Excel solutions at work (I can feel the mouse lagging behind compared to the local Excel/CAD copy).
In a game … welp. Not even bothering.
Instead of gaming, couldn’t you mine bitcoins?
Sure, if you can timewarp back to about August 2013 or earlier, the last time GPUs were competitive against the then-new custom ASICs… Meaning it still technically works, but the 10 cents per machine-month isn’t going to be terribly worth it.
Though someone is going to mention that monero or ether is marginally profitable on recent GPU hardware.
Plus the fact that nearly everyone is implementing data caps these days, so how much gaming could you actually get in before you start dishing out even more $$$, because nearly every ISP sucks? That, and this has been around since forever.
> data caps
Well, not everyone lives in USA ;]
Ahh, that moment when people realize you don’t need a $4,000 graphics card to run most games at pretty close to maxed settings. Second only to that moment when you realize you don’t need to run games at max settings for them to still be very enjoyable.
Not to mention the latency headache you save yourself.
Actually, the $4,000 graphics cards aren’t really made for gaming and would probably perform no better, or even worse, than a $500 gaming card in most games. Those cards are made for GPU compute applications where things like double-precision floating point performance are important. The features that set the Tesla cards apart from the regular GeForce cards are the DP floating point performance, larger amount of memory, and ECC support. Those aren’t things that are going to make most games run faster, and in fact will sometimes make them run slower, especially the ECC.
This is true. What was their old pro brand, Quadro? An acquaintance won one of those off eBay for $200 or something in about ’05 and was super excited that he got this “thousand dollar” pro-grade GPU for only $200… meanwhile it was a GF2 MX with delusions of grandeur and just OpenGL 2.0 unlocked over the standard 1.1 or something, and by then you could probably pick up a used MX for $10, or a new one from old stock at Best Buy for like $29.99.
The Quadro line is still around. The difference is OpenGL API (but you can do that with a firmware hack) and the memory size/layout/architecture. Right now, the Quadro that compares to the GTX 1080 or the Titan XP is just a Pascal based chip with way more memory and a little more memory bandwidth. I saw LinusTechTips review it as a gaming card, and it made no difference. Nothing made today is going to use those 12GB of video ram.
But if you design video games or do animated movies and need all that VRAM to avoid touching system RAM for every operation, nothing compares.
Congratulations! You’ve just invented the VT100 terminal.
Hokay, let’s do that, let’s dick-swing graphics: let’s compare the rendered scene above to, say, a well-known fast-action movie as played on a VT100. (Well, blame Microsoft for not including a DX11 driver for mo80 or we’d do the same game.) …
https://youtu.be/Dgwyo6JNTDA
Ok.
Congratulations. You’ve invented the X terminal.
No, no, please don’t mention X in this context, can’t survive another go round as to how X-term is actually a shell window on top of X which runs in a window manager provided for by an X server which locally serves X windows API to local client applications, but isn’t a server. Then your remote application may be a client to your X server, which is on a server not your local client.
So….Limelight?
Funny, I do this at work all day. Seriously, I built a 200+ client, 100% VDI system for my employer. We use 5x hosts, each host having 2x NVIDIA K2 graphics cards (older, but works), 512 GB of RAM, and 2x Intel E5-2690s @ 2.6 GHz, plus a 24-disk SSD SAN. We have a mix of needs and I use games to test our system regularly (yes, my boss is that nice!), using dedicated clients, software clients, and web browsers. I can even game with just an HTML5 browser. No, it’s not bad, but it’s not “great” either; it’s just fun, but hell, what do you expect for bleeding edge?
Star Citizen, 7 Days to Die, Overwatch, Fallout 4, World of Tanks & Warships, League of Legends, alongside ESRI’s ArcGIS suite, Google Earth, and other such programs.
I can tell you this: they all “work,” but even with 1 ms latency between my clients and the VDI session, the results are:
1) the games are nice looking and mostly smooth
2) the gameplay and “feel” is off at times.
3) it’s fun to use an Android device or iPad to show off a Windows PC game running over an LTE connection.
The best way to describe most of the feel issues is that there are latency issues in the chain between input and display that just don’t make it feel like a high-end machine, even though the measured performance stats are all excellent. There is a delay between when you press a button and when something happens, and that is compounded as you add network latency. It makes it downright bad for real-time multiplayer games at times, so if you’re a serious twitch gamer this isn’t for you unless you’re dang close to the host. However, I get a fixed 60 fps (limited by the PCoIP protocol; that can be overridden) in most games, but the update to the client isn’t nearly that fast and is protocol- and network-dependent. The PCoIP protocol we use is a tile-based update system that tries to update only the “changed” sections of the screen and caches the rest. This effectively means bandwidth can be minimized, but the butter-smooth feel of 60+ fps just doesn’t come across as well as on a local client. I have tested some with RDP and the VMware BLAST connections, but there has been little need to explore this area for our system.
That said, we have dumped a lot of cash into the system – though less than the total hardware refresh our agency was facing – and as a side benefit the majority of our users could game, if we let them. I also know that when someone says the system is “slow,” it’s perception or application issues, not the infrastructure that runs our company.
It’s only a matter of time before this becomes a bigger market. It will have limited availability due to latency and consolidation ratios for now. Hopefully AMD will compete better in this market, since NVIDIA has an easy-to-work-with solution; however, the M60 has a per-VM license that AMD’s solutions don’t require.
When the whole screen updates, how does the system handle it?
Because a 60 fps, 32-bit, ~2 Mpixel image stream takes around 4 Gbps of bandwidth uncompressed. A good compressor that has several seconds of video backwards and forwards for prediction can squeeze that to 4 Mbps with acceptable quality, but that’s not possible with an interactive application.
There’s no way you can push all the data through a regular DSL connection without seriously degrading the picture.
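For what it’s worth, the uncompressed figure checks out:

```python
# Uncompressed bandwidth for a 1080p-class stream, as estimated above.
width, height = 1920, 1080    # ~2 Mpixel
bits_per_pixel = 32
fps = 60

bps = width * height * bits_per_pixel * fps
print(f"Uncompressed: {bps / 1e9:.2f} Gbit/s")                 # ~3.98 Gbit/s
# Fitting that into the article's 30 Mbit/s stream needs roughly 130:1
# compression, with almost no look-ahead available to the encoder.
print(f"Compression ratio for 30 Mbit/s: {bps / 30e6:.0f}:1")
```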
In my system (vmware Horizon) I have 3 choices with various trade offs as far as display protocol goes.
1) PCoIP- developed by Teradici – our primary choice but it’s a proprietary protocol by one vendor with limited support
2) RDP – Microsoft’s usual – one could use hyper-v and use RemoteFX
3) BLAST – VMware’s in house protocol – based on H.264
https://en.wikipedia.org/wiki/Teradici#PCoIP_Protocol
PCoIP uses UDP to pump a stream of updates to the client in small sections called tiles, with a full-screen refresh happening only rarely. The system is lossy, so if it can, it will drop data in favor of speed depending on settings or congestion detection. In my tests a “4K” YouTube video typically uses around 80 Mbps per screen and is very nice, but you can notice it’s not 4K. I have no issue using a quad-monitor setup since most content is fairly static. Playing a game full screen works pretty well, and to everyone at the office it’s indistinguishable from a standard high-end workstation. I also use the dedicated APEX 2800 card they make to offload screen compression; I have no idea how much of a predictive buffer there is. This does change the “feel” of the machine a bit, but by lowering our CPU usage by 20+% and thus saving me from licensing more VMware sockets, the cost trade-off is worth it on a larger-scale system. We have no issue running Skype or other video chat for our distance-medicine programs; the biggest issue is the input side, and USB was not designed to be encapsulated over Ethernet frames, even on a 10-gig LAN with sub-millisecond latency. It just doesn’t work for some things, and video frame rates are limited more by how fast this virtual USB interface capturing from the camera is than by anything else.
Our digital X-ray system captures and displays a diagnostic full 2K image in about 3 seconds; again, USB is our limit. Sorry if I can’t directly answer your question – the device only has a gigabit LAN port and a dedicated proprietary processor that feels like an FPGA. I have taken the software limiter off the virtual-machine side and had no issue with 3DMark putting up frame rates higher than 150 on some HD tests. The limit here quickly becomes the NVIDIA hardware’s capabilities and the bandwidth to the client; as long as you’re not looking at a lot of smoke and physics, it seems to work well. And keep in mind that in my system the video card is actually two GPUs glued together, and the system doesn’t do SLI or anything where you can use more than one GPU, unless you dedicate the entire card to the VM. I did this in testing with Folding@home, bitcoin mining, and password cracking, which let me load-test the card and check the thermal limits, since the cards are passively cooled. In all of our testing the video response was never an issue, so for me the concern over raw bandwidth needs never materialized.
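For readers who haven’t met a tile-based remoting protocol, the dirty-tile idea described above works roughly like this; a generic sketch of the technique, not Teradici’s actual PCoIP implementation:

```python
TILE = 64  # tile edge in pixels; real protocols pick their own sizes

def changed_tiles(prev_frame, new_frame, width, height, bytes_per_pixel=4):
    """Yield (x, y, tile_bytes) for every tile whose pixels changed.
    Frames are raw bytes in row-major order."""
    stride = width * bytes_per_pixel
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            tile_row_bytes = min(TILE, width - tx) * bytes_per_pixel
            dirty = False
            rows = []
            for row in range(ty, min(ty + TILE, height)):
                start = row * stride + tx * bytes_per_pixel
                old = prev_frame[start:start + tile_row_bytes]
                new = new_frame[start:start + tile_row_bytes]
                rows.append(new)
                if old != new:
                    dirty = True
            if dirty:
                yield tx, ty, b"".join(rows)  # only changed tiles cross the wire
```

Static desktop content means very few dirty tiles per frame, which is why it feels fine for office work; a full-screen game dirties almost every tile every frame, which is where this kind of protocol starts leaning on lossy compression.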
ANY PC? 60 fps? Clearly the writers have never heard of ISA, VLB, PCI, AGP, or any variety of integrated-graphics motherboards made before 2008, like the amazing MediaGX. I’ll believe it when I see it.
Uncompressed video should be handled okay by anything from the mid-to-late ’90s onwards, given reasonable video hardware for the era and matching-era display tech. I think you needed about a P133 to play MPEG-2 FMV, a P400-ish to play DVD, and a P900-ish for DivX (standard def), but those all needed the CPU oomph for decoding/decompression. The second generation of PCI graphics cards and the latter half of VLB cards would be able to slap 2D pixels on screen fast enough at SVGA resolutions. You’d probably need a decent 100 Mbit card with DMA drivers, not the PIO-mode NE2000 ones. However, getting enough RAM in it to buffer could be an issue, since the update bloat of age-appropriate Windows OSes would tend to eat whatever capacity you could install or cache. So maybe you’d want a minimal Puppy Linux with X and a low-bloat window manager… Suffice it to say there’s no reason you couldn’t use quite old hardware, but you’d have to configure the software very carefully.
However, we should probably take it to mean “any PC that would generally still be considered worth web surfing on”… though again, software is a factor. Some kid will think his mom’s Core 2 Duo with only a gig of RAM, bogged down with spyware and viruses, is good for nothing any more, meanwhile you can stick Lubuntu on a 2 GHz P4 system with 512 MB and run rings around it… though that’s not to say it’s only because it’s Linux; the C2D would probably be much improved with a complete wipe and a Windows 10 install.
> play Overwatch on max settings at a nice 60fps
Like a used $80-100 GPU (GTX 760) and ANY modern Intel budget CPU?
>for $1.56 an hour
so 50h = ~ 20 days of playing casually
btw FPS over remote GPU? MAHAHAHAHAHAHaaaa
Ask OnLive how great that works. Oh wait, you can’t – they went bankrupt, because input lag is a real thing.
>Just pay a few dollars of cloud time
Gerrit, want to smoke some crack? The first hit is free.
Seconding what many people have said already, this line is nonsense:
> It’s interesting to see computers start to become a rentable resource.
… because that was the way computers were used in the beginning anyway. I “rented” time on a VAX. Sure, I do get what the author *wants* to say, but I feel like he should give it a bit more time to find suitable words :-)
Anyway: There is another thing about this “play over a network”. If you are the only player, things are fine. But imagine a few hundred thousand players in a slightly larger city playing at the same time, all sucking up network bandwidth for their own personal perspective. And now imagine you are a doctor in a hospital trying to do some serious accident surgery using an internet “real time” connection.
Catch my drift?
Technically we may see the possibilities. In the REAL world however, we do not *have* the resources to have “everyone” use network computers this way. And I do not see that happening anytime soon.
“> It’s interesting to see computers start to become a rentable resource.”
@Nitpicker: I do have to agree. It’s interesting to see the fads swing back in and out. Mainframes -> PCs -> Thin clients -> Desktops -> “the Cloud”
This suggests that, rather than a remotification, the next wave is going to be local again. Then in another ten years…
And yeah. Pushing graphics over the network is soooo X-windows.
>Catch my drift? – in a word – no!
Da flock are you talking about?!
1) DWDM allows 16+ TERABITS PER fiber! So just about 300,000 such gaming sessions at the same time.
https://en.wikipedia.org/wiki/Wavelength-division_multiplexing
2) Hospitals and clinics needing “real time” access pay a huge premium for assured bandwidth. I know because the internet at my office is almost $20,000 per month – I am in healthcare, in Alaska, and thanks to USAC you’re subsidizing my lines, so thank you. The cost is for redundant links on two totally physically and logically diverse paths (one fiber, one microwave).
http://www.usac.org/default.aspx – thanks to your subsidies we pay 2% of the cost
3) A DWDM system scaled out on just a 24-strand fiber bundle, common in larger cities, yields real-time access for about 7 million simultaneous users with capacity to spare – or, put another way, enough for all but a small handful of the largest cities in the world.
So in summary: no, I don’t catch your drift. Hospitals don’t compete with the average home user for bandwidth, and DWDM allows a pencil-sized bundle of fibers to serve up all the bandwidth ANY city would need for this and still have capacity to spare.
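The arithmetic behind those numbers, roughly (using the 30 Mbps per-session figure from the article; the ~300,000-per-fiber estimate above sits comfortably below the raw division):

```python
# Back-of-the-envelope capacity math for the DWDM argument above.
fiber_bps   = 16e12   # ~16 Tbit/s per fiber with dense WDM
session_bps = 30e6    # 30 Mbit/s per game stream, per the article
strands     = 24      # metro bundle size cited above

print(f"Raw sessions per fiber: {fiber_bps / session_bps:,.0f}")   # ~533,000
# At the more conservative 300,000 sessions per fiber quoted above:
print(f"Sessions per 24-strand bundle: {300_000 * strands:,}")     # 7,200,000
```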
If you think renting computer time is a thing of the past, look at the new name they use for it, the “cloud”: services such as AWS, Azure, VMware, Google, Salesforce, Oracle, IBM, or any other provider listed here:
https://cloudharmony.com/directory
Every one of them rents out capacity on an as-needed basis, and with such huge investments in this type of service it’s only a matter of time before gaming makes its mark. Think of it as the Uber of computer access. Is it a fit for all? No. But will it work for most? Sure.
Here is a typical new undersea cable:
http://www.hawaiiantel.com/wholesale/Wholesale/SEAUSUnderwaterCable/tabid/1740/Default.aspx (an initial 20 terabits per second of capacity over more than 9,000 miles of fiber)
All of this said, the one thing I can agree with most posts on is that latency is the issue that will be hardest to overcome. Physics is a pain, so until there is a different protocol than TCP/IP linking computers together for wide-area traffic, we likely won’t see this issue resolved, and so adoption rates will be limited.
Yeah, we’d already have a problem if critical data links were pwned every time Netflix put up a new and popular season.
The future of cloud gaming isn’t as far off as it seems! I downloaded this software called LiquidSky and I was able to play Battlefield 1, 60 FPS, max settings with no lag and perfect quality! Below is a video link of people doing the same thing.
https://www.youtube.com/watch?v=i8iteqGFp-c
LiquidSky is still accepting beta users, so you can sign up and see for yourself~
https://liquidsky.tv/
Just signed up for the beta, but there is a waiting list of over 500,000 users.
They let you bypass the line by getting three friends to sign-up under your account, but they have to verify with phone numbers.
I created three fake profiles and skipped the line, only to get the “Sorry, all our free beta servers are full” message.
So much for a free 1 day trial.
Yes, I use LiquidSky. It’s pretty flawless, assuming you can get into the beta. Kinda funny to read all these comments saying cloud gaming is still years off when some of us know there’s a perfectly functioning service already out there…
Registered. I searched and found there were previously many chances to get quick access. I’m not in a hurry, and I think a good product is worth waiting for.
NVIDIA has been doing this for some time. They call their service GeForce Now:
https://shield.nvidia.com/game-streaming-with-geforce-now
Would like to see a sideways move here: with a bit of lateral thinking, you could see cloud computing and distributed computing going through the same level of innovation with a hybrid of the two.
We have so much wasted potential in PCs, laptops, consoles, and mobile phones that could be used by organisations like medical ones for number-crunching cancer and other disease work, similar to the way SETI did with their screensaver.
Greedy ISP Data Caps = FAIL!