We believe a new era of computing is approaching, one where you’ll access your hardware rather than own it. Today, we support accessing and sharing hardware between friends as well as accessing cloud gaming PCs on Amazon Web Services and on Paperspace, but Parsec can be installed and run anywhere. As part of this mission, we’ve broadened where you can launch a cloud gaming PC while making it easier to build a gaming PC in the cloud with the click of a button. Although this is getting more seamless through our application, a lot happens behind the scenes to make it possible.

Cloud gaming has had many false starts, but the availability of hardware specialized for video decoding, increasing access to high-bandwidth connections, and the spread of data centers are making the possibility of never having to upgrade your gaming rig a reality. In the future, we believe everyone will access games the same way they stream other content from the web. Our low-latency networking protocol and streaming software are the core features that make this possible.

However, if you’re going to game on a cloud PC, you not only want the lag to feel native, you also want the startup process and data storage to feel like the machine is sitting under your desk. Cloud infrastructure providers typically don’t optimize for this kind of experience, so we had to eliminate two major issues with starting a cloud gaming PC at Amazon: provisioning times that can take minutes, and very slow disk I/O when you first “build” your cloud machine.
Challenge One — It Takes About 5 Minutes To Launch A GPU Instance
In our testing, we’ve found that it takes about 5 minutes to launch a g2 instance in each of the 8 regions where they’re offered. Honestly, that’s pretty impressive, and it’s not a big deal when you’re provisioning server instances for a non-interactive application like processing machine learning workloads. But when you’re making a gamer wait for their machine, it can feel like an eternity.
Solution — Prelaunch g2 Instances In Every Region
To solve this problem, we introduced a prelaunching feature. An instance’s initial launch is much slower than subsequent starts, so every time a user sets up their cloud gaming machine, we launch a fresh instance in the background. That instance is launched, then stopped, and sits waiting for the next user to “build” their gaming PC.
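The launch-then-stop flow above can be sketched roughly as follows. This is a minimal illustration assuming a boto3-style EC2 client; the client is passed in (so the flow can be exercised without AWS credentials), and the AMI ID and instance type are placeholders, not Parsec’s actual configuration.

```python
def prelaunch(ec2, image_id, instance_type="g2.2xlarge"):
    """Launch an instance, wait for it to finish its slow first boot,
    then stop it so it sits warm, ready for the next user's fast start."""
    resp = ec2.run_instances(
        ImageId=image_id, InstanceType=instance_type, MinCount=1, MaxCount=1
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # The first boot is the expensive, one-time step; wait it out now,
    # while no user is watching.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

    # Stop the instance. A later start_instances() on this same instance
    # is much faster than a fresh launch.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    return instance_id
```

When a user later clicks “build,” the app only has to start an already-provisioned, stopped instance rather than paying the full launch cost.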
Challenge Two — EBS Volumes Have Very High I/O Latency On Start
In the past, I’ve used AWS to launch Linux servers in a non-interactive way: typically, provisioning a new instance that starts some server process and begins receiving traffic automatically. That made it easy to miss the fact that EBS-backed EC2 instances restored from a snapshot (such as when provisioning a new EC2 instance from an AMI) suffer from very high disk I/O latency, because each block is fetched lazily the first time it is read. We recorded I/O latency approaching 45 milliseconds during the first 10 minutes of an instance’s existence. For a low-latency video streaming solution for cloud gaming, this was completely unacceptable. Users would have blamed our software for lag when the real culprit was high disk read latency.
Solution — Read All Disk Blocks
Using a simple C program of only 85 lines, we eliminated the high I/O latency on start. The program sequentially reads every block of the root EBS volume, which forces the volume to copy each block’s data down from the snapshot stored on S3; subsequent reads of a block are then served directly from the EBS volume, without another request to the snapshot. By running this disk-prewarming program when we prelaunch our EC2 instances, we brought the EBS cold-start disk read latency down from a peak of about 45 milliseconds to under 1 millisecond.
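Parsec’s actual prewarming tool is that 85-line C program, but the core idea fits in a few lines. Here is a Python sketch of the same sequential-read pass; the default device path and chunk size are illustrative assumptions, not the values from Parsec’s tool.

```python
import sys

CHUNK = 1 << 20  # read 1 MiB at a time (illustrative chunk size)

def prewarm(path):
    """Sequentially read every byte of `path`, forcing a snapshot-backed
    EBS volume to pull each block down from S3. Returns bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    return total

if __name__ == "__main__":
    # /dev/xvda is a common root device name on EC2; adjust for your instance.
    device = sys.argv[1] if len(sys.argv) > 1 else "/dev/xvda"
    print(f"prewarmed {prewarm(device)} bytes")
```

Run against the root device during the prelaunch step (before the instance is stopped), every block has already been pulled from the snapshot by the time a user first touches the disk.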
Check out the presentation from the talk at the AWS Summit for more information.