Update: I’ve received some suggestions about using external v-sync controllers and allowing the game to rebuild shaders after updating drivers. I’ll update the post when I’ve had a chance to run some new benchmarks.
It’s hard to find any reviews or reports about No Man’s Sky without hearing about performance issues. Game stability aside, there are reports all over the place about FPS drops and stuttering while playing the game. A Google search for “No Man’s Sky PC performance issues” shows 1.24 million results, including an article from Polygon titled Don’t buy No Man’s Sky on PC yet.
What are these issues, exactly? Frame time stability seems to be a major one. At 60 frames per second, it takes 16.7 milliseconds to draw one frame. There’s always going to be some variance, so if one frame takes 15 ms and the next takes 18 you’re not going to notice much; everything will still be nice and smooth. However, if you’re averaging 16 ms per frame and suddenly get frame times bouncing between 16 ms and 50 ms (that is, sudden fluctuations between 62 FPS and 20 FPS) that sudden, drastic change becomes a very noticeable stutter.
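To make that stutter idea concrete, here’s a minimal sketch in Python (with made-up frame time numbers, not my benchmark data) that converts a frame time to its implied FPS and flags frames that deviate sharply from the average:

```python
def fps(frame_time_ms):
    """Instantaneous FPS implied by a single frame time in milliseconds."""
    return 1000.0 / frame_time_ms

def stutter_frames(frame_times_ms, threshold=1.5):
    """Flag frames that took more than `threshold` times the average.

    The 1.5x threshold is an arbitrary illustrative cutoff, not a standard.
    """
    avg = sum(frame_times_ms) / len(frame_times_ms)
    return [t for t in frame_times_ms if t > threshold * avg]

smooth = [15, 18, 16, 17, 16]        # small variance: still feels smooth
stuttery = [16, 16, 50, 16, 50, 16]  # 16 ms <-> 50 ms swings: visible stutter

print(fps(16.7))                 # ~60 FPS
print(stutter_frames(smooth))    # nothing stands out
print(stutter_frames(stuttery))  # the two 50 ms frames get flagged
```

The point is that the average of the “stuttery” list isn’t far off the smooth one; it’s the outliers relative to their neighbors that you feel.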
There seem to be a few different causes of this frame time instability. One is that the game is constantly generating new terrain and lifeforms algorithmically, using a process known as procedural generation. It’s important to point out that this does not mean the game is randomly generated. Procedural generation uses a formula with some number of input variables to generate everything, so if the same variables are given (X, Y, and Z coordinates in space, for example) the output will always be the same. This means that the game has a helluva lot of math to do all the time. Every time you go somewhere new, every time you land on a planet, and every time you warp to a new star system, the game needs to take all of the inputs, run them through the formula, and start generating every plant, animal, mountain, mineral, body of water, space pirate, landing pad… you can see where your computer’s processor might start to sweat a little. Traditional games have levels created by a designer, so everything is already determined and there’s very little math involved. Ammo crate there, blood demon there, and typically all loaded into system memory before the game even starts.
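The determinism point can be shown with a toy example in Python. The hashing scheme here is my own invention, not Hello Games’ actual algorithm, but the principle is the same: the same coordinates always produce the same seed, so the same “terrain” comes out every time, with no true randomness involved.

```python
import hashlib
import random

def planet_seed(x, y, z):
    """Derive a deterministic seed from coordinates (toy scheme, not NMS's)."""
    key = f"{x},{y},{z}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def generate_terrain(x, y, z, size=5):
    """'Generate' a strip of terrain heights from the coordinate-derived seed."""
    rng = random.Random(planet_seed(x, y, z))
    return [rng.randint(0, 255) for _ in range(size)]

# Same inputs -> identical output, every run, on every machine:
assert generate_terrain(10, -4, 7) == generate_terrain(10, -4, 7)
# Different inputs -> (almost certainly) different terrain:
assert generate_terrain(10, -4, 7) != generate_terrain(10, -4, 8)
```

Nothing needs to be stored: revisit the same coordinates and the formula regenerates the identical planet, at the cost of doing all that math again.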
Another issue is that, with such a small studio creating the game, they simply could not optimize it for such a wide variety of system configurations, so some incompatibilities and strange default configuration options were bound to make their way into the “final” product. Things like G-sync, a technology that requires a relatively modern video card and a new, expensive monitor, are enabled by default with no way to disable them without digging through configuration files. Or the fact that the game is locked to 30 FPS by default on PC.
So how does the game actually perform? Here I’m testing the game under two scenarios: traveling from a space station to a planet’s surface, and exploring the surface of the planet while harvesting resources. This should give us a pretty good understanding of how the game performs under the most demanding scenario as well as a more typical one. My testing system consists of an AMD FX-8350 overclocked to 4.4 GHz, an EVGA GTX 970 SC, and 16 GB of system memory. Running the game at its purest default settings at 1080p, we get the following results:
So what we’re seeing here is how long it took to generate each video frame. By default the game is locked at 30 frames per second, so each frame should take about 33 milliseconds. Most of the time that’s true. Between both scenarios, the average frame time was 34.9 ms (28.6 frames per second). As a comparison let’s take a look at a modern juggernaut of real-time graphics: DOOM.
Here I’ve taken the frame time data from earlier and overlaid it with the frame times of typical DOOM gameplay. I chose DOOM because it’s a showcase of what modern hardware and software are capable of. If you follow the red data you can see that there are very few frames that deviate from the pack. This uniformity gives a very smooth gameplay experience. Going back to the No Man’s Sky data you can see that the frames not only deviate more often, but much further as well.
If we uncork the performance by disabling v-sync and turning off the max FPS limit (I’m not entirely sure how these two settings differ yet) we can see a huge jump in performance.
Overall the frame times have improved, with the average dropping from 34.9 milliseconds (28.6 frames per second) to 20.7 ms (48.3 FPS). However, the distribution of stuttering, deviant frames is still the same.
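This is exactly why averages alone are misleading for frame times. A quick way to see it is to compare the average against a high percentile, which captures the worst frames. Here’s an illustrative Python sketch (the numbers are made up to mirror the pattern, not taken from my benchmark logs):

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of frame times (ms)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Illustrative data: uncapping improves the typical frame,
# but the worst frames (the stutters) stay just as bad.
capped = [33] * 95 + [120] * 5
uncapped = [20] * 95 + [120] * 5

for label, times in (("capped", capped), ("uncapped", uncapped)):
    avg = sum(times) / len(times)
    print(f"{label}: avg={avg:.1f} ms, p99={percentile(times, 99)} ms")
```

The average improves substantially, but the 99th-percentile frame time doesn’t move at all, which matches what the charts show: faster on average, same stutters.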
Just to see what would happen, I ran the game with the maximum graphics settings and a wider field of view (90 on foot, 100 in the ship):
Frame rates with all the graphics settings maxed out and wider field of view.
While running around the planet and mining resources, the frame rate was a little less stable than with the default settings, which isn’t that surprising. There were a few more deviant frames causing stutters, but not a significant amount. However, leaving and re-entering the atmosphere saw a tremendous surge in stuttering. Granted, most of your time isn’t spent traveling between planets and space, but there should be something we can do to make that transition smoother. Hello Games has an experimental patch available that is supposed to improve performance on AMD CPUs and 8-core CPUs (mine is both). It also disables things that should never have been enabled in the first place, like G-sync, to address performance and compatibility issues. First, though, let’s update my video card drivers.
The Nvidia GeForce Experience control panel is telling me my current driver is from a month ago, and that the newest driver “Provides the optimal experience for No Man’s Sky, Deus Ex: Mankind Divided, Obduction, F1 2016, and the Open Beta for Paragon”. So that’s a good start.
Before the driver update, the average frame time was 26.6 ms (37.6 FPS). Afterward, it dropped to 17.7 ms (56.4 FPS), which is a staggering change. We can see that the deviant frames are reduced overall, though some frames took much, much longer than most. How much longer? All the charts so far have had a ceiling of 250 ms (4 FPS) for uniformity. If I remove that ceiling…
Woah, woah, woah. Those 5 frames that we couldn’t see before? In total, they took 4.6 seconds to render. During these tests I’ve been jumping back and forth to the same region of the same planet, so while I understand that the procedural generation does take an awful lot of resources, it’s also weird that the game does not seem to be caching planet data anywhere. I don’t know how much drive space a cached planet or five might take, but if it’s able to smooth the transition from space to surface, it might be a good trade-off if we’re given the option in a later patch.
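As a rough sketch of that caching idea (in Python, and entirely hypothetical; the game exposes no such option), generated terrain could be written to disk keyed by its coordinates, so revisiting a planet reads the data back instead of re-running the expensive generator:

```python
import json
from pathlib import Path

CACHE_DIR = Path("planet_cache")  # hypothetical on-disk cache location

def load_or_generate(x, y, z, generate):
    """Return cached terrain for (x, y, z), generating and caching on a miss."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{x}_{y}_{z}.json"
    if path.exists():
        # Cache hit: skip the expensive procedural math entirely.
        return json.loads(path.read_text())
    # Cache miss: run the generator once, then persist the result.
    terrain = generate(x, y, z)
    path.write_text(json.dumps(terrain))
    return terrain

# Example with a stand-in generator; real terrain data would be far larger.
terrain = load_or_generate(3, 1, 4, lambda x, y, z: [x + y + z] * 4)
```

Since procedural generation is deterministic anyway, a cache like this is purely a speed-for-disk-space trade: the cached data can always be regenerated from the coordinates, so evicting old planets costs nothing but the transition stutter coming back.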
Speaking of patches, how does the experimental patch affect performance?
Wow! As soon as I loaded No Man’s Sky after patching it I immediately felt a smoother frame rate, but I wasn’t expecting this kind of result. The average frame time dropped a respectable 2.7 ms, from the pre-patch 17.7 ms (56.4 FPS) to 15.0 ms (66.7 FPS). I should mention that after patching the game I did get a crash between the two benchmarks, which I hadn’t had until now, so there might be increased frame time stability at the cost of game stability. That said, it was a graceful crash that didn’t take the rest of the system down with it, and I was able to jump back in with no problems.
As long as the crashes don’t come too often, I think it’s a pretty good trade for an experimental patch. The game has been out for a week and I’m anxious to see what fully supported patches bring to the game.