Let’s Complain about the Nintendo Switch

Note: While I was writing about the potential Wii U backwards compatibility on the Switch I neglected to take into consideration that there is an option to play Wii U titles exclusively on the Wii U gamepad, which resolves one issue I brought up. That said, I still don’t believe we’ll be seeing backward compatibility on the Switch.


Recently, Nintendo announced their new… console? Portable gaming platform? Whatever it is, there isn’t a whole lot known about it outside of what we saw in the promotional video.

Google search results for backward compatibility on the Switch.

Let’s address the biggest “issue” I’ve seen popping up lately: The Switch is not backward compatible with the 3DS or Wii U. In a nutshell, no shit. The 3DS and Wii U are both dual-screen systems, so where this expectation came from that a single-screen system would support dual-screen games is beyond me.

Granted, Nintendo does have a history of supporting old games and hardware on new systems. The Super Nintendo could play Game Boy games and the GameCube could play Game Boy, Game Boy Color, and Game Boy Advance games, though both had their own issues and required extra hardware. The Wii supported GameCube games and controllers natively due to the Wii simply being a faster GameCube, and the Wii U supports Wii games, Wii controllers, and even GameCube controllers with a USB breakout box. While not officially supported, it’s even possible to play GameCube games on the Wii U through some software hacking. Even the Super Nintendo was going to have backward compatibility with the original NES, at first natively, then through a hardware add-on, but the idea proved cost prohibitive.

So why not support backward compatibility on the Switch? Let’s start with 3DS compatibility. The Switch has enough buttons to properly replicate the 3DS controller, but still only has one screen. In theory, it could be possible to use the Switch’s screen as the lower screen and your TV as the upper screen, but Nintendo has dispelled that already by designing a docking station (which is how you get video to your TV) that completely obscures the Switch’s screen. Additionally, the idea that 3DS games would be supported but only in very specific circumstances doesn’t make much sense. There’s also the issue of hardware differences between the Switch and the 3DS. It hasn’t been confirmed that the Switch even has a touch screen, something an overwhelming majority of 3DS games require, or at least make use of in some way, which would put a huge limit on the number of games that could be played.

Traditional backward compatibility comes from similarities in CPU architecture. Like I stated above, the Wii uses the GameCube’s CPU to achieve perfect compatibility. The PlayStation 2’s CPU is vastly different from the original PlayStation’s CPU, so Sony included a PlayStation CPU inside the PlayStation 2 for backward compatibility. Due to CPU changes between the original Xbox, Xbox 360, and Xbox One, backward compatibility between generations was achieved through software emulation (that is, software pretending that it’s hardware, allowing the game to play on hardware it wasn’t designed for). Emulation was not available when each console launched, not all games were supported, and many that were supported exhibited graphical and performance issues. So if we apply that logic to the 3DS and Switch, sure, Nintendo could possibly get software emulation working on the Switch, but between the potentially iffy results, the poor user experience, and the fact that the Switch provides no additional benefit over just using a 3DS, there is literally no reason Nintendo should support this.

Analogy of backward compatibility where it doesn’t belong.

So what about the Wii U? All the CPU architecture and software emulation stuff still applies, so I’m going to skip over that part. The biggest reasons I could see this not happening are the storage medium (Wii U uses a proprietary disc format) and, again, lack of a second screen.

The Wii U stores its games on a 25-gigabyte disc that is similar to, but not the same as, Sony’s Blu-ray discs. The Switch doesn’t have an optical drive. See a problem? It’s not like Nintendo would let you rip your games from your Wii U and transfer them to your Switch. Not to mention that we don’t know if there is a touch screen or motion controls (though the announcement of Just Dance suggests that this might be happening). Lacking either of these features would break plenty of games. Speaking of motion controls, the system is supposed to be portable; who in their right mind is going to be using motion controls on an airplane, at the park, or even in their own living room? How about four- and five-player games? On a 6-inch touch screen? No chance.

All of this isn’t to say that there won’t be any backward compatibility. Nintendo could breathe new life into their Virtual Console service, allowing players to play older portable games… portably. Kirby’s Dreamland on Game Boy? Pokemon LeafGreen and FireRed on Game Boy Advance? There’s even the potential for Nintendo 64 and GameCube titles to be played on the go, not to mention non-Nintendo systems that are already supported like the TurboGrafx-16 and Neo Geo.

The most important point to consider, I think, is that we already have devices that perfectly play 3DS and Wii U games: the 3DS and Wii U. If those are the systems you want to play, just buy those systems. You’ll save money and have a much better experience.

Okay, so what else are people complaining about? Battery life. In this article from Forbes, contributor David Thier says “It would appear to be a pretty powerful machine for the size, and that doesn’t come cheap power-wise. So we’re going to need a machine that gives us 5+ hours of playtime — if we’re short of that, we’re going to have a problem.”

Hello? We’re going to need 5+ hours of play time? Okay, hang on. We need to talk about use cases. Computers and game systems don’t use a constant amount of electricity; it varies depending on what you’re using the system for. This Gizmodo article tests Apple’s claims of 10+ hours of battery life on the original iPad. With approximately 50% video watching and 50% gaming, they got just under 6 hours of battery life. What’s important to note here is that the iPad’s twin battery is massive. Yes, processors have become more efficient and battery capacities have grown, but it gives you a realistic expectation. CNET claims 3-5 hours of game time from both the PlayStation Vita and 3DS. The Switch looks to be quite a bit more powerful with a bigger screen, resulting in more power consumption. I think David Thier is going to have a problem, but only because of unrealistic expectations, bordering on entitlement.

The last issue I see people complaining about is price. We don’t know all the details about it; we don’t know what all it can do. Is the screen 720p? Does it output native 4K? Upscaled 4K? 1080p? What is the quality of the graphics? Last-gen console? Top-of-the-line tablet? We don’t know anything about what we’ll actually be looking at come March, so making blanket statements about “It can’t cost more than…” really doesn’t make any sense.

Let’s all just take a long breath, exhale, and just take what we know at face value.

Of course, the most important thing we know is that a new GameFreak-made Pokemon game is coming to the Switch.

Retro Gaming in the Modern World, part 1

You know that feeling when you start to look up something on WebMD and you start to panic because you think you have some terrible disease? That’s kind of what happened to me when I started looking up retro gaming video quality. My WebMD, in this case, was the My Life in Gaming RGB Master Class. I had been doing a lot of research on playing retro games on modern displays, but my only modern display was a Panasonic plasma TV, which is not ideal for retro games due to the risk of image retention and burn-in. As luck would have it, my large CRT has started to act really strangely when it first turns on and has only been getting worse. The replacement for my failing CRT handles retro games with surprising grace but still falls flat in a few areas. To address those issues I’ve purchased a video upscaler. Why not just plug in my consoles and let the TV do its thing? Well, that takes a lot of explaining. In part 1 we’ll cover some of the technical information we need to know before diving head-first into what the scaler does.

Pixels, Sub-pixels, and Resolution

An example 4-pixel by 3-pixel display with each red, green, and blue sub-pixel shown.

When an image is displayed on a screen you’re actually looking at small squares called pixels (short for ‘picture elements’) that, when viewed from a distance, make up an image. On top of that, each pixel is made up of three sub-pixels, each one displaying either red, green, or blue (RGB). Colors are created by changing the brightness of each red, green, and blue sub-pixel individually. For example, if red and green are at full brightness and blue is completely darkened you get a bright yellow.
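
To make that mixing concrete, here’s a trivial sketch in Python. The 0-255 range is an assumption (standard 8-bit color); the point is just that each named color is nothing more than three sub-pixel brightness levels.

```python
# A minimal sketch of sub-pixel mixing, assuming 8-bit (0-255) brightness per sub-pixel.
examples = {
    "bright yellow": (255, 255, 0),    # red and green at full brightness, blue darkened
    "white":         (255, 255, 255),  # all three sub-pixels at full brightness
    "black":         (0, 0, 0),        # all three sub-pixels off
    "magenta":       (255, 0, 255),    # red and blue on, green off
}

for name, (red, green, blue) in examples.items():
    print(f"{name:>13}: R={red:3d} G={green:3d} B={blue:3d}")
```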

Standard definition is 480i: 480 horizontal lines (rows) drawn with interlaced video. Interlacing displays only the odd lines of a video frame (1, 3, 5…), then the even lines of the next (2, 4, 6…). Modern displays are typically 1080p, with 1,080 lines drawn with progressive scan. Progressive scan means the whole image is drawn in a single pass, every line in every frame, rather than alternating lines. The result is much better quality video when there’s fast motion or scrolling text.
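
If it helps to see the difference spelled out, here’s a tiny Python sketch that treats a frame as nothing more than a list of numbered lines; the 480-line frame is just an example.

```python
# Interlaced vs progressive, assuming a frame is just a list of 1-based line numbers.
def interlaced_fields(frame_lines):
    odd_field = frame_lines[0::2]    # lines 1, 3, 5, ... shown first
    even_field = frame_lines[1::2]   # lines 2, 4, 6, ... shown on the next pass
    return odd_field, even_field

frame = list(range(1, 481))        # a 480-line standard-definition frame
odd, even = interlaced_fields(frame)
print(len(odd), len(even))         # 240 240 -- each pass draws half the lines
# Progressive scan (480p, 1080p) simply draws every line of every frame in one pass.
```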

It should be noted that I’m only talking about the number of lines. That’s because the horizontal resolution, the number of pixels across each line, could vary wildly. The true width of a standard-definition signal was much wider than what the TV was able to display, and some games ran wider than others even though the line count was the same.

Retro game consoles generally only had the processing capability to generate 240p video, which, despite being a non-standard resolution, TVs were able to display without issue. It wasn’t until the Sega Dreamcast era that consoles routinely output 480i and 480p images. Most modern TVs are able to accept and display a 240p image, but they see this non-standard resolution as 480i and attempt to deinterlace an image that is not interlaced to begin with, ironically making the image appear interlaced and introducing other potential issues. This can be as minimal as a blurry image, but can also interfere with flickering transparency effects, effectively making some sprites and characters disappear when taking damage. The process of upscaling this “480i” signal to 1080p can also introduce input lag, making time-sensitive games like Mega Man or Beatmania nearly impossible to play.

Connection Types

So now we understand what makes up a picture, but how does that picture get from the console to the TV? When the console generates each frame of video it leaves the image processor and enters a digital-to-analog converter (DAC), which turns the video into a signal that the TV can display. The quality of the video that gets sent to your TV depends largely on two things: the quality of the DAC, which you can’t change, and the connection type used, which you usually can.

RF adapters

RF adapter for the Nintendo Entertainment System.

There was a time when many consumer TVs in the United States only had a single video input: the coaxial connection, also called the antenna connection. It was used for both over-the-air and cable TV signals and was often the only way to plug in your video games. Internally, the game system takes the video signal, which is digital when it’s originally created, converts it to an analog signal, then sends it to an RF (radio frequency) adapter which converts that analog signal into another kind of analog signal that, to the TV, looks just like a TV broadcast. If you remember having to use radio adapters to listen to your iPhone in your car, it’s the exact same thing but with a physical connection. The signal was also susceptible to interference from other devices and from actual TV broadcasts, which would create distortions and ghost images. All this, combined with cramming all the audio and video information into a single cable, really took a toll on the image quality.

As a side note, even if you wanted to connect your console to your modern high-definition TV this way, many no longer come with analog TV tuners (since analog broadcasting is no longer used in the US), so this may not work at all.

Composite

Typical composite cables: red and white for audio, yellow for video.

Where RF combines audio and video data into a single connection, composite only transmits video data; audio is transmitted over one or two separate RCA cables (white and red). Picture quality is greatly improved because there’s less information to transfer over a single connection, there’s one less signal conversion, and the connection is not susceptible to the same interference as RF. A lot of newer TVs support composite but not S-video, so in some situations this may be the only connection type you can use.

This connection is also referred to as “AV” or “RCA”, though RCA is the physical connector type and doesn’t refer specifically to composite video.

S-video

S-video cable, carrying separate chroma (color) and luma (brightness) signals.

S-video, short for ‘separate video’, splits the video signal into two connections: one for color (chroma) information and one for brightness (luma) information. Composite video carries both of these signals on the same wire, at two different frequencies, and they can interfere with each other, causing blurriness in the image. Separating them onto their own connections means they cannot interfere with each other, providing a higher quality image.

If your TV supports it, S-video is typically the way to go. Most consoles support it and it’s typically the best video quality you can get with a very minimal investment.

Component

Component cables for YPbPr video. Audio cables not shown.

The correct name for this connection is YPbPr, but it’s largely known as ‘component’. It carries video over three separate RCA cables: one for luma (basically a weighted combination of the red, green, and blue color information), one for the difference between blue and luma (Pb), and one for the difference between red and luma (Pr). Green is recreated by subtracting the red and blue information from the luma. It’s also possible to carry RGB over the same three cables, which the PlayStation 2 has the option to do, but most TVs don’t support it.
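
For the curious, here’s a rough Python sketch of that relationship, using the standard BT.601 luma weights. Real analog component video also scales the two difference signals, which I’m ignoring here; the point is just that green can be recovered from the other three values.

```python
# Rough YPbPr sketch (BT.601 luma weights); the difference-signal scaling used by real
# analog component video is omitted for clarity.
def rgb_to_ypbpr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted mix of red, green, blue
    pb = b - y                              # blue minus luma
    pr = r - y                              # red minus luma
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    b = pb + y
    r = pr + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green is whatever is left of the luma
    return r, g, b

print(ypbpr_to_rgb(*rgb_to_ypbpr(1.0, 0.5, 0.25)))  # round-trips back to roughly (1.0, 0.5, 0.25)
```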

For consoles with AV multi-out ports it should be possible to get YPbPr video by using a SCART cable with a SCART-to-component adapter, though your results may vary depending on the console and TV used. You’ll also still be getting 240p output, so you’ll end up with the same blurring, deinterlacing, and input lag issues you would get with composite and S-video.

What’s the Result?

I took a screenshot of Super Mario World and did some Photoshop work on it to give you an example of the kinds of image quality differences you can expect with each connection. For a more real-life comparison check out the RGB Master Class series.

So What’s the Solution?

There’s a group of video products called scalers that take standard-definition signals and output them at 720p or 1080p. Most of these devices are expecting a 480i signal, so while you might have less input lag and fewer of the other issues caused by the TV’s misinterpretation of the 240p signal, you might still end up with some distortion. Common issues are halos around sprites from heavy-handed sharpening and image stretching to fill the TV screen. While there are plenty of options out there, the best so far seems to be the Micomsoft XRGB Mini, also known as the Framemeister. This piece of hardware was designed specifically for 240p video, allowing for proper, distortion-free scaling. Mine was just delivered today, and I’ll be documenting my experience with it as soon as I’m back from Korea.

Another solution is console-style emulators like the Retron, but I’ve never liked that solution. Yes, it uses cartridges, but there’s nothing authentic about the feel of it, the controller is garbage, there’s apparently some amount of input lag, and I already have a PC to connect to the TV, so why pay for an emulator that you could legitimately download for free?

There’s also official emulation from Nintendo, Sony, and Microsoft, as well as backwards compatibility from newer consoles with higher quality output. Some consoles offer perfect compatibility, like playing PlayStation games on a PlayStation 2, but the Xbox 360’s emulation of original Xbox games is hit-or-miss, and usually ‘miss’. Having a single solution that solves all my video issues, rather than a dozen band-aid solutions, is the better option for me, and the HDMI-out from the XRGB Mini also allows for easy capture of extremely high quality video for streaming or recording gameplay videos.

Frame Rates in CounterStrike Throughout History

While I was watching a CounterStrike: Global Offensive stream on Twitch, there were questions about the streamer’s resolution. A popular resolution for competitive players is 800×600, for a few reasons mostly revolving around performance. Thinking about this superfluous bit of information sent my brain down a rabbit hole, eventually ending up at “I wonder how each version of CS runs on modern hardware.” I booted up each of the main versions of CS (1.6, Source, and Global Offensive) and got to work.

By default, CS is limited in the number of frames per second it can display, so those limits had to be removed. Next, I set the resolution to 800×600 so we were measuring against some kind of standard. There weren’t many, if any, visual options for CS 1.6 so there wasn’t much to do there. In Source, I turned up all the settings to their maximum values, except for HDR, which I disabled because it’s not something I would have enabled anyway. In Global Offensive I normally run with rather conservative settings. I turned down a few options, like texture filtering, to better match how a competitive player would have their game set up. Here are the results.

Frames per second in each of the three main releases of CounterStrike

So… wow, these numbers are kind of all over the place. Let’s take a closer look and see what’s going on here.

In 1.6, the oldest of the three versions, we have an average frame time of 3.9 ms (253.5 fps) with a standard deviation of 1.5 ms. Frame times drop as low as 2 ms (500 fps), and the 90th percentile (that is, what 90% of the frame times will be better than) is 5 ms (200 fps). In Source, a version that’s still quite old but much more optimized, we have an average frame time of 3.5 ms (285.7 fps) with a standard deviation of 1.6 ms. The results are, as near as makes no difference, identical between these two games. Even the 90th percentile is only 0.8 ms higher, at 5.8 ms. It would appear, then, that these older titles are hitting some kind of limitation in the game engine, rather than the hardware. Global Offensive had an average frame time of 8.4 ms (119.0 fps) with a standard deviation of 1.9 ms. Why does GO have a higher deviation compared to the other versions when the graph appears much more stable? Part of it is the way the graph displays the relationship between frame time and frame rate, but another reason is that at these longer frame times (8.4 ms compared to 3.5) each millisecond of frame time has less of an impact on the overall frame rate. The difference between 2 ms and 4 ms is 250 fps, but 8 to 10 is only 25. Here’s the same chart displaying only the frame rate.
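
For anyone who wants to check my math, here’s a small Python sketch of the conversions used throughout this post. The frame times in the list are made-up sample data, not my actual benchmark log.

```python
# Frame time (ms) to fps, plus the summary stats quoted above; sample data is invented.
import statistics

def to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

frame_times = [3.4, 3.6, 3.9, 4.1, 5.0, 3.8, 6.2, 3.5, 4.8, 3.7]   # milliseconds

avg = statistics.mean(frame_times)
dev = statistics.stdev(frame_times)
p90 = statistics.quantiles(frame_times, n=10)[-1]   # 90% of frames are faster than this

print(f"average: {avg:.1f} ms ({to_fps(avg):.1f} fps)")
print(f"std dev: {dev:.1f} ms")
print(f"90th percentile: {p90:.1f} ms ({to_fps(p90):.1f} fps)")
```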

Same chart as earlier, displaying frame rate linearly.

Here the difference in frame rate looks much more severe in 1.6 and Source. Is there anything we can do about that? We could enable V-sync, which would delay the presentation of each frame to the monitor until the previous frame has finished drawing, but that would introduce a delay between what’s happening in the game and when it’s displayed on the screen. We’ll leave V-sync disabled but tell the game to limit the frame rate using the fps_max command, setting it to 300 for 1.6 and Source, and 130 for GO.
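
As a mental model, a frame cap like fps_max works something like the sketch below: finish the frame, then wait out whatever is left of that frame’s time budget. This is an illustration of the idea only, not how the GoldSrc or Source engines actually implement it.

```python
# Illustrative frame limiter: sleep away the unused portion of each frame's time budget.
import time

def run_capped(render_frame, fps_cap, frames):
    budget = 1.0 / fps_cap                      # e.g. 300 fps -> ~3.3 ms per frame
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                          # do the actual work
        elapsed = time.perf_counter() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)        # idle instead of racing ahead

run_capped(lambda: None, fps_cap=300, frames=10)
```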

 

Limiting the maximum number of frames to make gameplay smoother.

This had two unexpected effects. First, while Source’s average frame time only decreased from 3.5 ms to 3.4, the standard deviation dropped from 1.6 ms to only 0.5!  Second, the frame rate didn’t get any more stable in 1.6. Global Offensive is expectedly smoother, but still bumpy, because we’re running into the limitations of my CPU and GPU, but it’s odd to me that 1.6 didn’t see any smoothing. In 1.6 I set fps_max to 300, 250, and 200 to see if we could get smoother frame times, and…

CS 1.6 with fps_max set to 300, 250, and 200.

Surprisingly, while lower fps_max values did stabilize the frame rate some amount (standard deviation was 1.2, 1.0, and 1.2 ms respectively), I never saw the same kind of stability as I did with Source capped at 300 fps. It’s unfortunate, but it’s likely due to a lack of modern optimizations in the game engine, like hardware-based smoke and particle effects and multicore rendering. Average frame times were 4.1 ms (241.7 fps), 4.5 ms (221.2 fps), and 5.2 ms (192.8 fps), which is a little disappointing, but at that 200+ fps level, you’re going to be hard-pressed to notice a difference anyway.

Going back to the original thing that got me thinking about this, how does resolution affect frame rate in the most recent version of CounterStrike?

800×600 vs 1680×1050 vs 2376×1485

Using nVidia’s Dynamic Super Resolution I was able to run the game at 2376×1485, basically the 16:10 equivalent of 1440p. In order to do this, I had to use nVidia’s optimized presets, so I opted for the performance option and ran new benchmarks with the new settings. At 2376×1485 the average frame time was 5.6 ms (179.4 fps) with 7.5 ms (133.3 FPS) in the 90th percentile. At 800×600 you would expect to see double or triple that frame rate, but surprisingly I got an average frame time of 6.3 ms (158.7 fps) with 8.4 ms (119.0 fps) in the 90th percentile. 1680×1050 saw similar results. It seems, in this instance, that increasing the resolution actually increased frame rate as well.

What did we learn? Well, just because a game is old doesn’t mean it’ll run at a million FPS; the engine needs to be designed to scale with increased CPU and GPU capabilities. As we saw, updated game engines are able to run not just faster, but smoother as well. We also learned that GPUs work better when they’re given a healthy workload.

The benefits to gaming at 800×600 don’t appear to be performance related. Instead, it’s more likely tied to what old school players are used to. The only significant change is that, when using a stretched 4:3 resolution, the whole game appears wider, including player models and doorways. On the flip-side, you’re also limiting your field of view. Some players may even prefer this, as it keeps focus on what’s in front of you. TL;DR, it’s just about preference and what’s comfortable. There’s no performance to be gained either way.

Let’s Clear the Air: Decoding Technical Jargon

With Sony revealing the PlayStation 4 Slim and PlayStation 4 Pro yesterday, along with Microsoft’s reveal of the Xbox One S at E3, there have been a lot of terms used that the average consumer may not be familiar with. Terms like “teraFLOPS of compute performance” and HDR make consoles and video cards sound impressive, but what do they actually mean?

Let’s start with FLOPS, or Floating-point Operations Per Second. Computers can basically do two kinds of math: with decimal places (floating-point) and without decimal places (integers). Floating-point math is critical for scientific computation, including simulating 3D objects and environments, which is basically what a game is. Processor speed is measured in Hertz (Hz), or cycles per second. If a processor is able to execute 100-million instructions per second, its speed is rated at 100,000,000 Hz, or 100 megahertz (MHz). Modern processors are typically in the range of 3 gigahertz (GHz), or 3-billion instructions per second. Depending on the processor, it might be able to perform a single floating-point operation per clock cycle, or maybe it can do 4, 6, or 8 (which also depends on how much precision, meaning how many decimal places, is used in the math). This is one of the reasons that CPUs rated at the same speed can produce different results. So if our 3 GHz processor can perform 8 floating-point operations per cycle, that’s 3-billion times 8, or 24-billion FLOPS (24 gigaFLOPS). Of course, modern processors might have 4, 6, or 8 cores, so if we assume we’re looking at a 4-core CPU we need to multiply that number by 4, giving us 96 gigaFLOPS.
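
The whole calculation fits in a few lines. The clock speed, operations per cycle, and core count below are just the example numbers from this paragraph, not the specs of any particular chip.

```python
# Back-of-the-envelope theoretical FLOPS, using the example figures above.
clock_hz = 3_000_000_000   # 3 GHz
ops_per_cycle = 8          # floating-point operations per core per cycle
cores = 4

per_core = clock_hz * ops_per_cycle    # 24 billion = 24 gigaFLOPS
total = per_core * cores               # 96 billion = 96 gigaFLOPS

print(f"{per_core / 1e9:.0f} gigaFLOPS per core, {total / 1e9:.0f} gigaFLOPS total")
```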

CPUs have to perform a wide variety of computational tasks. Being a jack-of-all-trades means they aren’t quite as fast as a processor that’s dedicated specifically to floating-point calculations. This is where video cards (GPUs) come in. GPUs are purpose-built for doing as much floating-point math as possible. That means that, while a typical desktop CPU might perform somewhere in the 50-100 gigaFLOP range, mid-range GPUs can perform in the 3,000-5,000 gigaFLOP (3-5 teraFLOP) range.

Now that we know what a FLOP is and how it’s calculated, we can look at what Sony claims the PlayStation 4 is capable of. This chart from AnandTech shows the GPU performance of the original and Slim PlayStation 4 models at 1.84 teraFLOPS (1,840 gigaFLOPS) and the PlayStation 4 Pro at 4.2 teraFLOPS (4,200 gigaFLOPS). That 2.3x performance jump means that games can run at higher resolutions, with more texture and model detail, at higher and smoother frame rates, or any combination thereof. Is it enough for native 4K? Probably not. Comparing that 4.2 teraFLOP number to a desktop GPU, it’s right in between a GTX 970 and 980, meaning it’s closer to a 1440p or 2K resolution performer unless you really dial down the rest of the graphics settings.

“But the PS4 Pro supports 4K. How is that possible if it isn’t powerful enough for 4K games?” It might not be able to render typical games at 4K resolution, but it can render them at 2K and upscale the image to 4K. It’s basically like resizing an image in Photoshop, but with a little bit of sharpening and other effects to make it look kinda like it was originally rendered in 4K. Similarly, if you have a 1080p display, the system could still render a game at 2K and downsample that image to 1080p, resulting in sharper, more natural images. This technology is already available on the desktop with nVidia’s Dynamic Super Resolution. The PS4 Pro is, however, capable of playing 4K video through services like Netflix and YouTube, though it looks like the Pro does not include Ultra HD Blu-ray support, so you won’t be able to watch your 4K movies on your new PlayStation.
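
Here’s a toy example of the downsampling half of that idea: render more pixels than the display has, then average blocks of them into each output pixel. Real DSR-style filtering is fancier (and 2K to 1080p isn’t an exact 2x ratio), but the principle is the same.

```python
# Toy 2x box downsample of a grayscale "render" (a list of rows of brightness values).
def downsample_2x(image):
    out = []
    for y in range(0, len(image) - 1, 2):
        row = []
        for x in range(0, len(image[y]) - 1, 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4)                # average each 2x2 block into one pixel
        out.append(row)
    return out

rendered = [[0, 0, 255, 255],
            [0, 0, 255, 255],
            [255, 255, 0, 0],
            [255, 255, 0, 0]]                    # a made-up 4x4 render
print(downsample_2x(rendered))                   # [[0.0, 255.0], [255.0, 0.0]]
```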

One of the other new features that was announced is HDR, or High Dynamic Range. Imagine you’re indoors on a bright, sunny day. You’re taking a picture of your friend, who is standing in front of an open window. One of two things is likely to happen: the camera may expose for your friend, leaving the background “blown out” and virtually pure white, or it may expose for the outdoors, leaving your friend as a blacked-out silhouette. This is an example of a low dynamic range. When you look at your friend with your eyes, though, you can see your friend clearly as well as what’s outside with no trouble at all. This is an example of high dynamic range. HDR video aims to provide a more lifelike range of color and brightness than a typical TV or computer monitor is capable of.

That’s all I can think of for now. If you have any questions about fancy-pants words companies are throwing around in their press announcements leave a comment below.

Pre-PAX Prime PC Preparations

A little over a year ago I decided to undertake a somewhat unique approach to water cooling: using surface area and evaporation to silently remove heat from the computer. If you want details, the build thread is here on the Linus Tech Tips forums. The long and short of it is, while it did work, summer temperatures made the water evaporate at an annoying rate and the cheap pump I was using generated more noise than I was happy with. That, and I couldn’t move my computer downstairs if I wanted to set up an HTC Vive.

I pieced together a massively overkill water cooling loop with the idea that excessive cooling meant less noise. The end result was this:

water-cooling-1

I tried to make the diagonal lines work, but what I really wanted was something with a lot more 90-degree bends. The lines running into and out of the CPU cooler weren’t the same length and ran across the case at slightly different angles, and when combined with the mostly-horizontal line running from the radiator back to the reservoir, I was pretty unhappy with how it looked. The performance was fine, and I achieved a perfectly stable 4.4 GHz overclock with relative silence. The NZXT Hue+ also provided nice, ever-changing mood lighting which really set the build off.

This year I decided to bring my PC to PAX. There’s a LAN across the street every year, and I figured I could edit and post content without relying on my laptop, play some games, and just have a nice space to myself when I needed a break from the madness of PAX. This was the perfect opportunity to address some of the issues I had with the system.

water-cooling-2

First up was the tubing. One of the issues I had with the tubing was how it ran all over the case and was visually too messy for my taste. To resolve this I wanted to move the radiator to the front of the case. Despite there seeming to be enough room, I just couldn’t manage to cram the radiator, fans, and reservoir all in the hard drive bay. My backup plan was to keep the radiator in the front, but rotate it so the inlet and outlet were now in the front of the case. This allowed me to make shorter, more direct lines between components and get those parallel 90-degree bends I wanted so badly.

Swapping the radiator back and forth was a massive pain in the ass because every time I would have to remove the bolts holding the fans on, reattach them, realize I put them on the wrong way, and do it all over again. Eventually, I got it all sorted and everything was fine.

Except when I realized I ran the water loop backward through the water block. Luckily that was an easy fix; I just had to flip the block upside down.

Now that the radiator lines are in the front of the case there is no room for a hard drive. The 3 TB hard drive now lives in the ever-cramped basement with all the power cables. It seems happy enough, but in the future I’d really like to mount it at the bottom of the hard drive cage, under the reservoir.

The last thing I changed, which isn’t pictured, is swapping the OCZ SSD and the Hue+ controller. I had originally put the SSD in first and didn’t think about the aesthetics when I put the Hue+ controller in. The position of the massive black box started to wear on me over time, so I figured now would be the best time to swap all that stuff around. Now it looks much, much better.

I ordered a set of clear acrylic cable combs for the GPU wiring. It’s not supposed to arrive until Saturday, so if it doesn’t show up early I’ll have to install them at PAX. The last thing I ordered, which should be here the day before PAX, is a plastic scratch remover kit. My plastic case window has been through a lot, and I’d like it to look new before putting it on display for all to see (the scratches are rather prominent in the video below). Hopefully it gets here, hopefully it works, and hopefully my loop doesn’t have a meltdown like in the dream I had last night.

Because pictures are kind of boring these days, here’s a short build video I took and edited in a hurry. There are chunks missing because the battery in my camera died, but all you’re missing is me struggling with the radiator. When I do my next rebuild I’ll tear the system down to bare components and do a complete build video from the ground up.

No Man’s Sky: A Performance Analysis

nms-google

Update: I’ve received some suggestions about using external v-sync controllers and allowing the game to rebuild shaders after updating drivers. I’ll update the post when I’ve had a chance to run some new benchmarks.

It’s hard to find any reviews or reports on No Man’s Sky without hearing about performance issues. Game stability aside, there are reports all over the place about FPS drops and stuttering while playing the game. A Google search for “No Man’s Sky PC performance issues” shows 1.24 million results, including an article from Polygon titled Don’t buy No Man’s Sky on PC yet.

What are these issues, exactly? Frame time stability seems to be a major one. At 60 frames per second, it takes 16.7 milliseconds to draw one frame. There’s always going to be some variance, so if one frame takes 15 ms and the next takes 18 you’re not going to notice much; everything will still be nice and smooth. However, if you’re averaging 16 ms per frame and suddenly get frame times bouncing between 16 ms and 50 ms (that is, sudden fluctuations between 62 FPS and 20 FPS) that sudden, drastic change becomes a very noticeable stutter.
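
If you logged frame times yourself, spotting that kind of stutter is as simple as flagging frames that take much longer than the average. The sample numbers and the 1.5x threshold below are arbitrary choices for illustration, not anything the game itself uses.

```python
# Flag frames that take noticeably longer than average; data and threshold are invented.
frame_times_ms = [16, 17, 16, 15, 18, 50, 16, 17, 48, 16]

avg = sum(frame_times_ms) / len(frame_times_ms)
stutters = [(i, t) for i, t in enumerate(frame_times_ms) if t > 1.5 * avg]

print(f"average: {avg:.1f} ms ({1000 / avg:.1f} fps)")
for index, t in stutters:
    print(f"frame {index}: {t} ms ({1000 / t:.1f} fps) -- a visible hitch")
```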

There seem to be a few different causes of this frame time instability. One is that the game is constantly generating new terrain and lifeforms through a process known as procedural generation. It’s important to point out that this does not mean it is randomly generated. Procedural generation uses a formula with some number of input variables to generate everything, so if the same variables are given (X, Y, and Z coordinates in space, for example) the output will always be the same. This means that the game has a helluva lot of math to do all the time. Every time you go somewhere new, every time you land on a planet, and every time you warp to a new star system, the game needs to take all of the inputs, run them through the formula, and start generating every plant, animal, mountain, mineral, body of water, space pirate, landing pad… you can see where your computer’s processor might start to sweat a little. Traditional games have levels created by a designer, so everything is already determined and there’s very little math involved. Ammo crate there, blood demon there, and typically all loaded into system memory before the game even starts.
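
A toy illustration of that determinism: seed a generator with nothing but the coordinates and you get the same “planet” every time. Hashing the coordinates is my own stand-in here; I have no idea what formula Hello Games actually uses.

```python
# Procedural generation is deterministic: same coordinates in, same output out.
import random

def generate_planet(x, y, z):
    rng = random.Random(hash((x, y, z)))        # seeded only by the coordinates
    return {
        "terrain_height": rng.randint(100, 5000),
        "plant_species": rng.randint(0, 40),
        "has_water": rng.random() < 0.6,
    }

print(generate_planet(12, -7, 301))
print(generate_planet(12, -7, 301))             # identical: nothing stored, only recomputed
```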

Another issue is that, with such a small studio creating the game, they simply could not optimize it for such a wide variety of system configurations, so some incompatibilities and strange default configuration options are going to make their way into the “final” product. Things like G-sync, a technology that requires a relatively modern video card and a new, expensive monitor, are enabled by default with no way to disable them without digging through system configuration files. Or the fact that the game is locked to 30 FPS by default on PC.

So how does the game actually perform? Here I’m testing the game under two scenarios: traveling from a space station to a planet’s surface, and exploring the surface of the planet while harvesting resources. This should give us a pretty good understanding of how the game performs under the most demanding scenario as well as a more typical one. My testing system consists of an AMD FX-8350 overclocked to 4.4 GHz, an EVGA GTX 970 SC, and 16 GB of system memory. Running the game at its purest default settings at 1080p we get the following results:

Frame times in milliseconds with default settings @ 1080p.

So what we’re seeing here is how long it took to generate each video frame. By default the game is locked at 30 frames per second, so each frame should take about 33 milliseconds. Most of the time that’s true. Between both scenarios, the average frame time was 34.9 ms (28.6 frames per second). As a comparison let’s take a look at a modern juggernaut of real-time graphics: DOOM.

Frame times from No Man’s Sky (default, 1080p) vs DOOM (ultra, 1080p). V-Sync was enabled on DOOM for a more accurate comparison.

Here I’ve taken the frame time data from earlier and overlaid it with the frame times of typical DOOM gameplay. I chose DOOM because it’s a showcase of what modern hardware and software are capable of. If you follow the red data you can see that there are very few frames that deviate from the pack. This uniformity gives a very smooth gameplay experience. Going back to the No Man’s Sky data you can see that not only do the frames deviate more often, they deviate much further as well.

If we uncork the performance by disabling v-sync and turning the max FPS off (I’m not entirely sure how these are different yet) we can see a huge jump in performance.

unlocked-framerate

Overall the frame times have improved, with the average dropping from 34.9 ms (28.6 frames per second) to 20.7 ms (48.3 FPS). However, the distribution of stuttering, deviant frames is still the same.

Just to see what would happen, I maxed out the graphics settings and widened the field of view (90 on foot, 100 in the ship), and got this:

Frame rates with all the graphics settings maxed out and wider field of view.

While running around the planet and mining resources, the frame rate was a little less stable than with the default settings, which isn’t that surprising. There were a few more deviant frames causing stutters but not a significant amount. However, leaving and re-entering the atmosphere saw a tremendous surge in stuttering. Granted, most of your time isn’t spent traveling between planets and space, but there should be something we can do to make that transition smoother. Hello Games has an experimental patch available that is supposed to improve performance on some AMD CPUs and 8-core CPUs (of which mine is both). It also disables things that should have never been enabled in the first place, like G-sync, to address performance and compatibility issues. First, though, let’s update my video card drivers.

The nVidia GeForce Experience control panel is telling me my current driver is from a month ago, and that the newest driver “Provides the optimal experience for No Man’s Sky, Deus Ex: Mankind Divided, Obduction, F1 2016, and the Open Beta for Paragon”. So that’s a good start.

Frame times with maximum quality settings and the latest drivers from nVidia.

Before the driver update, the average frame time was 26.6 ms (37.6 FPS). Afterward, it jumped down to 17.7 ms (56.4 FPS), which is a staggering change. We can see that the deviant frames are reduced overall, though some frames took much, much longer than most. How much longer? All the charts so far have had a ceiling of 250 ms (4 FPS) for uniformity. If I remove that ceiling…

Frames that took forever to render.
Frames that took forever to render.

Woah, woah, woah. Those 5 frames that we couldn’t see before? Together they took 4.6 seconds to render. During these tests I’ve been jumping back and forth to the same region of the same planet, so while I understand that the procedural generation takes an awful lot of resources, it’s weird that the game doesn’t seem to be caching planet data anywhere. I don’t know how much drive space a cached planet or five might take, but if it’s able to smooth the transition from space to surface it might be a good trade-off, if we’re ever given the option in a later patch.

Speaking of patches, how does the experimental patch affect performance?

experimental-patch

Wow! As soon as I loaded No Man’s Sky after patching it I immediately felt a smoother frame rate but I wasn’t expecting this kind of result. The average frame time dropped a respectable 2.7 ms to 15.0 ms (66.6 FPS) from the pre-patch 17.7 ms (56.4 FPS). I should mention that after patching the game I did get a crash between the two benchmarks, which I haven’t had until now, so there might be increased frame time stability at the cost of game stability. That said, it was a graceful crash that didn’t take the rest of the system down with it, and I was able to jump back in with no problems.

As long as the crashes don’t come too often I think it’s a pretty good trade for an experimental patch. The game has been out for a week and I’m anxious to see what fully supported patches bring to the game.

I think I’m Giving up on Pokemon GO (For Now)

In my last post I talked about some of the issues I, and the rest of the Pokemon GO community, are dealing with as Niantic continues to patch the game and roll out to the rest of the world. I’ve been keeping my spirits high and continuing to play, but on my walk to work today I realized something. There’s nothing to do anymore.

It sounds like I’m being a little harsh, and I might be. You can still do things, but I’m not compelled to. Going back to my last post, gym battles are literally pointless. I pass gyms held by other teams and have no motivation to take them over. Or, I’ll pass gyms held by my team and have no motivation to spar and increase the gym’s level. Without any meaningful tracking system, I can only know that somewhere within a several-hundred-meter bubble there is a Pinsir that will likely disappear before I ever find it. Over the past couple weeks, I’ve only opened the game on my walks to and from work, hatching eggs and catching Pokemon that happen to wander in front of me. During these walks, I’ll have the phone at my side, the screen turning black to save power. When I feel the vibration and look at my phone, I’ll spin my map or try to tap on the Pokemon only to see that the app no longer responds to touch inputs. After restarting the application the Pokemon I wanted to catch are no longer there. In other instances, I’ll tap on a Pokemon that spawned next to a Poke Stop. When I leave the Poke Stop the Pokemon will have disappeared or seemingly transformed into something else (yesterday a Kingler and Clefable appeared to turn into a Nidoran and Weedle).

I was a Pokemon GO apologist, and I still might be, but at this point the game is still an early beta with no enticing gameplay mechanics. The fact that the current build is version 0.31 more-or-less confirms that. Last I heard, Niantic has made over $130 million off of Pokemon GO with an initial budget of about $20 million. That number is only going to grow and grow as they roll out the game to more countries, so I don’t feel like taking a hiatus for a while is going to hurt the game at all. Hopefully, by the time I get to Korea in October the game will be more of a game and less of a meta walking simulator.

Pokemon GO: IVs and why Powering up is Broken

Update: The original post had a few inaccuracies and has been updated accordingly.

Pokemon GO can be broken down into three core game components:

  • Finding and catching Pokemon
  • Powering (leveling) up and evolving Pokemon
  • Battling at, and controlling, gyms

The “3 Foot” bug has been plaguing Pokemon GO for weeks, making it nearly impossible to track down individual Pokemon. If they’re on the radar at all you know they’re somewhere nearby, but the only way to track them down is by consistently monitoring where they fall within the list relative to the rest of the Pokemon (Pokemon at the front of the list should be closer than Pokemon at the end). As of version 0.31.0 the footprints have been removed altogether. Posting on the official Pokemon GO Facebook page, Niantic says “We have removed the ‘3-step’ display in order to improve upon the underlying design. The original feature, although enjoyed by many, was also confusing and did not meet our underlying product goals. We will keep you posted as we strive to improve this feature.” It makes sense to not display something that’s known to be broken, and it very well might be causing confusion for new players, but it also sounds like they might not reimplement the original functionality. Hopefully what this really means is they’ll come up with a better way of tracking Pokemon; something akin to the Poke Radar or Dowsing Rod items from the core series games.

Before I explain why I feel like powering up Pokemon is a wasted effort I should explain how powering up works, the risks involved, and some of the underlying game mechanics. Please note that when I say something like “we know” or “this is how it works”, what I really mean is “As a community, we currently believe, based on the data collected and analyzed by groups within the community…”.

When you catch or hatch a Pokemon it comes with a predetermined set of stats. For a Bulbasaur I just hatched, those stats are:

  • CP: 594
  • HP: 60
  • Type: Grass/Poison
  • Weight: 6.35 kg
  • Height: 0.65 m

We know that weight and height have no influence on any other stats, so we can ignore them. Also, the typing, “grass/poison”, is the same for all Bulbasaurs across the game. The only stats that matter, then, are CP and HP. 594 for a little baby Bulbasaur feels like a lot. To the player (me) it feels like he’s a monster capable of quite a lot. But we have other information to consider. In order to level up this Bulbasaur, we can see that it will take 2,500 star dust and 2 candies, but how much star dust and how many candies do I need to spend in order to get him up to a CP 2000+ gym defender? Through whatever means, the community has figured out how to calculate a Pokemon’s level and IVs. They have also determined what the maximum level and IVs are, so now we can know what the actual potential of a given Pokemon is. Using Poke Assistant’s IV Calculator tool I entered the required information and got this result:

iv calculator

There are multiple combinations that could result in my Bulbasaur’s stats, each listed below. Sometimes the level may change, too, creating even more possible combinations. The higher level a Pokemon is, typically, the fewer possible combinations there are. So we can see that, with all the possible combinations, my Bulbasaur is somewhere between 84.4% and 86.7% of its potential (100% being 15 each for attack, defense, and stamina).

Hold up, I haven’t explained exactly what IVs are yet. IVs, or “individual values”, are used to determine how strong a Pokemon can be. In the core games they determine your Pokemon’s physical attack strength, special attack strength, physical defense, special defense, speed, and HP. Here it’s a little simplified, only determining attack, defense, and stamina (HP). Basically, if we have two Bulbasaurs that are both level 20, but one has higher IVs across attack, defense, and stamina, it will be stronger overall than one with lower IVs. This is important to know when selecting a Pokemon to evolve and power up.
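
The “perfection” percentage the calculators report boils down to a little arithmetic: the three IVs added together, divided by the maximum possible total of 45. Here’s a minimal sketch, assuming the community’s model of three 0-15 IVs is right.

```python
# Perfection percentage under the community IV model: three stats, each 0-15, 45 max total.
def perfection(attack_iv, defense_iv, stamina_iv):
    return (attack_iv + defense_iv + stamina_iv) / 45 * 100

print(f"{perfection(15, 15, 15):.1f}%")   # 100.0% -- a "perfect" Pokemon
print(f"{perfection(13, 12, 13):.1f}%")   # 84.4% -- one combination at the low end of my Bulbasaur's range
```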

Let’s assume you’ve been diligent about not spending your dust and candies on powering up your Pokemon. You’ve chosen your prized Pokemon that you want to pour all your hard work into. Maybe it’s a Golduck with a maximum potential of 91.1% (remember, there are likely multiple possible combinations of level and IV with no way of knowing which is correct without powering up). Currently, I have 98,835 star dust and 32 Psyduck candy. That should be plenty to power up this Golduck into an unstoppable powerhouse. Right?

To level this Golduck from 20 (though we’re hoping it’s currently 19) to its maximum level of 40.5 it would take 242,500 star dust and 41 candies. And that’s the cost from level 20; if we’re gambling that it’s actually level 19 with the better IVs, it would require even more. If it’s currently level 20, its IVs are, at best, average and not worth powering up, since doing so would take all of our current resources, denying a Pokemon with much higher potential the chance to power up.

Note: Your Pokemon’s max level is proportionate to your trainer level. I wasn’t able to find any hard numbers online, so I calculated the potential IVs and levels for my Tauros and started powering it up. Cross-referencing the perfection percentages before and after, I was able to figure out which combination was correct and what the current level was. After powering up Tauros seven times his level increased from 20 to 23.5 (half a level per power-up), his CP rose by 181 to 1,211, and his HP went up by a staggering 8. My trainer level is 22, so it appears that the max Pokemon level is [trainer level] + 1.5. It seems weird that it isn’t the same as your trainer level, so it’s probably a band-aid for early-level trainers. What all this means is that, while Pokemon have a level cap of 40.5, you would need to have a trainer level of 39 to reach it, which, for legitimate players, is going to be a long way off. More likely, your max Pokemon level is going to be around 25. For the Golduck in the previous example, powering up from level 20 to 25 would consume approximately 18,500 star dust.

With weeks of playing, I can invest reasonably in one Pokemon. It feels like so much work for very little payoff. It’s disheartening.

The third part of the game is gym battling. I’ve tried so many times to find a reason to do this but there honestly is none. I live in a fairly large city with a heavy technical presence (Nintendo, Microsoft, The Pokemon Company International, ArenaNet, Valve, WarGaming, and more are all 15 minutes or less from my home). Where I live, gyms are impossible to hold for more than a couple hours at best. Many gyms change color multiple times per hour. With a 21-hour timer between collecting gym rewards, you’re lucky to get control of two gyms before collecting your reward for the day. The largest number of gyms I’ve been able to control before cashing in is three, and even then it was mere minutes before all three had fallen. So, what is your reward for holding gyms? 10 PokeCoins and 500 dust per gym. 500 dust?! You get 100 for each Pokemon you catch, so if you catch 20 Pidgeys and Weedles in a short session that’s 2,000 dust. You can do that in an hour pretty easily, compared to 500 for a gym every 21 hours. It just doesn’t make any sense. While you do get XP for winning gym battles it doesn’t come close to what you can get for just catching Pokemon. So what’s the point? Bragging rights for your team?

We’ve identified two issues that could potentially be solved by a single change. In the core games, your Pokemon gain XP by battling. Get enough XP and they’ll level up. So why not apply this same mechanic to Pokemon GO’s gym battles? It would require making a few changes but let’s run through some options.

  1. Dust bonuses for winning gym battles
    Did you beat a Pokemon at a rival gym? Have some dust. Did you beat the gym leader? Get a dust bonus. Are you in control of the gym? Dust bonus! Of course, dust could be applied to other Pokemon, but this happens in the core games with Exp. Share, so it really wouldn’t be that different.
  2. Add an additional level-up mechanic
    Currently, the only way to power (level) up your Pokemon is by spending dust and candies, and it happens all at once. Add a counter to the Pokemon’s stats that tracks battles won. Each battle increments the counter by one; gym leaders count for 2, 3, etc., depending on the level of the gym or the CP of the defending Pokemon. Once that counter hits a certain number, determined by the current level, ding! Level up! (There’s a rough sketch of this idea after the list.)
  3. Add an actual Pokemon XP mechanic
    This is probably the most work and least ideal from Niantic’s side. Having XP for both the trainer and Pokemon could be confusing, and would largely eliminate the need for dust, except as a Rare Candy to immediately gain levels.
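
Here’s a rough sketch of how option 2 could work. This is purely hypothetical, since none of it exists in the game; the win thresholds and half-level steps are just example numbers.

```python
# Hypothetical battle-counter level-up, roughly as described in option 2 above.
class Pokemon:
    def __init__(self, level):
        self.level = level
        self.battles_won = 0

    def wins_needed(self):
        return int(self.level * 2)          # example curve: higher level, more wins needed

    def record_win(self, weight=1):
        """weight could be 2, 3, etc. for gym leaders or tougher defenders."""
        self.battles_won += weight
        if self.battles_won >= self.wins_needed():
            self.battles_won = 0
            self.level += 0.5               # power-ups already move in half-level steps
            print(f"Level up! Now level {self.level}")

golduck = Pokemon(level=20)
for _ in range(40):
    golduck.record_win()                    # the 40th win triggers a level up
```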

This is a subtle change but could help trainers feel like their time and effort are going somewhere.

The Nintendo PlayStation

Last year it came to light that a family from the east coast was in possession of a PlayStation. Of course, millions of families own PlayStations, but this one is different. It’s a prototype of the CD-ROM version of the Super Nintendo.

IMG_0925

In the 90s Nintendo had seen Sega and other hardware manufacturers turn to CD-ROM technology to store games and multimedia content. They decided to invest in a CD-ROM add-on for the Super Nintendo, much like the Disk System for the Famicom, to enhance the capabilities of the Super Nintendo. Sony was already supplying Nintendo with sound chips and was a leader in home electronics manufacturing, so the fit was perfect. Nintendo soon realized that, due to the licensing agreement with Sony, they would not be receiving royalties for games sold on CD-ROM for the PlayStation. Behind Sony’s back, Nintendo signed a new agreement with Philips to create the CD-ROM add-on. During the 1991 Consumer Electronics Show, Sony formally announced its new console, which would play both Super Nintendo cartridges and CD-ROM games. The next day, at the same show, Nintendo announced that they would instead partner with Philips. Nintendo would eventually cancel all plans for a CD-ROM add-on. This temporary collaboration is what eventually spawned the Philips CD-i and its well-known Mario and Zelda games.

IMG_0926

Olaf Olafsson, founder and CEO of Sony Interactive Entertainment, held onto his prototype PlayStation after being pushed out of his position at Sony. He later became president of Advanta Corporation, which would eventually collapse. For whatever reason, Olafsson seems to have left the system behind at the company when he departed. As the company’s assets were being liquidated, a former employee, Terry Diebold, bid on an auction lot; one of the items in that lot happened to be this prototype PlayStation. The system then sat in the family’s attic until it was dug out after the son mentioned having it in a Reddit post.

Since then the family has toured all over the world, allowing news outlets and enthusiastic gamers to play with it, rather than keeping it hidden in a private collection. The system did have a few problems, however. The sound didn’t work from either its Super Nintendo multi-out connection or the dedicated RCA outputs. The CD-ROM drive appeared not to power on at all, and there were multiple failures during the system’s self-diagnostic check. Recently the family brought the console to well-known hardware hacker Ben Heck to document a tear-down of the unit and see if functionality could be restored.

Eventually, the family and the console made their way to the Seattle Retro Gaming Expo this year where, not only did I get to bask in its glory and take pictures of it, I also got to play it. The crazy thing is that literally anyone at the show could stop by and put some time on the system and get a personal demonstration of the hardware. The fact that it’s been damn near given to the gaming community is astonishing; I really can’t get over it.

Anyway, here’s a bunch of photos I took of it. Some things to note are the functioning LCD for the CD-ROM drive and the rear AV outputs, which are nearly identical to those on the original retail PlayStation (model SCPH-1000).