Sony aims to give its next-generation gaming console a shelf-life of up to 10 years, according to the CTO of its Computer Entertainment division, Masaaki Tsuruta, in this exclusive E&T interview.
The target longevity of the ‘PlayStation 4’ (it will not be called that) is a simple question of economics – primarily the inflationary economics of electronics system design, believes Tsuruta. The Cell Broadband Engine that sits in the Sony PlayStation 3 is reckoned to have cost $400m as a joint development with IBM and Toshiba. Its successor will undoubtedly need to go much further still on such elements as the main central processor and graphics processor, and not just because all consoles are expected to push chip and software design to the limit.
The company is not really talking about its budget, but analysts like Silicon Map believe that $1bn for the silicon alone would be a lower-end estimate. Given the cost of software development, support, marketing and a whole lot more that goes into launching a console, the final number will likely be a solid multiple of that. This will be a huge technological and financial play, and Sony will need to see the result maintain a steady return on its investment for some time. The PS3, meanwhile, is getting somewhat long in the tooth in console terms, with its capabilities being pushed by some of the leading-edge software designed to run on it; rival Nintendo has already announced its next-generation platform, the Wii U, forcing Sony’s hand on turnaround time.
“You have to look at the current solutions and the current technologies and see how long you can extend those for the expected life of the product,” Tsuruta admits. “You always want ‘perfect’ technologies, but there are none. So, you look at what is available, and try to get as close as possible to that goal. Even then, some of the things that we want are still five years away [from development].”
His last point is important because it signals the change in strategy behind Sony’s decision to explicitly develop a longer-lasting core product, and because it highlights one of the main design challenges inherent in that decision.
Consoles themselves are now only part of the game; highly sophisticated peripherals can deliver as much of a market advantage as the main platform. Nintendo proved this with its Wii controller, which gave a non-HD product the ability to compete on level terms with – and at times beat – its higher-resolution rivals. A better indicator is Microsoft’s Kinect add-on for the Xbox 360: it added motion sensing to that platform five years after the original launch.
If the next PlayStation has to deliver stellar performance out-of-the-box, it also has to have enough processing headroom to carry on delighting the consumer for long after with new options. That means that Sony is, as Tsuruta’s earlier comment suggests, creating a new product with a view to peripherals that will be added post-launch – in some cases, quite some time after – and being more open today about what they are likely to be.
At December 2011’s International Electron Devices Meeting, Tsuruta delivered a keynote on ‘Interactive Games’ that was as much shopping list as strategic vision. It set out a Sony gameplan that includes games which can respond to a player’s emotions, with controllers that incorporate more motion-sensing accelerometers, and even vital signs sensors. There’s even been talk of systems that read players’ eye movements.
Then the company wants to up the ante in haptics technology. Current controllers may vibrate or give some sense of resistance to the user’s movements, but this vision is one that incorporates sufficient touch sensitivity to, say, reproduce the full tactile sensation of stroking a cat. Then there’s Augmented Reality (AR), a concept that will make its first-generation Sony debut in the handheld PlayStation Vita, launching in Europe next month. This feature uses the camera on the tablet-like player to capture your real-world surroundings; CGI characters are then inserted into them for you to interact with.
For the future of AR, Tsuruta’s presentation imagined a 3D version using lightweight glasses to create a hybrid gaming environment – no mean task. Locating 3D virtual objects within ‘flat’ environments is hard enough, particularly in real-time; only a handful of research projects, including SLAM (Simultaneous Localisation and Mapping) work at Imperial College London’s Department of Computing, have even begun to tackle the same challenges for fully 3D-rendered ones.
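To make the registration problem concrete, the sketch below shows the core calculation any such system has to repeat every frame: projecting a virtual object’s 3D anchor point into the camera image using an estimated camera pose. This is an illustrative back-of-the-envelope in Python, not Sony’s design or Imperial’s SLAM code, and the camera parameters (focal lengths, principal point, example pose) are assumed values.

```python
import numpy as np

def project_point(point_world, R, t, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates via a pinhole camera.

    R, t   : estimated camera rotation (3x3) and translation (3,) -- the pose
             a SLAM-style tracker must re-estimate every frame.
    fx, fy : focal lengths in pixels; cx, cy: principal point (assumed values).
    """
    p_cam = R @ point_world + t          # world -> camera coordinates
    if p_cam[2] <= 0:
        return None                      # behind the camera, not visible
    u = fx * p_cam[0] / p_cam[2] + cx    # perspective division
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Place a virtual character's anchor point 2m in front of the camera and
# slightly to the left; an identity pose keeps the example simple.
anchor = np.array([-0.3, 0.0, 2.0])
pixel = project_point(anchor, np.eye(3), np.zeros(3), 800, 800, 640, 360)
print(pixel)  # roughly (520.0, 360.0) on a 1280x720 frame
```

The hard part is not this projection itself but keeping R and t accurate as the player moves, dozens or hundreds of times per second, which is exactly what SLAM-style tracking attempts.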
“For the haptics and the very advanced graphics, we are talking about those five years at least,” Tsuruta says. That means Sony’s ambitions and design plans today must already capture the next PlayStation’s peripherals market. This raises several challenges, he acknowledges, not least of which is where the digital muscle should go.
These kinds of features will require more advanced sensors, such as micro-electromechanical systems (MEMS), the technology branch that includes accelerometers. Not a straightforward design task, but an easy one to locate: the sensors will go in the controller or headset. A bigger question surrounds the traditional ‘heavy-lifting’ processors.
“It took five years before we saw games that used the full power of Cell, so we are used to looking ahead and having capacity,” Tsuruta says. “We are looking at an architecture where the bulk of processing will still sit on the main board, with CPU and graphics added to by more digital signal processing and some configurable logic.”
This type of system integration is becoming more common (Sony has always been a master of all types of integration), but the real challenge here lies in the scale. To give a more metric-driven sense of that, Sony’s target is to get latency for a typical playing experience below 50ms at framerates of more than 300fps. Today, 50ms is an absolute best-case figure – most displays actually add to it – and framerates top out at around 60fps. Moreover, the target is not for 1080p resolution, but reflects a drive towards 8kx4k.
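Those numbers are easier to grasp as frame-time budgets and pixel throughput. The quick Python back-of-the-envelope below assumes, purely for illustration, that ‘8kx4k’ is read as 7680x4320 pixels:

```python
# Frame-time budgets and pixel throughput implied by the stated targets.
# Assumption: '8kx4k' is taken here as 7680x4320; 1080p is 1920x1080.

def frame_time_ms(fps):
    return 1000.0 / fps

current_budget = frame_time_ms(60)    # ~16.7 ms per frame at today's ~60fps ceiling
target_budget  = frame_time_ms(300)   # ~3.3 ms per frame at the 300fps target

pixels_1080p = 1920 * 1080            # ~2.1 million pixels
pixels_8k4k  = 7680 * 4320            # ~33.2 million pixels

throughput_ratio = (pixels_8k4k * 300) / (pixels_1080p * 60)
print(round(current_budget, 1), round(target_budget, 1), round(throughput_ratio))
# -> 16.7  3.3  80
```

In other words, the rendering pipeline would have roughly a fifth of today’s per-frame time budget while pushing about 16 times as many pixels: around 80 times the throughput of a 1080p/60 system.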
There is another reason why Sony is still set to locate the bulk of the processing power for launch and future use within the console. It believes that packaged media, typically Blu-ray discs, remain the way forward if next-generation systems are to offer experiences compelling enough that current PS3 (and rival box) owners will trade up. Online is exciting, possibly profitable, but it is not – yet – sufficient.
“We think that the core games will continue to be the most important,” says Tsuruta. “We don’t want to limit what people do on the console and we will have to do more on the server side, account for some aspects of thin client computing. Many people like the ability to play simultaneously, and when the networks are available we would like to open the platform up to more complex content through them… But we will have to wait for a while because current networks have limitations in bandwidth. A typical PlayStation console game is 50GByte – transferring those kinds of size over most of today’s [public IP] networks won’t work. But more important is the experience. The [public IP] networks cannot yet deliver it.”
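Tsuruta’s point about file sizes is easy to check. Assuming, purely for illustration, a sustained 20Mbit/s broadband link – a generous figure for public IP networks at the time – moving a single 50GByte title works out as follows:

```python
# Rough time to move a 50GByte title over a consumer broadband link.
# Assumption: a sustained 20 Mbit/s connection; real-world rates vary widely.

game_size_gbytes = 50
link_mbit_per_s  = 20

bits_to_move = game_size_gbytes * 8 * 1000  # gigabytes -> megabits (decimal units)
seconds      = bits_to_move / link_mbit_per_s
print(round(seconds / 3600, 1))             # ~5.6 hours for a single title
```

At well over five hours per title before the first frame is played, that is clearly not an experience to build a console launch around.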
So while there will be some features that aim to make the cloud-based gaming experience more immersive – “and, this is key, more secure”, Tsuruta adds – the focus remains local.
Given all these factors, if there is a feeling that Sony is ‘late’ in launching a fourth-generation PlayStation, these ambitions suggest it is with good reason. Although Tsuruta (obviously) will not disclose the detailed specification, it now seems reasonably clear that Sony is developing not so much an immersive games console as something that could evolve into a fully-realised virtual reality machine, rather than simply paving the way for one. For sure, there is plenty on that IEDM shopping list that still needs to be refined, but most of it already exists in some form, some of it quite well developed, some still nascent.
Whatever that near-term future holds, though, Sony will need to leverage the best of current technologies, and it is here that the company is working in emerging fields. Setting aside the intellectual property that will need to sit in the system’s processing architectures, there is the simple challenge of making the chips.
This vision will need to leverage an emerging chip manufacturing technology: the through-silicon via (TSV). It stacks multiple pieces of silicon in 3D structures interconnected by pathways that run through the chips themselves. The technique promises to cut latency and boost performance by greatly shortening the wires signals must traverse, and it allows tight integration of traditional and graphics processing alongside analogue circuitry.
Given that today’s advanced fabs are operating at 28nm process geometries and advancing towards 20nm, it becomes clear how delicate and complex a task this is. After all, it is hard enough right now to lay out a ‘flat’ 28nm chip and get it to yield in profitable quantities with such minute features. As a result, the leading chip foundries are working hard on 3D but are offering typical customers an interim ‘2.5D’ alternative, often called a silicon interposer, which integrates dies side-by-side more closely rather than stacking them.