
Sony Vita's inner curriculum

Sony's PlayStation successor has been designed to be more than a games console – the company has aspirations for it to find a place as a high-end processing workhorse that can support more business-oriented applications.

Consumer technology giant Sony aims to give its next-generation Vita gaming console a 10-year shelf-life, according to the CTO of its Computer Entertainment division, Masaaki Tsuruta. This is significantly longer than the market has seen historically, although the PlayStation 3 will be at least seven years old by the time its successor appears. It is also, of course, much longer than would be expected for the average enterprise server, let alone an endpoint productivity device (deskbound or mobile).

This target longevity is a simple question of economics – primarily the inflationary economics of electronics system design, believes Tsuruta. The Cell Broadband Engine that sits in the PlayStation 3 is reckoned to have cost $400m as a joint development with technology partners IBM and Toshiba. The games market is a massively profitable one, but even so, the scale of this investment is indicative of the fact that in 2012, games consoles are no longer predicated on just game-playing. Sony expects that this core technology will lend itself to other computing applications, such as high-performance computing, and form the basis of other profitable technology partnerships in the future. The story of how the games console is finding a place in enterprise computing begins with componentry.

Its successor will undoubtedly need to go further still, much further, for elements such as the main central processor and graphics processor – and not just because all consoles are expected to push chip and software design to the limit.

As Tsuruta says: "We don't want to limit what people do on the console, and we will have to do more on the server side to account for some aspects of thin-client computing."

The company is not really talking about its budget, but analysts Silicon Maps believe that $1bn for the silicon alone would be a lower-end estimate. Given the cost of software development, support, marketing and a whole lot more that goes into launching a console, the final number will likely be a solid multiple of that. This will be a huge technological and financial play, and Sony will need to see a steady return on its investment.

The PS3, meanwhile, has grown long in the tooth in console terms, with its capabilities being pushed by some of the leading-edge software designed to run on it. Rival Nintendo has already announced its next-generation platform, the Wii U, forcing Sony's hand on timing. Consoles themselves are now only part of the game; highly sophisticated peripherals can deliver as much of a market advantage as the main platform. Nintendo proved this with its Wii controller, which gave a non-HD product the ability to compete on level terms with – and at times beat – its higher-resolution rivals. An even better indicator of how peripherals can extend a platform is Microsoft's Kinect add-on for the Xbox 360: it added motion sensing to that console five years after its original launch.

An emotional response

If Vita has to deliver stellar performance out of the box, it also has to have enough processing headroom to carry on delighting the consumer long after launch with new options. That means that Sony is, as Tsuruta suggests, creating a new product with a view to peripherals that will be added post-launch.

At December 2011's International Electron Devices Meeting, Tsuruta delivered a keynote on 'Interactive Games' that was as much shopping list as strategic vision. It set out a Sony gameplan that includes games which can respond to a player's emotions, with controllers that incorporate more motion-sensing accelerometers and even vital-signs sensors; there's also talk of systems that read players' eye movements.

Then the company wants to up the ante in haptics technology. Current controllers may vibrate or give some sense of resistance to the user's movements, but this vision is one that incorporates sufficient touch sensitivity to, say, reproduce the full tactile sensation of stroking a cat.

Then there's Augmented Reality (AR), a Sony concept scheduled to make its first-generation debut in the handheld PlayStation Vita with its European launch. This feature uses the camera on the tablet-like player to capture real-world surroundings, and CGI characters are then inserted within them for the user to interact with. For the future of AR, Tsuruta imagined a 3D version using lightweight glasses to create a hybrid gaming environment – no mean task. Locating virtual objects within 'flat' environments is hard enough, particularly in real time; only a handful of research projects, including SLAM (Simultaneous Localisation and Mapping) work at Imperial College London's Department of Computing, have even begun to tackle the same challenges for fully 3D-rendered ones.
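
To make that core mechanic concrete, the sketch below shows the basic projection step behind camera-based AR: given a camera pose of the kind a SLAM system estimates, plus the camera's intrinsics, a virtual 3D anchor point is mapped to a pixel position so a CGI character can be drawn at that spot in the live frame. It is a minimal illustration only – the intrinsics, pose and anchor values are assumptions for the example, not anything from Sony's implementation.

```python
# Minimal sketch of the projection step behind camera-based AR.
# All numeric values here are illustrative assumptions.
import numpy as np

# Assumed pinhole intrinsics for a 640x480 camera (focal length in pixels).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_point(point_world, R, t):
    """Project a 3D world point into the image using camera pose (R, t)."""
    point_cam = R @ point_world + t   # world -> camera coordinates
    if point_cam[2] <= 0:
        return None                   # behind the camera, not visible
    uvw = K @ point_cam               # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]           # normalise to (u, v) pixels

# Example: camera at the origin looking down the world z-axis, with a virtual
# character anchored 2m in front of it and 0.3m to the right.
R = np.eye(3)
t = np.zeros(3)
anchor = np.array([0.3, 0.0, 2.0])
print(project_point(anchor, R, t))    # pixel position where the CGI model is drawn
```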

"For the haptics and the very advanced graphics, we are talking about five years at least," Tsuruta says. The fact remains that Sony's ambitions and design plans today must already capture the next PlayStation's peripherals market. That raises several challenges, not least where the digital muscle should go. These kinds of technology require more advanced types of sensor technologies, such as micro-electromechanical systems (MEMS, a technology branch that includes accelerometers). Not a straightforward design task, this, but an easier one to locate: they will go in the controller/headset. Bigger questions surround the traditional 'heavy-lifting' processors.

"It took five years before we saw games that used the full power of Cell, so we are used to looking ahead and having capacity," Tsuruta points out. "We are looking at an architecture where the bulk of processing will still sit on the main board, with CPU and graphics added to buy more digital signal processing and some configurable logic."

This type of system integration is becoming more common, but the real challenge here lies in the scale. To give a more concrete, metric-driven sense of that scale: Sony's target is to get latency for a typical playing experience below 50ms at frame-rates of more than 300fps.

Today, 50ms is an absolute best-case figure – and most displays actually add to it – at frame-rates that top out at around 60fps. Nor is the target framed around today's 1080p resolution: it reflects a drive towards 8k x 4k displays.
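
A quick back-of-the-envelope calculation shows why the jump matters: the frame period is simply 1000/fps milliseconds, so the same 50ms budget spans roughly three frame times at 60fps but around 15 at 300fps, meaning every stage of the pipeline has to get proportionally faster. The short sketch below reproduces that arithmetic; the 50ms, 60fps and 300fps figures are the ones above, the code merely illustrative.

```python
# Relate an end-to-end latency budget to frame times at different frame-rates.
def frame_period_ms(fps):
    return 1000.0 / fps

LATENCY_BUDGET_MS = 50.0

for fps in (60, 300):
    period = frame_period_ms(fps)
    frames_in_budget = LATENCY_BUDGET_MS / period
    print(f"{fps:>3} fps -> {period:5.2f} ms per frame, "
          f"50 ms budget = {frames_in_budget:4.1f} frame times")

# 60 fps  -> 16.67 ms per frame; a 50 ms budget is only ~3 frame times.
# 300 fps ->  3.33 ms per frame; the same budget spans ~15 frame times.
```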

The 50GB play-off

There is then another reason why Sony is still set to locate the bulk of the processing power for launch and future use within the console. It believes that packaged media, typically Blu-ray discs, remain the way forward if next-generation systems are to offer experiences compelling enough that current PS3 (and rival box) owners will trade up. Online is exciting, possibly profitable, but it is not – yet – sufficient.

"Many people like the ability to play simultaneously, and when the networks are available we would like to open the platform up to more complex content through them," says Tsuruta, "but we will have to wait a while because current networks have limitations in bandwidth. A typical PlayStation console game is 50GB – transferring those kinds of sizes over most of today's [public IP] networks won't work; but more important is the experience. The [public IP] networks cannot yet deliver it."

So while there will be some features that aim to make the cloud-based gaming experience more immersive – "and, this is key, more secure", Tsuruta adds – the focus remains local.

Whatever that near-term future holds, though, Sony will need to leverage the best of current technologies, and it is here that the company is working in emerging fields. Setting aside the intellectual property that will need to sit in the system's processing architectures, there is the simple challenge of making the chips.

This vision will need to leverage an emerging chip-manufacturing technology: the through-silicon via (TSV). TSVs stack multiple pieces of silicon in 3D structures interconnected by pathways that run through the chips themselves. The technique promises to cut latency and boost performance by greatly shortening the wires that signals must traverse, and it offers tight integration of conventional and graphics processing alongside analogue circuitry.

Given that today's most advanced fabs are operating at 28nm process geometries and advancing towards 20nm, it becomes clear how delicate and complex a task this is. After all, it is hard enough right now to lay out a 'flat' 28nm chip and get it to yield in profitable quantities with such minute features.

As a result, the leading chip foundries are working hard on 3D but offer typical customers an interim '2.5D' alternative, often called a silicon interposer, which integrates pieces of silicon more closely side-by-side rather than stacking them.
