AR technology is widely seen as the next big thing, but how easy is it to develop apps for it?
The latest technology craze on mobile devices, augmented reality (AR) has been around for many years, predating the era of the smartphone.
AR aims to supplement information in layers on top of an actual image, as seen through a mobile device's camera. The resultant image can then be displayed on screen.
All of this has been made possible thanks to a number of technologies that are now found on smartphones and tablet devices. These include sensors such as accelerometers and GPS, large clear displays with multitouch capabilities, faster processors and graphic processor units (GPUs), and high Internet speeds.
These technologies were not specifically designed with AR in mind, so the concept presents challenges for any app developer working to create useful software that can truly be described as augmented reality.
Over the past few years a number of AR apps have been made available for Android and iOS devices. Many of these are primarily marketing tools, for which AR is seen as an ideal platform. If you want to find the nearest ATM, bank or restaurant, for example, then AR can offer a fun and practical way of doing so.
GPS-based applications take advantage of the Global Positioning System (GPS) receiver already found in your smartphone. These applications use the position of your device to find nearby landmarks and other points of interest (POIs). Once a POI or landmark is located, the user can get additional information about it or get directions to it. These apps deliver that information to the user in real time via their mobile device.
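As a sketch of the idea, the nearest POI to a device fix can be found with the haversine great-circle distance. The place names and coordinates below are hypothetical, standing in for data a real app would fetch from a POI service.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_poi(device, pois):
    """Return (name, distance in metres) of the POI closest to the device."""
    lat, lon = device
    return min(
        ((name, haversine_m(lat, lon, plat, plon)) for name, plat, plon in pois),
        key=lambda item: item[1],
    )

# Hypothetical points of interest near central London.
pois = [
    ("ATM",        51.5081, -0.1280),
    ("Bank",       51.5138, -0.0890),
    ("Restaurant", 51.5101, -0.1340),
]
print(nearest_poi((51.5080, -0.1281), pois))
```

An AR app would then project each POI's bearing and distance onto the live camera view; the distance calculation itself is the easy part.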
Marker-based, or image recognition, apps use a camera to recognise an image in the real world, calculate its position and orientation, and then augment the reality. They overlay the image with content or information for the consumer.
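A minimal sketch of the "calculate its position and orientation" step, assuming some detector has already found the four corners of a square marker in the camera frame (the pixel coordinates below are hypothetical):

```python
import math

def marker_rotation_deg(corners):
    """In-plane rotation of a square marker from its detected corners.

    corners: four (x, y) pixel positions in order
             top-left, top-right, bottom-right, bottom-left.
    Returns the angle (degrees) of the top edge relative to the image x-axis.
    """
    (x0, y0), (x1, y1) = corners[0], corners[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

def marker_centre(corners):
    """Centre of the marker: where overlay content would be anchored."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (sum(xs) / 4, sum(ys) / 4)

# A marker detected at roughly a 30-degree tilt (hypothetical coordinates).
corners = [(100, 100), (186.6, 150.0), (136.6, 236.6), (50.0, 186.6)]
print(round(marker_rotation_deg(corners), 1))  # → 30.0
print(tuple(round(v, 1) for v in marker_centre(corners)))
```

A production AR toolkit goes much further, recovering full 3D pose from the corner geometry, but the principle is the same: measured image points drive the placement of the overlay.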
The reality of AR
However, as the technologies on smartphones were not specifically designed to enable AR, the problem comes in maintaining a certain level of quality and accuracy, particularly when TV adverts and science fiction films promise such a massive change to how we interact with the real world.
Pattern recognition, for example, depends on a good quality camera, which most modern smartphones possess. But the world we live in is 3D. While your smartphone camera works well when interpreting flat glyphs such as barcodes and QR codes, an actual 3D object is far more challenging.
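Part of what makes flat glyphs so tractable is that they carry built-in error checking. As an illustration, an EAN-13 barcode ends with a check digit that a scanner app can verify before trusting a read:

```python
def ean13_check_digit(digits12):
    """Compute the EAN-13 check digit for the first 12 digits.

    Digits in odd positions (1st, 3rd, ...) are weighted 1,
    digits in even positions are weighted 3.
    """
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def ean13_is_valid(code):
    """True if a 13-digit barcode string has a consistent check digit."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[-1]))

print(ean13_is_valid("4006381333931"))  # → True
```

No comparable safety net exists when recognising an arbitrary 3D object from an arbitrary angle, which is why general image recognition remains the hard case.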
Another obstacle is the quality of the compass and accelerometer technology in the current generation of mobile devices. A digital compass determines direction from the Earth's magnetic field, but it isn't very good at filtering out all the electromagnetic interference found in built-up areas.
Any owner of an iPhone, for example, has occasionally been instructed by their device to wave it in a large figure-of-eight motion when the compass has gone haywire. Typically, if a large metal object, such as a train or a tram, happens to pass while you're using your device's digital compass, the reading is likely to go crazy.
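One common mitigation in an AR app is to smooth the jittery heading readings before using them. A naive arithmetic mean fails at the 0°/360° wrap-around, so the sketch below averages unit vectors instead; the sample readings are hypothetical:

```python
import math

def smooth_heading(headings_deg):
    """Average a window of compass headings, handling 360-degree wrap-around.

    A naive mean of [359, 1] gives 180; averaging the corresponding unit
    vectors and taking atan2 gives the expected answer near 0.
    """
    s = sum(math.sin(math.radians(h)) for h in headings_deg)
    c = sum(math.cos(math.radians(h)) for h in headings_deg)
    return math.degrees(math.atan2(s, c)) % 360

# Jittery readings either side of north (hypothetical sensor samples).
print(smooth_heading([358, 2, 359, 1, 0]))
```

Smoothing trades responsiveness for stability, so in practice the window size is tuned to how quickly the overlay needs to track the user's rotation.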
Another issue is the accuracy of GPS. Currently, it is only accurate to within about six metres. For simple direction finding this is fine, but for anything more mission-critical it could be problematic. If you are creating a geolocation app that does not require pinpoint accuracy, however, then the current level of technical development will suffice.
This explains why many of the applications currently under development involve locating buildings rather than anything that requires greater accuracy. Image recognition technology could be used to enhance the accuracy, but this would depend on the scenario.
So what can be created with the developer tools already available? And what type of knowledge would you require? The good news is that you won't need a significant amount of extra programming knowledge if you are already familiar with the software development kits (SDKs) for the main mobile platforms.
There are a number of companies who specialise in AR that have released their own SDKs. You can use these to develop augmented reality apps yourself without a great deal of extra programming knowledge.
Layar has been around for a number of years, winning awards for its own branded app. The company encourages organisations or individuals to develop 'Layars' for use with its app. This is all part of its business strategy to make its app the de facto standard for augmented reality. In fact, there is a fierce war among software companies to become the Google of augmented reality – even Google has entered the fray with the development of Google Glass.
Blippar competes with Layar and would also like to become, to all intents and purposes, the go-to platform in AR. Aurasma and Nokia have also developed similar branded apps. No doubt, others will follow suit.
If you want to add your own slice of AR to these apps, which are branded, marketed and available on most mobile devices, then in most cases it's a simple drag-and-drop operation requiring negligible programming expertise.
But not everyone wants to use pre-existing branded applications. Many developers view AR as a function rather than an application in its own right, and would prefer to build augmented reality into their existing apps.
The team behind AR Toolkit was one of the earliest to develop augmented reality software. In fact, the toolkit predates the introduction of feature-rich applications on smartphones. Surprisingly, its founders did not originally envisage AR technology being incorporated into mobile devices.
The company, based in Washington State, USA, was started by five leading engineering academics – three of whom are still professors. The toolkit is available for anyone to download, but to use it commercially you have to purchase a licence. The reason for allowing anyone to download it is to increase the pool of developers who could add further functionality to it.
New technologies, some of which are only a year or two away, will allow developers to create a raft of new professional AR applications, some of which could make a real difference to our lives.
The Galileo satellite system, currently being developed with support from the EU, will offer European users a more accurate and reliable service. This will underpin a new generation of highly intuitive and sophisticated apps and consumer technologies, one of which will most definitely be augmented reality.
To encourage this, Galileo will provide the world's first specially developed commercial signal to try and boost private sector exploitation of its data. The project has already garnered interest from a variety of contributors including representatives from the aviation, maritime, rail, road, pedestrian, offshore oil and land surveying industries. This gives an indication of how widespread the use of AR could be.
The University of Nottingham's Centre of Excellence in Satellite Navigation, GNSS Research Applications Centre of Excellence (GRACE), along with the Satellite Applications Catapult, the Technology Strategy Board, UK Space Agency and EADS Astrium, recently launched the UK leg of the European Satellite Navigation Competition.
The competition is on the lookout for new ideas from the public for how we use these precise positioning and timing signals to create new technologies. The best entrants will be helped to turn their ideas into reality through financial prizes and business support, patent advice, and introductions to industry partners.
Chip-maker CSR is adding indoor tracking to its SiRFusion line of location technology.
The company recently demonstrated devices that pulled together several different technologies to make reliable and accurate indoor navigation possible.
The company's SiRFusion platform and its SiRFstarV mobile chip architecture are the latest navigation technologies that manufacturers could use in smartphones, tablets and other mobile devices to track a person's location as they walk through a large building.
The motion-tracking system combines navigation data from GPS and Wi-Fi network triangulation with data from motion sensors in the smartphone itself, including gyroscopes and compasses. Separately, each of these systems has its own limitations, but together they can do a decent job.
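As an illustration of the principle (not CSR's actual algorithm), independent position fixes can be fused by weighting each source by its reported accuracy, so that the least reliable source contributes least:

```python
def fuse_positions(estimates):
    """Inverse-variance weighted average of independent position fixes.

    estimates: list of ((x, y), sigma) pairs, where sigma is the
    reported accuracy of that source in metres. Sources with larger
    sigma (worse accuracy) get proportionally smaller weights.
    """
    weights = [1.0 / (sigma ** 2) for _, sigma in estimates]
    total = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, estimates)) / total
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, estimates)) / total
    return (x, y)

# Hypothetical fixes on a local metre grid: GPS is weak indoors, while
# Wi-Fi triangulation and gyro/compass dead reckoning are tighter.
fix = fuse_positions([
    ((12.0, 40.0), 6.0),   # GPS: ~6 m accuracy
    ((10.0, 42.0), 2.0),   # Wi-Fi triangulation: ~2 m
    ((10.5, 41.5), 3.0),   # dead reckoning from motion sensors
])
print(tuple(round(v, 2) for v in fix))  # → (10.29, 41.71)
```

Real sensor-fusion stacks typically use a Kalman filter to carry this weighting forward in time, but the static weighted average above captures why combining weak sources beats relying on any one of them.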