RAD: augmented reality turned on its head

Reality Augmented Data

Note: This was originally published on 25 October 2009 on the Carbon Quilt blog (now defunct)

Augmented Reality used to be expensive, but now the essential components of an AR system are routinely built into consumer electronics. Smartphones have GPS, compasses and accelerometers, so AR is accessible to every developer, which is marvellous. There are already some inspiring applications, such as Acrossair’s Nearest Tube. And Nearest Tube is surprisingly nice to use, by the way – AR is no longer just for geeks. Here’s a nice collection of AR videos: http://bit.ly/AR-vids.

But AR’s essential components can be used for more than merely augmenting reality (for an account of this see ‘A history of the future of computing: why AR is where it’s at’). One of the most exciting opportunities for AR comes from turning it on its head. Instead of adding a layer of data to the real world, use the real world itself as a ‘canvas’ for data. We at Carbon Visuals have taken to calling this ‘reality augmented data’ or RAD.

A simple example of RAD in action is www.carbonquilt.org. We take a datum such as a country’s per-capita carbon dioxide emissions and display it as an actual volume on a user’s own street, so he or she can relate to it physically and personally. Users have a rich relationship with the buildings and geography of their neighbourhood, and Carbon Quilt makes this available to them for making sense of abstract statistics (literally making sense of them). We are currently working on an iPhone app that makes the connection even stronger.

Central Bedfordshire’s carbon footprint and target for carbon reduction by 2020

A basic illustration of RAD. Central Bedfordshire Council emitted 33,702 tonnes of carbon dioxide in 2008/9 (not including social housing stock). The council aims to reduce that figure by 60% by 2020. The illustration shows the actual size of this reduction in terms of the volume of gas. Embedding the illustration in a map of Bedford (the nearest town) gives the unwieldy volume a familiar context. Viewers’ rich, embodied experience of Bedford helps turn the statistic into something personally meaningful.
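The arithmetic behind an illustration like this is simple. Here is a minimal sketch of it in Python; the function name, the assumed gas density (roughly 1.98 kg/m³ for CO₂ at 0 °C and 1 atm) and the cube framing are illustrative assumptions, not Carbon Quilt’s actual method:

```python
# Sketch: turn a CO2 mass statistic into a volume, and the side length of an
# equivalent cube that could be drawn at true scale on a map.
# Density of CO2 is taken as ~1.98 kg/m^3 (0 degrees C, 1 atm) - an assumption;
# the figure depends on the temperature and pressure you choose to depict.

def co2_cube_side_m(mass_tonnes, density_kg_m3=1.98):
    """Return (volume in cubic metres, side of an equivalent cube in metres)."""
    mass_kg = mass_tonnes * 1000.0
    volume_m3 = mass_kg / density_kg_m3
    side_m = volume_m3 ** (1.0 / 3.0)
    return volume_m3, side_m

# Central Bedfordshire's 2008/9 footprint from the illustration above:
volume, side = co2_cube_side_m(33702)
print(f"volume: {volume:,.0f} m^3, cube side: {side:.0f} m")

# The 60% reduction target, as a volume of gas:
reduction_volume = 0.6 * volume
print(f"reduction: {reduction_volume:,.0f} m^3")
```

Under these assumptions the full footprint is a cube roughly 250–260 metres on a side, which is exactly the kind of figure that only becomes meaningful when placed next to buildings a viewer already knows.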

Here’s the thing about AR: humans are already pretty good at making sense of the real world, but we are still rubbish at making sense of information such as statistics. It is often useful to add a layer of data to the real world, as Nearest Tube demonstrates. Nevertheless, the world is infinitely richer than any layer of data we could add to it. Data, on the other hand, is impoverished and our access to it clumsy. RAD lets us ‘borrow’ the world itself to provide a better interface with data.

This idea of borrowing aspects of the world itself to visualise abstract data is not new in scientific visualisation. Philip Robertson mapped it out explicitly in an account of what he called the ‘natural scene paradigm’ (Robertson, Philip K., May 1991, ‘A Methodology For Choosing Data Representations’, IEEE Computer Graphics and Applications, pp 56-67, at p 59). But up to now, scientific visualisation has aimed only to make data look like the world (e.g. making an undulating iso-surface look like a mountain, because we are good at looking at mountains and working out what we are looking at in a single glance). RAD goes much further by embedding data in the world itself.

Billions of years of programming (i.e. evolution) have given us a remarkable interface with the real world, far more powerful than any data interface we have built. With RAD, our rich experience of the world is co-opted to help us engage with data. This means our relationship with data could, potentially, be as rich as our engagement with the real world.