Tuesday night was the first Augmented Reality New York Meetup. There were five quick presentations, each followed by a few minutes of Q&A.
Patrick O’Shaughnessey of Patched Reality - The 5 Lives of Criss Angel
Ohan Oda of Columbia University - Goblin XNA
Steve Henderson of Columbia University - Talk Maintenance
Noah Zerkin, Inventor - The Zerkin Glove
Ori Inbar of Ogmento - Put a Spell (spelling game), and Vampire Academy
There were no arrangements for anyone to film the Meetup presentations, so I made a few sample clips with my iPhone and used them to make the teaser clip above (feel free to embed it and share it to help promote the group). I have since volunteered to bring a video camera to future events, which we intend to stream on the web. I did not think to make clips of the first two presenters, as I had already seen their projects before — I’ve already posted a video of Ohan Oda’s Goblin XNA presentation from the New York ARDevCamp, and Patrick O’Shaughnessey gave me a demo of his 5 Lives of Criss Angel AR puzzle over lunch, after the Emerge - Augmented Reality Unconference at Web 2.Open last month.
The real “Meetup” starts after the presentations, of course, when it’s all conversation with other people who share a passion for augmented reality.
I met Chris Brady of Chart Venture Partners and we discussed what kind of AR opportunities look ripe for his firm’s investment (Answer: software-based technologies with a short time horizon to go to market. They are most interested in software with military and commercial/industrial applications, but are staying aware of potential consumer possibilities. *Chris, let me know if I’ve got that right, or feel free to comment to clarify).
Tish Shute and I debated the possible mid-term ramifications of Google’s and Apple’s maneuvering in the mobile space, the mistakes of Nokia’s AR efforts, and the long-term fate of the desktop/laptop computer form-factors in the coming mobile-dominated world. We also talked about a comment I left on the blog of her friend David Oliver, and why Apple needs to have a search strategy. Tish invited me to see the work she’s been doing with augmented reality over the Google Wave Federation Protocol, and I gave Rob Kelley a crash course in the state of the industry and recent patent filings regarding augmented reality eyewear.
Brandhacker Meetup
On Monday night I went to see Joseph Jaffe speak at the Brandhacker Meetup. Jaffe is a wonderful presenter and it was nice to see him speak to a small crowd. Being Mr. “Join the Conversation,” his dialog was focused primarily on Facebook, Twitter and user-generated content, quite a world apart from the mobile AR circles I’ve been in for much of the past year. He was also promoting a new book coming out in early February titled “Flip the Funnel.” I had a good conversation with a fellow named Stan Phelps who is writing a book under the working title Purple Goldfish. Stan pointed me in the direction of B. Bonin Bough. I’ll see what comes of that conversation, and perhaps leave that story for another day.
This post is partially a response to Rouli Nir of Games Alfresco who wrote five predictions for 2010 in Augmented Reality. My site isn’t Augmented Reality specific, so I’ve posted three separate lists of five predictions each.
The first list has five predictions for Augmented Reality in 2010, the second has five predictions for the Internet in 2010, and because we’re at the beginning of a new decade, I’ve made five decade-long predictions.
Augmented Reality - 2010 Predictions
1. OOH advertisers will finally wake up and notice that mobile in general, and mobile augmented reality specifically, is marching into their sandbox. Free advice: companies working on a mobile AR marketing strategy would do well to get advice from OOH and DOOH marketers. Smart agencies and media management firms will begin to coordinate mobile and OOH strategies, and some will even formalize this within their management structure.
2. Mixed-reality AR will spur renewed interest in 3D virtual worlds, which will experience a revival.
3. Marker-based Augmented Reality will not prove to be the savior of print publications. The novelty will die quickly (but marker-based AR will be huge in OOH). However, tablet devices will come to the rescue of what we know as “print.” (In a few more years, actual paper publications will become the domain of art periodicals and some high-end fashion magazines, as these are frequently produced at much higher resolution [as high as 1200 dpi] than news-weeklies and other mainstream publications [300 dpi]. This will not occur in 2010, but will take a few more years to play out.)
4. Before the year is out, a translucent AR tablet device will be available on the consumer market. The concept shown at left is the Red Dot Award-winning design by Mac Funamizu. With transparent OLED perfected by multiple vendors and begging for a consumer application, I expect to see this form factor show up on the market quite soon.
5. Someone will introduce a pair of AR glasses in the under $500 price range. The performance will be disappointing and they will look silly. But they will be loved by hardcore geeks willing to go gargoyle. Their awkwardness in public places will make the “talking to himself” guy with the bluetooth earpiece seem positively normal. However, their oddity will stir enough commotion to put them on the mass-culture radar inspiring at least one joke from a late night host. They will be a modest financial success and will encourage others to enter the field. It will take another year, as the technology matures, for Apple to enter the market. Like the Apple/Harman Kardon speaker system, and the Apple/Nike iPod Sports Kit, Apple will bring in a high profile partner from the designer eyewear market (candidates include Calvin Klein, Ray Ban, etc.).
Internet - 2010 Predictions
1. In broadcast, media is expensive, and though production and creative development costs are high, they’re still cheap compared to media expenditures. Comparatively, online display media is dirt cheap. The shift in money allocation should be (modestly… at first) out of broadcast media and into digital production/creative (more sophisticated websites and higher quality online video content). This already happens — my anecdotal observation is that digital production and creative development spending is currently underreported. What I’ve observed for years in this business is that when money is needed for production of a digital project that was not budgeted for, clients will regularly pilfer from the largest pie — their broadcast media dollars — because ~$80k here and ~$20k there is hardly missed from a ~$20M+ pie. So my prediction is that marketers will begin to formalize this cannibalization of their broadcast media budgets and this shift will begin to be more accurately reflected in the numbers reported.
2. Apple will enter the search business.
3. MySpace will reemerge as much a quasi-digital-music-label as a social media network, organizing all its bands’ tracks into a single searchable database with relational recommendations, and will become a more serious player in the downloadable digital music business.
4. At the close of 2010 we will still not have an open-standard portable user profile across the major social media platforms. But Facebook Connect will begin to emerge as a default proprietary standard. By 2011/12 Congress will hold hearings on the issue of personal data portability and/or the DOJ will look into whether Facebook merits an investigation for anticompetitive behavior.
5. As content grows, and as the demands on our time combine with the failure of search technology to meet our needs, content curation “filters” will emerge as a new form of “search” business model, particularly in niche search categories. Some will be people-driven, some will be algorithm-driven; the best will be a hybrid. This could emerge as a “feature” or “tool” in social media platforms like Facebook.
Predictions for the decade ahead.
Because we’re at the close of not just a year, but a decade, I decided to throw in another five focusing on society-impacting technological change in the coming decade. Making predictions about the future can be hazardous, and more so the further out one looks. I’m wagering that the safest way to make such predictions, short of inventing the future yourself, is to extrapolate from existing trends and place some bets on which ones will win out. That is the methodology I’ve employed. Though they are all outside the realm of my expertise, I have confidence in these predictions primarily because, in some capacity, they all already exist.
1. An amputee athlete who would previously have competed in the Paralympics will win gold in the regular Olympics. Their medal will be contested by other “unenhanced” athletes and cause a crisis of identity over what it means to be “authentically human.” This will become a mass-culture meme. There will be a headline-grabbing story of a child self-amputating an arm or leg out of envy for the cool bionic prosthetics. There will be copycats.
2. Upright bipedal robots will walk among us. There will be a high-profile lawsuit resulting from a death caused by a malfunctioning robot, without malicious intent (or any sentient intent at all). Will the owner or the manufacturer be held liable, and in what capacity? That will open the legal can of worms over the future possibility of a robot deliberately taking a life. Our legal system will be in knots over artificial intelligence, and several generations behind the technology.
3. Late in the decade, video contact lenses will finally be out of the lab and available to consumers (video built into regular eye-frames will, by this point, already be considered a common feature).
4. By decade’s end, at least 50% of cars on U.S. roads will be electric. Many of the same companies at the helm of the petroleum industry will diversify into other forms of alternative energy production — solar, wind, etc. — but they will no longer be referred to as “alternative” as they will simply be mainstream. You may be recharging your electric car at an Exxon/Mobil station powered by solar.
5. Desalination will grow to become a huge global industry.
So there you have it. Please feel free to leave your own predictions in the comments.
Saturday was the first annual ARDevCamp (Augmented Reality Developer Camp). It originated in San Francisco, with an East Coast camp quickly organized in New York City. It was embraced with such enthusiasm that smaller satellite camps were organized in Amsterdam, Netherlands; Brisbane, Australia; Seoul, South Korea; Manchester, UK; and elsewhere.
I’ve said many times that the community surrounding AR, especially mobile AR, is like déjà vu from the early days of the web. There is something unique happening here, something technologically revolutionary, and the word is spreading.
The best thing about these events is meeting other people who share a common interest, who share your enthusiasm. I can talk about this with friends and business acquaintances… to a point. But I’m the one talking. Some have top-line knowledge, but with most I’m sharing information and they’re asking questions. Among this circle, I’m in the middle: a little more knowledge than some, but a lot less than many others. Seeing new things, picking up new knowledge.
The only formal presentation was by Ohan Oda, showing his Goblin XNA project, part of his Ph.D. study at Columbia. One of his colleagues, Dr. Steven K. Feiner, Professor of Computer Science at Columbia, shed additional light on the project. Matt Shapoff was industrious enough to get this live-streamed from his laptop:
Ohan Oda was kind enough to share his slide presentation.
The real highlight was finally meeting Noah Zerkin, whose work I’ve followed and with whom I’ve corresponded, but had never met in person until now. He was generous enough to let me demo a Microvision Nomad HMD. This older, discontinued model is a monochrome display in red, which had a look eerily reminiscent of the Terminator.
Noah does NASA funded research in the Neurology Department Human Aerospace Lab at Mount Sinai Medical Center. He has particular expertise in dataglove technology, but is also a connoisseur of video eyewear.
After posting a tweet regarding the Microvision display, I received a message from a friend in California who works for NASA, inviting me to demo the Mars seamless virtual reality CAVE at NASA’s Jet Propulsion Laboratory. A “CAVE” system uses stereoscopic 3D technology similar to watching an IMAX theater through polarized lenses, but with the imagery projected onto the walls of a cube that the viewer stands inside of… and a “seamless” CAVE has a curved interior wall. I plan to take him up on his generous offer as soon as I can get back out West.
As the evening wound down, Steven Feiner, Tish Shute and Heidi Hysell appeared intent to talk augmented reality game-play into the night, but I had other plans.
My good friend Al Risi had the run of the Gibson Guitars NYC Showroom, in the former space of The Hit Factory recording studio, for a private birthday party for both himself and Jessica Daponte… and a few hundred of their closest friends. Our highlight was hearing jam sessions with both Kenny Kramme and Ric Agudelo. Two good friends, both drummers, neither of whom I had ever seen play.
Much of Sunday was spent following the Humanity + Summit (streamed live on TechZulu). This was the first event organized by The World Transhumanist Association since they changed their name to HumanityPlus (with the acquisition of H+ magazine, to which I contribute).
I watched all four of the speakers in the Artificial Intelligence series. Ben Goertzel’s presentation was particularly entertaining. Although it was streamed, it does not (yet, anyway) appear to have been posted anywhere. Hopefully they will eventually post the lectures on the HumanityPlus website. There were many speakers, Alex Lightman, RU Sirius and Brian Selzer among others, whose talks I would still like the opportunity to see. If and when they do, I will update this page. Two clips:
Brian Selzer: Reinventing Reality with AR.
Ed Lantz: Pervasive Projections and Social Space.
You can view 36 videos from the Humanity + Summit in this directory.
And this officially begins the holiday season party week. How will anything live up to this weekend?
Eyeglasses, as we know them today, have been around for about 800 years, give or take. In-eye lenses have been around for just over a hundred years, and modern contact lenses for about 50. In our time of exponential technological advancement I don’t expect to wait half a century for this technology to mature, but we’re going to see augmented reality optics in a form-factor similar to eyeglasses long before we’re placing them directly onto our corneas.
I do, however, believe that the “through the looking glass” trend of AR applications for mobile devices coming on the market today will be a short-lived, stop-gap solution until the adoption of AR eyewear. If mobile AR indeed takes off (and I believe it will), people will quickly tire of holding their smartphones out in front of their faces. This will ultimately lead to interesting partnerships between fashion eyewear manufacturers and consumer electronics companies, not unlike the partnership between Nike and Apple in the personal fitness electronics space. For now we have several electronics manufacturers, many of them most used to dealing with military clients, doing their best to design consumer-focused video eyewear. With mixed results.
Vuzix, the only manufacturer already in the market with a consumer-level AR offering, is due to launch two new stereoscopic pairs with AR functionality before the holidays (the Wrap 920 and Wrap 310, shown on the top row above). The launch date has been moved back once; the 920s were originally planned for a Spring debut. Vuzix technology is similar to that used in standard video eyewear, but an attachment will add cameras to “see” in front of the lens and play the camera’s video feed with data overlaid — not dissimilar to the AR apps currently available on Android and iPhone.
This is also the method that has been employed most extensively for augmented reality in commercial and research environments. WorldViz sells a VideoVision attachment for the NVIS nVisor. While these are state of the art in the AR eyewear space, the attachment alone costs about $12,000, and that does not include the NVIS unit, which itself costs between $20,000 and $30,000, depending on the model. Neither price includes the software to run this rig. That’ll cost you extra. Sensics also has a commercial-grade offering in this space. While I don’t know anything about their pricing, I can vouch that they are equally fashionable:
Besides their, um… avant garde styling, what distinguishes the Sensics and WorldViz/NVIS units from consumer-grade offerings is that both of these models display in HD. The experience is immersive. If you’ve ever played with regular (non-AR) video eyewear (whether 3D stereoscopic or simply 2D video glasses), you’re likely aware of the disappointing resolution. In the sub-$750 market, 640x480 is still standard fare. If you’re willing to spring for $1000+, then 800x600 is the top end of the consumer market. Resolution for the Vuzix Wrap models has not yet been disclosed (nor has the price), but I speculate that they will likely be 800x600.
Recent advancements by OEMs like Kopin (supplier to Vuzix, MyVu and others) and eMagin have increased microdisplay resolutions up to 1280x1024 on screens smaller than one inch. Both are now offering paired assemblies for stereoscopic eyewear. Kopin is actively seeking partnerships to take its newest technology to market in consumer products, while eMagin maintains its own consumer division (branded 3Dvisor), marketing HMDs for the gaming market.
While there are many industrial applications for this technology in the fields of architecture/engineering, aviation and other training environments (not to mention entertainment/gaming), most of this research has been funded by military contracts. Peter Wood, a creative director I worked with in the past, once said, “How great it would be if we could gain all the technological advances of World War III without having to fight it.” At the risk of opening a debate over military policy, the reality is that (as has always been the case) a great number of recent technological advancements spring from research initially financed by military contracts and military-related research. Indeed, as with most of these companies, Kopin’s largest clients are US military contractors.
What I find most interesting in this space is that, much like the market for TV displays, there are currently many different technologies emerging that will compete with one another. This should be a boon for the consumer, resulting in faster innovation and lower costs.
Lumus Optical has another approach. Lumus, an Israeli supplier of military video components, has engineered a solution that projects a partially translucent image onto the inside of a specially designed eyeglass lens. The model (shown at top with the two Vuzix models) is a functional concept pair. Lumus is attempting to partner with a consumer retailer who could buy the lens assembly and incorporate it into their own form-factor.
In 2008 Apple was awarded a patent for a translucent augmented reality eyewear design that cleverly employs prisms and mirrors to route a laser projection from between the lenses and cast it onto the lens facing the eye. Apple, true to its culture, has not said a word about the filing or any forthcoming products employing such technology.
Microvision, manufacturer of laser pico-projector components, was also awarded video vision patents, theirs for retinal laser scanner technology, over a decade ago. With an existing clientele primarily among military suppliers, their pico-projectors have begun making inroads into consumer mobile devices. Perhaps encouraged by this consumer market success, they are aggressively seeking partners to take their eyewear technology into the consumer space.
Another technology with tremendous promise is transparent (and flexible) OLED. Everyone from Samsung to GE to Kodak to Philips and others has been experimenting with lightweight, flexible and transparent OLEDs. The potential here is obvious, as it is not difficult to imagine simply building a transparent curved OLED into the lens of a pair of glasses, obviating the need for much of the engineering gymnastics required to get the other display types in front of the eye without creating unnecessary bulk and obstruction. Thus far I’ve not seen a single prototype employing this technique, though the Samsung representative touting the technology in the video (below) made at the 2009 CES does mention it as a possible application. The second video is a demonstration by Sony of the malleability of their flexible OLED.
Unlike some other overhyped memes of the moment, these are proven technologies, some of which are already on the market, others that we can expect to see on the market in the next 12 to 24 months.
To read about the evolution of Virtual Reality eyewear, with an overview of virtual world marketing techniques, read the GigantiCo article, “Virtual Reality: Part 1”.
I’ve just witnessed one of the fastest meme-burns I’ve ever seen.
I learned about these video contact lenses from a New York Times article posted to Kurzweil AI back in April (and the University of Washington’s original press release dropped on January 17th). I confess, I even tweeted this story myself back in June (while attending CAT, in response to a comment made by Mike Geiger). But the story lay dormant for most of the last six or seven months, until a few weeks ago, when IEEE Spectrum did an in-depth, four-page story on the technology that was picked up by Dana Oshiro at ReadWriteWeb. With RWW’s large readership, the story took off on Twitter. By Thursday, WIRED Gadget Lab jumped on the bandwagon and set Twitter and the tech/media blogosphere ablaze. Robert Rice, Chairman of the AR Consortium, threw some cold water on the euphoria in an attempt to reel people back in to reality… to little effect. I don’t wish to in any way downplay the research being done by Babak Parviz and his team. Incrementally this will improve, and when it does I’ll be first in line. But that is at least a decade, if not more, away.
I write this as a sort of prologue to my next story, one I’ve been researching for a while, regarding Augmented Reality eyewear. I invite you to follow on to the next article, “Eye for an iPhone”.
When I discuss Location-Aware Mobile Augmented Reality with clients or friends, they are often initially mystified by how it works without using any form of tagging or QR codes. In short, this video is a visualization of the first conversation I usually have when the subject comes up. I’ve created it as a simple explanation to demystify the technology for those who are just becoming familiar with it.
The visual shown is not of any specific AR application, it is only meant to be a general representation of the underlying technology.
Many of the links in this article are for video demos. Rather than having a string of 30+ videos cluttering and breaking up the article, I’ve chosen to set up a separate video page. When you click a video link, it will open a second window. You can view the related video as well as navigate all of the other videos from this window. If your monitor is large enough, I would even suggest leaving the second window open and cueing each video as needed while you read through the article. To differentiate the video links from other links, each link to a video is followed by a “¤”. To open the window now, click here ¤.
While social media in general, and Facebook and Twitter specifically, have been monopolizing mainstream media’s coverage of online trends, augmented reality is getting a lot of inside-the-industry exposure, mostly for its undeniable wow factor. But that wow factor is a double-edged sword, and advertising has a way of turning trends into fads just before it moves on to the next brand new thing. So for this article I wish to focus on practical applications of augmented reality with clear end-user benefits. I’ve deliberately chosen not to address entertainment and gaming related executions, as they are beyond the scope of this article and frankly merit dedicated attention all their own. And perhaps I’ll do just that in a future article.
Can it Save the Car? The automotive industry was an early adopter. Because of the manufacturing process, the CAD models already exist, and the technology is very well suited to showing off an automobile from a god’s-eye view. Mini ¤ may have been first off the pole position, with Toyota ¤, Nissan ¤ and BMW ¤ tailgating close behind. Some implementation of AR will soon replace (or augment) the “car customizer” feature that is, in some form, standard on all automobile websites.
The kind of augmentation that is so applicable to automotive is also readily adaptable to many other forms of retail. Lego ¤ is experimenting with in-store kiosks that display the assembled kit when the respective box is held before the camera. Because Lego sets are “kits,” the technology is very applicable in-store; however, I find the eCommerce opportunities much more compelling. Ray-Ban ¤ has developed a “virtual mirror” that lets you try on virtual sunglasses from their website. Holition ¤ is marketing a similar implementation for jewelry and watches. HairArt is a virtual hairstyle simulator developed for FHI Heat ¤, maker of hair-care products and hairstyling tools. While demonstrating potential, some attempts are less successful ¤ than others (edit: I just learned of a better execution of an AR Dressing Room by Fraunhofer Institut). One of the most practical, useful implementations I’ve seen is for the US Post Office ¤— a flat-rate shipping box simulator (best seen). These kinds of demonstration and customization applications will soon be pervasive in the eCommerce space and in retail environments. TOK&STOK ¤, a major Brazilian furniture retailer, is using in-store kiosks to view furniture arrangements, though I personally find theirs to be a poor implementation. A better method would be to use the same symbol tags to place the AR objects right into your home, from the camera connected to your PC. And that’s just what one student creative team has proposed as an IKEA ¤ entry for their Future Lions submission at this year’s Cannes Lions Advertising Festival. A quite sophisticated version of this same concept has also been developed by Seac02 ¤ of Italy.
To Tag, or not to Tag? A couple of years ago I wrote here about QR codes. A couple of weeks ago, while attending the Creativity and Technology Expo, I was given a private demo of Nokia’s Point & Find ¤. This is basically the same technology as QR codes, but it uses more advanced image recognition that doesn’t require the code. Candidly, I wasn’t terribly impressed. The interface is poor and the implementation is so focused on selling to advertisers that they seem oblivious to how people will actually want to use it, straitjacketing what could be a cool technology. Hopefully future versions will improve. Most implementations of augmented reality rely on one of two techniques— either a high degree of place-awareness, or some form of object recognition. Symbols similar to QR codes are most often used when the device is not place-aware, though some, like Nokia’s Point & Find, don’t require a symbol tag. Personally, even if the technology no longer requires it, I feel the symbol or tag-code is a better implementation when used for marketing. We are still far from a point where everything is tagged, so people won’t know to inspect something if a tag-code is not present. Furthermore, the codes placed around on posters and printed material help build awareness for the technology itself. Everything covered here thus far has been recognition-based augmented reality.
Through the Looking Glass Location-aware augmented reality usually refers to some form of navigational tool. This is particularly noteworthy with new applications coming to market for smartphones. As BlackBerry hits back at the iPhone, Android’s list of licensees grows, and Palm brings a genuine contender back to the table with the new Palm Pre, there is huge momentum in the smartphone market that not even the recession can slow down. I personally think the name “smartphone” is misleading, as these devices are far beyond being a mere ‘phone’. Even a very smart one. They are full-on computers that, among many other features, happen to include a phone. In my prior article on augmented reality I focused on the iPhone’s addition of a magnetometer (digital compass). This gave the iPhone the final piece of spatial self-awareness needed to develop AR applications like those coming fast and furious to the Android platform. Think of it like this— the GPS makes the phone aware of its own longitudinal and latitudinal coordinates on the earth, the compass tells it which direction it is facing, and the accelerometer (digital level-meter) determines the phone’s degree from perpendicular to the ground (this is what lets the phone’s browser know whether to be in portrait or landscape mode). Through this combination of measures, the device can determine precisely where in the world it is looking.

There is already a fierce race to market in this highly competitive space. Applications like Mobilizy’s Wikitude ¤ (Android) and Layar ¤ (Android), along with other proofs of concept seeking funding like Enkin ¤ (Android) and SekaiCamera ¤ (iPhone), are jockeying for the mindshare of early adopters. Others have developed proprietary AR navigational apps, such as IBM’s Seer ¤ (Android) for the 2009 Wimbledon tournament. Two months ago, when Nine Inch Nails released their NIN Access ¤ iPhone app, there was no iPhone on the market with a built-in compass, so the capability for this level of augmentation was not yet available, but a look at the application’s “nearby” feature gives a hint at the kind of utility and community that could be built around a band or a brand using this kind of AR. View a demo of Loopt ¤, and only a little imagination is needed to see how social networking can be enhanced by place awareness; now add person-specific augmentation tied to a profile and the creepy stalker potential is brought to full fruition, depending on your perspective. And there are other well-established players in the automotive navigation space that have a high potential for crossover. The addition of a compass to the iPhone paved the way for an app version of TomTom ¤. Not to be outdone, a Navigon ¤ press release has announced that they too have an iPhone app in development. How long before location-aware automotive navigation developers choose to enter the pedestrian navigation space?
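To make that combination of sensors concrete, here is a minimal sketch, in Python, of the kind of arithmetic an AR browser in the mold of Wikitude or Layar might perform: take the GPS position, compute the compass bearing to a point of interest, and map the difference between that bearing and the phone's heading onto a horizontal screen position. The coordinates, the 60-degree field of view and the 480-pixel screen width are all made-up values for illustration; this is not any particular app's actual code, only the geometry described above.

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees, from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, h_fov_deg, screen_width_px):
    """Map a POI's bearing onto a horizontal pixel position, or return None
    if it falls outside the camera's horizontal field of view."""
    # Signed angle between where the camera points and where the POI lies
    offset = (poi_bearing - heading + 180) % 360 - 180
    if abs(offset) > h_fov_deg / 2:
        return None  # POI is off-screen
    # Simple linear mapping from angle to pixels (ignores lens distortion)
    return int((offset / h_fov_deg + 0.5) * screen_width_px)

# Hypothetical example: device in midtown Manhattan, facing roughly northeast
device_lat, device_lon = 40.7580, -73.9855   # GPS fix
heading = 45.0                                # magnetometer reading, degrees from north
poi_lat, poi_lon = 40.7614, -73.9776          # a nearby point of interest

b = bearing_to(device_lat, device_lon, poi_lat, poi_lon)
x = screen_x(b, heading, h_fov_deg=60.0, screen_width_px=480)
print(f"POI bearing {b:.1f} deg -> screen x = {x}")
```

A real application would also use the accelerometer-derived pitch for vertical placement and correct for lens distortion, but the core idea is just this comparison of bearing against heading.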
Some Assembly Required It seems everyone wants some AR business from IKEA. Another spec project by a student at the University of Singapore proposes an assembly instruction manual for IKEA ¤ furniture. In a more sophisticated application on the same line of thought, BMW ¤ is experimenting with augmented reality automotive maintenance and repair technology. Note in that video that he is not doing this in front of his laptop camera, nor is he holding his smartphone in front of his face. He’s wearing special AR eyewear. The potential for hands-free instruction and tutorial is as obvious as it is unlimited. Consider any product you purchase that comes with instructions (You do read the instructions, right?). A municipal construction crew repairing a broken water pipe could effectively have X-Ray vision, seeing where all the pipes are under the road, based on schematics supplied to their eyewear from city records.
Seeing is Believing When it comes to Virtual Reality, I’ve had a mantra that none of this will really take off until we’re in there versus looking at there. I believe augmented reality will be the catalyst that pushes digital eyewear into the marketplace. Virtual World applications are, by their nature, not location dependent. In many ways that’s the point— you can be anywhere. And sitting at your computer or game console and looking at a screen is a well-established, all-purpose interface. Place-aware augmented reality, on the other hand, is location dependent— walking down the street holding your smartphone in front of your face is not a long-term solution. In only a short couple of years, the bluetooth earpiece ¤ has gone from marking you as the goofy guy walking down the street who looks like he’s talking to himself, to being a common everyday accessory, even a fashionable one. What works for your ears is now coming to your eyes— a hands-free visual interface in the form of eyewear. Some variation of this concept has been around for a long time ¤. Slow to improve, even most contemporary models are less fashionable than Geordi La Forge’s visor, but slowly they are improving.
The Vuzix Wrap 920AV (at left), a prototype premiered at the 2009 CES in Las Vegas, is the newest consumer-class digital eyewear marketed for augmented reality applications. WIRED Magazine’s Gadget Lab feels their most significant feature “comes from the fact that the company finally hired a designer aware of current aesthetic tastes.” Significant to the 920AVs is that: A. They boast ‘see-thru’ video lenses that readily lend themselves to augmented reality applications, and B. They are stereoscopic (meaning they have a separate video channel for each eye, required for 3D). They are meant to hit the market in the Fall, and are being pushed as an iPhone-compatible device. If they are smart, they will do a bundled play with a “killer app” such as SekaiCamera or a similar product. They have the potential to be the ‘must have’ gift for the 2009 holiday season. Not to oversell them, I have not personally demoed them yet, so I don’t know if they will deliver on the hype, but they look as though they will be first to market, and their product will be the leading contender in the immediate future. Here is a demonstration of a prior Vuzix model ¤ (behold the fashion statement). Using symbol-tag based augmented reality, this man places a yacht in his living room.
If the quality of the user experience fails to live up to expectations, Vuzix has many pretenders to the crown. Fast followers like Lumus (at left) and others are trying to get products to market as well. Then there are MyVu, Carl Zeiss, i-O Display Systems and others who have video eyewear products and are likely candidates to come forward with AR offerings. Add to that a technical patent awarded to Apple last year for an AR eyewear solution of their own and it is clear this could quickly become a crowded and competitive product category. This video titled Future of Education ¤, while speculative, is a splendidly produced and rather accurate projection of where the technology is going.
Where to From Here? We’re moving in this direction at exponential speed, and the pace of progress is only going to keep accelerating. As we see the convergence of augmented reality with mobile, and mobile with ear- and eyewear, there is another set of convergences just over the horizon. We’re on the threshold of realtime language translation ¤. This is an ingredient technology and, like a spell-checker, will soon be baked in to all communications devices, the first of which will be our phones. The Nintendo Wii brought motion capture into our homes, and technologies like Microsoft’s Project Natal ¤ are converging motion capture with three-dimensional optical recognition, so no device is needed. And everything, both real and virtual, will soon be integrated into the semantic web. Intelligent agents will assist us with many tasks. While most of this intelligence will occur behind the curtain, as humans we like to personify our technology. It won’t be long before our personal digital assistant could be given the human touch. How human?
NOTE: In the references below, I’ve included a list of firms that have created some of the pieces shown here or the technologies used.
Apple iPhone Apps reports on new iPhone features, attributing credit to an anonymous leak from inside Apple. I would like to focus on one specific feature. They report, with skepticism:
-Revolutionary combination of the camera, GPS, compass, orientation sensor, and Google maps
The camera will work with the GPS, compass, orientation sensor and Google maps to identify what building or location you have taken a picture of. We at first had difficulties believing this ability. However, such a “feature” is technically possible. If the next generation iPhone were to contain a compass, then it would have all of the components necessary to determine the actual plane in space of an image taken. The GPS would be used to determine the physical location of the device. The compass would be used to determine the direction the camera was facing. And the orientation sensor would be used to determine the orientation of the camera relative to gravity. Additionally, the focal length and focus of the camera could even assist in determining the distance of any focused objects in the picture. In other words, not only would the device know where you are, but it could determine how you are tilting it and hence it would know EXACTLY where in space your picture was composed. According to our source, Apple will use this information to introduce several groundbreaking features. For example, if you were to take a picture of the Staples Center in Los Angeles, you would be provided with a prompt directing you to information about the building, address, and/or area. This information will include sources such as Wikipedia. This seems like quite an amazing service, and a little hard to believe; however, while the complexity of such a service may seem unrealistic, it is actually feasible with the sensors onboard the next generation iPhone.
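As a thought experiment (not anything Apple has described), the logic behind such a feature can be sketched in a few lines of Python: use the GPS fix and the compass heading to work out which known landmark lies along the camera's line of sight. The landmark coordinates, heading, field of view and range below are all hypothetical values chosen only to illustrate the idea.

```python
import math

EARTH_RADIUS_M = 6371000

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing, in degrees, from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return math.degrees(math.atan2(y, x)) % 360

def identify_landmark(device_lat, device_lon, heading, landmarks,
                      h_fov_deg=60.0, max_range_m=2000):
    """Return the landmark closest to the camera's line of sight, if any lies
    inside the horizontal field of view and within range."""
    best = None
    for name, lat, lon in landmarks:
        if haversine_m(device_lat, device_lon, lat, lon) > max_range_m:
            continue  # too far away to be what the photo shows
        offset = (bearing_deg(device_lat, device_lon, lat, lon) - heading + 180) % 360 - 180
        if abs(offset) <= h_fov_deg / 2 and (best is None or abs(offset) < best[0]):
            best = (abs(offset), name)
    return best[1] if best else None

# Hypothetical landmark database and sensor readings (downtown Los Angeles)
landmarks = [
    ("Staples Center", 34.0430, -118.2673),
    ("Walt Disney Concert Hall", 34.0553, -118.2498),
]
print(identify_landmark(34.0450, -118.2650, heading=225.0, landmarks=landmarks))
```

In practice the lookup would run against a large place database on a server, and the orientation sensor and focal length would help narrow the match further, but the core of the claim is no more exotic than this.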
And why “unrealistic”? Every piece of this technology already exists in the wild. This is not a great technological leap. This is merely smart convergence.
There are already two applications on the Google Android platform that have these features. One is a proof-of-concept called Enkin, developed by Max Braun and Rafael Spring (students of Computational Visualistics from Koblenz, Germany, currently doing robotics research at Osaka University in Japan). The second, Wikitude by Mobilizy (an Austrian company founded by Philip Breuss-Schneeweis and Martin Lechner), is already in full-blown commercial release.
WIKITUDE DEMONSTRATION:
ENKIN, PROOF-OF-CONCEPT:
It is only one short step further to let users geo-tag their photos. Many social photo/map applications available for the iPhone already incorporate such a feature. Building this into the realtime viewfinder would not be a great challenge. For example, the proof-of-concept for this already exists in the form of Microsoft’s Photosynth (Silverlight browser plugin required).
Social media apps could tap into this utility to network members in real space. At the most basic level, Facebook and/or LinkedIn apps could overlay members with their names and profile information.
The next logical extension of this will be to place the information directly into your field of vision.
The OOH marketing opportunities are immense. Recent campaigns for General Electric in the US and the Mini Cooper in Germany show where this is going. Suddenly the work done by Wayne Piekarski at the University of South Australia’s Wearable Computer Lab is no longer so SciFi (it is now being commercialized as WorldViz). At January’s CES, Vuzix debuted their new 920AV model of eyewear, which includes an optional stereoscopic camera attachment to combine virtual objects with your real environment. Originally scheduled for a Spring release, their ship-date has now been pushed back to Fall (their main competitor, MyVu, does not yet have an augmented reality model). If the trend finally takes, expect to see more partnerships with eyewear manufacturers.
Initially through the viewfinder of your smartphone, and eventually through the lens of your eyewear, augmentation will be the point of convergence for mobile-web, local-search, social media, and geo-targeted marketing. Whether Apple makes the full leap in one gesture with the release of their next-gen iPhone, or gets there in smaller steps, depends upon both the authenticity/accuracy of this leak and the further initiative of third-party software and hardware developers to take advantage of it. Innovation and convergence will be the economic drivers that reboot our economy.
EDIT: The only capability Apple actually needs to add to the iPhone in order for this proposed augmented reality to be implemented is a magnetometer (digital compass). Google Android models already have this component. Charlie Sorrel of WIRED Magazine’s Gadget Lab has separately reported this feature through leaks of a developer screen shot, and on May 22nd Brian X. Chen, also reporting for WIRED Magazine’s Gadget Lab, put the probability of a magnetometer being included in the new iPhone at 90%. Once the iPhone has an onboard compass, augmented reality features will begin to appear, whether through Apple’s own implementation or from third party developers.
UPDATE: Since the time of this writing, the iPhone 3GS has been released, and it does indeed include a magnetometer.