
An app for every tree in Central Park by 2015?

Sometimes I like to think of humans carrying smartphones as Imperial Probe Droids capable of quantifying the world around us.  After all, millions of prosumers use these devices to snap photos, record audio, shoot video, map the position of things and even record their paths.  Smartphones can and do double as truly capable reconnaissance tools.



Much of the information collected through smartphones is then made available on the internet where it can be pulled into a variety of very useful graphs, web pages and applications.  There is tremendous business, consumer, and social demand in place to incentivize these flows.  This pull force is getting stronger as we collectively discover new ways to unlock the value of this data.

A powerful example of this effect is Google Earth.  From its birth as Keyhole (2001), through its acquisition by Google (2004) and its ongoing evolution, Google Earth has steadily added content and increased its resolution.  One can now view weather, traffic and demographic data, and even 3d representations of landmarks, trees and buildings (the company I manage, in3d.com, specializes in constructing these), all within this contextual 3d map.

It is also possible to add information to the objects embedded in Google Earth in the form of custom layers, Google Places pages, or links to websites or custom location/thing apps.  Fueled by Google, a growing number of geosocial startups, businesses looking to differentiate their locations and a growing population of Google Earth enthusiasts, the number of 3d objects paired with rich data is exploding.  The result closely resembles Jim Spohrer's augmented reality World Board, the internet of things (a scenario that originally envisioned cheap networked sensors scattered all over the place), or even a virtual version of Bruce Sterling's spime concept.  (If you're not familiar with these concepts, they're definitely worth a read.)
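
To make the "custom layers" idea concrete, here is a minimal sketch of what a per-tree layer could look like.  Google Earth custom layers are described in KML, so a script only needs to emit one Placemark per tree and point its description at that tree's page.  The tree ID, coordinates, and per-tree URL below are made-up placeholders, not real data.

# A minimal sketch of a Google Earth custom layer: one KML Placemark per tree.
# The tree ID, coordinates, and the per-tree URL are hypothetical placeholders.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Tree {tree_id}</name>
      <description><![CDATA[
        Example description for this tree.
        <a href="https://example.org/centralpark/trees/{tree_id}">Open this tree's app</a>
      ]]></description>
      <Point>
        <!-- KML coordinates are longitude,latitude[,altitude] -->
        <coordinates>{lon},{lat},0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
"""

def tree_placemark(tree_id: str, lat: float, lon: float) -> str:
    """Return a KML document that Google Earth can load as a custom layer."""
    return KML_TEMPLATE.format(tree_id=tree_id, lat=lat, lon=lon)

if __name__ == "__main__":
    # Rough Central Park coordinates, for illustration only.
    with open("tree_0001.kml", "w") as f:
        f.write(tree_placemark("0001", lat=40.7712, lon=-73.9742))

Multiply that by 25,000 tree records and you have a crude version of the layer described above; the per-tree link is where a Places page or app would hang.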

So the year is 2010, all of the above is possible, the number of smartphones is rapidly rising, and there’s tremendous demand in place to map and link the world.  Properly evaluating the Central Park scenario now requires consensus on what constitutes an app.

Although most folks probably define apps as programs that run on smartphones, the definition is in fact a bit broader:
Wikipedia: A web application is an application that is accessed over a network such as the Internet or an intranet. The term may also mean a computer software application that is hosted in a browser-controlled environment (e.g. a Java applet) or coded in a browser-supported language (such as JavaScript, combined with a browser-rendered markup language like HTML) and reliant on a common web browser to render the application executable.
The cost of generating an app generally ranges from free (simple widgets, Yahoo Pipes) to tens of thousands of dollars for full-fledged iPhone or Android apps.  But, in general, these figures are dropping as more developers come online, HTML5 enables the insertion of basic apps into web pages, and big companies make it easier for non-technical people to create useful apps.

With over 100,000 Android apps, 225,000 iPhone apps and countless other smartphone-viewable pages that also function as apps, it’s obvious that by 2015 there will be many millions of apps, some useful, many not useful.  

The question at hand is whether or not each and every tree in NYC’s Central Park (there are upwards of 25,000) will have its own app, or app-containing website, that can be easily accessed in the year 2015.  Here are some of the trends that critically support this scenario:

  • Rise of the Prosumer: With increasingly capable and less expensive means of producing content (higher resolution picture and video quality on smartphones by 2015 + new devices such as panoramic lenses or auto-object-tagging AI) and a web marketplace for this content, it’s a safe bet that there will be many millions more people playing the prosumer game circa 2015. Especially significant will be the growing number of Super Quantifiers (the 3d equivalent of hardcore Wikipedia contributors, few but powerful!) looking to map everything around them for reputation, $ or other social currency.
  • Crowd-Sourced Photosynthing: Google, Microsoft and a handful of other companies are in an escalating war to map the world most quickly (no surprise, as this is absolutely critical to the future of search).  This battle has spread from basic maps, to Street View, and finally to 3d.  3d components are generated by stitching together satellite, aerial, and ground-level photographs.  This process is now taking a big leap forward as new rapid photosynthing processes are developed.  Photosynthing is currently less effective than building 3d models of buildings and trees from carefully taken photographs, but in the next few years we expect it to overtake standard 3d model generation in efficiency.  When that happens, circa 2013/2014, it will be possible to grab public geo-tagged photographs of a given space and automagically create fine 3d models of everything in that space (a toy sketch of the photo-gathering step follows this list).  With millions of people taking photographs of Central Park from various angles, it’s reasonable to believe that there will be sufficient data available to crowd-source a high resolution 3d map of all the trees in Central Park in the year 2015.  Throw into the mix better location positioning, higher-rez aerial photography and perhaps cash incentives for photo-snapping consumers (from Google, Microsoft or a 3rd Party Quantification Company) and it becomes even more likely that a 3d Central Park model will exist by 2015.
  • Google Things? To grow its advertising base, Google will continue to steadily add value to its Google Places and drive adoption.  It’s reasonable to believe that by 2015 every single Google Places page will either 1) be made available as an app in and of itself, or 2) contain one or many custom apps (thanks to HTML5 or a succeeding web language).  I find it likely that Places will be expanded to include Things or Objects.  Google is, after all, in the business of organizing the world's information and making it universally accessible and useful.  ... If Google doesn’t do this, Microsoft, Facebook or some new start-up likely will.  But doesn't Google Things make sense as the next iteration of Google Goggles?
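
As a concrete (and heavily simplified) illustration of the photo-gathering step behind Crowd-Sourced Photosynthing, the sketch below filters hypothetical geo-tagged photos to a rough Central Park bounding box and bins them into small grid cells; densely photographed cells are the ones you would hand to a photosynth / structure-from-motion pipeline.  The bounding box, cell size, and sample photos are assumptions for illustration only, and the actual 3d reconstruction is not implemented here.

# A toy sketch of the photo-gathering step behind crowd-sourced photosynthing:
# keep public geo-tagged photos that fall inside Central Park and bin them into
# a grid, so densely photographed cells can be passed to a reconstruction
# pipeline (not implemented).  Bounding box and sample data are made up.

from collections import defaultdict

# Rough bounding box for Central Park (lat, lon), for illustration only.
LAT_MIN, LAT_MAX = 40.764, 40.801
LON_MIN, LON_MAX = -73.982, -73.949
CELL_DEG = 0.0005  # grid cell size in degrees, roughly 50 m

def in_central_park(lat, lon):
    return LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX

def bin_photos(photos):
    """Group geo-tagged photos into grid cells keyed by (row, col)."""
    cells = defaultdict(list)
    for photo_id, lat, lon in photos:
        if not in_central_park(lat, lon):
            continue
        row = int((lat - LAT_MIN) / CELL_DEG)
        col = int((lon - LON_MIN) / CELL_DEG)
        cells[(row, col)].append(photo_id)
    return cells

if __name__ == "__main__":
    sample = [("p1", 40.7712, -73.9742), ("p2", 40.7713, -73.9741), ("p3", 40.70, -74.00)]
    for cell, ids in bin_photos(sample).items():
        if len(ids) >= 2:  # enough overlapping views to attempt a 3d reconstruction
            print(cell, ids)
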
With a 3d model of every tree available on Google Earth and the ability to easily add a Places or Things page associated with any geospatially located object, the next logical question is whether or not it makes sense for Google to generate a custom Places page for each tree in Central Park.

Why would a company like Google or Microsoft do such a thing?  Would people demand it?

Here are a few reasons why I think this is likely:

  • The Benefits of Simulation: Simulations help people to monitor and manage places. Many groups, including the City of New York, park management, citizen groups looking to preserve Central Park, tourism agencies and educational institutions, will see the benefits of a simulated, true-to-scale, true-to-object Central Park - a local World Board.
  • Search Wars: The demand for increasingly better Search will drive Google, Microsoft, Facebook, Apple, etc., into every niche that is not defended. Wherever there is information, they'll be there.
  • Key Gaming Catalyst: 3d simulations can serve as the robust scaffolding for new applications and games.  It’s developer heaven.  If these simulations enable lots of new fun games, then there will be a large class of people that demands to play them and thus demands the mapping of each and every tree in Central Park.  Imagine Grand Theft Auto with a closer-to-exact map of NYC.
  • Super Quantifiers: With technology dropping in price, Super Quantifiers will probably quantify the areas around them regardless of what the rest of society thinks, unless their behavior is restricted by legislation first.  There are compulsive mappers out there.
         
But then again, all things future are uncertain, and it’s possible that the 3d Quantification of Central Park will not progress as quickly as I imagine.  Some reasons for this may include:

  • Social Quantification Backlash: At some point the world will realize that rapid, rampant quantification may not be in its interests.  Privacy, security, or plain lifestyle concerns could stop the other trends in their tracks. Events leading up to 2015 could turn people against graph stewards like Google, Facebook, Apple and Microsoft.
  • Revenue Control: The whole process could be slowed if NYC determines that it wants to control the revenue derived from simulations of Central Park.  This could lead to slow negotiations with Google or Microsoft, or strict regulations that slow the process.
  • Technology: If the world enters a harsh depression, then it’s conceivable that technological progress will slow by 2015.  That said, the necessary building blocks for tree mapping and apping are already in place.  They don’t have to progress all that much for this to remain a viable scenario.

Conclusion: So long as there's no social will to regulate against deep quantification of our surroundings, it's highly likely that by 2015 we'll have created 3d versions of all the trees in Central Park via Crowd-Sourced Photosynthing.  It then becomes an almost trivial matter to pair each object of interest contained in these simulations with its own web app or equivalent.

The implications of such a scenario are profound.  It's a confirmation of the idea that the rate and resolution of our world-modeling behavior are increasing in direct proportion to advancing computing, sensing and social media technologies.  As we capture more data and get better at patching it together into cohesive simulations, tools like Google Earth will grow more valuable (and dangerous).  They will then serve as platforms for social commenting, interaction, commerce and gaming.  But along the way the value chain will probably transform and new social behaviors will emerge.

Over the next 5 years the web will rapidly spread into the world.  This will not necessarily require the abundant, cheap sensors typically referenced in conversations about The Internet of Things (which is more about direct object-to-object communication).  Instead, it's more likely that prosumers will enrich virtual mirror worlds and then access them via geo-coordinates at home or on the go.

Here comes Spohrer's World Board, sprouting first in densely populated public areas like Central Park.

P.S. I'm not arguing that all of the tree apps in Central Park circa 2015 will necessarily be used very often, just that the means will be there to establish such systems at uber-low cost.  It'll be fascinating to watch use cases emerge.  There will be many we cannot anticipate.


Thanks to Venessa Miemis for the conversation that inspired me to write this post and to TakuyaMurata for the iPhone user photo - Creative Commons Share Alike 2.0.


