Friday, October 30, 2009

Is Facebook Graph a Counter to Google Wave Federation? Absolutely.

From Beta, to Alpha, to Roadmaps - recent moves by Google, Facebook and Twitter demonstrate that the pace of platform releases is accelerating.

Facebook and Google have me convinced that they're among the most foresighted of companies in the social media space.  Understanding the value and nature of prosumers, developers, structured content, open source, and broader tech-info convergence, the gargantuan yet speedy pair (+ quickly growing thirdborn Twitter) are jockeying to connect to more data, brains and meaningful partnerships.  The result is fierce, healthy competition that's accelerating the pace and manner of social software platform releases.


Yesterday, Facebook announced a set of smart maneuvers clearly spurred on by Google's aggressively open strategy, including one called Open Graph (you can tell just by the name that this is a Zuckerberg baby) that will allow website builders everywhere to build Facebook-style pages, complete with many of the platform's bells and whistles - a very logical follow up to Google Friend Connect.

This morning, it's rumored that Google will announce the opening of their Wave servers for federation later in the day.  Much like the release of Wave itself, the move, initially promised when Google announced Wave at I/O on May 28, comes very early in the Wave life cycle and allows any skilled-enough third-party developer to build custom websites, apps and back-ends using the platform.

What will they build?  Based on my experience with Wave (Team Whizzlr took 6th place at a Wave Campout in August for our real-time massively multiplayer quiz game) I can say that Wave will be a remarkable tool for a fairly narrow set of uses, at least initially.  It rocks as a platform for complex communication in a single place, real-time or longitudinal, making it ideal for 1) functional tasks like document collaboration (Google Docs +), focus grouping, surveying, invitation management, reddit-in-email (kudos to the GTUG team that designed Blip Appeal - in-stream up/down voting for Wave blips), dynamic web commenting that takes place simultaneously wherever Wave extensions are placed, new forms of blogging, etc, and also 2) fun activities like PMOGs, massive real-time quiz competitions, Fantasy Football clones run inside your email but also on another site, and other casual apps that currently sit atop the Facebook or iPhone platforms.
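To make the Blip Appeal idea a bit more concrete, here's a toy sketch (plain Python, not the actual Wave robots API - purely illustrative, hypothetical names throughout) of the bookkeeping an in-stream up/down voting extension has to do per blip:

```python
from collections import defaultdict

class BlipVotes:
    """Toy vote tracker for an in-stream up/down voting extension.

    Illustrative only: it models the vote state such an extension keeps,
    not the Wave robots API, which handles events and blip updates itself.
    """
    def __init__(self):
        # blip_id -> {participant_id: +1 or -1}
        self.votes = defaultdict(dict)

    def vote(self, blip_id, participant_id, up=True):
        # One vote per participant per blip; re-voting overwrites.
        self.votes[blip_id][participant_id] = 1 if up else -1

    def score(self, blip_id):
        return sum(self.votes[blip_id].values())

    def ranked(self):
        # Blips ordered by net score, highest first.
        return sorted(self.votes, key=self.score, reverse=True)

# Hypothetical example: two participants vote on two blips in a wave.
tracker = BlipVotes()
tracker.vote("blip-1", "alice@example.com", up=True)
tracker.vote("blip-1", "bob@example.com", up=True)
tracker.vote("blip-2", "bob@example.com", up=False)
print(tracker.ranked())  # ['blip-1', 'blip-2']
```

The interesting part, of course, is that with federation this little pile of state could live wherever the wave is rendered, not just on wave.google.com.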

Put another way, the Wave platform uses HTML 5 caching to allow developers to shrink different web applications that we're accustomed to experiencing discretely, combine them in a single location, replicate Wave functionality WHEREVER they choose (thanks to Wave federation), and to mix and mingle all of these Wavelets.  From a systems standpoint this looks like a clear path to MetaSystem Transition (MST) in the browser-enabled web app world:
Wikipedia - A metasystem transition is the emergence, through evolution, of a higher level of organization or control. Prime examples are the origin of life, the transition from unicellular to multicellular organisms, and the emergence of symbolic thought. A metasystem is formed by the integration of a number of initially independent components, such as molecules, cells or individuals, and the emergence of a system steering or controlling their interactions. As such, the collective of components becomes a new, goal-directed individual, capable of acting in a coordinated way. This metasystem is more complex, more intelligent, and more flexible in its actions than the initial component systems.
This, I believe, is the way the Google Brass and Wave Team regard this new platform.  It is their confidence in this model, mixed with excitement from a certain class of clamoring developers who recognize this long-term potential, plus their successful experiences with open-sourcing code (Android, Chrome, App Engine), that spurred them to announce the unpolished product in May and release Wave in Alpha this past summer.  -- They certainly got a lot of flak for it.

Whether or not this play bears fruit (I've been a believer from the start, mostly because I love what it could mean for the web), social media thinkers like Zuckerberg and Facebook's strategic team clearly must view Wave as an assault on their niche and on future niches they'd like to dominate, not to mention a big play to convert developers to Google App Engine disciples. They have to take the possibility of a Wave Tsunami seriously, even if the likelihood is moderately low.

So, is it a coincidence that Facebook has announced Open Graph and a slew of developer-focused goodies on the day prior to the opening of Google Wave federation?  Probably not.

Is Open Graph a necessary (defensive + offensive) response to Google's maneuvering in the Wave space?  Absolutely.

(That said, this emerging battle is at the same time a component of a larger war between the two.  How I would love to speak candidly with their strategists/futurists...)


Earlier this year I was openly wondering about Facebook's prosumer strategy.  Since then I've seen Facebook make some truly brilliant moves, mostly in response to the growing Google and Twitter threat, that reveal just how much they do realize the fundamental importance of prosumers and developers (the two are very narrowly separated).

Thus, we've seen these companies move from launching products in Beta (Google is the obvious trailblazer here), to Alpha, and now to laying out 6-month roadmaps for developers and users (Facebook announced this yesterday), in just a few short years.

Make no mistake about it, fueled by Moore's Law, Zuckerberg's Second Law, and Exponential Data Proliferation, this behavior is a manifestation of convergent accelerating change.  As such, expect other industries to follow suit (especially those dominated by massive players) as their operations are increasingly virtualized and they too can act in a more fluid manner.  Newspapers, film studios, gaming companies, health care providers, and so forth, are all on the queue.

In the meantime, players like Google, Facebook and Twitter that strategize according to these theoretical acceleration and systems principles have a serious advantage.  Do not underestimate the power of such simulations and nerds.

Thursday, October 29, 2009

Google Navigation = I'm Getting a Droid

I've been sitting on the iPhone:Android fence for a while now, but no longer.  The impending release of Google's 100% free, absolutely rocking Navigation System has tipped me in the direction of the Motorola Droid.




The new service is an awesome demonstration of the potent products that can spring from Google's rich, structured data core.  Could the company monetize this directly?  Absolutely.  But they won't because it's even more important for Google to 1) encourage Android phone purchases by offering this amazing feature (expect this to last a short while then migrate to iPhone as well), 2) popularize a new platform that sucks in structured data (much like the free and similarly sweet 1-800-GOOG-411) and 3) generate good will toward the G-Brand.

Expect increasingly more babies from Google's fertile data womb in the near future. 

Facebook Keeping Pace With Google's Open Platform Maneuvers

Not content to simply rely on its explosive growth curve (as MySpace did under NewsCorp prior to the shakeup), Facebook yesterday made three big announcements aimed at wooing more high-end and low-end developers:
  • Open Graph: Part of the Facebook platform, a vague announcement about a new API that will allow website builders everywhere to build Facebook-style pages, complete with many of the platform's bells and whistles.  A logical follow-up to Google Friend Connect, this lines up with Mark Zuckerberg's January comments on the decentralization of the platform and follows Google's strategic open API lead, reinforcing that walled-garden, large-scale social media is on its way out.
  • Developer Access to User Email Addresses: A minor move with major implications.  This will give Facebook app developers the ability to reach out to users directly, massively increasing the value proposition for certain apps, especially those hooked into big companies looking for more marketing value via FB.
The moves reinforce the trend of increasing openness of large-scale social platforms.  Google and Twitter have been trailblazing.  Now Facebook has made up serious ground. 

MySpace is refocused on music and, based on a possible deal with Facebook, seems to be moving in the right direction. 

Microsoft and Yahoo, though they're seriously focused on developers, are working to grok the seriousness of the prosumer game and the Mandate of Kevin. But rest assured, they'll be announcing similar changes shortly because there will be no other option.  This will require a shift in internal culture that may result in deeper-level shakeups at these not-quite-fast-follower entities.

Viva los prosumer!

Thursday, October 22, 2009

Carving Up the Social Graph Turkey

After much deal-making and jockeying in the previous quarters, Facebook, Twitter, Microsoft and Google at last revealed their near-term Social Search plays at today's Web 2.0 Summit.
  • Facebook announced the impending launch of its own social search platform + a deal with minority investor Microsoft that brings FB status updates to Bing.  
  • Google announced a new Social Search capability that pulls friend-relevant data from most core social networks with the notable exception of Facebook + a deal with Twitter to bring real-time tweets to the search engine.
  • Microsoft announced the Facebook/Bing deal + a Twitter deal virtually identical to Google's.
  • Twitter stuck to its open-expansion-uber-alles strategy, announcing it's willing to play nice with anyone who will help it fend off Facebook from its niche.
The moves clearly demonstrate the increasing value of structured social data (aka the emerging social graph) to search services and should silence skeptics that have complained about the valuation of large social networks.

They also demonstrate one form of massive disruption to search markets: an all-out race to subsume new pools of structured data.  It's obvious that Facebook and Microsoft, who recently unveiled a deal to subsume computational search engine Wolfram Alpha, see this as one of the more effective strategies for countering Google's search dominance.

Prediction: Once these big deals are wrapped up I expect that we'll witness a slew of search-access deals with companies that control pools of unique search-relevant data (e.g. IBM, Technorati, Second Life, satellite mapping services), perhaps eventually resulting in granular opt-in controls (similar to Google AdSense) for smaller niche federations looking to monetize their proprietary data.

Just think of search engines as big brains competing to integrate modules of novel, search-relevant structured data.

Viewing search from this perspective brings much necessary context to Google's long-term search growth plan, explaining why seemingly disparate initiatives like Maps, Earth, Google 411, books, etc are actually part of a cohesive strategy that will consistently add value to the company's core search offering over the coming years.  It is through this deliberately planned integration that Google appears poised to retain its dominance.  Thus, minus an Earth-shattering search AI breakthrough, direct competitors like Microsoft, Yahoo and Facebook must acquire or grow their own pools of unique, relevant and integratable structured data if they are to keep pace.

At the same time, expect new entrants in regions such as Russia, China and India to either license their data to the hungry big boys or focus on the expansion of their native search efforts.  All that's required is some magic translation pixie dust.  Oh snap, Google appears to have that technology market cornered.

Conclusion: The social graph turkey is but a single, albeit core, item on the long table of search.  Expect many more scrumptious, exotic foods to emerge from the data kitchen.  Google does and has been adjusting its digestive technologies accordingly.

Friday, October 16, 2009

Control Over Perceived Environment (COPE)

What is intelligence?


"Intelligence" is a pervasive and useful, yet problematic term with no true measure, despite the fact that psychologists and other cognitive scholars having been working on this non-stop for roughly 150 years.  It's a readily understood, good-enough meme that helps us put labels on brains and to organize them, yet remains a crude, dull operating tool that leads to much confusion, miscommunication and errant simulation among its bipedal, meme-hoarding user junkies.


The highly elastic meaning of the word is especially irksome in technical discussions.  Note how difficult it is to ascribe definitions of intelligence to various systems: 
  • individuals - are we talking about g, social intelligence or Gardner's multiple intelligences?
  • groups - is the group stifling individual excellence? how can one effectively measure crowd wisdom? are cultures more or less intelligent in different environments?
  • AI - when does an AI truly become intelligent? how do we accurately compare AI to human intelligence? is the Turing Test representative of human intelligence or just humans' ability to estimate intelligence? is Google slowly becoming more intelligent? (Different researchers will provide vastly different answers to these questions.)
  • biological systems - how do you measure the intelligence of a Mycelium network or a field mouse? where do system boundaries stop? what are the criteria for higher intelligence? can punctuated equilibrium and species death actually generate more intelligence? 
  • or the planet - how smart and resilient is our planet? is technology making our entire planet smarter? to what extent do different species and systems interact and cooperate? weak Gaia? strong Gaia?
  • or even the universe - is the universe performing computation? does intelligence emerge from simple parts? is local intelligence a manifestation of universal intelligence? is the universe a simulation?  if so, then what does that mean for the broader context of intelligence?
These diverging views of "intelligence" can make it a chore to achieve consensus when communicating about capability, complexity, computation, systems growth and, of course, "intelligence" itself.  Nevertheless, the meme is vague enough and useful enough in different situations to continue replicating from brain to brain.  Like other widely adopted cultural memes, it is resilient!


So where does that leave us memesters in our search for consensus on "intelligence"?  Rather than simply pointing out that it's an inefficient meme, I believe we need to discover/generate beneficial  new memes to outcompete/augment the outdated terminology and occupy its space in our mental simulations. In other words, you replace a broken meme, you don't fix the old one that's loaded with confusion.


To date, my forays into the space have netted two general models of "intelligence" that jibe most harmoniously with my personal take on the subject:
  • First, James Flynn's assessment that environment and genes conspire to generate humans with superior abstraction abilities tuned to the problem sets that society encourages them to interface with.  I see this approach, encapsulated in the Dickens-Flynn model, as a recent big step toward Evo Devo compatibility.  It also allows for the software-like behavior of memes, which Flynn calls abstractions (see Piaget's relevant work on abstractions).  Flynn's observations line up nicely with both the concept of memes & temes advanced by Dawkins and Blackmore, as well as philosopher Terence McKenna's theory that culture is in fact an operating system.  This means the abstract thought frameworks that we drill into our children during critical periods, including math, science, biology, maps, businesses, social networks, new language, etc, are in fact a form of software that affects our IQ and ability to navigate the world.   (Note that Flynn is also the discoverer of the documented steady rise in IQ now commonly referred to as the Flynn Effect.)
  • Second, Harvard thought-guru Steven Pinker's comprehensive body of findings that support his assessment that brains are essentially computers that operate using easy-to-understand conceptual metaphors, which correspond nicely with Flynn's notion of abstractions.  Pinker is also inherently very Evo Devo in his approach and offers up appropriate props to memeticians.
Both models are very compatible with broader systems thinking and a still relevant systems-first model I scribbled circa 2004 while trying to make sense of concepts like technology, information and knowledge (each useful but vague in their own right).  I like to call it COPE.


COPE stands for Control Over Perceived Environment and is designed to overarch definitions/theories of human, social, cognitive, software, biological, planetary, universal, cosmological "intelligence".  Rooted in the belief that intelligence is an emergent property of complex adaptive systems (CAS) or living systems, the idea is simple:
  1. Draw an arbitrary boundary (to the best of your ability) around any chunk of any system - this becomes your subject/entity.
  2. Determine (to the best of your ability) what is required for this system to survive, expand, replicate, evolve, develop.
  3. Measure (to the best of your ability) how this system uses space, time, energy, matter (STEM), information, and complexity to increase #2 - the likelihood of survival, expansion, replication, evolution, development.
  4. Cross-reference these STEM, info and complexity scores with other systems to interpolate salient data points that help refine actionable abstractions.
For example, rather than measuring IQ according to a paper-based test, which can net some useful data, a COPE-inspired test would allow humans access to info, tech, other people and the broader system during the exam.  (Akin to on-the-job analysis that more serious companies perform before hiring someone.)  Such tests might measure the total efficiency of a given operation according to how little STEM and $ the subject requires to perform it, thus establishing a more robust estimation of problem solving ability.
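As a back-of-the-envelope illustration (a minimal sketch with made-up units and placeholder weights - none of this is a validated metric), a COPE-style score might simply be how much of the goal the subject achieves divided by the STEM + information cost of achieving it:

```python
from dataclasses import dataclass

@dataclass
class TaskRun:
    """One observed attempt by a system (person, group, AI) at a task."""
    success: float         # fraction of the goal achieved, 0.0 - 1.0
    seconds: float         # time spent
    joules: float          # energy spent
    grams_matter: float    # material consumed
    bits_consulted: float  # external information drawn on

def cope_score(run: TaskRun, weights=(1.0, 1.0, 1.0, 1e-6)) -> float:
    """Toy COPE score: achievement per unit of STEM + info cost.

    The weights are arbitrary placeholders; choosing and calibrating
    them is exactly the hard, open part of the measurement problem.
    """
    w_t, w_e, w_m, w_i = weights
    cost = (w_t * run.seconds + w_e * run.joules
            + w_m * run.grams_matter + w_i * run.bits_consulted)
    return run.success / cost if cost else float('inf')

# Two hypothetical subjects tackling the same problem, one alone with
# pencil and paper, one with open access to tools and other people,
# per the COPE-inspired test described above.
alone = TaskRun(success=0.7, seconds=3600, joules=500, grams_matter=10, bits_consulted=0)
networked = TaskRun(success=0.95, seconds=900, joules=400, grams_matter=5, bits_consulted=2e6)
print(cope_score(alone), cope_score(networked))
```

The point of the toy is step 4: the number only becomes meaningful when cross-referenced against many other systems running comparable tasks.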


The counter-argument to performing such a test is inefficiency.  To date, it's been too costly or impossible to measure individual performance in such a comprehensive manner.  In this context, IQ tests have been remarkably successful at netting complex results that have then helped refine our concept of intelligence, via a relatively simple, low-cost process.


Enter accelerating growth in technology, partnered with similarly explosive growth in data proliferation and communication.


The Quantification Principle (STEM Compression Correlates with Increased Quantification Ability): Thanks to constantly evolving technologies like the web, telephone, money, video, brain scanning, we are able to more efficiently (more quickly, at lower material cost, lower energy cost) MEASURE systems and/or system slices.  The accelerating growth in these systems correlates with accelerating growth in our systems quantification and analysis abilities (e.g., IBM, Johnson Controls, Total Information Awareness, Google Search, Web as Database).  (Interestingly, it also correlates with the Flynn Effect (steadily rising IQs), and, more importantly, our collective ability to maintain/expand COPE.)


By expanding the scope and complexity of that which we can measure, and by using analysis and science to better our understanding of what we're testing for, it's clear that we can more robustly analyze the behavioral efficiency, aka COPE ability, of various systems, including brains, thus expanding on the notion of the IQ test itself. 


Furthermore, I contend that this is an Evo Devo inevitability: if better measurement of behavior/intelligence proves beneficial to systems agents (e.g., humans), then they will devote resources to attain this advantage.


In other words - if it can be counted, it will be counted.  Which means that, barring disruption, we are destined to get better at generating and cross-referencing COPE scores.


The Incompleteness Problem: Nevertheless, no matter how efficient we get at measuring COPE, we still run into the problem of System Closure. According to Gödel, system closure is a mathematical impossibility.  Systems overlap with other systems. Different systems of varying scale and complexity constantly interact with and impact different systems of varying scale and complexity.  So, technically, there can be no true measure of any system, unless the entire system is measured perfectly (a feat that seems unlikely prior to a universal convergence, pervasive unified consciousness, or some mind-blowing external intervention).


At the same time, it appears we are destined to inexorably expand and refine our simulation of the system in which we reside, using abstractions like COPE as powerful tools to gradually expand and refine our concepts of intelligence, consciousness, information, knowledge, wisdom, capability, life, etc.


From this perspective, better, more robust measuring abilities and conventions appear to be inevitable and critical tools for progress in science and our broader COPE ability.  (Very chicken-or-the-egg like.)


Moving forward, as we continue to improve our models of multi-threaded convergence, including "intelligence" growth, it's clear that a central and catalytic part of the dynamic will be the regular updating of our fundamental definition of and measures for the elusive property currently known as intelligence.


By figuring out what we really mean by smart, we will get smarter... which will hopefully result in more quickly established meme-consensus and more productive discussions of these sorts of topics in the near-future, thus making room for a new class of memes chock full of their own inefficiencies.


Big shout-out to Lisa Tansey who challenged my thinking and encouraged me to write this piece at Fusion 2009.

Thursday, October 15, 2009

Finland: Connectivity is a Human Right

Following in the footsteps of the French, Finland's Ministry of Transport and Communications has decreed that as of July 2010 every Finnish citizen will have "the right to a one-megabit broadband connection" as an intermediary step toward 100 Mb/person in 2015.

If one views brains as supercomputer-equivalents critical to the convergent growth of technology, information, communication and human capabilities, as I do, it becomes obvious that such national policies are beneficial and necessary -- and maddening that we've not made more progress on these issues here in the United States.

Connectivity is not just a stabilizing social force, as Thomas Barnett has pointed out, it is a glue that's critical to convergent growth.  It's now high time for more nations to get hip to the idea that their full network of brains makes regular value creation possible and should be optimized for greater use.  Hopefully emerging models of individual/social "intelligence" and multi-threaded systems growth will diffuse quickly enough for other countries, large and small, to quickly follow suit.


Wednesday, October 14, 2009

Google's Long Prosumer March Continues as Building Maker for Google Earth is Unveiled

As usual, Google is keeping its eye on the (rapidly expanding) prosumer prize, this time strengthening the base of its Geo-Quantification efforts through the public release of Building Maker, a program w/ a complementary toolkit that encourages citizens like you and me to add renderings of buildings to any of 50 designated Google Earth cities.
We like to think of Building Maker as a cross between Google Maps and a gigantic bin of building blocks. Basically, you pick a building and construct a model of it using aerial photos and simple 3D shapes – both of which we provide. When you're done, we take a look at your model. If it looks right, and if a better model doesn't already exist, we add it to the 3D Buildings layer in Google Earth. You can make a whole building in a few minutes.
Here's a demo video from the Google beta:




Google lists some additional details about participating in the program:
  • Building Maker is an online tool, and it runs entirely in your web browser (Google Chrome, Firefox, Safari, Internet Explorer, etc.)
  • Before you can add a building to Google Earth, you need to sign in to your Google Account (so you get credit for what you contribute).
  • Models you create with Building Maker "live" in the Google 3D Warehouse (a giant, online repository of 3D models).
  • You can use Google SketchUp (our free, general-purpose 3D modeling tool) to edit or otherwise modify anything you make with Building Maker.
  • Make sure you have the latest version of Google Earth installed on your computer.
  • If you're on a Mac, you need to download the Google Earth plug-in directly.
Note the strategic synergy inherent in this latest Google push (point by point):
  • Building Maker will likely be optimized for Chrome, leading participating prosumers to adopt Chrome.
  • Building Maker is yet another service that requires a Google account.  Google's goal seems to be to get everyone on the planet signed up and exposed to their other offerings, particularly increasingly coveted value-adding prosumers.
  • By "live in Google's 3D Warehouse" I suspect that means these models are open to use by others.  This repository is immensely valuable, even if/when granular use (Creative Commons) permissions are implemented, to not only Google Earth, but into other 3D initiatives and the developement of object search, etc.  3D will play a big role in the web of 2010-2020 and Google is setting up its plays.
  • SketchUp adoption benefits Google and creates a layer of separation between objects created in that language and other efforts to generate 3D object databases (Microsoft, Yahoo, IBM, Apple, Second Life can't be far behind - note: these dynamics will continue to make Linden Labs a more coveted acquisition target).  My guess is that this also incrementally increases the likelihood that more serious programmers will adopt Google App Engine if Google Earth and SketchUp are made to play nicely with this core framework, one that Google wants to blow up big.
  • Encourages the development and use of Google Earth, potentially the most valuable and critical  prosumer platform of the near-future.
Google Earth remains the runaway leader in 3D quantification and serves as scaffolding increasingly capable of adding structure to Google's growing body of information.  By encouraging prosumers to add more value to this system, Google continues to put distance between itself and its mapping/search competitors.  This ultimately bodes well for prosumers, who will no doubt reap the benefits of this battling as other companies realize they need a large number of fairly capable brains deployed all over the world to compete in the quantification game - the race to add structure and generate actionable knowledge from the proliferating mass of data.

Maybe it's time to get a nicer camera.

Tuesday, October 13, 2009

Laser-Generated 3D Maps

Great Britain's national mapping agency, Ordnance Survey, has posted a beautifully detailed 3D map of Bournemouth that was generated through the use of accurate laser technology.  The organization says this map "is made from 700 million individual points of light."

Take a look at the elegant results:


 


Experts at Ordnance Survey are optimistic that geographic laser scanning technology will "revolutionise the future of personal navigation, tourism and the planning process as well as aiding architects, and the emergency and security services", but I'll be watching to see how their technology scales and performs versus competitors, particularly Google Earth's flesh-out-our-scaffolding approach.
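For a rough sense of how hundreds of millions of raw laser returns become a map like this, here's a minimal sketch (toy random data, hypothetical grid resolution; real LiDAR pipelines add classification, filtering and far larger volumes) that bins x/y/z points into a simple height grid:

```python
import numpy as np

def points_to_heightmap(points, cell_size=1.0):
    """Bin laser-scan points (N x 3 array of x, y, z) into a grid of
    maximum heights per cell - a crude digital surface model."""
    xy = points[:, :2]
    z = points[:, 2]
    mins = xy.min(axis=0)
    # Which grid cell each point falls into.
    cols, rows = ((xy - mins) // cell_size).astype(int).T
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, height in zip(rows, cols, z):
        if np.isnan(grid[r, c]) or height > grid[r, c]:
            grid[r, c] = height  # keep the tallest return per cell
    return grid

# Toy data standing in for millions of survey points over a 100 m square.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 100, 10_000),   # x (metres)
                       rng.uniform(0, 100, 10_000),   # y (metres)
                       rng.uniform(0, 30, 10_000)])   # z (height)
dsm = points_to_heightmap(pts, cell_size=2.0)
print(dsm.shape)  # roughly a 50 x 50 grid of max heights
```

Scale that mental model up to 700 million points at sub-metre resolution and the storage, processing and update-frequency questions behind my scaling concern become obvious.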

It's also interesting to note that this forward step in 3D quantification is being pushed through a government agency.  Expect increasingly more governments to get serious about such initiatives because of the short and long-term value they can produce for the national economy.  And if they can't develop the technology in-house, then expect them to lean on companies like IBM and Google for smarter infrastructure.

Big thanks to commenter paulous99 for pointing out that Ordnance Survey is in fact a UK govt org.

via Garry Golden