Building Human-Level A.I. Will Require Billions of People

The great AI hunger appears poised to quickly replace, and then exceed, the income flows it has been eliminating. If we follow the money, we can confidently expect millions, then billions, of machine-learning support roles to emerge in the very near term, greatly limiting, if not reversing, widespread technological unemployment.

Human-directed machine learning has emerged as the dominant process for creating Weak AI, such as language translation, computer vision, search, drug discovery and logistics management. Increasingly, it appears that Strong AI, aka AGI or "human-level" AI, will be achieved by bootstrapping machine learning at scale, which will require billions of humans in the loop.

How does human-in-the-loop machine learning work? Training a neural net to do something useful, say, confidently determining whether a photo was taken indoors or outdoors, requires feeding it input content, in this case thousands of different photographs; letting it generate its own model of the photographs; then correcting, re-generating and improving that model until the program is confident enough to perform the sorting automatically. The resulting model can then be applied to other content, and it ultimately requires less human correction in the future. Thus, work has been done and added to the broader body of machine-learning knowledge.
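The correct-and-regenerate cycle described above can be sketched in miniature. This is a toy illustration, not a real neural net: each "photo" is reduced to a single brightness score, the "human" is simulated by an oracle function, and all names and thresholds are invented for the example. The shape of the loop, though, is the one described: the model predicts, the human corrects, and the model is re-generated until no corrections are needed.

```python
import random

# Toy stand-in for a photo: one "brightness" feature, 0.0 (dark) to 1.0 (bright).
# Assumption for this sketch: outdoor photos tend to be brighter than indoor ones.

def human_label(brightness):
    """The human in the loop, simulated: supplies the true label on request."""
    return "outdoor" if brightness > 0.5 else "indoor"

def train_with_human_in_the_loop(photos, rounds=5):
    """Fit a brightness threshold, asking the 'human' to correct each round."""
    threshold = 0.0  # deliberately bad initial model: everything looks "outdoor"
    for _ in range(rounds):
        # 1. The model makes its predictions.
        predictions = ["outdoor" if b > threshold else "indoor" for b in photos]
        # 2. The human corrects the mistakes.
        corrections = [(b, human_label(b))
                       for b, p in zip(photos, predictions)
                       if p != human_label(b)]
        if not corrections:
            break  # model now sorts correctly; no human correction needed
        # 3. The model is re-generated from the corrected labels: place the
        #    threshold midway between the brightest indoor and darkest outdoor.
        indoor = [b for b in photos if human_label(b) == "indoor"]
        outdoor = [b for b in photos if human_label(b) == "outdoor"]
        threshold = (max(indoor) + min(outdoor)) / 2
    return threshold

random.seed(0)
photos = [random.random() for _ in range(200)]
threshold = train_with_human_in_the_loop(photos)
accuracy = sum(("outdoor" if b > threshold else "indoor") == human_label(b)
               for b in photos) / len(photos)
```

After one round of human correction, the toy model separates the training photos perfectly, and, as the post notes, the finished model can then be applied to new photos with far less human intervention.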

One can then imagine that, over time, as these models are encoded, fewer and fewer humans will be needed to train up useful AI... Wrong!

Rather, as the companies now trailblazing AI (Google, Amazon, Apple, Microsoft, Facebook, Tesla, Uber, etc.) have generated more value through machine learning, they've realized 1) that machine learning can be applied to vastly more domains and problems, 2) that more complex, creative problems require more human-in-the-loop intervention, and 3) that more value can be created by integrating the machine learning they've already done. This cumulative effect is visible in Google's recent breakthrough in translation, which ultimately required billions, if not trillions, of human-in-the-loop machine-learning cycles (including yours, if you've ever used Google Translate) to finally break through to another level of automatic functionality.

To recap, machine learning requires 1) some well-educated machine-learning professionals, 2) many more less-educated machine-learning guides, 3) access to large swaths of structured content, and 4) access to previously encoded machine learning. And the market-driven desire to apply it to new problem sets is growing very, very quickly.

With technological unemployment growing as a U.S. and global problem, and economic stratification rapidly increasing, many have been wondering how the general human population will earn a living in the transformed economy. A few years ago I argued that users of social networks could soon start getting paid by the parent companies. Now that the basic business model surrounding human-directed machine learning, AI and digitized content is emerging, that scenario can be advanced.

As the Great AI Race heats up and more companies, countries and other actors come to realize the narrow and broader potential of human-in-the-loop machine learning, the demand for machine learning pros, machine learning guides and content workers will grow proportionately, driving up their share of the pie as they help to build more intelligent superstructures brick by brick.

The growing competition is also driving up the value of content itself - especially large bodies of structured content. Over time, content producers (including users of search engines and social networks who add value simply through their interactions with those systems) can expect to receive more value for their work or property.

As AI-generated revenues continue to grow, additional billions, even trillions of dollars will flow to super-lucrative machine learning processes and, ultimately, into the digital pockets of the masses essential to building the different aspects of AI. 

The amount of value shared with users will depend on the size of the pie. With Kurzweil's Law of Accelerating Returns in full effect, that pie is likely to grow MASSIVELY. The limits to growth appear to be our finite ability to capture, sort and export information about our lives and the universe around us. In theory, the total pie is limited only by the total information contained in our universe. 

From one perspective, this process can be viewed as a market-driven acceleration of science. From another, it's an evolution of the economy from Industrial Age to Knowledge Age. Looking at the big picture, it sure looks like mass-scale Human/AI symbiosis that ultimately drives up machine, human and planetary intelligence by digitizing the vast universe of information surrounding us.

Seems pretty natural to me.
