
Control Over Perceived Environment (COPE)

What is intelligence?


"Intelligence" is a pervasive and useful, yet problematic term with no true measure, despite the fact that psychologists and other cognitive scholars having been working on this non-stop for roughly 150 years.  It's a readily understood, good-enough meme that helps us put labels on brains and to organize them, yet remains a crude, dull operating tool that leads to much confusion, miscommunication and errant simulation among its bipedal, meme-hoarding user junkies.


The highly elastic meaning of the word is especially irksome in technical discussions.  Note how difficult it is to ascribe definitions of intelligence to various systems: 
  • individuals - are we talking about g, social intelligence or Gardner's multiple intelligences?
  • groups - is the group stifling individual excellence? how can one effectively measure crowd wisdom? are cultures more or less intelligent in different environments?
  • AI - when does an AI truly become intelligent? how do we accurately compare AI to human intelligence? is the Turing Test representative of human intelligence or just humans' ability to estimate intelligence? is Google slowly becoming more intelligent? (Different researchers will provide vastly different answers to these questions.)
  • biological systems - how do you measure the intelligence of a mycelium network or a field mouse? where do system boundaries stop? what are the criteria for higher intelligence? can punctuated equilibrium and species death actually generate more intelligence? 
  • or the planet - how smart and resilient is our planet? is technology making our entire planet smarter? to what extent do different species and systems interact and cooperate? weak Gaia? strong Gaia?
  • or even the universe - is the universe performing computation? does intelligence emerge from simple parts? is local intelligence a manifestation of universal intelligence? is the universe a simulation?  if so, then what does that mean for the broader context of intelligence?
These diverging views of "intelligence" can make it a chore to achieve consensus when communicating about capability, complexity, computation, systems growth and, of course, "intelligence" itself.  Nevertheless, the meme is vague enough and useful enough in different situations to continue replicating from brain to brain.  Like other widely adopted cultural memes, it is resilient!


So where does that leave us memesters in our search for consensus on "intelligence"?  Rather than simply pointing out that it's an inefficient meme, I believe we need to discover/generate beneficial new memes to outcompete/augment the outdated terminology and occupy its space in our mental simulations.  In other words, you replace a broken meme; you don't fix the old one that's loaded with confusion.


To date, my forays into the space have netted two general models of "intelligence" that jibe most harmoniously with my personal take on the subject:
  • First, James Flynn's assessment that environment and genes conspire to generate humans with superior abstraction abilities tuned to the problem sets that society encourages them to interface with.  I see this approach, encapsulated in the Dickens-Flynn model, as a recent big step toward Evo Devo compatibility.  It also allows for the software-like behavior of memes, which Flynn calls abstractions (see Piaget's relevant work on abstraction).  Flynn's observations line up nicely with both the concept of memes & temes advanced by Dawkins and Blackmore, as well as philosopher Terence McKenna's theory that culture is in fact an operating system.  This means the abstract thought frameworks that we drill into our children during critical periods, including math, science, biology, maps, businesses, social networks, new languages, etc., are in fact a form of software that affects our IQ and ability to navigate the world.  (Note that Flynn is also the discoverer of the documented steady rise in IQ now commonly referred to as the Flynn Effect.)
  • Second, Harvard thought-guru Steven Pinker's comprehensive body of findings supporting his assessment that brains are essentially computers that operate using easy-to-understand conceptual metaphors, which correspond nicely with Flynn's notion of abstractions.  Pinker is also inherently very Evo Devo in his approach and offers up appropriate props to memeticians.
Both models are very compatible with broader systems thinking and with a still-relevant systems-first model I scribbled circa 2004 while trying to make sense of concepts like technology, information and knowledge (each useful but vague in their own right).  I like to call it COPE.


COPE stands for Control Over Perceived Environment and is designed to overarch definitions/theories of human, social, cognitive, software, biological, planetary, universal and cosmological "intelligence".  Rooted in the belief that intelligence is an emergent property of complex adaptive systems (CAS) or living systems, the idea is simple:
  1. Draw an arbitrary boundary (to the best of your ability) around any chunk of any system - this becomes your subject/entity.
  2. Determine (to the best of your ability) what is required for this system to survive, expand, replicate, evolve, develop.
  3. Measure (to the best of your ability) how this system uses space, time, energy, matter (STEM), information, and complexity to increase #2 - the likelihood of survival, expansion, replication, evolution, development.
  4. Cross-reference these STEM, info and complexity scores with other systems to interpolate salient data points that help refine actionable abstractions.
For example, rather than measuring IQ according to a paper-based test, which can net some useful data, a COPE-inspired test would allow humans access to info, tech, other people and the broader system during the exam.  (Akin to the on-the-job analysis that more serious companies perform before hiring someone.)  Such tests might measure the total efficiency of a given operation according to how little STEM and $ the subject requires to perform it, thus establishing a more robust estimation of problem-solving ability.
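To make the four steps above a bit more concrete, here is a minimal toy sketch in Python of how one might compute and cross-reference crude COPE scores.  Everything in it (the Entity class, the single outcome number, the equal weighting of resource costs) is a hypothetical simplification for illustration, not a proposed implementation of COPE.

```python
# Toy sketch of the four-step COPE scoring loop. All names and the
# weighting scheme are hypothetical illustrations, not a prescribed method.
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Step 1: an arbitrarily bounded chunk of some system."""
    name: str
    # Step 2: what the entity needs to survive/expand/replicate/evolve/develop,
    # collapsed here into a single target-outcome score in [0, 1].
    outcome: float
    # Step 3: resources consumed to achieve that outcome (STEM, info, complexity),
    # in arbitrary but comparable units.
    resources: dict = field(default_factory=dict)

def cope_score(entity: Entity) -> float:
    """Toy COPE metric: outcome achieved per unit of total resource consumed.

    Higher means more control over the perceived environment gained while
    spending less space/time/energy/matter/information/complexity.
    """
    total_cost = sum(entity.resources.values()) or 1e-9
    return entity.outcome / total_cost

def cross_reference(entities: list[Entity]) -> list[tuple[str, float]]:
    """Step 4: rank entities by COPE score so scores can be compared across systems."""
    return sorted(((e.name, cope_score(e)) for e in entities),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = [
        Entity("field mouse", outcome=0.6,
               resources={"space": 0.1, "time": 1.0, "energy": 0.2, "matter": 0.1}),
        Entity("mycelium network", outcome=0.7,
               resources={"space": 2.0, "time": 5.0, "energy": 0.3, "matter": 0.5}),
        Entity("human + web access", outcome=0.9,
               resources={"space": 0.5, "time": 0.5, "energy": 1.0, "matter": 0.4}),
    ]
    for name, score in cross_reference(candidates):
        print(f"{name}: COPE ~ {score:.2f}")
```

In a less cartoonish version, step 2 would decompose into many measurable sub-goals and step 3 would weight each STEM/info/complexity cost differently per context; the point of the sketch is only to show that the four steps reduce to "bound, specify, cost out, compare."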


The counter-argument to performing such a test is inefficiency.  To date, it's been too costly or impossible to measure individual performance in such a comprehensive manner.  In this context, IQ tests have been remarkably successful at netting complex results, which have then helped refine our concept of intelligence, via a relatively simple, low-cost process.


Enter accelerating growth in technology, partnered with similarly explosive growth in data proliferation and communication.


The Quantification Principle (STEM Compression Correlates with Increased Quantification Ability): Thanks to constantly evolving technologies like the web, telephone, money, video and brain scanning, we are able to more efficiently (more quickly, at lower material cost, lower energy cost) MEASURE systems and/or system slices.  The accelerating growth in these systems correlates with accelerating growth in our systems quantification and analysis abilities (eg, IBM, Johnson Controls, Total Information Awareness, Google Search, Web as Database).  (Interestingly, it also correlates with the Flynn Effect (steadily rising IQs), and, more importantly, our collective ability to maintain/expand COPE.)


By expanding the scope and complexity of that which we can measure, and by using analysis and science to better our understanding of what we're testing for, it's clear that we can more robustly analyze the behavioral efficiency, aka COPE ability, of various systems, including brains, thus expanding on the notion of the IQ test itself. 


Furthermore, I contend that this is an Evo Devo inevitability: if better measurement of behavior/intelligence proves beneficial to system agents (eg, humans), they will devote resources to attain this advantage.


In other words - if it can be counted, it will be counted.  Which means that, barring disruption, we are destined to get better at generating and cross-referencing COPE scores.


The Incompleteness Problem: Nevertheless, no matter how efficient we get at measuring COPE, we still run into the problem of System Closure.  According to Gödel's incompleteness theorems, true system closure is a mathematical impossibility.  Systems overlap with other systems.  Different systems of varying scale and complexity constantly interact with and impact different systems of varying scale and complexity.  So, technically, there can be no true measure of any system, unless the entire system is measured perfectly (a feat that seems unlikely prior to a universal convergence, pervasive unified consciousness, or some mind-blowing external intervention).


At the same time, it appears we are destined to inexorably expand and refine our simulation of the system in which we reside, using abstractions like COPE as powerful tools to gradually expand and refine our concepts of intelligence, consciousness, information, knowledge, wisdom, capability, life, etc.


From this perspective, better, more robust measuring abilities and conventions appear to be inevitable and critical tools for progress in science and our broader COPE ability.  (Very chicken-or-the-egg like.)


Moving forward, as we continue to improve our models of multi-threaded convergence, including "intelligence" growth, it's clear that a central and catalytic part of the dynamic will be the regular updating of our fundamental definition of and measures for the elusive property currently known as intelligence.


By figuring out what we really mean by smart, we will get smarter... which will hopefully result in more quickly established meme-consensus and more productive discussions of these sorts of topics in the near future, thus making room for a new class of memes chock full of their own inefficiencies.


Big shout-out to Lisa Tansey who challenged my thinking and encouraged me to write this piece at Fusion 2009.
