Recently there’s been hubbub around Artificial Intelligence (AI) and our impending doom if we don’t do something about it. While it’s fun to scare each other in that horror/sci-fi movie kind of way, there isn’t much substance behind the various arguments floating about regarding AI.

The fears people generally have are about humans losing control, and more specifically about an unrestrained AI imposing its own self-derived and likely world-dominating objectives on humankind and the earth. These fears aren’t unjustified in a broad sense. They simply don’t apply to AI – neither the artificial part nor the intelligence part. More importantly, the fears have nothing to do with AI but instead with the age-old fear that humankind might not be the point of all existence.

We do not have a functional description of intelligence, period. We have no reliable way to measure it. Sure, we have IQ, other tests of “ability” and “awareness”, and the Turing Test, but none of these actually tells you anything about “intelligence” or what might actually be going on with things we consider intelligent. We can measure whether some entity accomplishes a goal, performs some behavior reliably, or demonstrates new behavior acquisition in response to changes in its environment. But none of those things establish intelligence or the presence of an intelligent being. They certainly don’t establish something as conscious or self-aware or moral or purpose-driven, nor any other reified concept we choose as a sign of intelligence.

The sophisticated fear peddlers will suggest that the above is just semantics and that we all know, in a common-sense way, what intelligence is. This is simply not true. We don’t. Colleges can’t fully count on the SAT, phrenology turned out to be horribly wrong, the Turing Test is a poor measure by its very definition, and so on. Go ahead, do your own research on this. No one can figure out just exactly what intelligence is and how to measure it. Is it being able to navigate a particular situation really well? Is it the ability to assess the environment? Is it massive pattern matching? Is it particular to our five senses? One sense? Is it brain based? Is it pure abstract thinking? What exactly does it mean to be intelligent?

It’s worse for the concept of artificial. It’s simply the wrong term. Artificial comes from very old ideas about what’s natural and what’s not natural: what’s real is what the earth itself does, not what comes from some human process or machine. Artificial things are made of metals and powered by electricity and coal and involve gears and circuits. Errrrr. Wait… many things the earth makes are made of metal, and their locomotion comes from electricity or the burning of fuel, like coal. In fact, humans were made by the earth/universe. The division between artificial and natural is extremely blurry, and it is non-existent for a reified concept like intelligence. We don’t need the term artificial nor the term intelligence. We need to know what we’re REALLY dealing with.

So here we are… being pitched a fearsome monster of AI which has zero concrete basis, no way to observe it, and zero examples of its existence as described in most discussions. But still the monster is in the closet, right?

For the sake of argument (which is all these counter-factual future predictions of doom are), let’s assume there is some other entity, or set of entities, that is more “intelligent” than humans. We will need to get a sense of what that would look like.

Intelligence could be loosely described by the presence of complex, nuanced behaviors exhibited in response to the environment. Specifically, an entity is observed as intelligent if it responds effectively to changing conditions (internal as well as environmental). It must recognize changes, be able to adjust to those changes, and evaluate the consequences of the changes it makes, as well as any changes the environment makes in response.
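
To make that loop concrete, here’s a minimal sketch in Python. Everything in it – the drifting number standing in for an environment, the adjustable estimate standing in for behavior, the particular constants – is hypothetical and mine, purely for illustration:

```python
import random

# A toy "agent" tracks a drifting environmental value.
# recognize -> adjust -> evaluate, repeated over time.

def run(steps=50, seed=0):
    rng = random.Random(seed)
    target = 10.0     # the environment's current state
    estimate = 0.0    # the agent's current behavior/response
    step_size = 2.5   # how aggressively the agent adjusts (starts too high)

    for _ in range(steps):
        target += rng.uniform(-0.5, 0.5)   # the environment changes
        error = target - estimate          # recognize the change
        estimate += step_size * error      # adjust behavior
        # evaluate the consequence: if the adjustment overshot and made
        # things worse, become more cautious next time
        if abs(target - estimate) > abs(error):
            step_size *= 0.9
    return round(estimate, 2), round(target, 2)

print(run())  # the estimate ends up tracking the target
```

Nothing deep here – the point is only that “recognize, adjust, evaluate” is an observable behavioral loop, not a mystical property.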

The basis of intelligent behavior (the ability to respond to complex contingencies) in humans seems to come from the following:

  • Genetic and epigenetic effects/artifacts evolved from millions of years of evolutionary experiments, e.g. body structures, fixed action patterns
  • Sensory perception from the five basic senses, e.g. sight, touch, etc.
  • Ability to pattern match in a complex nervous system, e.g. the neurological system, various memory systems
  • Cultural/historical knowledge base, e.g. generationally selected knowledge trained early into a new human through child rearing, media and school
  • Plastic biochemical body capable of replication, regeneration and other anti-fragile effects, e.g. stem cells, neuroplasticity
  • More really complex stuff we have yet to uncover

Whatever AI we think spells our demise will most likely have something like the above (something functionally equivalent). Right? No? While there is a possibility that some completely different type of “intelligent” being exists, what I’m suggesting is that the general form of anything akin to “intelligent” would have these features:

  • Structure that has been selected for fitness over generations in response to its environment and overall survivability
  • Multi-modal information perception/ingestion
  • Advanced pattern recognition and storage
  • Knowledge reservoir (previously explored patterns) to pull from, reducing the need for a new entity to explore the space of all possibilities for survival
  • Resilient, plastic replication mechanism capable of abstract replication and structural exploration

And distilling that down to even rawer abstractions (sketched in code after the list):

  • multi-modal information I/O
  • complex pattern matching
  • large memory
  • efficient replication with random mutation and mutation from selected patterns
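
That last abstraction – replication with random mutation, filtered through previously selected patterns – is essentially the skeleton of an evolutionary algorithm. A tiny sketch, where the bit-string “entity”, the target pattern and all the numbers are hypothetical and mine:

```python
import random

# Entities are bit strings; fitness is how many bits match a fixed target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(entity):
    return sum(1 for a, b in zip(entity, TARGET) if a == b)

def mutate(entity, rng, rate=0.1):
    # random mutation: each bit flips with a small probability
    return [bit ^ 1 if rng.random() < rate else bit for bit in entity]

def evolve(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: the fitter half survives (the "knowledge reservoir")
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # replication with mutation from the selected patterns
        offspring = [mutate(rng.choice(survivors), rng)
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

print(evolve())  # converges on TARGET within a few dozen generations
```

Within a few dozen generations the population “finds” the target pattern, even though no individual entity is “intelligent” in any reified sense.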

What are the implementation limits of those abstracted properties? It turns out we don’t know. It’s very murky. Consider a rock. Few of us would consider a plain old rock intelligent. But why not? Which of the above does it fail to fulfill? Rocks adjust to their environment – think of erosion and the overall rock cycle. Their structures contain eons of selection and hold a great deal of information – think of how the environment is encoded in the generational build-up of a rock, its crystal structure and so forth. Rocks have an efficient replication scheme – again, think of the rock cycle, the material of a rock being absorbed into other rocks and so forth.

Perhaps you don’t buy that a rock is intelligent. But there’s nothing in my description of intelligence, or in the reified alternative definitions, that absolutely says a rock isn’t intelligent. It seems to fulfill the basics… it just does so over timescales we’re not normally looking at. A rock won’t move through a maze very quickly or solve a math problem in our lifetime. I posit, though, that it does do these things over long expanses of time. The network of rocks that forms mountains and river beds and ocean bottoms and the entire earth’s crust and the planets of our solar system exhibits the abstract properties above quite adeptly – again, just at spacetime scales we’re not used to talking about in these types of discussions.

I could go on to give other examples such as ant colonies, mosquitoes, dolphins, pigs, The Internet and on and on. I doubt many of these examples will convince many people as the reified concept of “intelligence as something” is so deeply embedded in our anthropocentric world views.

And so I conclude that those who are afraid, and who want others to be afraid, of AI are actually afraid of things that have nothing to do with intelligence – that humans might not actually be the alpha and omega, and that we are indeed very limited, fragile creatures.

The first fear – the anthropocentric one – is hard to dispel. It’s personal. All the science and all the observation of the night sky and the depths of our oceans make it very clear that humans are a small evolutionary branch of an unfathomably large universe. But each of us struggles with our view from within – the world literally, to ourselves, revolves around us. Our vantage point is such that, through our five senses, everything comes into us. Our first-person view is such that we experience the world relative to ourselves. Our memories are made of our collections of first-person experiences. Our bodies seem to respond from our own relation to the world.

And so the fear of an AI that could somehow take our “top of the food chain” position makes sense while still being unjustified. The reality is… humans aren’t at the top of any chain. The sun will one day blow up and quite possibly take all humans out with it. Every day the earth spins and whirls and shakes with winds, heat, snow and quakes, and without intention takes humans out in the process. And yet we don’t fear those realities the way folks are trying to get us to fear AI. The key to this fear seems to be intention. We’re afraid of anything that has the INTENT, the purpose, the goal of knocking humans – our own selves – out of the central position of existence.

And where, how, when, why would this intent arise? Out of what does any intent arise? Does this intent actually exist? We don’t really have any data store from which to derive an answer other than humanity’s history and our own personal experience. Where has the intent to dominate or destroy come from in humans? Is that really our intent when we do it? Is it a universal intent of humankind? Is it something intrinsically tied to our make-up and relation to the world? Even if the intent is present, what are its antecedents? And what of this intent? If the intent to dominate others arises in humans, how are we justified in fearing its rise in other entities?

Intent is another reified concept. It doesn’t really exist or explain anything. It is a word that bottles up a bunch of different things going on. We have no more intention than the sun. We behave. We process patterns and make adjustments to our behavior – verbal and otherwise. Our strategies for survival change based on contingencies, and sometimes our pattern recognition confuses information – we make false associations about what is threatening our existence or impeding our basic needs and wants (chasing things that activate our adrenaline and dopamine production…). It’s all very complex. Even our “intentions” are complex causal thickets (a concept I borrow from William Wimsatt).

In this complexity it’s all incredibly fragile. Fragile in the sense that our pattern recognition is easily fooled. Our memories are faulty. Our bodies get injured. Our brains are only so plastic. And the more our survival depends on rote knowledge, the less plastic our overall machinery can be. Fragility, as referred to here, is a very general concept about the stability of any given structure or pattern – belief system, behavioral schedule, biochemical relations, rock formations… any structure.

The fear involved in fragility and AI is really about less complex entities that are highly specialized in function, sensory scope and pattern-matching ability. The monster presented is a machine, or group of machines, hell-bent on human subordination and destruction, armed with the weaponry and with no fragility in its function or intent – it cannot be diverted by itself nor by an outside force.
OR

We fear the accidental AI – the AI that accidentally stumbles into human destruction as a goal.

In both cases it is not intelligence we fear but instead simple exploitation of our own fragility.

And yet, as appealing a fear as this seems to be, it is also unjustified. There’s nothing that suggests even simple entities can carry out single-“purpose” goals in a complex network. The complexity of the network itself prevents total exploitation by simple strategies.

Consider the wonderful game-theoretic models built on very simple games like the Prisoner’s Dilemma: pure exploitation strategies simply don’t work over the long term as domination strategies, while reciprocal strategies like Tit-for-Tat thrive. It turns out that even in very simple situations, domination and total exploitation is a poor survival strategy for the exploiter.
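
Here’s a rough sketch of that result – my own toy round-robin with the conventional payoff values (T=5, R=3, P=1, S=0), not Axelrod’s actual tournament code: three reciprocating Tit-for-Tat players and one pure exploiter playing iterated games against each other.

```python
from itertools import combinations

# Payoffs for (my move, your move): C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's last move
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'  # pure exploitation

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

players = {'tft_1': tit_for_tat, 'tft_2': tit_for_tat,
           'tft_3': tit_for_tat, 'exploiter': always_defect}
totals = {name: 0 for name in players}
for (name_a, a), (name_b, b) in combinations(players.items(), 2):
    score_a, score_b = play(a, b)
    totals[name_a] += score_a
    totals[name_b] += score_b

print(totals)
```

The exploiter ekes out a narrow win in any single pairing (104 vs. 99 over 100 rounds) but finishes dead last across the population (312 vs. 699 for each reciprocator) – exploitation pays once, reciprocity compounds.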

So domination itself becomes a nuanced strategy, subject to all sorts of entropy, adjustments and complexity.

Maybe this isn’t a convincing rebuttal. After all, what if someone created a really simplistic AI and armed it with nuclear warheads? Certainly even a clumsy system (“at X time or under Y conditions, nuke everything”) armed with nukes would have the capability to destroy us all. Even this isn’t a justified fear. In the first place, something that simple wouldn’t be AI-like in any sense. So fearing AI is a misplaced fear; the fear is really about the capability of the nuke. Insert bio-weapons or whatever other WMD one wants to consider. In all of those cases it has nothing to do with the wielder of the weapon and its intelligence, and everything to do with the weapon.

However, even a total fear of WMDs is myopic. We simply do not know what the fate of humankind, or of the earth, would be should some entity launch all-out strategies of mass destruction. Not that we should attempt to find out, but it seems a tad presumptuous for anyone to pretend to know exactly what would happen at the scales of total annihilation.

Our only possible data point for what total annihilation of a species on earth might look like is that of the dinosaurs and other ancient species. We have many theories about their mass extinction, but we don’t really know; that extinction took a very long time, was selective, and didn’t end in total annihilation (hell, it likely led to humanity… so…) [see http://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event for some info].

The universe resists simple explanations and that’s likely a function of the fact that the universe is not simple.

Complex behavior of adaptation and awareness is very unlikely to bring about the total annihilation of humankind. A more likely threat comes from simple things that accidentally exploit fragility – a comet striking the earth and blotting out the sun, so that anything requiring the sun to live goes away. It’s possible we could invent, or have invented, simple machines that can create what amounts to a comet strike. And what’s there to fear even in that? Only if one believes that humanity, as it is, is the thing to protect is any fear about its annihilation – by any means, “intelligent” or not – justified.

Protecting humanity as it is is a weak logical position as well, because there’s really no way to draw a box around what humanity is or whether there’s some absolute end state. Worse than that, it strikes me personally that people’s definition of humanity, when they promote various fears of technology or the unknown, is decidedly anti-aware and flies counter to a sense of exploration. That isn’t a logical position – it’s a choice. We can decide to explore or decide to stay as we are. (Maybe.)

The argument here isn’t for an unrestrained pursuit of AI. Not at all. AI is simply a term, not something to chase or prevent unto itself – it literally, theoretically, isn’t actually a thing. The argument here is for restraint through questioning and exploration. The argument is directly against fear itself. Fear is the twin of absolute belief – the confusion of pattern recognition spiraling into a steady state. A fear is only justified by an entity against an immutable pattern.

For those who fear AI: you must, by extension, fear the evolution of human capability – and the capability of any animal, or any network of rocks, and so on. And the reality is… all those fears will never result in any preventative measures against destruction. Fear is stasis. Stasis typically leads to extinction – eons of evidence make this clear. Fear-peddlers are really promoting stasis, and that’s the biggest threat of all.
