Posts Tagged ‘big data’

Recently there’s been hubbub around Artificial Intelligence (AI) and our impending doom if we don’t do something about it. While it’s fun to scare each other in that horror/sci-fi movie kind of way, there isn’t much substance behind the various arguments floating about regarding AI.

The fears people generally have are about humans losing control, and more specifically about an unrestrained AI exacting its own self-derived and likely world-dominating objectives on humankind and the earth. These fears aren’t unjustified in a broad sense. They simply don’t apply to AI – neither the artificial nor the intelligence part. More importantly, the fears have nothing to do with AI but instead with the age-old fear that humankind might not be the point of all existence.

We do not have a functional description of intelligence, period. We have no reliable way to measure it. Sure, we have IQ, other tests of “ability” and “awareness”, and the Turing Test, but none of these actually tells you anything about “intelligence” or what might actually be going on with things we consider intelligent. We can measure whether some entity accomplishes a goal, performs some behavior reliably, or demonstrates new behavior acquisition in response to changes in its environment. But none of those things establish intelligence or the presence of an intelligent being. They certainly don’t establish something as conscious or self-aware or moral or purpose-driven, nor any other reified concept we choose as a sign of intelligence.

The sophisticated fear peddlers will suggest that this is just semantics and that we all share a common-sense understanding of what intelligence is. This is simply not true. We don’t. Colleges can’t fully count on the SAT, phrenology turned out to be horribly wrong, the Turing Test isn’t proof by definition, and so on. Go ahead, do your own research on this. No one can say just exactly what intelligence is or how to measure it. Is it being able to navigate a particular situation really well? Is it the ability to assess the environment? Is it massive pattern matching? Is it particular to our 5 senses? One sense? Is it brain-based? Is it pure abstract thinking? What exactly does it mean to be intelligent?

It’s worse for the concept of artificial. It’s simply the wrong term. Artificial comes from very old ideas about what’s natural and what’s not. What’s real is what the earth itself does, not the work of some human process or machine. Artificial things are made of metals, powered by electricity and coal, and involve gears and circuits. Errrrr. Wait… many things the earth makes are made of metal, and their locomotion comes from electricity or the burning of fuel, like coal. In fact, humans were made by the earth/universe. The division between artificial and natural is extremely blurry, and it’s non-existent for a reified concept like intelligence. We don’t need the term artificial nor the term intelligence. We need to know what we’re REALLY dealing with.

So here we are… being pitched a fearsome monster of AI which has zero concrete basis, no way to observe it, and zero examples of its existence as described in most discussions. But still the monster is in the closet, right?

For the sake of argument (which is all these counter-factual future predictions of doom are), let’s assume there is some other entity or set of entities that is more “intelligent” than humans. We will need to get a sense of what that would look like.

Intelligence could be loosely described as the presence of complex, nuanced behaviors exhibited in response to the environment. Specifically, an entity is observed as intelligent if it responds to changing conditions (internal as well as environmental) effectively. It must recognize changes, be able to adjust to those changes, and evaluate the consequences of the changes it makes, as well as any changes the environment has made in response.

What seems to be the basis of intelligent behavior (the ability to respond to complex contingencies) in humans comes from the following:

  • Genetic and epigenetic effects/artifacts, evolved over millions of years of evolutionary experiments, e.g. body structures, fixed action patterns
  • Sensory perception from the 5 basic senses, e.g. sight, touch, etc.
  • Ability to pattern match in a complex nervous system, e.g. neurological system, various memory systems
  • Cultural/historical knowledge base, e.g. generationally selected knowledge trained early into a new human through child rearing, media and school
  • Plastic biochemical body capable of replication, regeneration and other anti-fragile effects, e.g. stem cells, neuro-plasticity
  • More really complex stuff we have yet to uncover

Whatever AI we think spells our demise will most likely have something like the above (something functionally equivalent). Right? No? While there is a possibility that some completely different type of “intelligent” being exists, what I’m suggesting is that the general form of anything akin to “intelligent” would have these features:

  • Structure that has been selected for fitness over generations in response to its environment and overall survivability
  • Multi-modal information perception/ingestion
  • Advanced pattern recognition and storage
  • A knowledge reservoir (previously explored patterns) to pull from, reducing the need for a new entity to explore the space of all possibilities for survival
  • A resilient, plastic replication mechanism capable of abstract replication and structural exploration

And distilling that down to even rawer abstractions:

  • multi-modal information I/O
  • complex pattern matching
  • large memory
  • efficient replication with random mutation and mutation from selected patterns

What are the implementation limits of those abstracted properties? Turns out, we don’t know. It’s very murky. Consider a rock. Few of us would consider a plain old rock intelligent. But why not? What doesn’t it fulfill in the above? Rocks adjust to their environment – think of erosion and the overall rock cycle. Their structures contain eons of selection and hold a great deal of information – think of how the environment is encoded in the generational build of a rock, its crystal structure and so forth. Rocks have an efficient replication scheme – again, think of the rock cycle, the makeup of a rock being able to be absorbed into other rocks and so forth.

Perhaps you don’t buy that a rock is intelligent. There’s nothing in my description of intelligence, or in the reified other definitions of intelligence, that absolutely says a rock isn’t intelligent. It seems to fulfill the basics… it just does so over timescales we’re not normally looking at. A rock won’t move through a maze very quickly or solve a math problem in our lifetime. I posit, though, that it does do these things over long expanses of time. The network of rocks that forms mountains and river beds and ocean bottoms and the entire earth’s crust and the planets of our solar system exhibits the abstract properties above quite adeptly – just at spacetime scales we’re not used to talking about in these types of discussions.

I could go on to give other examples such as ant colonies, mosquitoes, dolphins, pigs, The Internet and on and on. I doubt many of these examples will convince many people as the reified concept of “intelligence as something” is so deeply embedded in our anthropocentric world views.

And so I conclude that those who are afraid, and want others to be afraid, of AI are actually afraid of things that have nothing to do with intelligence – that humans might not actually be the alpha and omega, and that we are indeed very limited, fragile creatures.

The first fear, from anthropocentrists, is hard to dispel. It’s personal. All the science and evidence from observation of the night sky and the depths of our oceans make it very clear that humans are a small evolutionary branch of an unfathomably large universe. But each of us struggles with our view from within – the world literally, to ourselves, revolves around us. Our vantage point is such that from our 5 senses everything comes into us. Our first-person view is such that we experience the world relative to ourselves. Our memories are made of our collections of first-person experiences. Our body seems to respond from our own relation to the world.

And so the fear of an AI that could somehow take our “top of the food chain” position makes sense while still being unjustified. The reality is… humans aren’t the top of any chain. The sun will one day blow up and quite possibly take all humans out with it. Every day the earth spins and whirls and shakes with winds, heat, snow, quakes and, without intention, takes humans out in the process. And yet we don’t fear those realities the way folks are trying to get us to fear AI. The key to this fear seems to be intention. We’re afraid of anything that has the INTENT, the purpose, the goal of knocking humans – and our own selves – out of the central position of existence.

And where, how, when, why would this intent arise? Out of what does any intent arise? Does this intent actually exist? We don’t really have any other data store to try to derive an answer to this other than humanity’s and our own personal experience and histories. Where has the intent to dominate or destroy come from in humans? Is that really our intent when we do it? Is it a universal intent of humankind? Is it something intrinsically tied to our make up and relation to the world? Even if the intent is present, what are its antecedents? And what of this intent? If the intent to dominate others arises in humans how are we justified in fearing its rise in other entities?

Intent is another reified concept. It doesn’t really exist or explain anything. It is a word that bottles up a bunch of different things going on. We have no more intention than the sun. We behave. We process patterns and make adjustments to our behavior – verbal and otherwise. Our strategies for survival change based on contingencies, and sometimes our pattern recognition confuses information – we make false associations about what is threatening our existence or impeding our basic needs and wants (chasing things that activate our adrenaline and dopamine production…). It’s all very complex. Even our “intentions” are complex causal thickets (a concept I borrow from William Wimsatt).

In this complexity it’s all incredibly fragile. Fragile in the sense that our pattern recognition is easily fooled. Our memories are faulty. Our bodies get injured. Our brains are only so plastic. And the more our survival depends on rote knowledge, the less plastic our overall machinery can be. Fragile, as referred to here, is a very general concept about the stability of any given structure or pattern – a belief system, a behavioral schedule, biochemical relations, rock formations… any structure.

The fear involved in fragility and AI is really about less complex entities that are highly specialized in function, sensory scope and pattern matching ability. The monster presented is a machine or group of machines hell-bent on human subordination and destruction, armed with the weaponry and with no fragility in its function or intent – it cannot be diverted by itself nor by an outside force.

OR

We fear the accidental AI. The AI that accidentally happens into human destruction as a goal.

In both cases it is not intelligence we fear but instead simple exploitation of our own fragility.

And yet, as appealing as a fear this seems to be, it is also unjustified. There’s nothing that suggests even simple entities can carry out single “purpose” goals in a complex network. The complexity of the network itself prevents total exploitation by simple strategies.

Consider the wonderful game-theoretic models built on very simple games like the Prisoner’s Dilemma, where it turns out that pure exploitation strategies simply don’t work over the long term as domination strategies – reciprocal strategies like Tit-For-Tat outlast them. Even in very simple situations, domination and total exploitation turn out to be a poor survival strategy for the exploiter.

So domination itself becomes a nuanced strategy, subject to all sorts of entropy, adjustments and complexity.
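To make that concrete, here is a minimal sketch of an iterated Prisoner’s Dilemma in plain Python. The payoff numbers are the standard Axelrod-style values; the strategies and round count are illustrative assumptions, not anyone’s canonical implementation:

```python
# Minimal iterated Prisoner's Dilemma sketch (standard Axelrod-style payoffs).
# Strategy names, payoffs and round count are illustrative assumptions.

PAYOFFS = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"  # pure exploitation, no matter what

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror whatever the opponent did last.
    return opponent_history[-1] if opponent_history else "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(always_defect, tit_for_tat))  # (204, 199): one cheap win, then mutual defection
print(play(tit_for_tat, tit_for_tat))    # (600, 600): sustained cooperation pays far more
```

The exploiter “wins” its single encounter with a reciprocator and still ends up far poorer than two reciprocators playing each other – which is the sense in which total exploitation is a poor long-term survival strategy.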

Maybe this isn’t a convincing rebuttal. After all, what if someone created a really simplistic AI and armed it with nuclear warheads? Certainly even a clumsy system (“at X time or under Y conditions, nuke everything”) armed with nukes would have the capability to destroy us all. Even this isn’t a justified fear. In the first place, if it were that simple it wouldn’t be anything AI-like in any sense. So fearing AI is a misplaced fear; the fear is really about the capability of a nuke. Insert bio-weapons or whatever other WMD one wants to consider. In all of those cases it has nothing to do with the wielder of the weapon and its intelligence, and everything to do with the weapon.

However, even a total fear of WMDs is myopic. We simply do not know what the fate of humankind, or of the earth, would be should some entity launch all-out strategies of mass destruction. Not that we should attempt to find out, but it seems a tad presumptuous for anyone to pretend to know exactly what would happen at the scales of total annihilation.

Our only possible data point for what total annihilation of a species on earth might look like is that of the dinosaurs and other ancient species. We have many theories about their mass extinction, but we don’t really know; and that extinction took a very long time, was selective, and didn’t end in total annihilation (hell, it likely led to humanity… so…) [see this for some info: http://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event].

The universe resists simple explanations and that’s likely a function of the fact that the universe is not simple.

Complex behavior of adaptation and awareness is very unlikely to bring about total annihilation of humankind. A more likely threat is from simple things that accidentally exploit fragility (a comet striking the earth, blotting out the sun so that anything that requires the sun to live goes away). It’s possible we could invent and/or have invented simple machines that can create what amounts to a comet strike. And what’s there to fear even in that? Only if one believes that humanity, as it is, is the thing to protect is any fear about its annihilation by any means, “intelligent” or not, justified.

Protecting humanity as it is is a weak logical position as well, because there’s really no way to draw a box around what humanity is and whether there’s some absolute end state. Worse than that, it strikes me personally that people’s definition of humanity, when promoting various fears of technology or the unknown, is decidedly anti-aware and flies counter to a sense of exploration. That isn’t a logical position – it’s a choice. We can decide to explore or decide to stay as we are. (Maybe)

The argument here isn’t for an unrestrained pursuit of AI. Not at all. AI is simply a term and not something to chase or prevent unto itself – it literally, theoretically, isn’t actually a thing. The argument here is for restraint through questioning and exploration. The argument is directly against fear itself. Fear is the twin of absolute belief – the confusion of pattern recognition spiraling into a steady state. A fear is only justified by an entity against an immutable pattern.

If you fear AI, then you must, by extension, fear the evolution of human capability – and the capability of any animal, or any network of rocks, etc. And the reality is… all those fears will never result in any preventative measures against destruction. Fear is stasis. Stasis typically leads to extinction – eons of evidence make this clear. Fear-peddlers are really promoting stasis, and that’s the biggest threat of all.


We all are programmers. And I want to explain what programming really is. Most people think of it as writing instructions that a computer will then interpret and go do what those instructions say. Only in the simplest sense does this fully encompass what programming is.

Programming in the broadest sense is a search through the computational universe for interesting patterns that can be interpreted by other patterns. A few definitions are in order. A pattern is simply some set of data pulled from the computational universe (from my own investigations/research/logic, everything is computational). Thus a pattern could be a sentence of English words, a fragment of a program written in Java, DNA strands, a painting, or anything else. Some patterns are able to interact with other patterns (information processing): a laptop computer can interpret Microsoft Office documents, or a replicated set of DNA (a human) can interpret Shakespeare and put on a play. A program is simply a pattern that interacts with other programs.

When we write programs we are simply searching through the space of symbolic representations in whatever programming language. When a program doesn’t work or doesn’t do what we want, we haven’t found a pattern of symbols that the processing pattern interprets the way we prefer. We sometimes call those “bugs” in the software. Underneath it all, a buggy program is simply another program – just not the one we want.

I call it a search to bring particular activities to mind. When we say we write or create a program, it seems to engender only a limited set of methods for finding programs, practiced by a limited set of people called programmers. Calling it a search reflects reality AND opens our eyes to the infinite number of ways to find interesting patterns and to interpret them. The space of programs is “out there”; we just have to mine it for the programs/patterns we wish to interpret.
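As a concrete (if toy) illustration of mining that space, here is a minimal sketch in plain Python – the tiny grammar and the example-based goal are my own illustrative assumptions – that enumerates a small space of symbolic patterns and keeps the first one the interpreter accepts as doing what we want:

```python
# "Programming as search": enumerate a tiny space of symbolic patterns and
# keep the first one the interpreter accepts as doing what we want.
# The grammar and the example-based goal are toy assumptions.
import itertools

atoms = ["x", "1", "2", "3"]   # building blocks of our tiny program space
ops = ["+", "-", "*"]

# The goal, specified by examples: a program that doubles its input.
examples = [(0, 0), (3, 6), (10, 20)]

for a, op, b in itertools.product(atoms, ops, atoms):
    src = f"{a} {op} {b}"              # a candidate pattern of symbols
    f = eval(f"lambda x: {src}")       # hand the pattern to an interpreter
    if all(f(x) == y for x, y in examples):
        print("found program:", src)   # e.g. "x + x"
        break
```

The “bug” case from earlier falls out naturally here: every candidate the loop rejects is still a program, just not the one we want.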

Programs/patterns that become widely used owe that use to the frequency with which those patterns can be interpreted. For example, Windows or MacOS have billions of interpreting machines on which their programs can be interpreted. Or on an even bigger scale, DNA “programs” have trillions of interpreters on just this planet alone.

Using a program is nothing more than interpreting it. When you type a document in MS Word, the OS is interpreting your keystrokes and refreshing the screen with pixels that represent your words, all while MS Word itself is checking your grammar against rules put in place by programmers who interpreted a grammar reference, and so on and so on. For sufficiently complex programs we aren’t able to say whether a program “does the right thing.” Only simple programs are completely verifiable. This is why programs exist only as patterns that are interpreted.

Humans have become adept at interpreting patterns most useful for the survival of human genes. With the advent of digital computers and related patterns (tech) we are now able to go beyond the basic survival of our genes and instead mine for other patterns that are “interesting” and interpretable by all sorts of interpreters. I don’t know where the line is between good for survival and not, but it’s really not a useful point here. My point is that with computers we’re able to just let machines go mining the space of existence in much grander ways and interpret those results. Obvious examples include the SETI project mining for signs of aliens, the LHC mining the space of particle collisions, Google search mining the space of webpages and now human roadways, Facebook mining everyone’s social graph, and so on. Non-obvious examples include artists mining the space of perceptively interesting things, doctors mining the space of symptoms, businesses mining the space of sellable products, and so on.

Let me consider that last one in a little more detail. Every business is a program. It’s a pattern (a pattern of patterns) interpreting the patterns closest to it (competition and the industry) and finding patterns for its customers (persons or governments or companies or other patterns) to buy (currency is just patterns interpreted). Perhaps before computers and the explosion of “digital information” it wasn’t so obvious that this is what a business is. But now that so much of the world is digital and electronic, how many businesses actually deal with physical goods and paper money? How many businesses have ever seen all their employees or customers? How many businesses exist really only as brief “ideas”? What are all these businesses if not simply patterns of information interpreted as “valuable”? And isn’t every business at this point basically coming down to how much data it can amass and interpret better/more efficiently than the competition? How are businesses funded, other than by algorithms trading the stock market at high frequency, making banks and VCs wealthy so their analysts can train their models to identify the next program, er, business to invest in…

When you get down to it, everything is programming. Everything we do in life, every experience, is programming. Patterns interpreting patterns.

The implications of this are quite broad. This is why I claim the next major “innovation” we all will really notice is an incredible leap in the capability of “programming languages”. I don’t know exactly what they will look or feel like, but as the general population desires more programmability of the world in a “digital” or what I call “abstract” way, programming languages will have to become patterns themselves that are more easily interpreted (written by anyone!). The more the stuff we buy and sell is pure information (think of a future in which we’re all just trading software and 3D-printer object designs, which is what industrial manufacturers basically do), the more we all will not want to wait for someone else to reprogram the world around us; we all will want to do it ourselves. Education, health care, transportation, living, etc. are all becoming more and more modular and interchangeable, like little chunks of programs (usually called libraries or plugins). So all these things we traditionally think of as “the real world” are actually becoming little patterns we swap in and out of. Consider how many of you have taken an Uber from your phone, stayed at an Airbnb, ordered an eBook from Amazon, sent a digital happy birthday and so on… Everything is becoming a symbolic representation, more and more easily programmed to be just how we want.

And so this is why big data is all the rage. Not because it’s a cool fad or some new tech thing… it’s because it’s the ONLY THING. All of these “patterns” and “programs” I’m talking about, taken on the whole, are just the SPACE OF DATA for us to mine for all the patterns. The biggest program of all is EVERYTHING in EXISTENCE. On a smaller scale, the more complicated and complex a program is, the more it looks indistinguishable from a huge pile of data. The more clever we find our devices, the more it turns out they’re an inseparable entanglement of data and programs (think of the autospell on your phone… when it messes up your spelling, that’s just the data it’s gleaned from you…). Data = programs. Programs = data. Patterns = patterns. Our world is becoming a giant abstract ball of data; sometimes we’re symbolizing it, but more and more often we’re able to compute directly (interpret natively/without translation) with objects as they exist (genetic modification, quantum computing, wetware, etc.). In either case it’s all equivalent… only now we’re becoming aware of this equivalence, if not in “mind” then in behavior, or in what we expect to be able to do.
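The “Data = programs” line can be taken quite literally. A minimal sketch (plain Python; the expression is an arbitrary illustrative choice): the very same string is inert data to one interpreter and a runnable program to another:

```python
# The same pattern -- a short string -- is inert data to one interpreter
# and a runnable program to another. The expression is an arbitrary choice.
source = "lambda x: x * 2"

as_data = len(source)      # treated as data: just 15 characters
as_program = eval(source)  # treated as a program: a doubling function

print(as_data)         # 15
print(as_program(21))  # 42
```

Which interpreter you hand the pattern to is the only difference between “data” and “program”.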

Face it. You are a programmer. And you are big data.


Some people hate buzzwords, like Big Data. I’m OK with it, because unlike many buzzwords it actually describes pretty much exactly what it should: a world increasingly dependent on algorithmic decision making, data trails, profiles, digital fingerprints, anomaly tracking… Not everything we do is tracked, but enough is that it definitely exceeds our ability to process it and do genuinely useful things with it.

Now, is it the tools/technology that makes Big Data so challenging for businesses? Somewhat, I suppose, but I think it’s more behavioral than anything. Humans are very good at intuitive pattern recognition. We’re taking in Big Data every second – through our senses, across our neural systems and so on – and we do all this without being “aware” of it. With explicit Data Collection and explicit Analysis, like we do in business, we betray our intuitions – or rather, our intuition betrays us.

How so?

We often go spelunking through big data intuiting things that aren’t real.  We’re collecting so much data that it’s pretty easy to find patterns, whether they matter or not.  We’re so convinced there’s something to find there, we often Invent A Pattern.

With the ability to collect so much data our intuition tells us if we collect more data we’ll find more patterns.  Just Keep Collecting.
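That intuition runs straight into the multiple-comparisons trap. A minimal sketch in plain Python (the column counts, seed and “top 3” cutoff are arbitrary assumptions): generate nothing but random noise, scan enough column pairs, and “patterns” dutifully appear:

```python
# "Invent A Pattern": scan enough pairs of pure-noise columns and
# impressive-looking correlations appear by chance alone.
# Column counts, seed and the "top 3" cutoff are arbitrary assumptions.
import itertools
import random

random.seed(7)
n_rows, n_cols = 50, 40
data = [[random.gauss(0, 1) for _ in range(n_rows)] for _ in range(n_cols)]

def corr(xs, ys):
    # Plain Pearson correlation, no libraries needed.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# Rank all 780 column pairs by correlation strength and admire the "discoveries".
pairs = sorted(itertools.combinations(range(n_cols), 2),
               key=lambda p: abs(corr(data[p[0]], data[p[1]])),
               reverse=True)
for i, j in pairs[:3]:
    print(f"columns {i} and {j}: r = {corr(data[i], data[j]):+.2f}")  # looks like signal; isn't
```

More columns means more pairs to scan, so collecting more data manufactures more of these phantom patterns, not fewer.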

And then! We have another problem: we’re somewhat limited by our explicit training.

We’re so accustomed to certain interfaces for explicitly collected data – Spreadsheets, Relational Database GUIs, Stats programs – that we find it hard to imagine working with data in any other way. We’re not very good at transcoding data into more useful forms, and our tools weren’t really built to make that easier. We’re now running into “A Picture is Worth a Thousand Words”, or some version of Computational Irreducibility. Our training has taught us to go looking for shortcuts or formulas to compress Big Data into a Little Formula (you know, take a dataset of 18 variables and reduce it to a 2-axis chart with an up-and-to-the-right linear regression line).

Fact is, that’s just not how it works. Sometimes Big Data needs a Big Picture because it’s a really complicated network of interactions. Or it needs a full simulation, and so on.
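Anscombe’s quartet is the classic demonstration of why the Little Formula fails. Here is a minimal sketch in plain Python (data values from Anscombe’s published 1973 quartet; only two of the four datasets shown): a noisy linear trend and a clean parabola reduce to virtually the same regression line:

```python
# Anscombe's quartet (two of the four published datasets): very different
# shapes, virtually identical least-squares "Little Formula".
x1 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]  # noisy line
x2 = x1
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]   # clean parabola

def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

print(fit_line(x1, y1))  # ~ (0.500, 3.000)
print(fit_line(x2, y2))  # ~ (0.500, 3.001) -- same line, wildly different data
```

The summary formula can’t tell these datasets apart; only the Big Picture (actually looking at the shape of the data) can.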

Another way to put this: businesses are accustomed to the idea of Explainability. Businesses thrive on Business Plans, Forecasts, etc., so they force an overly simplistic, reductionist analysis of the business and drive everything against that type of plan. Driving against that type of plan ends up shaping internal tools and products to be equally reductionist.

To get the most out of Big Data we literally have to retrain ourselves against our deepest built-in approaches to data collection and analysis. First, don’t get caught up in specific toolsets. Re-imagine what it means to analyze data. How can we transcode data into a different picture that illuminates real, useful patterns without reducing it to patterns we can explain?

Sometimes the best way to do this is to give the data away to hordes and hordes of humans and see what crafty things they do with it. Then step back and see how it all fits together.

I believe this is what Facebook has done.  Rather than analyze the graph endlessly for their own product dev efforts, they gave the graph out to others and saw what they created with it.   That has been a far more efficient, parallel processing of that data.

It’s almost like flipping the idea of data analysis and business planning on its head.   You figure out what the data “means” by seeing how people put it to use in whatever ways they like.
