
Archive for the ‘computer science’ Category

And I have to start this essay with a simple statement that it is not lost on me that all of the above is 100% derived from my own history, studies, jobs, art works, and everything else that goes into me.  So maybe this is just a theory of myself, or not even a theory, but yet another expression in a lifetime of expressions.   At the very least I enjoyed 20 hrs of re-reading some great science, crafting what I think is a pretty neat piece of art work, and then summarizing some pondering.   Then again, maybe I’ve made strides on some general abstract level.  In either case, it’s just another contingent reconfiguration of things.

At the end I present all the resources I read and consulted during the writing (but not editing) and the making of the embedded 19×24 inch drawing and ink painting (which has most of this essay written and drawn into it).   I drank 4 cups of coffee over 5 hrs, had 3 tacos and 6 hotwings during this process. Additionally I listened to “The Essential Philip Glass” while sometimes watching the movie “The Devil Wears Prada” and the latest SNL episode.

——————-  

There is a core problem with all theories and theory at large – they are not The Truth and do not interact in the universe like the thing they refer to.   Theories are things unto themselves.  They are tools to help craft additional theories and to spur on revised dabbling in the world.


We have concocted an unbelievable account of reality across religious, business, mathematical, political and scientific categories.  Immense stretches of imagination are required to connect the dots from the category theory of mathematics to the radical behaviorism of psychology to machine learning in computer science to gravitational waves in cosmology to color theory in art.  The theories themselves have no easy bridge – logically, spiritually or even syntactically.

Furthering the challenge is the lack of coherence and interoperability of measurement and crafting tools.   We have forever had the challenge of information exchange between our engineered systems.   Even our most finely crafted gadgets and computers still suffer from data exchange corruption.   Even when we seem to find some useful notion about the world it is very difficult for us to transmit that notion across mediums, toolsets and brains.

And yet, therein lies the reveal!

A simple, yet imaginative re-think provides immense power.   Consider everything as a network.  Literally the simplest concept of a network – a set of nodes connected by edges.   Consider everything as part of a network, a subnetwork of the universe.  All subnetworks are connected more or less to the other subnetworks.   From massive stars to a single boson, all are nodes in a network, and those networks are nodes in networks of networks.   Our theories are networks of language, logic, inference, experiment, context.  Our tools are just networks of metals, atoms, and light.   It’s not easy to replace your database of notions reinforced over the years with this simple idea.
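
To make that concrete in the plainest possible terms, here’s a minimal sketch (in Python, with made-up node names) of a network as nothing more than a set of nodes and a set of edges, and a subnetwork as a subset of both:

```python
# A minimal sketch: a network is nothing but nodes and edges.
# Node names are illustrative placeholders, not a physical model.

nodes = {"star", "boson", "person", "theory", "tool"}
edges = {("star", "boson"), ("person", "theory"), ("theory", "tool"), ("person", "star")}

def neighbors(node, edge_set):
    """Everything a node is directly connected to."""
    return {b for a, b in edge_set if a == node} | {a for a, b in edge_set if b == node}

def subnetwork(seed_nodes, edge_set):
    """A subnetwork: a subset of nodes plus the edges that stay inside it."""
    return seed_nodes, {(a, b) for a, b in edge_set if a in seed_nodes and b in seed_nodes}

print(neighbors("person", edges))                       # {'theory', 'star'}
print(subnetwork({"person", "theory", "tool"}, edges))   # the "theories and tools" subnetwork
```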

But really ask yourself why that is so hard when you can believe that black holes collide and send out gravitational waves that slightly wobble spacetime 1.3 billion light years away – or, if you believe in the Christian God, consider how it’s believable that woman was created from the rib of a guy named Adam.    It’s all a bit far-fetched, but we buy these explanations because the large network of culture and tradition and language and semiotics has built our brains/worldviews up this way.

Long ago we learned that our senses are clever biological interpreters of internal and external context.  Our eyes do not see most of “reality” – just a pretty coarse (30 frames per second) and small chunk of electromagnetic waves (visible light).   In the 1930s we learned that even mathematics itself and the computers we’d eventually construct cannot prove many of the claims they will make; we just have to accept those claims (incompleteness and the halting problem).

These are not flaws in our current understanding or current abilities.  These are fundamental features of reality – any reality at all.  In fact, without this incompleteness and clever loose interpretations of information between networks there would be no reality at all – no existence.   This is a claim to return to later.

In all theories at the core we are always left with uncertainty and probability statements.   We cannot state or refer to anything for certain, we can only claim some confidence that what we’re claiming or observing might, more or less, be a real effect or relation.   Even in mathematics with some of the simplest theorems and their logical proofs we must assume axioms we cannot prove – and while that’s an immensely useful trick it certainly doesn’t imply that any of the axioms are actually true and refer to anything that is true or real.

The notion of probability and uncertainty is no easy subject either.   Probability is a measure of what?   Is it a measure of belief (Bayes) that something will happen given something else?  Is it a measure of lack of information – that this claim carries only X% of the information?  Is it a measure of complexity?
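
As a tiny worked example of the “measure of belief” reading (all numbers invented purely for illustration), Bayes’ theorem is just arithmetic over a prior belief and the evidence:

```python
# Bayes' theorem as a measure of belief: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers are invented for illustration, not measurements.

prior = 0.01                 # belief the effect is real before seeing evidence
p_evidence_if_real = 0.90    # chance of this evidence if the effect is real
p_evidence_if_not = 0.05     # chance of seeing it anyway (false positive)

p_evidence = p_evidence_if_real * prior + p_evidence_if_not * (1 - prior)
posterior = p_evidence_if_real * prior / p_evidence

print(round(posterior, 3))   # ~0.154: belief updated, still nowhere near certainty
```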


Again, the notion of networks is incredibly helpful.  Probability is a measure of contingency.   Contingency, defined and used here, is a notion of connectivity of a network and the nodes within the network.  There need be no hard and fast assignment of the unit of contingency – different measures are useful and instructive for different applications.  There’s a basic notion at the heart of all of them: contingency is a cost function of going from one configuration of the network to another.

And that leads to another startling idea.   Spacetime itself is just a network (an obvious intuition from my previous statements), and everything is really just a spacetime network.    Time is not the ticks on a clock nor an arrow marching forward.  Time is nothing but a measure of the steps to reconfigure a network from state A to some state B.   Reconfiguration steps are not done in time, they are time itself.
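
One minimal way to make “contingency as a cost of reconfiguration” and “time as reconfiguration steps” concrete – and it is only one illustrative choice of unit, since no unit is privileged – is to count the single-edge changes separating two configurations of the same nodes:

```python
# Contingency as a cost function between two configurations of the same nodes:
# here, simply the number of single-edge additions/removals separating them.
# "Time", in this toy reading, is just that count of reconfiguration steps.

def contingency(config_a, config_b):
    """Edges present in one configuration but not the other."""
    return len(config_a ^ config_b)   # symmetric difference of the edge sets

state_a = {("a", "b"), ("b", "c")}
state_b = {("a", "b"), ("a", "c"), ("c", "d")}

print(contingency(state_a, state_b))  # 3 edge changes separate state A from state B
```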

(most of my initial thinking comes from Wolfram and others working on this long before my thinking about it: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/ – Wolfram and others have done a ton of heavy lifting to translate the accepted theories and math into network terms).

This re-framing of everything into network thinking requires a huge amount of translation of notions of waves, light, gravity, mass, fields, etc into network conventions.  While attempting to do that in blog form is fun, and I’ve kept at it, the reality of the task is that no amount of writing about this stuff will make a sufficient proof or even a useful explanation of the idea for people.

Luckily, it occurred to me (a contingent network myself!) that everyone is already doing this translation and, even more startling, it couldn’t go any other way.   Our values and traditions started to be codified into explicit networks with the advent of written law and various cultural institutions like religion and formal education.   Our communities have now been codified into networks by online social networks.  Our location and travels have been codified by GPS satellites and online mapping services.  Our theories and knowledge are being codified into wikis and programs (Wolfram Alpha, Google Graph, deep learning networks, etc).   Our physical interpretations of the world have been codified into fine arts, pop arts, movies and now virtual and augmented realities.   Our inner events/context are being codified by wearable technologies.    And now the cosmos has unlocked gravitational waves for us, so even the mystery of black holes and dark matter will start being codified into knowledge systems.

It’s worth a few thoughts about Light, Gravity, Forces, Fields, Behavior, Computation.

  • Light (electromagnetic wave-particles) is the subnetwork encoding the total configurations of the entire universe and every subnetwork.
  • Gravity (and gravitational wave-particles) is the subnetwork of how all the subnetworks over a certain contingency level (mass) are connected.
  • The other 3 fundamental forces (electromagnetism, weak nuclear, strong nuclear) are also just subnetworks encoding how all subatomic particles are connected.
  • Field is just another term for network, hardly worth a mention.
  • Behavior observations are partially encoded subnetworks of the connections between subnetworks.  They do not encode the entirety of a connection except for the smallest, most simple networks.
  • Computation is time is the instruction set is a network encoding how to transform one subnetwork to another subnetwork.

These re-framed concepts allow us to move across phenomenal categories and up and down levels of scale and measurement fidelity.  They open up improved ways of connecting the dots between cross-category experiments and theories.   Consider radical behaviorism and schedules of reinforcement combined with the Probably Approximately Correct learning theory in computer science against a notion of light and gravity and contingency as defined above.

What we find is that learning and behavior based on schedules of reinforcement is actually the only way a subnetwork (say, a person) and a network of subnetworks (a community) could encode the vast contingent network (internal and external environments, etc).   Some schedules of reinforcement maintain responses better than others, and again here we find the explanation.  Consider a variable ratio schedule reinforcing a network (see here for more details: https://en.wikipedia.org/wiki/Reinforcement#Intermittent_reinforcement.3B_schedules).   A variable ratio schedule (and variations/compositions on it) is itself a richer contingent network than, say, a fixed ratio network.  That is, as a network encoding information between networks (essentially a computer program and data) the variable ratio has more algorithmic content to keep associations linked across many related network configurations.
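
Here’s an illustrative sketch (not a model of any particular experiment) of a fixed ratio next to a variable ratio schedule; the extra “algorithmic content” of the variable ratio shows up in the fact that its reinforcement sequence can’t be compressed into a single fixed counter:

```python
import random

# Illustrative sketch: fixed ratio (FR-5) vs variable ratio (VR-5) reinforcement.
# FR-5 reinforces every 5th response; VR-5 reinforces after a random number of
# responses averaging 5. The totals come out similar, but the VR sequence can't
# be summarized by one fixed counter - it carries more algorithmic content.

random.seed(0)

def fixed_ratio(n_responses, ratio=5):
    return [1 if (i + 1) % ratio == 0 else 0 for i in range(n_responses)]

def variable_ratio(n_responses, mean_ratio=5):
    rewards, until_next = [], random.randint(1, 2 * mean_ratio - 1)
    for _ in range(n_responses):
        until_next -= 1
        if until_next == 0:
            rewards.append(1)
            until_next = random.randint(1, 2 * mean_ratio - 1)
        else:
            rewards.append(0)
    return rewards

print(sum(fixed_ratio(100)), sum(variable_ratio(100)))  # similar counts, different structure
```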

Not surprisingly this is exactly the notion of gravity explained above.  Richer, more complex networks with richer connections to other subnetworks have much more gravity – that is they attract more subnetworks to connect.  They literally curve spacetime.

To add another wrinkle to the theory, it has been observed in a variety of categories that the universe seems to prefer computational efficiency.  Nearly all scientific disciplines from linguistics to evolutionary biology to physics to chemistry to logic end up with some basic notion of a “Path of Least Effort” (https://en.wikipedia.org/wiki/Principle_of_least_effort).  In the space of all possible contingent situations networks tend to connect in the computationally most efficient way – they encode each other efficiently.  That is not to say it happens that way all the time.  In fact, this idea led me to thinking that while all configurations of subnetworks exist, the most commonly observed ones (I use the term: robust) are the efficient configurations.  I postulate this explains mathematical constructs such as the Platonic solids and transcendental numbers and likely the physical constants.  That is, in the space of all possible things, the mean of the distribution of robust things is the mathematical abstraction.  While we rarely experience a perfect circle, we experience many variations on robust circular things… and right now the middle of that distribution is the perfect circle.
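
A toy worked example of “the middle of the robust variations is the abstraction”: average many imperfect, wobbly circles and the mean settles toward a single clean radius. This only illustrates the claim, it doesn’t prove it:

```python
import random

# Illustration: many imperfect "circular things" whose average tends toward the
# ideal circle (a single constant radius). All numbers are arbitrary.

random.seed(1)
true_radius, n_circles, n_points = 1.0, 500, 360

mean_radius = [0.0] * n_points
for _ in range(n_circles):
    wobbly = [true_radius + random.gauss(0, 0.1) for _ in range(n_points)]
    mean_radius = [m + r / n_circles for m, r in zip(mean_radius, wobbly)]

spread_one = max(wobbly) - min(wobbly)            # how un-circular one sample is
spread_avg = max(mean_radius) - min(mean_radius)  # how un-circular the average is
print(round(spread_one, 3), round(spread_avg, 3)) # the average is far closer to perfect
```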


Now, what is probably the most bizarre idea of all:  nothing is actually happening at the level of the universe nor at the level of a photon.  The universe just is.   A photon is just a single massless node; everything happens to it all at once, so nothing happens.

That’s right, despite all the words and definitions above with all the connotations of behavior and movement and spacetime… experience and happening and events and steps and reconfigurations are actually just illusions, in a sense – subnetworks describing other subnetworks.   The totality of the universe includes every possible reconfiguration of the universe – which obviously includes all theories, all explanations, all logics, all computations, all behavior, all schedules, in a cross product of each other.   No subnetwork is doing anything at all; it simply IS, and is that subnetwork within the specific configuration of the universe as part of the wider set of the whole.

This sounds CRAZY – until you look back on the history of ideas and see that this notion has come up over and over regardless of the starting point, the condition of the observational tools, the fads of language and the business of the day.  It is even observable in how so many systems “develop” first as “concrete” physical, sensory things… they end up yielding time and time again to what we call the virtual – strangely looping recursive networks.   Here I am not contradicting myself; instead, this is what exists within the fractal nature of the universe (multiverse!): it is self-similar all the way up and down scales and across all configurations (histories).

Theories tend to be ignored unless they are useful.   I cannot claim utility for everyone in this theory.  I do find it helpful for myself in moving between disciplines and not getting trapped in syntactical problems.   I find confirmation of my own cognitive bias in the fact that the technologies of loosely connecting the dots – GPS, hyperlinks, search engines, social media, citation analysis, Bayes, and now deep learning/PAC – have yielded a tremendous expansion of information and re-imagining of the world.


Currency, writing, art, music are not concrete physical needs and yet they mediate our labor, property, government, nation states.   Even things we consider “concrete” like food and water are just encodings of various configurations.  Food can be redefined in many ways and has been over the eons as our abstracted associations drift.   Water seems like a concrete requirement for us, but “us” is under constant redefinition.  Should people succeed in creating something human-like (however you define it) in computers or the Internet, it’s not clear water would be any more concrete than solar power, etc.

Then again, if I believe anything I’ve said above, it all already exists and always has.

 

———————————–

 

Chaitin on Algorithmic Information, just a math of networks.
https://www.cs.auckland.ac.nz/~chaitin/sciamer3.html

Platonic solids are just networks
https://en.m.wikipedia.org/wiki/Platonic_solid#Liquid_crystals_with_symmetries_of_Platonic_solids

Real World Fractal Networks
https://en.m.wikipedia.org/wiki/Fractal_dimension_on_networks#Real-world_fractal_networks

Correlation for Network Connectivity Measures
http://www.ncbi.nlm.nih.gov/pubmed/22343126

Various Measurements in Transport Networks (Networks in general)
https://people.hofstra.edu/geotrans/eng/methods/ch1m3en.html

Brownian Motion, the network of particles
https://en.m.wikipedia.org/wiki/Brownian_motion

Semantic Networks
https://en.wikipedia.org/wiki/Semantic_network

MPR
https://en.m.wikipedia.org/wiki/Mathematical_principles_of_reinforcement

Probably Approximately Correct
https://en.m.wikipedia.org/wiki/Probably_approximately_correct_learning

Probability Waves
http://www.physicsoftheuniverse.com/topics_quantum_probability.html

Bayes Theorem
https://en.m.wikipedia.org/wiki/Bayes%27_theorem

Wave
https://en.m.wikipedia.org/wiki/Wave

Locality of physics
http://www.theatlantic.com/science/archive/2016/02/all-physics-is-local/462480/

Complexity in economics
http://www.abigaildevereaux.com/?p=9%3Futm_source%3Dshare_buttons&utm_medium=social_media&utm_campaign=social_share

Particles
https://en.m.wikipedia.org/wiki/Graviton

Gravity is not a network phenomenon?
https://www.technologyreview.com/s/425220/experiments-show-gravity-is-not-an-emergent-phenomenon/

Gravity is a network phenomenon?
https://www.wolframscience.com/nksonline/section-9.15

Useful reframing/rethinking Gravity
http://www2.lbl.gov/Science-Articles/Archive/multi-d-universe.html

Social networks and fields
https://www.researchgate.net/profile/Wendy_Bottero/publication/239520882_Bottero_W._and_Crossley_N._(2011)_Worlds_fields_and_networks_Becker_Bourdieu_and_the_structures_of_social_relations_Cultural_Sociology_5(1)_99-119._DOI_10.11771749975510389726/links/0c96051c07d82ca740000000.pdf

Cause and effect
https://aeon.co/essays/could-we-explain-the-world-without-cause-and-effect

Human Decision Making with Concrete and Abstract Rewards
http://www.sciencedirect.com/science/article/pii/S1090513815001063

The Internet
http://motherboard.vice.com/blog/this-is-most-detailed-picture-internet-ever

Read Full Post »

From within the strange loop of self-reference the question “What is Data?” emerges.  Ok, maybe more practically the question arises from our technologically advancing world where data is everywhere, spouting from everything.  We claim to have a “data science” and now operate “big data” and have evolving laws about data collection and data use.   Quite an intellectual infrastructure for something that lacks identity or even a remotely robust and reliable definition.  Should we entrust our understanding and experience of the world to this infrastructure?   This question seems stupid and ignorant.  However, we have taken up a confused approach in all aspects of our lives by putting data ontologically on the same level as real, physical, actual stuff.    So now the question must be asked and must be answered and its implications drawn out.

Data is and Data is not.   Data is not data.   Data is not the thing the data represents or is attached to.   Data is but an ephemeral puff of exhaust from a limitless, unknowable universe of things and their relations. Let us explore.

Observe a few definitions and usage patterns:

Data According to Google

https://www.google.com/webhp?sourceid=chrome-instant&rlz=1CAZZAD_enUS639US640&ion=1&espv=2&ie=UTF-8#q=data+definition

The Latin roots point to the looming mystery.  “Give” -> “something given”.   Even back in history data was just “something”.   Almost an anti-definition.

Perhaps we can find clues from clues:

Crossword Puzzle Clues for “Data”

http://www.wolframalpha.com/input/?i=data&a=*C.data-_*Word-

Has there been a crossword puzzle word with broader meaning or more ambiguity than that?   “Food for thought?” seems to hit the nail on the head.   The clues boil down to: data is numbers, holdings, information, facts, figures, fodder, food, grist, bits.   Sometimes crunched and processed, sometimes raw.  Food for thoughts, disks, banks, charts and computers.


YouTube can usually tell us anything; here’s a video directly answering What Is Data:

Strong start in that video, Qualitative and Quantitative… and then by the end the video unwinds the definitions to include basically everything.

Maybe a technical lesson on data types will help elucidate the situation:

Data Types

Perhaps sticking to computers as a frame of reference helps us.   Data is stuff stored in a database, specified by data types.  What exactly is stored?   Bits on a magnetic or electric device (hard drive or memory chip) are arranged according to a structure defined by this “data”, which is defined or created or detected by sensors and programs…   So is the data the bit?  the electric signal?  the magnetic structures on the disk?  a pure idea regardless of physical substrate?
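
A small sketch of how slippery “what is stored” gets: the same four bytes mean entirely different things depending on which data type a program imposes on them:

```python
import struct

# The same raw bytes, read through different data types.
raw = b"\x40\x49\x0f\xdb"

as_int = struct.unpack(">i", raw)[0]    # a 32-bit signed integer: 1078530011
as_float = struct.unpack(">f", raw)[0]  # a 32-bit float: ~3.14159 (pi)
as_text = raw.decode("latin-1")         # four characters of text

print(as_int, as_float, repr(as_text))
```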

The confusing self-referential nature of the situation is wonderfully exploited by Tupper’s formula:

Tupper's formula

http://mathworld.wolfram.com/TuppersSelf-ReferentialFormula.html

What exactly is that?  It’s a pixel rendering (bits in memory turned into electrons shot at a screen or LED excitations) of a formula (which is a collection of symbols) that, when fed through a brain or a computer programmed by a brain, ends up producing a picture of the formula itself….
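
For the curious, the formula can be evaluated directly. The sketch below renders the 106×17 strip for any constant k; the famous self-referential k is hundreds of digits long and not reproduced here, so the placeholder constant just demonstrates the mechanics:

```python
# A sketch of Tupper's "self-referential" formula:
#   1/2 < floor( mod( floor(y/17) * 2**(-17*floor(x) - mod(floor(y),17)), 2 ) )
# Plotting the strip 0 <= x < 106, K <= y < K+17 draws whatever 106x17 bitmap
# the constant K encodes. The K below is a tiny arbitrary placeholder,
# NOT the famous hundreds-of-digits value.

K = 17 * 0b101011010110101   # placeholder constant

def pixel_on(x, y):
    # Integer form of the formula: test bit (17*x + y % 17) of floor(y / 17).
    return ((y // 17) >> (17 * x + y % 17)) & 1 == 1

for row in range(16, -1, -1):             # top of the strip first
    y = K + row
    print("".join("#" if pixel_on(x, y) else "." for x in range(105, -1, -1)))
```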

The further we dig the less convergence we seem to have.   Yet we have a “data science” in the world and employ “data scientists” and we tell each other to “look at the data” to figure out “the truth.”

Sometimes philosophy is useful in such confusing situations:

Information is notoriously a polymorphic phenomenon and a polysemantic concept so, as an explicandum, it can be associated with several explanations, depending on the level of abstraction adopted and the cluster of requirements and desiderata orientating a theory.

http://plato.stanford.edu/entries/information-semantic/

Er, that doesn’t seem like a convergence.  By all means we should read that entire essay, it’s certainly full of data.

Ok, maybe someone can define Data Science and in that we can figure out what is being studied:

https://beta.oreilly.com/ideas/what-is-data-science

That’s a really long article that points to data science as a duct-taped, loosely linked set of tools, processes, disciplines and activities to turn data into products and tell stories.   There’s clearly no simple definition or identification of the actual substance of data found there or in any other description of data science readily available.

There’s a certain impossibility of definition and identification looming.   Data isn’t something concrete.  It’s “of” everything.  It appears to be a shadowy representational trace of phenomena and relations and objects that is itself encoded in phenomena and relations and objects.

There’s a wonderful aside in the great book “Things to Make and Do in the Fourth Dimension” by Matt Parker

Finite Nature of Data

https://books.google.com/books?id=wK2MAwAAQBAJ&lpg=PP1&dq=fourth%20dimension%20math&pg=PP1#v=onepage&q=fourth%20dimension%20math&f=false

Data seems to have a finite, discrete property to it and yet is still very slippery.  It is reductive – a compression of the infinite patterns in the universe – and it is also a pattern itself: compressed traces of actual things.   Data is wisps of existence, a subset of existence.   Data is an optical and sensory illusion that is an artifact of the limitedness of the sensor and the irreducibility of connections between things.
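
A tiny sketch of data as a limited sensor’s trace: two different underlying “phenomena” produce exactly the same data when sampled coarsely, so the data cannot recover the thing itself:

```python
import math

# A sketch of data as a finite, reductive trace: two different underlying
# "phenomena" produce exactly the same data when read by a limited sensor.

def phenomenon_a(t):
    return math.cos(2 * math.pi * 1 * t)     # a 1 Hz wave

def phenomenon_b(t):
    return math.cos(2 * math.pi * 3 * t)     # a 3 Hz wave

sensor_times = [i / 2 for i in range(8)]     # a coarse sensor: 2 samples per second

data_a = [round(phenomenon_a(t), 6) for t in sensor_times]
data_b = [round(phenomenon_b(t), 6) for t in sensor_times]

print(data_a == data_b)   # True: the data alone cannot tell the two apart
```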

Data is not a thing.   It is of things, about things, traces of things, made up of things.

There can be no data science.   There is no scientific method possible.   Science is done with data, but cannot be done on data.  One doesn’t do experiments on data, experiments emit and transcode data, but data itself cannot be experimental.

Data is art.   Data is an interpretive literature.  It is a mathematics – an infinite regress of finite compressions.

Data is undefined and belongs in the set of unexplainables: art, infinity, time, being, event.

Data = Art

Read Full Post »

We have a problem.

As it stands now the present and near future of economic, social and cultural development primarily derives from computers and programming.   The algorithms already dominate our society – they run our politics, they run our financial system, they run our education, they run our entertainment, they run our healthcare.    The ubiquitous tracking of everything that can possibly be tracked determined this current situation.   We must have programs to make things, to sell things, to exchange things.


The problem is not necessarily the algorithms or the computers themselves but the fact that so few people can program.    And why?   Programming Sucks.

Oh sure, for those that do program and enjoy it, it doesn’t suck. As Much.   But for the 99%+ of the world’s population that doesn’t program a computer to earn a living it’s a terrible endeavour.

Programming involves a complete abstraction away from the world and all surroundings.  Programming is disembodied – it is mostly a thought exercise mixed with some of the worst aspects of engineering.   Mathematics, especially the higher-order really crazy stuff, long ago became unapproachable and completely disembodied, requiring no physical engineering or representation at all.  Programming, in most of its modern instances, has consequences very far away from its creative behavior.  That is, in most modern systems it takes days, weeks, months, even years to personally feel the results of what you’ve built.    Programming is ruthless.  It’s unpredictable.   It’s 95% or more reinventing the wheel and configuring environments to even run the most basic program.  It’s all set up, not a lot of creation.   So few others understand it that they can’t appreciate the craft during the act (only the output is appreciated, counted in users and downloads).

There are a couple of reasons why this is the case – a few theoretical/natural limits and a few self-imposed, engineering and cultural issues.

First the engineering and cultural issues.   Programming languages and computers evolved rather clumsily, built mostly by programmers for other programmers – not for the majority of humans.    There’s never been a requirement to make programming itself more humanistic, more embodied.    Looking back on the history of computers, computing was always done in support of something else, not for its own sake.   It was done to Solve Problems.   As long as the computing device and program solved the problem the objective was met.   Even the early computer companies famously thought it was silly to think everyone might one day actually use a personal computer.   And now we’re at a potentially more devastating standstill – it’s absurd to most people to think everyone might actually need to program.    I’ll return to these issues.

Second, the natural limits of computation make for a very severe situation.   There are simply things that are non-computable.   That is, we can’t solve them.   Sometimes we can PROVE we can’t solve them, but that doesn’t get us any closer to solving them.    The classic example is the Halting Problem.  The idea is basically that for a sufficiently complex program you can’t predict whether the program will halt or not.   The implication is simply that you must run the program and see if it halts.  Again, complexity is the key here.  If these are relatively small, fast programs with a less-than-infinite number of possible outcomes then you can simply run the program across all possible inputs and outputs.   Problem is… very few programs are that simple, and certainly not any of the ones that recommend products to you, trade your money on Wall Street, or help doctors figure out what’s going on in your body.
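
The core of that argument fits in a few lines. Assume a perfect halts() predictor existed (it’s hypothetical – no such function can actually be written) and you can build a program that does the opposite of whatever the predictor says about it:

```python
# Classic sketch of why no general halting predictor can exist.
# `halts` is hypothetical: pretend it perfectly answers "does program(arg) halt?"

def halts(program, arg):
    raise NotImplementedError("no total, correct predictor like this can exist")

def troublemaker(program):
    # Do the opposite of whatever the predictor claims about running
    # the program on its own source.
    if halts(program, program):
        while True:       # predictor said "halts", so loop forever
            pass
    return "halted"       # predictor said "loops", so halt immediately

# Asking halts(troublemaker, troublemaker) has no consistent answer: whatever
# it predicts, troublemaker does the opposite. So no such halts() exists, and
# in practice we have to run complex programs to find out what they do.
```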

STOP.

This is a VERY BIG DEAL.    Think about it.   We deploy millions of programs a day with completely non-deterministic, unpredictable outcomes.  Sure we do lots of quality assurance and we test everything we can and we simulate and we have lots of mathematics and experience that helps us grow confident… but when you get down to it, we simply don’t know if any given complex program has some horrible bug in it.

This issue rears its head an infinite number of times a day.   If you’ve ever been mad at MS Word for screwing up your bullet points or your browser stops rendering a page or your internet doesn’t work or your computer freezes… this is what’s going on.  All of these things are complex programs interacting with other programs and all of them have millions (give or take millions) of bugs in them.  Add to it that all of these things are mutable bits on your computer that viruses or hardware issues can manipulate (you can’t be sure the program you bought is the program you currently run) and you can see how things quickly escape our abilities to control.

This is devastating for the exercise of programming.  Computer scientists have invented a myriad of ways to temper the reality of the halting problem.   Most of these management techniques make programming even more mysterious and challenging due to the imposition of even more rules that must be learned and maintained.   Unlike music and writing and art and furniture making and fashion, we EXPECT and NEED computers to do exactly what we program them to do.   Most of the other stuff humans do and create is just fine if it sort of works.  It still has value.  Programs that are too erratic or, worse, catastrophic are not only not valuable, we want to eliminate them from the earth.   We probably destroy some 95%+ of the programs we write.

The craft of programming is at odds with its natural limits.   Our expectations, and thus the tools we craft to program, conflict with the actuality.  Our use of programs exceeds their possibilities.

And this really isn’t due to computers or programming, but something more fundamental: complexity and prediction.    Even as our science shows us more and more that prediction is an illusion, our demands of technology and business and media run counter to that.    This fundamental clash manifests itself in programming, programming languages, the hardware of computers, the culture of programming.  It is at odds with itself, and in being so conflicted it is unapproachable to those that don’t have the ability to stare maddeningly into a screen flickering with millions of unknown rules and bugs.   Mastery is barely achievable except for a rare few.   And without mastery enjoyment rarely comes – the sort of enjoyment that can sustain someone’s attention long enough to do something significant.

I’ve thought long and hard about how to improve the craft of programming.   I’ve programmed a lot, led a lot of programming efforts, delivered a lot of software, scrapped a lot more.  I’ve worked in 10+ languages.  I’ve studied mathematics and logic and computer science and philosophy.  I’ve worked with the greatest computer scientists.  I’ve worked with amazing business people and artists and mathematicians.   I’ve built systems large and small in many different categories.  In short, I’ve yet to find a situation in which programming wasn’t a major barrier to progress and thinking.

The solution isn’t in programming languages and in our computers.  It’s not about Code.org and trying to get more kids into our existing paradigm. This isn’t an awareness or interest problem.   The solution involves our goals and expectations.

We must stop trying to solve every problem perfectly.  We must stop trying to predict everything.   We must stop pursuing The Answer, as if it actually exists.  We must stop trying to optimize everything for speed and precision and accuracy. And we must stop applying computerized techniques to every single category of activity – at least in a way where we expect the computer to forever do the work.

We must create art.  Programming is art.  It is full of accidents and confusions and inconsistencies.   We must turn it back into an analog experience rather than a conflicted digital one.    Use programming to explore and narrate and experiment rather than answer and define and calculate.

The tools that spring from those objectives will be more human.  More people will be able to participate.  We will make more approachable programs and languages and businesses.

In the end our problem with programming is one of relation – we’re either relating more or less to the world around us and as computers grow in numbers and integration we need to be able to commune, shape and relate to them.

Read Full Post »

Recently there’s been hubbub around Artificial Intelligence (AI) and our impending doom if we don’t do something about it. While it’s fun to scare each other in that horror/sci-fi movie kind of way, there isn’t much substance behind the various arguments floating about regarding AI.

The fears people generally have are about humans losing control and more specifically about an unrestrained AI exacting its own self-derived and likely world-dominating objectives on humankind and the earth. These fears aren’t unjustified in a broad sense. They simply don’t apply to AI – neither the artificial nor the intelligence part. More importantly, the fears have nothing to do with AI but instead with the age-old fear that humankind might not be the point of all existence.

We do not have a functional description of intelligence, period. We have no reliable way to measure it. Sure, we have IQ or other tests of “ability” and “awareness” and the Turing Test, but none of these actually tell you anything about “intelligence” or what might actually be going on with things we consider intelligent. We can measure whether some entity accomplishes a goal or performs some behavior reliably or demonstrates new behavior acquisition in response to changes in its environment. But none of those things establish intelligence and the presence of an intelligent being. They certainly don’t establish something as conscious or self-aware or moral or purpose-driven nor any other reified concept we choose as a sign of intelligence.

The sophisticated fear peddlers will suggest that all of the above is just semantics and that we all know, in a common-sense way, what intelligence is. This is simply not true. We don’t. Colleges can’t fully count on the SAT, phrenology turned out to be horribly wrong, the Turing Test is poor by definition, and so on. Go ahead, do your own research on this. No one can figure out just exactly what intelligence is and how to measure it. Is it being able to navigate a particular situation really well? Is it the ability to assess the environment? Is it massive pattern matching? Is it particular to our 5 senses? One sense? Is it brain based? Is it pure abstract thinking? What exactly does it mean to be intelligent?

It’s worse for the concept of artificial. It’s simply the wrong term. Artificial comes from very old ideas about what’s natural and what’s not natural. What’s real is what the earth itself does and not what is the work of some human process or machine. Artificial things are made of metals and powered by electricity and coal and involve gears and circuits. Errrrr. Wait… many things the earth makes are made of metal and their locomotion comes from electricity or the burning of fuel, like coal. In fact, humans were made by the earth/universe. The division between artificial and natural is extremely blurry and is non-existent for a reified concept like intelligence. We don’t need the term artificial nor the term intelligence. We need to know what we’re REALLY dealing with.

So here we are… being pitched a fearsome monster of AI which has zero concrete basis, no way to observe it, and zero examples of its existence as described in most discussions. But still the monster is in the closet, right?

For the sake of argument (which is all these counter-factual future predictions of doom are) let’s assume there is some other entity/set of entities that is more “intelligent” than humans. We will need to get a sense of what that would look like.

Intelligence could be loosely described by the presence of complex, nuanced behaviors exhibited in response to the environment. Specifically, an entity is observed as intelligent if it responds to changing conditions (internal as well as environmental) effectively. It must recognize changes, be able to adjust to those changes, and evaluate the consequences of the changes it makes as well as any changes the environment has made in response.

What seems to be the basis of intelligent behavior (ability to respond to complex contingencies) in humans comes from the following:

  • Genetic and Epigenetic effects/artifacts evolved from millions of years of evolutionary experiments e.g. body structures, fixed action patterns
  • Sensory perception from 5 basic senses e.g. sight, touch, etc
  • Ability to pattern match in a complex nervous system e.g. neurological system, various memory systems
  • Cultural/Historical knowledge-base e.g. generationally selected knowledge trained early into a new human through child rearing, media and school
  • Plastic biochemical body capable of replication, regeneration and other anti-fragile effects e.g. stem cells, neuro-plasticity
  • more really complex stuff we have yet to uncover

Whatever AI we think spells our demise most likely will have something like the above (something functionally equivalent). Right? No? While there is a possibility that there exists some completely different type of “intelligent” being, what I’m suggesting is that the general form of anything akin to “intelligent” would have these features:

  • Structure that has been selected for fitness over generations in response to its environment and overall survivability
  • Multi-modal Information perception/ingestion
  • advanced pattern recognition and storage
  • Knowledge reservoir (previously explored patterns) to pull from that reduces the need for a new entity to explore the space of all possibilities for survival
  • Resilient, plastic and replication mechanism capable of abstract replication and structural exploration

And distilling that down to even more raw abstractions (a minimal sketch follows this list):

  • multi-modal information I/O
  • complex pattern matching
  • large memory
  • efficient replication with random mutation and mutation from selected patterns
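
As a toy caricature of those four abstractions (and only a caricature – the target string and mutation rate are arbitrary), here is information coming in, pattern matching against it, memory of good patterns, and replication with mutation:

```python
import random

# Toy loop with the four raw abstractions in miniature: information coming in
# (a target pattern), pattern matching (a score), memory (keep the best so far),
# and replication with random mutation. Purely illustrative - not a claim about
# what intelligence "is".

random.seed(2)
target = "networks all the way down"
alphabet = "abcdefghijklmnopqrstuvwxyz "

def score(candidate):
    return sum(a == b for a, b in zip(candidate, target))

best = "".join(random.choice(alphabet) for _ in target)
for _ in range(20000):
    child = "".join(c if random.random() > 0.02 else random.choice(alphabet) for c in best)
    if score(child) > score(best):    # memory: keep the better pattern
        best = child

print(best, "|", score(best), "of", len(target), "characters matched")
```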

What are the implementation limits of those abstracted properties? Turns out, we don’t know. It’s very murky. Consider a rock. Few of us would consider a plain old rock as intelligent. But why not? What doesn’t it fulfill in the above? Rocks adjust to their environment – think of erosion and the overall rock cycle. Their structures contain eons of selection and hold a great deal of information – think of how the environment is encoded in the generational build of a rock, its crystal structure and so forth. Rocks have an efficient replication scheme – again, think of the rock cycle, the make of a rock being able to be absorbed into other rocks and so forth.

Perhaps you don’t buy that a rock is intelligent. There’s nothing in my description of intelligence or the reified other definitions of intelligence that absolutely says a rock isn’t intelligent. It seems to fulfill the basics… it just does so over timescales we’re not normally looking at. A rock won’t move through a maze very quickly or solve a math problem in our lifetime. I posit though that it does do these things over long expanses of time. The network of rocks that form mountains and river beds and ocean bottoms and the entire earth’s crust and the planets of our solar system exhibit the abstract properties above quite adeptly. Again, just at spacetime scales we’re not used to talking about in these types of discussions.

I could go on to give other examples such as ant colonies, mosquitoes, dolphins, pigs, The Internet and on and on. I doubt many of these examples will convince many people as the reified concept of “intelligence as something” is so deeply embedded in our anthropocentric world views.

And so I conclude those that are afraid and want others to be afraid of AI are actually afraid of things that have nothing to do with intelligence – that humans might not actually be the alpha and omega and that we are indeed very limited, fragile creatures.

The first fear from anthropocentrists is hard to dispel. It’s personal. All the science and evidence of observation of the night sky and the depths of our oceans makes it very clear humans are a small evolutionary branch of an unfathomably large universe. But to each person we all struggle with our view from within – the world literally, to ourselves, revolves around us. Our vantage point is such that from our 5 senses everything comes into us. Our first person view is such that we experience the world relative to ourselves. Our memories are made of our collections of first person experiences. Our body seems to respond from our own relation to the world.

And so the fear of an AI that could somehow take away our “top of the food chain” position makes sense while still being unjustified. The reality is… humans aren’t the top of any chain. The sun will one day blow up and quite possibly take all humans out with it. Every day the earth spins and whirls and shakes with winds, heat, snow, quakes and without intention takes humans out in the process. And yet we don’t fear those realities like folks are trying to get us to fear AI. The key to this fear seems to be intention. We’re afraid of anything that has the INTENT, the purpose, the goal of taking humans and our own selves out of the central position of existence.

And where, how, when, why would this intent arise? Out of what does any intent arise? Does this intent actually exist? We don’t really have any other data store to try to derive an answer to this other than humanity’s and our own personal experience and histories. Where has the intent to dominate or destroy come from in humans? Is that really our intent when we do it? Is it a universal intent of humankind? Is it something intrinsically tied to our make up and relation to the world? Even if the intent is present, what are its antecedents? And what of this intent? If the intent to dominate others arises in humans how are we justified in fearing its rise in other entities?

Intent is another reified concept. It doesn’t really exist or explain anything. It is a word that bottles up a bunch of different things going on. We have no more intention than the sun. We behave. We process patterns and make adjustments to our behavior – verbal and otherwise. Our strategies for survival change based on contingencies and sometimes our pattern recognition confuses information – we make false associations about what is threatening our existence or impeding our basic needs and wants (chasing things that activate our adrenaline and dopamine production…). It’s all very complex. Even our “intentions” are complex causal thickets (a concept I borrow from William Wimsatt).

In this complexity it’s all incredibly fragile. Fragile in the sense that our pattern recognition is easily fooled. Our memories are faulty. Our bodies get injured. Our brains are only so plastic. And the more our survival depends on rote knowledge the less plastic our overall machinery can be. Fragile, as referred to here, is a very general concept about the stability of any given structure or pattern – belief system, behavioral schedule, biochemical relations, rock formations… any structure.

The fear involved in fragility and AI is really about less complex entities that are highly specialized in function, sensory scope and pattern matching ability. The monster presented is a machine or group of machines hell-bent on human subordination and destruction, with the weaponry and no fragility in its function or intent – it cannot be diverted by itself nor by an outside force.
OR

We fear the accidental AI. The AI that accidentally happens into human destruction as a goal.

In both cases it is not intelligence we fear but instead simple exploitation of our own fragility.

And yet, as appealing as a fear this seems to be, it is also unjustified. There’s nothing that suggests even simple entities can carry out single “purpose” goals in a complex network. The complexity of the network itself prevents total exploitation by simple strategies.

Consider the wonderful game-theoretic models built around very simple games like the Prisoner’s Dilemma, where it turns out simple exploitation models simply don’t work over the long term as domination strategies against strategies like Tit-For-Tat. It turns out that even in very simple situations domination and total exploitation is a poor survival strategy for the exploiter.
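
A minimal sketch in the spirit of Axelrod’s tournaments (standard payoffs, everything else made up): over repeated rounds the pure exploiter ekes out a one-round edge against Tit-For-Tat and then stagnates, while mutual cooperators pull far ahead:

```python
# Iterated Prisoner's Dilemma sketch with the standard payoffs:
# both cooperate 3/3, both defect 1/1, lone defector 5, sucker 0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(always_defect, tit_for_tat))  # (204, 199): a one-round edge, then stagnation
print(play(tit_for_tat, tit_for_tat))    # (600, 600): cooperation wins over the long run
```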

So domination itself becomes a nuanced strategy subject to all sorts of entropy, adjustments and complexity.

Maybe this isn’t a convincing rebuttal. After all, what about the simple idea of someone creating a really simplistic AI and arming it with nuclear warheads? Certainly even a clumsy system (at X time or under Y conditions, nuke everything) armed with nukes would have the capability to destroy us all. Even this isn’t a justified fear. In the first place, it wouldn’t be anything at all AI-like in any sense if it were so simple. So fearing AI is a misplaced fear. The fear is really about the capability of a nuke. Insert bio-weapons or whatever other WMD one wants to consider. In all of those cases it has nothing to do with the wielder of the weapon and its intelligence and everything to do with the weapon.

However, even having a total fear of WMDs is myopic. We simply do not know what the fate of humankind would be nor of the earth should some entity launch all out strategies of mass destruction. Not that we should attempt to find out but it seems a tad presumptuous for anyone to pretend to be able to know what exactly would happen at the scales of total annihilation.

Our only possible data point for what total annihilation of a species on earth might look like is that of the dinosaurs and other ancient species. We have many theories about their mass extinction from the earth, but we don’t really know, and this extinction took a very long time, was selective and didn’t end up in total annihilation (hell, it likely led to humanity… so…) [see this for some info: http://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event].

The universe resists simple explanations and that’s likely a function of the fact that the universe is not simple.

Complex behavior of adaptation and awareness is very unlikely to bring about total annihilation of humankind. A more likely threat is from simple things that accidentally exploit fragility (a comet striking the earth, blotting out the sun so that anything that requires the sun to live goes away). It’s possible we could invent and/or have invented simple machines that can create what amounts to a comet strike. And what’s there to fear even in that? Only if one believes that humanity, as it is, is the thing to protect is any fear about its annihilation by any means, “intelligent” or not, justified.

Protecting humanity as it is is a weak logical position as well because there’s really no way to draw a box around what humanity is and whether there’s some absolute end state. Worse than that, it strikes me personally that people’s definition of humanity when promoting various fears of technology or the unknown is decidedly anti-aware and flies counter to a sense of exploration. That isn’t a logical position – it’s a choice. We can decide to explore or decide to stay as we are. (Maybe)

The argument here isn’t for an unrestrained pursuit of AI. Not at all. AI is simply a term and not something to chase or prevent unto itself – it literally, theoretically isn’t actually a thing. The argument here is for restraint through questioning and exploration. The argument is directly against fear at all. Fear is the twin of absolute belief – the confusion of pattern recognition spiraling into a steady state. A fear is only justified by an entity against an immutable pattern.

For those that fear AI: you must, by extension, fear the evolution of human capability – and the capability of any animal or any network of rocks, etc. And the reality is… all those fears will never result in any preventative measures against destruction. Fear is stasis. Stasis typically leads to extinction – eons of evidence make this clear. Fear-peddlers are really promoting stasis, and that’s the biggest threat of all.

Read Full Post »

Computing Technology enables great shifts in perspective. I’ve long thought about sharing why I love computing so much. Previously I’m not sure I could articulate it without a great deal of confusion and vagueness, or worse, zealotry. Perhaps I still can’t communicate about it well but nonetheless I feel compelled to share. 

Ultimately I believe that everything in existence is computation but here in this essay I am speaking specifically about the common computer on your lap or in your hands connected to other computers over the Internet.

I wish I could claim that computers aren’t weapons or put to use in damaging ways or don’t destroy the natural environment. They can and are used for all sorts of destructive purposes. However I tend to see more creation than suffering coming from computers.

The essential facets of computing that give it so much creative power are interactive programs and simulation. With a computer bits can be formed and reformed without a lot of production or process overhead. It often feels like there’s an endless reservoir of raw material and an infinite toolbox (which, in reality, is true!). These raw materials can be turned into a book, a painting, a game, a website, a photo, a story, a movie, facts, opinions, ideas, symbols, unknown things and anything else we can think up. Interactive programs engage users (and other computers) in a conversation or dance. Simulation provides us all the ability to try ideas on and see how they might play out or interact with the world. All possible from a little 3-4lb slab of plastic, metal, silicon flowing with electricity. 

Connecting a computer to the Internet multiplies this creative power through sharing. While it’s true a single computer is an infinite creative toolbox, the Internet is a vast, searchable, discoverable recipe box and experimentation catalog. Each of us is limited by how much time we have each day, but when we are connected to millions of others all trying out their own expressions, experiments and programs we all benefit. Being able to cull from this vast connected catalog allows us all to try, retry, reform and repost new forms that we may never have been exposed to. Remarkable.

Is there the same creative power out in the world without computers? Yes and no. A computer is probably the most fundamental tool ever discovered (maybe we could call it crafted, but I think it was discovered). Bits of information are the most fundamental material in the multiverse. Now, DNA, proteins, atoms, etc. are also fundamental or primary (think: you can build up incredible complexity from fundamental materials). The reason I give computers the edge is that the things we prefer to make can be made within our lifetime and often in much shorter timeframes. It would take a long time for DNA and its operating material to generate the variety of forms we can produce on a computer.

Don’t get me wrong, there’s an infinite amount of creativity in the fundamental stuff of biology and some of it happens on much shorter than geological timescales. You could easily make the case that it would take a traditional computer probably longer than biology to produce biological forms as complex as animals. I’m not going to argue that. Nor do I ignore the idea that biology produced humans which produced the computer, so really biology is possibly more capable than the computer. That said, I think we’re going to discover over time that computation is really at the heart of everything, including biology, and that computers as we know them probably exist in abundance out in the universe. That is, computers are NOT dependent on our particular biological history.

Getting out of the “can’t really prove these super big points about the universe” talk and into the practical – you can now get a computer with immense power for less than $200. This is a world-changing reality. Computers are capable of outputting infinite creativity and can be obtained and operated for very modest means. I suspect that price will come down to virtually zero very soon. My own kids have been almost exclusively using Chromebooks for a year for all things education. It’s remarkably freeing to be able to pull up materials, jump into projects, research, create etc. anywhere at anytime. Escaping the confines of a particular space and time to learn and work with tools has the great side effect of encouraging learning and work anywhere at anytime.

There are moments where I get slight pangs of regret about my choices to become proficient in computing (programming, designing, operating, administrating). There are romantic notions of being a master painter or pianist or mathematician and never touching another computing device again. In the end I still choose to master computing because of just how much it opens me up creatively. Almost everything I’ve been able to provide for my family and friends has come from having a joyous relationship with computing.

Most excitingly to me… the more I master computers the more I see the infinitude of things I don’t know and just how vast computing creativity can be. There aren’t a lot of things in the world that have that effect on me. Maybe I’m simply not paying attention to other things but I strongly suspect it’s actually some fundamental property of computing – There’s Always More.  

Read Full Post »

In Defense of The Question Is The Thing

I’ve oft been accused of being all vision with little to no practical finishing capability. That is, people see me as a philosopher not a doer. Perhaps a defense of myself and philosophy/approach isn’t necessary and the world is fine to have tacticians and philosophers and no one is very much put off by this.

I am not satisfied. The usual notion of doing and what is done and what constitutes application is misguided and misunderstood.

The universe is determined yet unpredictable (see complexity theory, cellular automata). Everything that happens and is has antecedents (see behaviorism, computation, physics). Initial conditions have a dramatic effect on system behavior over time (see chaos theory). These three statements are roughly equivalent or at least very tightly related. And they form the basis of my defense of what it means to do.

“Now I’m not antiperformance, but I find it very precarious for a culture only to be able to measure performance and never be able to credit the questions themselves.” – Robert Irwin, page 90, Seeing Is Forgetting the Name of the Thing One Sees

The Question Is The Thing! And by The Question I mean the context or the situation or the environment or the purpose. And I don’t mean The Question or purpose as assigned by some absolute authority agent. It is the sense of a particular or relative instance we consider a question. What is the question at hand?

Identifying and really asking the question at hand drives the activity to and fro. To do is to ask. The very act of seriously asking a question delivers the do, the completion. So what people mistake in me as “vision” is really an insatiable curiosity and need to ask the right question. To do without the question is nothing, it’s directionless motion and a random walk. To seriously ask a question, every detail of the context is important. To begin answering the question requires the environment to be staged and the materials provided for answers to emerge.

There is no real completion without a constant re-asking of the question. Does this answer the question? Did that answer the question?

So bring it to something a lot of people associate me with: web and software development. In the traditional sense I haven’t written a tremendous amount of code myself. Sure, I’ve shipped lots of pet projects, chunks of enterprise systems, scripts here and there, and the occasional well-crafted app and large-scale system. There’s a view though that unless you wrote every line of code or contributed some brilliant algorithm line for line, you haven’t done anything. The fact is there’s a ton of code written every day on this planet and very little of it would I consider “doing something”. Most of it lacks a question, it’s not asking a question, a real, big, juicy, ambitious question.

Asking the question in software development requires setting the entire environment up to answer it. Literally the configuration of programmer desks, designer tools, lighting, communication cadence, resources, mixing styles and on and on. I do by asking the question and configuring the environment. The act of shipping software takes care of itself if the right question is seriously asked within an environment that lets answers emerge.

Great questions tend to take the shape of How Does This Really Change the World for the User? What new capability does this give the world? How does this extend the ability of a user to X? What is the user trying to do in the world?

Great environments to birth answers are varied and don’t stay static. The tools, the materials all need to change per the unique nature of the question.

Often the question begs us to create less. Write less code. Tear code out. Leave things alone. Let time pass. Write documentation. Do anything but add more stuff that stuffs the answers further back.

The question and emergent answers aren’t timeless or stuck in time. As the context changes, the question or the shape of the question may change.

Is this to say I’m anti-shipping (or anti-performance, as Irwin put it)? No. Let’s put it this way: we move too much, ask too little, and actually don’t change the world that much. Do the least amount to affect the most – that’s more of what I think the approach should be.

The question is The Thing, much more than the thing that results from the work. The question has all the power. It starts and ends there.

Read Full Post »

The aim of most businesses is to create wealth for those working at them. Generally it is preferred to do this in a sustainable, scalable fashion so that wealth may continue to be generated for a long time. The specific methods may involve seeking public valuation in the markets, selling more and more product directly and profitably, private valuation and investment, and more. The aim of most technology-based companies is to make the primary activity and product of the business involve technology. The most common understanding of “technology” refers to information technology, biotechnology, advanced hardware and so forth – i.e. tools or methods that go beyond long-established ways of doing things and/or analog approaches. So the aims of a technology company are to create and maintain sustainable, scalable wealth generation through technological invention and execution.

Perhaps there are better definitions of terms and clearer articulations of the aims of business, but this will suffice to draw out an argument for how technology companies could fully embrace the idea of a platform and, specifically, a technological platform. Too often the technology in a technology company exists solely in the end product sold to the market. It is a rare technology company that embraces technological thinking everywhere – consider big internet media still managing advertising contracts through paper and faxes, or expense reports as stapled papers attached to static Excel spreadsheets, and so on. There are even “search” engine companies that are unable to search over all of their own internal documentation and knowledge.

The gains of technology are significant when applied everywhere in a company. A technological product produced by primitive and inefficient means is usually unable to sustain its competitive edge, as those with technology in their veins quickly catch up to any early lead by a first, non-technical mover. Often what the world sees on the outside of a technology company is The Wizard of Oz: a clever and powerful facade of technology, a vision of smoking machines doing unthinkable things. In reality it is the clunky hubbub of a duct-taped factory of humans pulling levers and making machine noises. If the end result is the same, who cares? No one – if the result can be maintained. It never scales to grow the human factory of tech-facade making. Nor does it scale to turn everything over to the machines.

What’s contemplated here is a clever and emergent interaction of human and machine technology, and how a company goes from merely using technology to becoming a platform. Consider the example of a company that produces exquisite financial market analysis for major brokerage firms. It may be that human analysts are far better than algorithms at making the brilliant and challenging pattern-recognition observations about an upcoming swing in the markets. There is still a technology to employ here. Such a company should supply the human analysts with tools and methods that increase the rate at which they can spot patterns, reduce the cost of spreading the knowledge where it needs to go, and close the feedback loop on hits and misses. There is no limit to how deeply a company should look at enhancing human ability. For instance, how many keystrokes does it take for an analyst to key in their findings? How many hops does a synthesized report go through before hitting the end recipient? How does the temperature of the working space affect pattern-recognition ability? Perhaps all those details matter far more to sustainable profit than tuning a minute facet of some analytic algorithm.

The point here is that there should be no facet of a business left untouched by technology enhancement. Too often technology companies waste millions upon millions of dollars updating their main technology product only to see modest or no gain at all. The most successful technology companies of the last 25 years have all found efficiencies through technology mostly unseen by end users and these become their competitive advantages. Dell – ordering and build process. Microsoft – product pre-installations. Google – efficient power sources for data centers. Facebook – rapid internal code releases. Apple – very efficient supply chain. Walmart – intelligent restocking. Amazon – everything beyond the core “ecommerce”.

In a sense, these companies recognized their underlying “platform” soon after recognizing their main value proposition. They learned quickly enough to scale that proposition – and to spend a solid blend of energy on the scale and the product innovation. A quick aside – scale here is taken to mean how efficiently a business can provide its core proposition to the widest, deepest customer base. It does not refer solely to hardware or supply chain infrastructure, though often that is a critical part of it.

One of many interesting examples of such platform thinking is the Coors Brewing Company back in its heyday. Most people would not consider Coors a “technology” company. In the 1950s, though, it changed many “industries” with the introduction of the modern aluminum can. This non-beer-related technology reduced the cost of operations, created a recycling sub-industry, reduced the problem of tin cans damaging the beer’s taste, and so on. It also made it challenging for several competitors to compete on distribution, taste and production costs. This wasn’t the first time Coors put technology to use in surprising ways. It used to build and operate its own power plants to reduce reliance on suboptimal resources and to have better control over its production.

Examples like this abound. One might conclude that any company delivering product at scale can be classified as a technology company – they all will have a significant platform orientation. However, this does not make them platform companies.

What distinguishes a platform company from simply a technology company is that the platform is provided to partners and customers so they can scale their businesses as well. These are the companies whose product itself becomes scale. These are the rare, super-valuable companies: Google, Apple, Intel, Facebook, Microsoft, Salesforce.com, Amazon and so on. These companies often start by becoming highly efficient technically in the production of their core offering, and then turn that scale around and license it to others. The value generation gets attributed to the scale provider appropriately, in that it becomes a self-reinforcing cycle. The ecosystem built upon such a company’s platform demands that the platform operator continue to build the platform so the ecosystem too may scale, and the platform operator only scales by giving more scale innovation back to the ecosystem. Think Google producing Android and offering Google Analytics for free. Think Facebook and Open Graph, and how brands rely on their Facebook pages to connect and collect data. Think Amazon and its marketplace and cloud computing services. Think Microsoft and MSDN, developer resources and cloud computing. Think Apple and iTunes, the App Store and so on.

It’s not all that easy though! There seems to come a time for every such platform company when a critical decision must be made before it’s obvious that it’s going to work: open the platform up to others, or not? Will the ecosystem adopt it? How will they pay for it? Can we deal with what is created? Are we truly at scale to handle this? Are we open enough to embrace the opportunities that come out of it? Are we ready to cede control? Are we ready to create our own competitors?

That last question is the big one. But it’s the one to embrace in order to become a super-valuable, rare platform at the heart of a significant ecosystem. And it happens to be the way to create a path to sustainable wealth generation that isn’t a short-lived parlor trick.

Read Full Post »
