
Posts Tagged ‘philosophy’

And I have to start this essay with a simple statement: it is not lost on me that all of the above is 100% derived from my own history, studies, jobs, art works, and everything else that goes into me.  So maybe this is just a theory of myself, or not even a theory, but yet another expression in a lifetime of expressions.   At the very least I enjoyed 20 hours of re-reading some great science, crafting what I think is a pretty neat piece of artwork, and then summarizing some pondering.   Then again, maybe I’ve made strides on some general abstract level.  In either case, it’s just another contingent reconfiguration of things.

At the end I present all the resources I read and consulted during the writing (but not editing) and the making of the embedded 19×24 inch drawing and ink painting (which has most of this essay written and drawn into it).   I drank 4 cups of coffee over 5 hours and had 3 tacos and 6 hot wings during this process. Additionally I listened to “The Essential Philip Glass” while sometimes watching the movie “The Devil Wears Prada” and the latest SNL episode.

——————-  

There is a core problem with all theories and theory at large – they are not The Truth and do not interact in the universe like the things they refer to.   Theories are things unto themselves.  They are tools to help craft additional theories and to spur on revised dabbling in the world.


We have concocted an unbelievable account of reality across religious, business, mathematical, political and scientific categories.  Immense stretches of imagination are required to connect the dots from the category theory of mathematics to the radical behaviorism of psychology to machine learning in computer science to gravitational waves in cosmology to color theory in art.  The theories themselves have no easy bridge – logical, spiritual or even syntactic.

Furthering the challenge is the lack of coherence and interoperability of measurement and crafting tools.   We have forever had the challenge of information exchange between our engineered systems.   Even our most finely crafted gadgets and computers still suffer from data exchange corruption.   Even when we seem to find some useful notion about the world it is very difficult for us to transmit that notion across mediums, toolsets and brains.

And yet, therein lies the reveal!

A simple, yet imaginative re-think provides immense power.   Consider everything as a network.  Literally the simplest concept of a network – a set of nodes connected by edges.   Consider everything as part of a network, a subnetwork of the universe.  All subnetworks are connected more or less to the other subnetworks.   From massive stars to a single boson, everything is a node in a network, and those networks are nodes in networks of networks.   Our theories are networks of language, logic, inference, experiment, context.  Our tools are just networks of metals, atoms, and light.   It’s not easy to replace your database of notions reinforced over the years with this simple idea.
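To make the bare-bones version concrete, here is a toy sketch (class and method names are my own, not any standard library’s): a network is nothing but nodes plus edges, and a subnetwork is any subset of nodes together with the edges among them.

```python
# A minimal, illustrative model: a network is just nodes plus edges.
class Network:
    def __init__(self, nodes, edges):
        self.nodes = set(nodes)
        # store edges as frozensets so (a, b) and (b, a) are the same edge
        self.edges = {frozenset(e) for e in edges}

    def subnetwork(self, nodes):
        """A subnetwork: a subset of nodes plus the edges among them."""
        nodes = set(nodes) & self.nodes
        edges = {e for e in self.edges if e <= nodes}
        return Network(nodes, edges)

# A cartoon "universe" with a few subnetworks inside it.
universe = Network(
    nodes=["star", "boson", "theory", "tool", "brain"],
    edges=[("star", "boson"), ("theory", "brain"), ("tool", "brain")],
)
physics = universe.subnetwork(["star", "boson"])
```

Everything in the essay – stars, bosons, theories, tools, brains – is meant to be some `subnetwork` call like the one above, differing only in which nodes and edges you carve out.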

But really ask yourself why that idea is so hard to accept when you can believe that black holes collide and send out gravitational waves that slightly wobble spacetime 1.3 billion light years away – or, if you believe in the Christian God, consider how it’s believable that woman was created from a guy named Adam’s rib.    It’s all a bit far-fetched, but we buy these other explanations because the large network of culture and tradition and language and semiotics has built our brains/worldviews up this way.

Long ago we learned that our senses are clever biological interpreters of internal and external context.  Our eyes do not see most of “reality” – just a pretty coarse (30 frames per second) and small chunk of the electromagnetic spectrum (visible light).   In the 1930s we learned that even mathematics itself, and the computers we’d eventually construct, cannot prove many of the claims they make; we just have to accept those claims (incompleteness and the halting problem).

These are not flaws in our current understanding or current abilities.  These are fundamental features of reality – any reality at all.  In fact, without this incompleteness and clever loose interpretations of information between networks there would be no reality at all – no existence.   This is a claim to return to later.

In all theories, at the core, we are always left with uncertainty and probability statements.   We cannot state or refer to anything for certain; we can only claim some confidence that what we’re claiming or observing might, more or less, be a real effect or relation.   Even in mathematics, with some of the simplest theorems and their logical proofs, we must assume axioms we cannot prove – and while that’s an immensely useful trick, it certainly doesn’t imply that any of the axioms are actually true or refer to anything real.

The notion of probability and uncertainty is no easy subject either.   Probability is a measure of what?   Is it a measure of belief (Bayes) that something will happen given something else?  Is it a measure of lack of information – this claim carries only X% of the information?  Is it a measure of complexity?


Again, the notion of networks is incredibly helpful.  Probability is a measure of contingency.   Contingency, defined and used here, is a notion of connectivity of a network and the nodes within it.  There need be no hard and fast assignment of the unit of contingency – different measures are useful and instructive for different applications.  But there’s a basic notion at the heart of all of them: contingency is a cost function of going from one configuration of the network to another.

And that leads to another startling idea.   Spacetime itself is just a network (an obvious intuition from my previous statement), and everything is really just a spacetime network.    Time is not the ticks on a clock nor an arrow marching forward.  Time is nothing but a measure of the steps needed to reconfigure a network from some state A to some state B.   Reconfiguration steps are not done in time, they are time itself.
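A hedged sketch of that definition: if a configuration is just a set of edges, and one “step” flips a single edge on or off (my simplification, not a physical claim), then the “time” between two states is simply the count of flips separating them – the size of the symmetric difference of the edge sets.

```python
# Toy model: a configuration is a set of edges over fixed nodes.
# One "step" flips a single edge on or off (a simplifying assumption);
# "time" from A to B is then just the number of flips required,
# i.e. the size of the symmetric difference of the edge sets.
def reconfiguration_steps(config_a, config_b):
    return len(config_a ^ config_b)

A = {frozenset({"sun", "earth"}), frozenset({"earth", "moon"})}
B = {frozenset({"sun", "earth"}), frozenset({"sun", "moon"})}

# One edge removed (earth-moon), one added (sun-moon): two steps.
elapsed = reconfiguration_steps(A, B)
```

On this reading the cost function from the previous paragraph and “time” are the same kind of quantity: both count how far apart two configurations sit.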

(most of my initial thinking comes from Wolfram and others working on this long before my thinking about it: http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/ – Wolfram and others have done a ton of heavy lifting to translate the accepted theories and math into network terms).

This re-framing of everything into network thinking requires a huge amount of translation of notions of waves, light, gravity, mass, fields, etc. into network conventions.  While attempting that in blog form is fun, and I’ll keep doing it, the reality is that no amount of writing about this stuff will make a sufficient proof or even a useful explanation of the idea to people.

Luckily, it occurred to me (a contingent network myself!) that everyone is already doing this translation and, even more startling, that it couldn’t go any other way.   Our values and traditions started to be codified into explicit networks with the advent of written law and various cultural institutions like religion and formal education.   Our communities have now been codified into networks by online social networks.  Our locations and travels have been codified by GPS satellites and online mapping services.  Our theories and knowledge are being codified into wikis and programs (Wolfram Alpha, Google Knowledge Graph, deep learning networks, etc.).   Our physical interpretations of the world have been codified into fine arts, pop arts, movies and now virtual and augmented realities.   Our inner events/contexts are being codified by wearable technologies.    And now the cosmos has unlocked gravitational waves for us, so even the mysteries of black holes and dark matter will start being codified into knowledge systems.

It’s worth a few thoughts about Light, Gravity, Forces, Fields, Behavior, Computation.

  • Light (electromagnetic wave-particles) is the subnetwork encoding the total configurations of the entire universe and every subnetwork.
  • Gravity (and gravitational wave-particles) is the subnetwork of how all the subnetworks over a certain contingency level (mass) are connected.
  • The other 3 fundamental forces (electromagnetic, weak nuclear, strong nuclear) are also just subnetworks encoding how all subatomic particles are connected.
  • Field is just another term for network, hardly worth a mention.
  • Behavior observations are partially encoded subnetworks of the connections between subnetworks.  They do not encode the entirety of a connection except for the smallest, most simple networks.
  • Computation is time is the instruction set is a network encoding how to transform one subnetwork to another subnetwork.

These re-framed concepts allow us to move across phenomenal categories and up and down levels of scale and measurement fidelity.  They open up improved ways of connecting the dots between cross-category experiments and theories.   Consider radical behaviorism and schedules of reinforcement combined with the Probably Approximately Correct learning theory in computer science against a notion of light and gravity and contingency as defined above.

What we find is that learning and behavior based on schedules of reinforcement is actually the only way a subnetwork (say, a person) or a network of subnetworks (a community) could encode the vast contingent network (internal and external environments, etc.).   Some schedules of reinforcement maintain responses better than others, and again here we find the explanation.  Consider a variable ratio schedule reinforcing a network (see here for more details: https://en.wikipedia.org/wiki/Reinforcement#Intermittent_reinforcement.3B_schedules).   A variable ratio schedule (and variations/compositions on it) is a richer contingent network than, say, a fixed ratio network.  That is, as a network encoding information between networks (essentially a computer program and data), the variable ratio schedule has more algorithmic content to keep associations linked after many related network configurations.
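The “more algorithmic content” claim can be crudely illustrated (a sketch, not a proof – compressed size standing in for algorithmic content): a fixed ratio schedule reinforces every Nth response, producing a perfectly periodic record, while a variable ratio schedule reinforces at the same average rate but unpredictably.

```python
import random
import zlib

def fixed_ratio(n, responses):
    # Reinforce every n-th response: a perfectly periodic sequence.
    return bytes((i + 1) % n == 0 for i in range(responses))

def variable_ratio(n, responses, seed=0):
    # Reinforce each response with probability 1/n: same average rate,
    # but an unpredictable (richer) sequence.
    rng = random.Random(seed)
    return bytes(rng.random() < 1 / n for _ in range(responses))

fr = fixed_ratio(5, 10_000)
vr = variable_ratio(5, 10_000)

# Compressed size as a crude stand-in for algorithmic content:
# the periodic FR record compresses far better than the VR record.
fr_size = len(zlib.compress(fr))
vr_size = len(zlib.compress(vr))
```

Run over 10,000 responses, the fixed ratio record compresses to a small fraction of the variable ratio record’s size even though both reinforce at the same average rate – the VR schedule simply carries more information per reinforcement.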

Not surprisingly, this is exactly the notion of gravity explained above.  Richer, more complex networks with richer connections to other subnetworks have much more gravity – that is, they attract more subnetworks to connect.  They literally curve spacetime.

To add another wrinkle to the theory, it has been observed in a variety of categories that the universe seems to prefer computational efficiency.  Nearly all scientific disciplines from linguistics to evolutionary biology to physics to chemistry to logic end up with some basic notion of a “path of least effort” (https://en.wikipedia.org/wiki/Principle_of_least_effort).  In the space of all possible contingent situations, networks tend to connect in the computationally most efficient way – they encode each other efficiently.  That is not to say it happens that way all the time.  In fact, this idea led me to thinking that while all configurations of subnetworks exist, the most commonly observed ones (I use the term: robust) are the efficient configurations.  I postulate this explains mathematical constructs such as the Platonic solids and transcendental numbers, and likely the physical constants.  That is, in the space of all possible things, the mean of the distribution of robust things is the mathematical abstraction.  While we rarely experience a perfect circle, we experience many variations on robust circular things… and right in the middle of them is the perfect circle.
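The circle claim can be sketched numerically (a toy, with parameters chosen arbitrarily): sample many “robust circular things” – circles with noise on every radius – and the mean of the distribution sits at the ideal circle.

```python
import random

rng = random.Random(42)

def noisy_circle(radius=1.0, wobble=0.2, points=360):
    """A 'robust circular thing': a circle with noise on every radius."""
    return [radius + rng.uniform(-wobble, wobble) for _ in range(points)]

# Many imperfect circles; their average radius approaches the perfect one.
circles = [noisy_circle() for _ in range(2000)]
mean_radius = sum(sum(c) for c in circles) / (2000 * 360)
```

No single sampled circle is perfect, but `mean_radius` converges on 1.0 – the abstraction is the center of the distribution of its rough instances.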


Now, what is probably the most bizarre idea of all: nothing is actually happening, at the level of the universe or at the level of a photon.  The universe just is.   For a photon, which is just a single massless node, everything happens to it all at once – so nothing happens.

That’s right: despite all the words and definitions above, with all their connotations of behavior and movement and spacetime… experience and happening and events and steps and reconfigurations are actually just illusions, in a sense, of subnetworks describing other subnetworks.   The totality of the universe includes every possible reconfiguration of the universe – which obviously includes all theories, all explanations, all logics, all computations, all behaviors, all schedules in a cross product of each other.   No subnetwork is doing anything at all; it simply IS, and is that subnetwork within the specific configuration of the universe as part of the wider set of the whole.

This sounds CRAZY – until you look back on the history of ideas.  This notion has come up over and over regardless of the starting point, the condition of the observational tools, or the fads of language and business of the day.  It is even observable in how so many systems “develop” first as “concrete” physical, sensory things… they end up yielding, time and time again, to what we call the virtual – strangely looping recursive networks.  Here I am not contradicting myself; instead, this is what exists within the fractal nature of the universe (multiverse!): it is self-similar all the way up and down scales and across all configurations (histories).

Theories tend to be ignored unless they are useful.   I cannot claim utility for everyone in this theory.  I do find it helpful for myself in moving between disciplines and not getting trapped in syntactical problems.   And I find confirmation of my own cognitive bias in the fact that technologies for loosely connecting the dots – GPS, hyperlinks, search engines, social media, citation analysis, Bayes, and now deep learning/PAC – have yielded a tremendous expansion of information and re-imagining of the world.


Currency, writing, art, music are not concrete physical needs, and yet they mediate our labor, property, government, nation states.   Even things we consider “concrete” like food and water are just encodings of various configurations.  Food can be redefined in many ways, and has been over the eons as our abstracted associations drift.   Water seems like a concrete requirement for us, but “us” is under constant redefinition.  Should people succeed in creating human-like intelligence (however you define it) in computers or the Internet, it’s not clear water would be any more concrete than solar power, etc.

Then again, if I believe anything I’ve said above, it all already exists and always has.

 

———————————–

 

Chaitin on Algorithmic Information, just a math of networks.
https://www.cs.auckland.ac.nz/~chaitin/sciamer3.html

Platonic solids are just networks
https://en.m.wikipedia.org/wiki/Platonic_solid#Liquid_crystals_with_symmetries_of_Platonic_solids

Real World Fractal Networks
https://en.m.wikipedia.org/wiki/Fractal_dimension_on_networks#Real-world_fractal_networks

Correlation for Network Connectivity Measures
http://www.ncbi.nlm.nih.gov/pubmed/22343126

Various Measurements in Transport Networks (Networks in general)
https://people.hofstra.edu/geotrans/eng/methods/ch1m3en.html

Brownian Motion, the network of particles
https://en.m.wikipedia.org/wiki/Brownian_motion

Semantic Networks
https://en.wikipedia.org/wiki/Semantic_network

MPR
https://en.m.wikipedia.org/wiki/Mathematical_principles_of_reinforcement

Probably Approximately Correct
https://en.m.wikipedia.org/wiki/Probably_approximately_correct_learning

Probability Waves
http://www.physicsoftheuniverse.com/topics_quantum_probability.html

Bayes Theorem
https://en.m.wikipedia.org/wiki/Bayes%27_theorem

Wave
https://en.m.wikipedia.org/wiki/Wave

Locality of physics
http://www.theatlantic.com/science/archive/2016/02/all-physics-is-local/462480/

Complexity in economics
http://www.abigaildevereaux.com/?p=9%3Futm_source%3Dshare_buttons&utm_medium=social_media&utm_campaign=social_share

Particles
https://en.m.wikipedia.org/wiki/Graviton

Gravity is not a network phenomenon?
https://www.technologyreview.com/s/425220/experiments-show-gravity-is-not-an-emergent-phenomenon/

Gravity is a network phenomenon?
https://www.wolframscience.com/nksonline/section-9.15

Useful reframing/rethinking Gravity
http://www2.lbl.gov/Science-Articles/Archive/multi-d-universe.html

Social networks and fields
https://www.researchgate.net/profile/Wendy_Bottero/publication/239520882_Bottero_W._and_Crossley_N._(2011)_Worlds_fields_and_networks_Becker_Bourdieu_and_the_structures_of_social_relations_Cultural_Sociology_5(1)_99-119._DOI_10.11771749975510389726/links/0c96051c07d82ca740000000.pdf

Cause and effect
https://aeon.co/essays/could-we-explain-the-world-without-cause-and-effect

Human Decision Making with Concrete and Abstract Rewards
http://www.sciencedirect.com/science/article/pii/S1090513815001063

The Internet
http://motherboard.vice.com/blog/this-is-most-detailed-picture-internet-ever


The idea of control is absurd, guns or not.   The world is far too complicated to predict events, system behaviors, or even whether your email will send when you hit the send button.  Prediction is a necessary (but not sufficient) condition of control.  And when we say “gun control” we believe we can predict who would be a responsible user of a gun and who wouldn’t.   We believe that with the proper equipment features we can control what happens when a user pulls the trigger, or that it’s actually the user who owns the gun… and so on.  It’s literally all based on an absurd premise.

Guns in the Game of Life

And yet, control is exactly the fallacy of our political and social systems.  Guns and other tools of destruction provide the operator the illusion of control.  Lightweight, token regulatory laws provide the population the illusion of control.  These illusions really just mask the chaos of a contingent world.  Any distressed person operates under highly conflicting contingencies or has lost the ability to recognize contingencies (of behavior and consequence).   In fact, this happens to all of us all of the time.  We live under near-constant confirmation and related behavioral/cognitive biases as a result of our limited perceptive systems and neural componentry (and often sick and broken bodies).   Our systems constantly pattern-recognize incorrectly (we think God helps us score touchdowns…).  These incomplete interpretations of the contingencies of the world become especially problematic in a stressed and distressed situation.   (I’ll skip a deep discussion of behavioral, physical and chemical science and just lump all of it into the idea that we are all systems ecologically looking for homeostasis/equilibrium.)

When contingencies conflict or get very confused, and the environment is primed properly, disaster is more likely to occur.  Priming includes a cultural dimension, accessibility of destructive tools, lack of obstacles to act, etc.   Combine that with stress, illness, and chemicals (drugs/booze/etc.) in a person and a more combustible situation emerges – this is the nature of probability and complexity.

Proponents of guns and various “let’s all pack heat” strategies suffer from the same delusions of control as perpetrators of mass killings and gun murders.  The world is not fundamentally controllable – in situations with guns and situations without guns.  Every person and system is a collection of contingencies – the collective probabilities of circumstance and events.   For instance, at Christmas time, if you have hot colored lights plugged in, faulty electric outlets and a dead, dry pine tree in your living room, you have increased the chance of burning your house down.   I assure you there are lower-probability-of-raging-fire decorations you can display in your home.

The key to dealing with our uncontrollable world isn’t pretending control exists.   We either increase or reduce probabilities of events by changing ourselves and/or the environment.   Changing the contingencies is non-trivial and multifaceted.   One key is to not put too many degrees of freedom between an act and the experience of the consequences of that act.  This is a subtle but very important point.   Many studies show humans are not good at anticipating delayed consequences – delay in time and indirectness (associations) of consequences.  This truth is at the heart of addiction formation, financial debt, wars, education and literacy, and so on.  You can do your own study on this truth by reminding yourself of your last Vegas trip, checking your alerts for all those idiot Candy Crush notifications from your “friends,” looking at your credit card bills, or reviewing your local church (and Bible!) for policies on tithing and confession and promises of heaven and hell.

Guns are so easy (very few contingencies) to obtain and use (poorly) that there is almost NO PERCEIVABLE IMMEDIATE CONSEQUENCE to gun ownership relative to THE DELAYED ULTIMATE CONSEQUENCE of gun usage.  Pulling a trigger is such a simple act… even gun makers and the NRA know this.  It’s why they stratify guns by level of effort to use: manual, semi-automatic, automatic and so on.   The stratification is built on the idea that the more work you add for the user, the less they can kill – and the more time it takes to load and fire rounds, the more the prey and other contingent circumstances can adjust in response.  This is all highly consistent logic and observable phenomena.

Most systems, including individual people, operate on a strategy of efficiency, AKA the path of least resistance.   We resolve our stresses efficiently (according to our own weird histories/abilities).   When guns are easy to get, that’s an outlet we go with (replace guns with drugs, TV, gambling, sex, food, yoga, etc.).   We know this truth.   We’ve used it forever… grocery stores get ya every time with this.   And so does the government.  Some things it makes hard to do or get (more contingent): health care, food stamps, driver’s licenses, info on NSA programs.   Some things it makes easy (less contingent): paying your taxes (do it online!  send cash!), getting parking tickets, buying lottery tickets, campaign donating!

Never underestimate the power of laziness! (capitalism and governments/kings and religion don’t!)

If people generally didn’t operate this way, voter turnout would be 100%, education rates would be off the charts, and no one would ever buy a lottery ticket or use a slot machine again (well, at least they might pull the handle instead of auto-spinning).

I firmly believe in the complete disarmament and aggressive buy back and destruction of all arms – civilian and otherwise. For this country and all of them.   I believe in trying to get the probability of widespread carnage and unintended consequences as low as possible.   While compromise is inevitable my position is not one of compromise.

If you’re for guns, or even a gun apologist, you really just don’t trust the world; you believe in control and want to maintain what you perceive as a competitive advantage over the unarmed or the less well armed.   Perhaps it is a competitive advantage, local to you.   System-wide, you’re increasing the chance of unintended disasters, and you’re partially complicit, more or less, in the continuing violence against kids and students.   You are also probably OK with it, or don’t believe it, because the consequences of your slight increase in the probability of someone else’s disaster don’t register in your pattern recognizer.

p.s.

As I said earlier… lowering the probability of gun violence takes more than gun laws.  It takes education, first and foremost.  And it takes economic opportunity, better health care, jobs, love, and everything in between.  I choose to be complicit in increasing those things at the expense of my right to bear arms.   We’re all just a small piece of a contingent and uncontrollable world, and I’d rather stand in perspective and connection with people than behind armor, triple-locked doors and concealed weaponry.


David Deutsch wrote an interesting essay back in 2012 (http://aeon.co/magazine/technology/david-deutsch-artificial-intelligence/).   His books follow similar themes and this article is a useful condensation of his ideas – most notable of which is the idea that intelligence/creativity/knowledge/universal computation is fundamentally about EXPLANATION – not instruction, not arithmetic, not reinforcement learning, etc.

He decries the lack of progress in artificial intelligence as due to flawed premises of the entire enterprise.  He makes the case that “the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos.”   He declares self-awareness a real thing and claims that universal computation can achieve it.   And he concludes that all the ingredients for artificial intelligence are encoded in DNA; we just need the right idea to unlock them and to use that idea in other substrates to create other generally intelligent entities.

I’m grossly simplifying the article, which is a gross simplification of his books already, but that’s because most of the details are irrelevant.   Deutsch rightly ridicules the current prevailing approaches to AI and their inevitable failure to ultimately deliver intelligence.  I agree with a lot of his reasoning about why AI built on current behaviorist/inductive-instruction approaches is doomed.    But I disagree with him on pretty much everything else, because he himself has built his arguments on flimsy premises.   He assumes, as almost all scientists and philosophers and people do, that knowledge is something.  Something that is embodied, something that exists.    I agree in a very simple way that anything that is learned must be learned through trial and error by the entity learning it, and that learning doesn’t happen through transcription.  But it’s not because knowledge results.

Knowledge is a reductive term that explains nothing and doesn’t really even clearly represent anything.  It’s not a concept that can be explained categorically or through endless descriptions.  It is simply a general concept that can sort of be used to refer to lots of things.

Intelligence is the same kind of concept.  It refers to nothing in particular.   Self awareness, same.  Good and Evil, same.   Consciousness.  Free Will.  All the same.

AI won’t be coming because Real Intelligence isn’t a thing.

Learning is a slightly less reified concept than the others because it sort of gets at the point.  The point of all computation and any perceived awareness is merely connectivity in a network/graph-theory sense.   Advanced behaviors and “creativity” etc. are merely effects of a hyper-connected network.   Learning is CONNECTIONS.   “Knowledge” is CONNECTIONS.
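If learning just is connections, here is a toy way to watch it (the function and the bell/food example are my own, loosely echoing classical conditioning): as associations accumulate in a network, what any one node can reach grows.

```python
# Toy: "learning" as nothing but added connections. As edges accumulate,
# the set of nodes each node can reach grows.
def reachable(edges, start):
    """Nodes reachable from start by following undirected edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in edges:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

associations = []
associations.append(("bell", "food"))        # one learned link
associations.append(("food", "salivation"))  # another
# After two connections, "bell" now reaches "salivation".
learned = reachable(associations, "bell")
```

Nothing here is “knowledge” as a substance; there is only the growing reachability of an otherwise unchanged set of nodes.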

Deutsch is correct that no one will be programming an AI.   If something we might call AI comes to exist it won’t be because we specifically designed it.   I would argue that it already exists, always has.   It’s highly flawed to think that humans are the only things capable of awareness and thinking.   It simply doesn’t add up.  But that’s an argument for another post.

Everything is connected.   Extensions of connections continue to evolve as more things connect to more other things in more ways.   It’s such a simple, boring concept that it doesn’t seem that it would “EXPLAIN” it all.  It doesn’t.  It won’t.  Because explanations are not the stuff of existence – intelligence or otherwise.

Thinking is not a thing.   Thoughts are not things.   There are connections between neurons and cells and organs and computers and planets and trees and galaxies and numbers and words and pictures and colors.  Where one thing ends and another begins is not at all clear… even with the “laws of physics,” which Mr. Deutsch uses almost exclusively to justify everything.   The laws aren’t really laws.  Ironically.

What Are We?  There is no answer because it’s always changing.


From within the strange loop of self-reference the question “What is Data?” emerges.  Ok, maybe more practically the question arises from our technologically advancing world, where data is everywhere, spouting from everything.  We claim to have a “data science,” we now operate on “big data,” and we have evolving laws about data collection and data use.   Quite an intellectual infrastructure for something that lacks identity or even a remotely robust and reliable definition.  Should we entrust our understanding and experience of the world to this infrastructure?   The question seems stupid and ignorant.  However, we have taken up a confused approach in all aspects of our lives by putting data ontologically on the same level as real, physical, actual stuff.    So now the question must be asked and must be answered, and its implications drawn out.

Data is and Data is not.   Data is not data.   Data is not the thing the data represents or is attached to.   Data is but an ephemeral puff of exhaust from a limitless, unknowable universe of things and their relations. Let us explore.

Observe a few definitions and usage patterns:

Data According to Google

https://www.google.com/webhp?sourceid=chrome-instant&rlz=1CAZZAD_enUS639US640&ion=1&espv=2&ie=UTF-8#q=data+definition

The Latin roots point to the looming mystery: “give” -> “something given.”   Even back in history, data was “something.”   Almost an anti-definition.

Perhaps we can find clues from clues:

Crossword Puzzle Clues for “Data”

http://www.wolframalpha.com/input/?i=data&a=*C.data-_*Word-

Has there been a crossword puzzle word with broader meaning or more ambiguity than that?   “Food for thought?” seems to hit the nail on the head.   The clues boil down to: data is numbers, holdings, information, facts, figures, fodder, food, grist, bits.   Sometimes crunched and processed, sometimes raw.  Food for thoughts, disks, banks, charts and computers.


YouTube usually can tell us anything; here’s a video directly answering What Is Data:

Strong start in that video, Qualitative and Quantitative… and then by the end the video unwinds the definitions to include basically everything.

Maybe a technical lesson on data types will help elucidate the situation:

Data Types

Perhaps sticking to computers as a frame of reference helps us.   Data is stuff stored in a database, specified by data types.  What exactly is stored?   Bits on a magnetic or electric device (hard drive or memory chip) are arranged according to structure defined by this “data,” which is itself defined or created or detected by sensors and programs…   So is the data the bit?  The electrical signal?  The magnetic structures on the disk?  A pure idea regardless of physical substrate?
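A small illustration of that ambiguity using Python’s struct module: the same four stored bytes read back as entirely different “data” depending on the type we impose on them.

```python
import struct

# Four raw bytes on a disk or in memory -- which "data" are they?
raw = b"\x42\x48\x45\x59"

as_int = struct.unpack("<I", raw)[0]    # one 32-bit unsigned integer
as_float = struct.unpack("<f", raw)[0]  # one 32-bit float
as_text = raw.decode("ascii")           # four characters: "BHEY"
```

The bits never change; only the reading convention does. The “data” is not in the substrate but in the agreement about how to interpret it.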

The confusing self-referential nature of the situation is wonderfully exploited by Tupper’s formula:

Tupper's formula

http://mathworld.wolfram.com/TuppersSelf-ReferentialFormula.html

What exactly is that?  It’s a pixel rendering (bits in memory turned into electrons shot at a screen or LED excitations) of a formula (which is a collection of symbols) that, when fed through a brain or a computer programmed by a brain, ends up producing a picture of that formula….
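Reduced to integer arithmetic (a sketch; the function names are mine), Tupper’s inequality just tests whether one bit of floor(y/17) is set – which makes the self-reference transparent: any 106×17 bitmap, including a picture of the formula itself, can be packed into the constant k.

```python
def tupper_bit(x, y):
    """Tupper's inequality
    1/2 < floor(mod(floor(y/17) * 2^(-17*floor(x) - mod(floor(y),17)), 2))
    holds exactly when bit (17*x + y%17) of floor(y/17) is 1."""
    return (y // 17 >> (17 * x + y % 17)) & 1

def decode(k, width=106):
    """Render the width x 17 bitmap the formula draws for y in [k, k+17)."""
    return [[tupper_bit(x, k + r) for x in range(width)] for r in range(17)]

def encode(bitmap):
    """Inverse: build the k whose strip of the plane shows this bitmap."""
    n = 0
    for r, row in enumerate(bitmap):
        for x, bit in enumerate(row):
            if bit:
                n |= 1 << (17 * x + r)
    return 17 * n
```

`encode(decode(k)) == k` for any valid k: the picture is the number and the number is the picture, which is the whole joke of the formula.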

The further we dig the less convergence we seem to have.   Yet we have a “data science” in the world and employ “data scientists” and we tell each other to “look at the data” to figure out “the truth.”

Sometimes philosophy is useful in such confusing situations:

Information is notoriously a polymorphic phenomenon and a polysemantic concept so, as an explicandum, it can be associated with several explanations, depending on the level of abstraction adopted and the cluster of requirements and desiderata orientating a theory.

http://plato.stanford.edu/entries/information-semantic/

Er, that doesn’t seem like convergence either.  By all means we should read that entire essay; it’s certainly full of data.

Ok, maybe someone can define Data Science and in that we can figure out what is being studied:

https://beta.oreilly.com/ideas/what-is-data-science

That’s a really long article that points to data science as a duct-taped, loosely linked set of tools, processes, disciplines and activities to turn data into products and tell stories.   There’s clearly no simple definition or identification of the actual substance of data found there, or in any other description of data science readily available.

There’s a certain impossibility of definition and identification looming.   Data isn’t something concrete.  It’s “of” everything.  It appears to be a shadowy representational trace of phenomena and relations and objects that is itself encoded in phenomena and relations and objects.

There’s a wonderful aside in Matt Parker’s great book “Things to Make and Do in the Fourth Dimension”:

Finite Nature of Data


https://books.google.com/books?id=wK2MAwAAQBAJ&lpg=PP1&dq=fourth%20dimension%20math&pg=PP1#v=onepage&q=fourth%20dimension%20math&f=false

Data seems to have a finite, discrete property to it and yet is still very slippery.  It is reductive – a compression of the infinite patterns in the universe – and yet it is itself a pattern: compressed traces of actual things.   Data is wisps of existence, a subset of existence.   Data is an optical and sensory illusion, an artifact of the limitedness of the sensor and the irreducibility of connections between things.
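One concrete way to see the compression (a sketch of my own, not from Parker’s book): finitely many samples cannot distinguish infinitely many distinct phenomena. Two genuinely different sine waves, sampled once per second, leave identical traces – the classic aliasing effect:

```python
import math

def sample(freq_hz, n_points, rate_hz=1.0):
    """Record a finite trace of a continuous signal: the 'data'."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz) for n in range(n_points)]

# Two genuinely different signals...
trace_a = sample(0.1, 20)   # 0.1 Hz sine
trace_b = sample(1.1, 20)   # 1.1 Hz sine

# ...leave the same finite trace: the data is a compressed shadow that
# cannot point back to a unique reality.
assert all(abs(a - b) < 1e-6 for a, b in zip(trace_a, trace_b))
```

The trace is real data about both signals, yet it cannot tell you which one was actually out there.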

Data is not a thing.   It is of things, about things, traces of things, made up of things.

There can be no data science.   No scientific method is possible.   Science is done with data, but cannot be done on data.  One doesn’t do experiments on data; experiments emit and transcode data, but data itself cannot be experimental.

Data is art.   Data is an interpretive literature.  It is a mathematics – an infinite regress of finite compressions.

Data is undefined and belongs in the set of unexplainables: art, infinity, time, being, event.

Data = Art


The edges of existence.

Everything is an edge – an edge of an edge – an edge of an edge of an edge. Existence is an infinite regress of edges encoding, decoding and recoding other infinite regressing edge networks. The explanations for the unexplained, even in their simplicity, are infinite regresses.

A dictionary is a book of words defining words. Where does a definition end?
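The circularity can be made literal. In the toy dictionary below (a made-up four-entry one, purely for illustration), follow any definition to the next defined word and you always return to a word you have already visited – no definition ever bottoms out:

```python
def definition_chain(dictionary, start):
    """Follow each definition to the first word that is itself an entry,
    until some word repeats -- which, in a finite dictionary, it must."""
    seen, word = [], start
    while word not in seen:
        seen.append(word)
        word = next(w for w in dictionary[word] if w in dictionary)
    return seen + [word]

# A made-up miniature dictionary (the entries and wordings are mine).
toy = {
    "word":    ["a", "sign", "carrying", "meaning"],
    "meaning": ["what", "a", "word", "conveys"],
    "sign":    ["a", "mark", "standing", "for", "meaning"],
    "mark":    ["a", "visible", "sign"],
}

# Starting from "word" the chain runs word -> sign -> mark -> sign: a loop.
```

A real dictionary is just this graph with a few hundred thousand nodes; the loops are longer, not absent.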

Human language is a loose collection of rules to be excepted and exceptions to be ruled by effect. If a communication communicates, it’s acceptable?

Sensory perceptions and the instruments of perception cannot be fully perceived. Are we to believe our eyes about our eyes?

Mathematics and its objects and relations are designed to perfectly articulate all that is the case, and yet hiding within infinity are infinities and transcendentals that cannot be defined or systematically discovered, and can hardly even be described. (http://vihart.com/transcendental-darts/)

Our science, modernized from the mystics (Kepler) and numerologists (Newton) and the faithful (Leibniz), strikes out, pathetically, against leaps of faith. This science has likely led to the heating of the planet via industry – which now can only be reversed by more science?

Turing conceived computers to mirror the way humans thought – conceived at a time when our collective knowledge of brains was rather small. Ironically, within a few lines of code computers (theoretical and physical) become nearly inscrutable in terms of what they might do. Are more inscrutable machines required to create and understand more inscrutable machines?
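A few lines really do suffice. The Collatz procedure below (my example, not Turing’s) is about as simple as programs get, yet whether it halts for every starting number is a famous open problem – nobody can read off the answer from the code:

```python
def collatz_steps(n):
    """Count iterations of the Collatz rule until n reaches 1.
    Whether this loop terminates for every n >= 1 is an open conjecture."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Tiny inputs, wildly unpredictable behavior: 26 settles in 10 steps,
# while its neighbor 27 wanders up past 9,000 before falling back to 1.
```

Three lines of arithmetic, and the best mathematicians alive cannot say what it will do in general.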

Currency is abstracted not just from physical objects but from any tangible value other than a sustained belief that this $ will be understood and honored by some anonymous entity beyond oneself. That belief is sustained by what most label “the dismal science” (economics) and its backer, the state.

The desired progress of all of the above can be summarized as “prediction”. If something is predictable it is controllable – that is the underlying point of most modern obsessions with science, technology and information. And yet our most precise and abstracted efforts have shown that prediction, by and large, is impossible – not just for complex systems of the natural world but for the very simple mathematical objects we create. https://www.youtube.com/watch?v=sHYFJByddl8
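The impossibility shows up in one line of arithmetic. The logistic map below is a standard chaos-theory illustration (my choice of example): the rule is fully deterministic, yet a measurement error of one part in ten billion destroys all predictive power within a few dozen steps.

```python
def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a fully deterministic rule."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-10, 60)   # the same "system", measured a hair differently

# Early on the two futures agree; by step ~45 they are unrelated.
assert abs(a[5] - b[5]) < 1e-5
assert max(abs(x - y) for x, y in zip(a[45:], b[45:])) > 0.1
```

Perfect knowledge of the rule buys nothing without perfect knowledge of the state – and perfect knowledge of the state is exactly what finite beings never have.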

Despite all the empirical evidence over hundreds of thousands of years and the theoretical proofs of the 20th century as a whole, our culture – primarily in the US but spreading elsewhere – simply refuses to give up on control through prediction. The refusal persists, likely, because we are beings limited in energy and time who need whatever perceived advantage we can get. Right? The seeming identification of a pattern reinforces that identification when paired with the perception of reward or advantage. That is, learning itself is an edge of an edge of an edge, fully and infinitely regressive to its own contradiction.

Prediction and learning and control are all about probability. For a prediction to be useful it must tell us something about the probability of conditions coming to be. For us to do something based on a prediction we must believe that prediction to be at least as accurate as the probability of the events it predicts. That is, our beliefs should only be as strong as the probability predicted. Or so logic would suggest. However, probability itself turns out, no surprise here, to be an infinite regress. Probability is really a statement about lack of information. (Sure, some people argue that chance/randomness is intrinsic to existence while others say it’s an artifact of our limited perceptions. In either case our ability to say anything about the existence of things comes down to ignorance and the infinite regress of existence.)
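A pseudo-random generator makes the “lack of information” point concrete. The sketch below uses a textbook linear congruential generator (with the well-known Numerical Recipes constants); its output looks like chance only to an observer missing one number, the seed:

```python
def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator: 'random' to anyone without the seed."""
    out, x = [], seed
    for _ in range(count):
        x = (a * x + c) % m
        out.append(x / m)   # scale into [0, 1)
    return out

# To an observer who knows the hidden state, the stream is perfectly predictable...
assert lcg(42, 10) == lcg(42, 10)
# ...while a tiny change in that hidden information yields an unrelated stream.
assert lcg(42, 10) != lcg(43, 10)
```

The “randomness” was never in the sequence; it was in what the observer didn’t know.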

This information remains forever out of reach. It is both at the heart of everything and the edge of everything. We cannot know. We can only play with these edges, find more of the edges, recode edges into edges. Our struggles philosophically, scientifically, spiritually and educationally come down to this straightforward non-fact. Should we continue our answer- and prediction-seeking efforts in spite of their impossible hope? That is a personal question each of us will have to answer over and over. For me, I will – not so I can be right or in control, but because I enjoy the edge and want to live outside of control. I paint to paint, not because the painting says something about reality. “The good life” is proportional to the number of edges explored, clung to, jumped from, thrown away, revisited, and combined.


The human race began a path towards illiteracy when moving pictures and sound began to dominate our mode of communication. Grammar checking word processors and the Internet catalyzed an acceleration of the process. Smartphones, 3-D printing, social media and algorithmic finance tipped us towards near total illiteracy.

The complexity of the machines has escaped our ability to understand them – to read them and interpret them – and now, more importantly, to author them. The machines author themselves. We inadvertently author them without our knowledge. And, in a cruel turn, they author us.

This is not a clarion call to arms to stop the machines. The machines cannot be stopped, for we will never want to stop them – they are too intertwined with our survival (the race to stop climate change and/or escape the planet will not be won without the machines). It is a call for a return to literacy. We must learn to read machines and maintain our authorship if we at all wish to avoid unwanted atrocities and a painful decline into possible evolutionary irrelevance. If we wish to mediate the relations between each other we must remain the authors of those mediations.

It does not take artificial intelligence for our illiteracy to become irreversible. It is not the machines that will do us in and subjugate us and everything else. Intelligence is not the culprit. It is ourselves and the facets of ourselves that make it too easy to avoid learning what can be learned. We plunged into a dark ages before. We can do it again.

We are in this situation, perhaps, unavoidably. We created computers and symbolics that are good enough to do all sorts of amazing things. So amazing that we just went and unleashed things without all the slowness of evolutionary and behavioral consequences we’ve observed played out on geological time scales. We have unleashed an endless computational kingdom of a variety rivaling that of the entire history of Earth. Here we have spawned billions of devices with billions and billions of algorithms and trillions and trillions and trillions of data points about billions of people and trillions of animals and a near-infinite hyperlinkage between them all. The benefits have outweighed the downsides in terms of pure survival consequences.

Or perhaps the downside hasn’t caught us yet.

I spend a lot of my days researching, analyzing and using programming languages. I do this informally, for work, for fun, for pure research, for science. It is my obsession. I studied mathematics as an undergraduate – it too is a language most of us are illiterate in, and yet our lives are dominated by it. A decade ago I thought the answer was simply this:

Everyone should learn to program. That is, everyone should learn one of our existing programming languages.

It has more recently occurred to me that this is not only unrealistic, it is actually a terrible idea. Programming languages aren’t like English or Spanish or Chinese or any human language. They are much less universal. They force constraints we don’t understand and yet don’t allow for any wiggle room. We can only speak them by typing incredibly specific commands on a keyboard connected to a computer architecture we thought up 50 years ago – which isn’t even close to the dominant form of computer interaction most people use (phones, tablets, TVs, game consoles, with games, maps, txt messages and mostly consumptive apps). Yes, it’s a little more nuanced than that, in that we have user interfaces that try to allow us all sorts of flexibility in interaction and handle the translation to specific commands for us.

Unfortunately that largely doesn’t work. Programming languages are not at all like how humans program each other. They aren’t at all how birds or dogs or dolphins communicate. They start as an incredibly small set of rules that must be obeyed or something definitely will break down (a bug! a crash!). Sure, we can write an infinite number of programs. Sure, most languages and the computers we use to run the programs written in them are universal computers – but that doesn’t make them anywhere near as flexible and useful as natural language (words, sounds, body language).
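The brittleness is easy to demonstrate. A human reader shrugs off a dropped character; the language’s parser cannot (a sketch using Python’s own built-in `compile`):

```python
program      = "total = 1 + 2"
typo_program = "total = 1 +"     # one dropped character's worth of meaning

exec(program)                    # the well-formed version runs fine

# A human reads the typo and infers the intent; the machine refuses outright.
try:
    compile(typo_program, "<typo>", "exec")
    crashed = False
except SyntaxError:
    crashed = True
assert crashed
```

Natural language degrades gracefully under noise; programming languages, by design, do not degrade at all – they break.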

As it stands now we must rely on about 30 million people on the entire planet to effectively author and repair the billions and billions of machines (computer programs) out there (http://www.infoq.com/news/2014/01/IDC-software-developers).

Only 30 million people speak computer languages effectively enough to program them. That is a very far cry from a universal or even natural language. Most humans can understand any other human, regardless of the language, on a fairly sophisticated level – we can easily tell each other’s basic states of being (fear, happiness, anger, surprise, etc.) and begin to scratch out sophisticated relationships between ideas. We cannot do this at all with any regularity or reliability with computers. Certainly we can communicate some highly specific ideas/words/behaviors to some highly specific programs – but we cannot converse with a program/machine in any general way, even remotely. We can only rely on some of the 30 million programmers to improve the situation slowly.

If we’re going to be literate in the age of computation our language interfaces with computers must become much better. And I don’t believe that’s going to happen by billions of people learning Java or C or Python. No, it’s going to happen by the evolution of computers and their languages becoming far more human-authorable. And it’s not clear the computers’ survival depends on it. I’m growing in my belief that humanity’s survival depends on it, though.

I’ve spent a fair amount of time thinking about what my own children should learn in regards to computers. And I have not at all shaped them into learning some specific language of today’s computers. Instead, I’ve focused on them asking questions and not being afraid of the confusing, probabilistic nature of the world. It is my educated hunch that the computer languages of the future will account for improbabilities and actually rely on them, much as our own natural languages do. I would rather have my children be able to understand our current human languages in all their oddities and all their glorious ability to express ideas and questions, and forever be open to new and different interpretations.

The irony is… teaching children to be literate in today’s computer programs, as opposed to human languages and expressions, is, I think, likely to leave them more illiterate in the future, when the machines or their human authors have developed a much richer way to interact. And yet the catch-22 is that someone has to develop these new languages. Who will do it if not myself and my children? Indeed.

This is why my own obsession is to continue to push forward a more natural and messier idea of human-computer interaction. It will not look like our engineering efforts today, with their focus on speed and efficiency and accuracy. Instead it will focus on richness and interpretative variety and serendipity and survivability over many contexts.

Literacy is not mere efficiency. It is a much deeper phenomenon – one that we need to explore further, and in that exploration not settle for the computational world as it is today.


The Point

Everything is a pattern and connected to other patterns.   The variety of struggles, wars, businesses, animal evolution, ecology, cosmological change – all are encompassed by the passive and active identification and exploitation of changes in patterns.

What is Pattern

Patterns are thought of in a variety of ways – a collection of data points, pictures, bits and bytes, tiling.   All of these common sense notions can be mapped to the abstract notion of a graph or network: nodes and their connections, edges.   It is not important, for the sake of the early points of this essay, to worry too much about the concept of a graph or network or its mathematical or epistemological construction.   The common sense ideas that come to mind should suffice – everything is a pattern connected to other patterns. E.g. cells are connected to other cells, sometimes grouped into organs connected to other organs, sometimes grouped into creatures connected to other creatures.
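The cells-to-organs-to-creatures example can be written down directly as such a network. An adjacency list is one minimal sketch (the representation and function names here are my own illustration):

```python
# Nodes and edges: each pattern listed with the patterns it connects to.
pattern_network = {
    "cell":     ["cell", "organ"],      # cells connect to cells, group into organs
    "organ":    ["organ", "creature"],  # organs connect to organs, group into creatures
    "creature": ["creature"],           # creatures connect to other creatures
}

def reachable(network, start):
    """Every pattern a given pattern is (transitively) connected to."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(network.get(node, []))
    return seen

# From a single cell the whole hierarchy of patterns is reachable.
```

Nothing about the structure privileges one level over another; “organ” is just a node that happens to sit between two others.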

Examples

As can be imagined, the universe has a practically infinite number of methods of pattern identification and exploitation. Darwinian evolution is one such example of a passive pattern identification and exploitation method. The basic idea behind it is generational variance with selection by consequences. Genetics combined with behavior within environments encompasses various strategies emergent within organisms which either hinder or improve the strategy’s chance of survival. Broken down, perhaps too simplistically: an organism (or collection of organisms or raw genetic material) must be able to identify threats, energy sources and replication opportunities, and exploit these identifications better than the competition.   This is a passive process overall because the source of identification and exploitation is not built into the pattern selected; it is emergent from the process of evolution. On the other hand, subprocesses within the organism (the object of pattern we’re considering here) can be active – such as the processing of an energy source (eating, digestion and metabolism).

Other passive pattern processes include the effects of gravity on solar systems and celestial bodies, on down to their effects on planetary ocean tides and other phenomena.   Here it is harder to spot the identification aspect.   One must abandon the Newtonian concept and focus on relativity, where gravity is the name for changes to the geometry of spacetime.   What is identified is the geometry, and different phenomena exploit different aspects of the resulting geometry.   Orbits form around a sun because of the sun’s dominant effect on the geometry, and the result can be exploited by planets that form with the right materials and fall into just the right orbit to be heated just right to create oceans gurgling up organisms and so on.   It is all completely passive – at least with our current notion of how life may have formed on this planet. It is not hard to imagine, based on our current technology, how we might create organic life forms by exploiting identified patterns of chemistry and physics.

In similar ways the trajectory of artistic movements can be painted within this pattern theory.   Painting is an active process of identifying form, light, composition and materials, and exploiting their interplay to represent, misrepresent or simply present pattern.   The art market is an active process of identifying valuable concepts or artists or ideas and exploiting them before mimicry or other processes over-exploit them, until the value of novelty or prestige is nullified.

Language and linguistics are the identification and exploitation of symbols (sounds, letters, words, grammars) that carry meaning (the meaning being built up through association (pattern matching) with other patterns in the world (behavior, reinforcers, etc.)).   Religion, for the organizers, is the active identification and exploitation of imagery, language, story, tradition and habits that maintain devotional and evangelical patterns. Religion, for the practitioner, can be active and passive maintenance of those patterns. Business and commerce is the active (sometimes passive) identification and exploitation of efficient and inefficient patterns of resource availability, behavior and rules (asset movement, current social values, natural resources, laws, communication mediums, etc.).

There is not a category of inquiry or phenomena that can escape this analysis.   Not because the analysis is so comprehensive but because pattern is all there is. Even the definition and articulation of this pattern theory is simply a pattern itself which only carries meaning (and value) because of the connection to other patterns (linear literary form, English, grammar, word processing programs, blogging, the Web, dictionaries).

Mathematics and Computation

It should be of little surprise that mathematics and computation form the basis of so much of our experience now.   If pattern is everything and all patterns are in competition, it makes some common sense that efficient pattern translation and processing would arise as a dominant concept, at least in some localized regions of existence.

Mathematics’ effectiveness in a variety of situations/contexts (pattern processing) is likely tied to its general, albeit often obtuse and very abstracted, ability to identify and exploit patterns across a great many categories.   And yet we’ve found that mathematics is likely NOT THE END GAME. As if anything could be the end game.   Mathematics’ own generality (which we could read as reductionism, a lack of full fidelity to patterns) does it in – the proof of incompleteness showed that mathematics itself is a pattern of patterns that cannot encode all patterns. Said differently – mathematics’ incompleteness necessarily means that some patterns cannot be discovered nor encoded by the process of mathematics.   This is not a hard metaphysical concept. Incompleteness merely means that even for formal systems such as regular old arithmetic there are statements (theorems) whose logical truth or falsity cannot be established within the system. Proofs are also patterns to be identified and exploited (is this not what pure mathematics is!) and yet we know, because of proof, that we will always have patterns, called theorems, that will not have a proof.   Lacking a proof for a theorem doesn’t mean we can’t use the theorem; it just means we can’t count on the theorem to prove another theorem – i.e. we won’t be doing mathematics with it.   It is still a pattern, like any sentence or painting or concept.

Robustness

The effectiveness of mathematics is its ROBUSTNESS. Robustness (a term I borrow from William Wimsatt) is the feature of a pattern whereby, when it is processed from multiple other perspectives (patterns), the inspected pattern maintains its overall shape.   Some patterns maintain their shape only within a single or limited perspective – all second-order and higher effects are like this. That is, anything that isn’t fundamental is some order of magnitude less robust than things that are.   Spacetime geometry seems to be highly robust as a pattern of existential organization.   The effect-carrying ether, as proposed more than 100 years ago, is not.   Individual artworks are not robust – they appear different from every different perspective. Color as commonly described is not robust.   Wavelength is.

While much of mathematics is highly robust or rather describes very robust patterns it is not the most robust pattern of patterns of all. We do not and likely won’t ever know the most robust pattern of all but we do have a framework for identifying and exploiting patterns more and more efficiently – COMPUTATION.

Computation, by itself. 

What is computation?

It has meant many things over the last 150 years.   Here it is defined simply as patterns interacting with other patterns.   By that definition it probably seems like a bit of a cheat to define the most robust pattern of patterns we’ve found as patterns interacting with other patterns. However, it cannot be otherwise. Only a completely non-reductive concept would fit the necessity of robustness.   The nuance of computation is that there are more or less universal computations.   The ultimate robust pattern of patterns would be a truly universal-universal computer that could compute anything, not just what is computable.   The real numbers are not computable; the integers are.   A “universal computer” as described by today’s computer science is a program/computer that can compute all computable things. So a universal computer can compute the integers but can never fully produce a real number like pi, e or the square root of 2 – only ever finite approximations of it. We can prove this and have (the halting problem, incompleteness, set theory…).   Still, we’re not at a complete loss when interpreting patterns of real numbers (irrational numbers in particular). We can and do compute with pi and e and square roots millions of times a second.   In fact, this is the key point.   Computation, as informed by mathematics, allows us to identify and exploit patterns far better than any other apparatus humans have devised.   However, as one would expect, the universe itself computes and computes itself.   It also has no problem identifying and exploiting patterns of all infinitude of types.
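The distinction is worth making concrete. No program can output √2 in full – its infinite expansion never arrives – yet a few lines deliver any finite prefix of it on demand, which is all “computing with √2” ever means. A sketch using Newton’s method over exact rationals (a standard technique; the naming is mine):

```python
from fractions import Fraction

def sqrt2_approx(iterations):
    """Newton's method on x^2 - 2 = 0 over exact rationals: each step is a
    finite, computable pattern; the real number itself is never produced."""
    x = Fraction(3, 2)
    for _ in range(iterations):
        x = (x + 2 / x) / 2
    return x

approx = sqrt2_approx(6)
# The approximation is superb -- and still is not the square root of 2.
assert abs(approx * approx - 2) < Fraction(1, 10**40)
assert approx * approx != 2
```

Every output of the loop is a rational – a finite compression of an infinite pattern – which is exactly the relationship data bears to the world.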

Universal Computation

So is the universe using different computation than we are? Yes and no.   We haven’t discovered all the techniques of computation at play. We never will – it’s a deep well and new approaches are created constantly by the universe. But we now have unlocked the strange loopiness of it all.   We have uncovered Turing machines and other abstractions that allow us to use English-like constructs to write programs that get translated into bits for logic gates in parallel to compute and generate solutions to math problems, create visualizations, search endless data, write other programs, produce self replicating machines, figure out interesting 3D printer designs, simulate markets, generate virtual and mixed realities and anything else we or the machines think up.

What lies beneath this all, though, is the very abstract yet simple concept of networks.   Nodes and edges. The mathematics and algorithms of networks.   Pure relation between things. Out of the simple connection of things to things arise all the other phenomena we experience.   The network is limitless – it imposes no guardrails on what can or can’t happen. Yet that it is a network does explain why all possibilities exhibit as they do, and shapes the relative emergent levels of phenomena and experience.

The computation of pure relation is an ideal.   It only supersedes (makes sense to really consider over) the value of reductionist modes of analysis, creation and pattern processing when the alternative pattern processing is not sufficiently accurate and/or has become sufficiently inefficient to provide relative value for its reduction.   That is, a model of the world or of a given situation is only valuable insofar as it doesn’t overly sacrifice accuracy for efficiency.   It turns out that for most day-to-day situations Newtonian physics suffices.

What Next

We’ve arrived at a point in discovery and creation where the machines and machine-human-earth combinations are venturing into virtual, mixed and alternate realities for which current typical modes of investigation (pattern recognition and exploitation) are not sufficient. The Large Hadron Collider is an example, and less an extreme example than it was before. The patterns we want to understand and exploit – the quantum, the near-the-speed-of-light, and the unimaginably large (the entire web index, with self-driving cars, etc.) – are of such a different magnitude and kind.   Then, when we’ve barely scratched the surface there, we get holograms and mixed reality, which will create their own web and their own physical systems as rich and confusing as anything we have now. Who can even keep track of the variety of culture and being and commerce and knowledge in something such as Minecraft? And if we can’t keep track (pattern identify), how can we exploit (control, use, attach to other concepts…)?

The pace of creation and discovery will never be less in this local region of spacetime.   While it may not be our goal, it is our unavoidable fate (yes, that’s a scary word) to continue to compute and to take a more computational approach to existence – the identification and exploitation of patterns by other patterns seems to carry this self-reinforcing loop of recursion and the need for ever more clarifying tools of inspection, which need more impressive means of inspecting themselves…   Everything in existence replicates passively or actively, and at a critical level/amount of interconnectivity (complexity, patterns connected to patterns) self-inspection (reasoning, introspection, analysis, recursion) becomes necessary to advance to the next generation (to explore exploitation strategies).

Beyond robotics and 3-D printing and self-replicating and evolutionary programs, the key pattern processing concept humans will need is a biological approach to reasoning about programs/computation.   Biology is a way of reasoning that attempts to classify patterns by similar behaviors/configurations/features, and in those similarities find ways to relate things (sexual reproduction=replication, metabolism=energy processing, etc.).   It is necessarily both reductionist, in its approach to categorizing, and anti-reductionist, in its approach of looking at everything anew. Programs/computers escape our human (and theoretical) ability to understand them, and yet we need some way to make progress if we, ourselves, are to persist alongside them.

And So.

It’s quite possible this entire train of synthesis is a justification for my own approach to life and my existence. And this would be consistent with my above claims.   I can’t do anything about the fact that my view is entirely biased by my own existence as a pattern made of patterns of patterns all in the lineage of humans emerged from hominids and so on all the way down to whatever ignited patterns of life on earth.

I could be completely wrong. Perhaps some other way of synthesizing existence all the way up and down is right. Perhaps there’s no universal way of looking at it. Though it seems highly unlikely, and very strange, to me that patterns at one level or in one perspective couldn’t be analyzed abstractly and applied across and up and down.   And that the very idea itself suggests patterns of pattern synthesis are fundamental strikes me as much more sensible, useful and worth pursuing than anything else we’ve uncovered and cataloged to date.

