Posts Tagged ‘science’

From within the strange loop of self-reference the question “What is Data?” emerges.  Ok, maybe more practically the question arises from our technologically advancing world where data is everywhere, spouting from everything.  We claim to have a “data science” and now operate “big data” and have evolving laws about data collection and data use.   Quite an intellectual infrastructure for something that lacks identity or even a remotely robust and reliable definition.  Should we entrust our understanding and experience of the world to this infrastructure?   This question seems stupid and ignorant.  However, we have taken up a confused approach in all aspects of our lives by putting data ontologically on the same level as real, physical, actual stuff.    So now the question must be asked and must be answered and its implications drawn out.

Data is and Data is not.   Data is not data.   Data is not the thing the data represents or is attached to.   Data is but an ephemeral puff of exhaust from a limitless, unknowable universe of things and their relations. Let us explore.

Observe a few definitions and usage patterns:

Data According to Google

The Latin roots point to the looming mystery.  “Give” -> “something given”.   Even far back in history data was just “something”.   Almost an anti-definition.

Perhaps we can find clues from clues:

Crossword Puzzle Clues for “Data”

Has there ever been a crossword puzzle word with broader meaning or more ambiguity than that?   “Food for thought?” seems to hit the nail on the head.   The clues boil down to: data is numbers, holdings, information, facts, figures, fodder, food, grist, bits.   Sometimes crunched and processed, sometimes raw.  Food for thoughts, disks, banks, charts and computers.


YouTube can usually tell us anything; here’s a video directly answering “What Is Data”:

Strong start in that video, Qualitative and Quantitative… and then by the end the video unwinds the definitions to include basically everything.

Maybe a technical lesson on data types will help elucidate the situation:

Data Types

Perhaps sticking to computers as a frame of reference helps us.   Data is stuff stored in a database, specified by data types.  What exactly is stored?   Bits on a magnetic or electric device (hard drive or memory chip) are arranged according to structure defined by this “data”, which is defined or created or detected by sensors and programs…   So is the data the bit?  The electrical signal?  The magnetic structures on the disk?  A pure idea regardless of physical substrate?
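To make the ambiguity concrete, here is a minimal sketch in Python (the byte string is an invented example) showing the same bits read through two different data-type lenses. The bits never change; only the interpretation does:

```python
import struct

# The same four bytes, sitting on a disk or in a memory chip.
raw = b'\x00\x00\x80\x3f'

# Read through the "32-bit integer" lens (little-endian)...
as_int = struct.unpack('<i', raw)[0]    # 1065353216

# ...and through the "IEEE-754 float" lens.
as_float = struct.unpack('<f', raw)[0]  # 1.0

print(as_int, as_float)
```

So which one is “the data” – the bytes, the integer, or the float? The substrate refuses to say.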

The confusing self-referential nature of the situation is wonderfully exploited by Tupper’s formula:

Tupper's formula


What exactly is that?  It’s a pixel rendering (bits in memory turned into electrons shot at a screen, or LED excitations) of a formula (which is a collection of symbols) that, when fed through a brain or a computer programmed by a brain, ends up producing a picture of the formula itself….
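The loop closes in remarkably few lines. A sketch in Python: the famous 500-plus-digit constant k is omitted here, but the decoding rule of Tupper’s inequality reduces to a bit test, and it can be inverted, so any 17-row bitmap can be turned into its own constant:

```python
def tupper_pixel(x, y, k):
    """Tupper's inequality 1/2 < floor(mod(floor(y/17) * 2^(-17x - y mod 17), 2))
    reduced to a bit test, for k a multiple of 17 and y in [k, k+17)."""
    return ((y // 17) >> (17 * x + y % 17)) & 1

def encode(bitmap):
    """Invert the formula: given a 17-row bitmap (strings of '#'/'.'),
    compute the constant k whose plot is that bitmap."""
    n = 0
    for y, row in enumerate(bitmap):
        for x, ch in enumerate(row):
            if ch == '#':
                n |= 1 << (17 * x + y)
    return 17 * n

def render(k, width):
    """Plot the 17-pixel-tall strip at height k."""
    return [''.join('#' if tupper_pixel(x, k + y_off, k) else '.'
                    for x in range(width))
            for y_off in range(17)]
```

Feed `encode` a bitmap of the formula itself and you get the self-referential picture: data about data, encoded as data.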

The further we dig the less convergence we seem to have.   Yet we have a “data science” in the world and employ “data scientists” and we tell each other to “look at the data” to figure out “the truth.”

Sometimes philosophy is useful in such confusing situations:

Information is notoriously a polymorphic phenomenon and a polysemantic concept so, as an explicandum, it can be associated with several explanations, depending on the level of abstraction adopted and the cluster of requirements and desiderata orientating a theory.


Er, that doesn’t seem like a convergence.  By all means we should read that entire essay, it’s certainly full of data.

Ok, maybe someone can define Data Science and in that we can figure out what is being studied:


That’s a really long article that points to data science as a duct-taped, loosely linked set of tools, processes, disciplines and activities to turn data into products and tell stories.   There’s clearly no simple definition or identification of the actual substance of data found there, or in any other description of data science readily available.

There’s a certain impossibility of definition and identification looming.   Data isn’t something concrete.  It’s “of” everything.  It appears to be a shadowy representational trace of phenomena and relations and objects that is itself encoded in phenomena and relations and objects.

There’s a wonderful aside in the great book “Things to Make and Do in the Fourth Dimension” by Matt Parker

Finite Nature of Data

Data seems to have a finite, discrete property to it and yet is still very slippery.  It is reductive – a compression of the infinite patterns in the universe – and yet it is itself a pattern.  Compressed traces of actual things.   Data is wisps of existence, a subset of existence.   Data is an optical and sensory illusion, an artifact of the limitedness of the sensor and the irreducibility of the connections between things.

Data is not a thing.   It is of things, about things, traces of things, made up of things.

There can be no data science.   There is no scientific method possible.   Science is done with data, but cannot be done on data.  One doesn’t do experiments on data; experiments emit and transcode data, but data itself cannot be experimental.

Data is art.   Data is an interpretive literature.  It is a mathematics – an infinite regress of finite compressions.

Data is undefined and belongs in the set of unexplainables: art, infinity, time, being, event.

Data = Art


We have a problem.

As it stands now the present and near future of economic, social and cultural development primarily derives from computers and programming.   The algorithms already dominate our society – they run our politics, they run our financial system, they run our education, they run our entertainment, they run our healthcare.    The ubiquitous tracking of everything that can possibly be tracked determined this current situation.   We must have programs to make things, to sell things, to exchange things.


The problem is not necessarily the algorithms or the computers themselves but the fact that so few people can program.    And why?   Programming Sucks.

Oh sure, for those that do program and enjoy it, it doesn’t suck. As Much.   But for the 99%+ of the world’s population that doesn’t program a computer to earn a living it’s a terrible endeavour.

Programming involves a complete abstraction away from the world and all surroundings.  Programming is disembodied – it is mostly a thought exercise mixed with some of the worst aspects of engineering.   Mathematics, especially the higher-order, really crazy stuff, long ago became unapproachable and completely disembodied, requiring no physical engineering or representation at all.  Programming, in most of its modern instances, puts consequences very far away from the creative act.  That is, in most modern systems it takes days, weeks, months, even years to personally and deeply feel the results of what you’ve built.    Programming is ruthless.  It’s unpredictable.   It’s 95% or more reinventing the wheel and configuring environments just to run the most basic program.  It’s all setup, not a lot of creation.   So few others understand it that they can’t appreciate the craft during the act (only the output is appreciated, counted in users and downloads).

There are a couple of reasons why this is the case – a few theoretical/natural limits and a few self-imposed, engineering and cultural issues.

First, the engineering and cultural issues.   Programming languages and computers evolved rather clumsily, built mostly by programmers for other programmers – not for the majority of humans.    There’s never been a requirement to make programming itself more humanistic, more embodied.    Looking back on the history of computers, computing was always done in support of something else, not for its own sake.   It was done to Solve Problems.   As long as the computing device and program solved the problem, the objective was met.   Even the early computer companies famously thought it was silly to imagine everyone might one day actually use a personal computer.   And now we’re at a potentially more devastating standstill – it’s absurd to most people to think everyone might actually need to program.    I’ll return to these issues.

Second, the natural limits of computation make for a very severe situation.   There are simply things that are non-computable.   That is, we can’t solve them.   Sometimes we can PROVE we can’t solve them, but that doesn’t get us any closer to solving them.    The classic example is the Halting Problem: no general procedure can look at an arbitrary program and its input and predict whether the program will halt.   The implication is that you must simply run the program and see if it halts.  Again, complexity is the key here.  If a program is relatively small and fast, with a finite number of possible inputs, you can simply run it across all of them and observe the outcomes.   Problem is… very few programs are that simple, and certainly not any of the ones that recommend products to you, trade your money on Wall Street, or help doctors figure out what’s going on in your body.
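Here’s a tiny illustration of the “run it and see” reality, sketched in Python. The function below does not decide halting – nothing can, in general – it just executes with a step budget, which is the practical workaround we all use. The Collatz iteration stands in here for “a small program nobody has proven halts on every input”:

```python
def runs_to_one(n, max_steps=1000):
    """Run the Collatz iteration from n. Returns the number of steps taken
    to reach 1, or None if the step budget is exceeded. We cannot decide
    halting in advance; we can only run and watch."""
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None  # budget blown -- we still don't know if it halts
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Exhaustively checking every input is feasible only because this input
# space is tiny; the programs that trade your money have no such bound.
all_halted = all(runs_to_one(n) is not None for n in range(1, 10_000))
print(all_halted)
```

Every input below ten thousand happens to halt within the budget – but no amount of such testing proves the next input will.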


This is a VERY BIG DEAL.    Think about it.   We deploy millions of programs a day with completely non-deterministic, unpredictable outcomes.  Sure we do lots of quality assurance and we test everything we can and we simulate and we have lots of mathematics and experience that helps us grow confident… but when you get down to it, we simply don’t know if any given complex program has some horrible bug in it.

This issue rears its head an infinite number of times a day.   If you’ve ever been mad at MS Word for screwing up your bullet points or your browser stops rendering a page or your internet doesn’t work or your computer freezes… this is what’s going on.  All of these things are complex programs interacting with other programs and all of them have millions (give or take millions) of bugs in them.  Add to it that all of these things are mutable bits on your computer that viruses or hardware issues can manipulate (you can’t be sure the program you bought is the program you currently run) and you can see how things quickly escape our abilities to control.

This is devastating for the exercise of programming.  Computer scientists have invented a myriad of ways to temper the reality of the halting problem.   Most of these management techniques make programming even more mysterious and challenging through the imposition of even more rules that must be learned and maintained.   Unlike music and writing and art and furniture making and fashion, we EXPECT and NEED computers to do exactly what we program them to do.   Most of the other stuff humans do and create is just fine if it sort of works.  It still has value.  Programs that are too erratic or, worse, catastrophic are not only not valuable – we want to eliminate them from the earth.   We probably destroy some 95%+ of the programs we write.

The craft of programming is at odds with its natural limits.   Our expectations, and thus the tools we craft to program, conflict with the actuality.  Our use of programs exceeds their possibilities.

And this really isn’t due to computers or programming, but something more fundamental: complexity and prediction.    Even as our science shows us more and more that prediction is an illusion, our demands of technology and business and media run counter to it.    This fundamental clash manifests itself in programming, programming languages, the hardware of computers, and the culture of programming.  It is at odds with itself, and in being so conflicted it is unapproachable to those who don’t have the ability to stare maddeningly into a screen flickering with millions of unknown rules and bugs.   Mastery is barely achievable except for a rare few.   And without mastery enjoyment rarely comes – the sort of enjoyment that can sustain someone’s attention long enough to do something significant.

I’ve thought long and hard about how to improve the craft of programming.   I’ve programmed a lot, led a lot of programming efforts, delivered a lot of software, scrapped a lot more.  I’ve worked in 10+ languages.  I’ve studied mathematics and logic and computer science and philosophy.  I’ve worked with the greatest computer scientists.  I’ve worked with amazing business people and artists and mathematicians.   I’ve built systems large and small in many different categories.  In short, I’ve yet to find a situation in which programming wasn’t a major barrier to progress and thinking.

The solution isn’t in programming languages and in our computers.  It’s not about Code.org and trying to get more kids into our existing paradigm. This isn’t an awareness or interest problem.   The solution involves our goals and expectations.

We must stop trying to solve every problem perfectly.  We must stop trying to predict everything.   We must stop pursuing The Answer, as if it actually exists.  We must stop trying to optimize everything for speed and precision and accuracy. And we must stop applying computerized techniques to every single category of activity – at least in a way where we expect the computer to forever do the work.

We must create art.  Programming is art.  It is full of accidents and confusions and inconsistencies.   We must turn it back into an analog experience rather than a conflicted digital one.    Use programming to explore and narrate and experiment rather than answer and define and calculate.

The tools that spring from those objectives will be more human.  More people will be able to participate.  We will make more approachable programs and languages and businesses.

In the end our problem with programming is one of relation – we’re either relating more or less to the world around us, and as computers grow in numbers and integration we need to be able to commune with, shape, and relate to them.


This week my 11-year-old daughter asked if she could download and join Snapchat. I immediately nixed that idea. I haven’t nixed much else she’s gotten involved in technically, where the EULA allows it. Snapchat touched a chord and got me thinking (again) about identity – how we identify ourselves – who we think we are – and who others think we are. I think about this deeply every so often, sometimes becoming unglued when I think too hard about it. It’s a complicated concept.


So many things contribute to the patterns that are what we are. Our identity and sense of place in this world – undoubtedly conditioned by the modern world – is built around physical place (and now virtual places) and social circles (and now virtual social networks) and status within established networks of influence. This was probably not always the case when people were far more nomadic and identity wasn’t tied to a hometown or a home school or a 150 person social network. But now, more than ever, identity is a thing.

I personally have moved residences over 20 times in my life. 13 of them different cities (social networks) and 5 across state lines.

Non Existence -> Born (don’t remember)
Littleton, CO (don’t remember, sorta remember)
Colorado Springs (k – 2nd grade)
Aurora, CO Laredo Circle House (2nd grade – 3rd grade???)
Aurora, CO Laredo Court House (4th grade??? – 7th grade)
Miami, FL Kendall House (8th grade)
Miami, FL Baptist Hospital House (9th grade – 10th grade)
Aurora, CO Salsaleto House (11th grade – 12th grade)
Aurora, CO Some Apartment I Forget Where (Summer before college)
Chicago, IL Woodward Court/Univ. Chicago (Freshman year college)
Aurora, CO Buckingham Mall House (Summer between Freshman and Sophomore Year)
Chicago, IL Woodward Court/Univ. Chicago (Sophomore year college)
Chicago, IL 53rd Street Apartment (summer between sophomore and junior year)
Chicago, IL Blackstone Building/University Chicago (Junior year college)
Chicago, IL 53rd Street Co-Op Apartment (summer between junior and senior year)
Santa Monica, CA 9th and Pico (1999)
Chicago, IL Roosevelt and Michigan Apartment (2000 – 2002)
Santa Monica, CA 9th and Pico (2002 – 2005)
Playa Vista, CA Fountainhead Apartment (2005 – 2006)
Venice, CA Abbot Kinney House (2006 – 2010)
Austin, TX Travis Heights House (2010 – 2011)
Austin, TX Deep Eddy House (2012)
Marina Del Rey, CA (2013 – present)

My own children have now moved 5 times (the oldest one) and twice across state lines.

And these are just the residence moves – not all the jobs, schools, social circles, life phases and other changes that go into making up our context and our history. I have 692 friends on facebook, a couple hundred followers on twitter, tens of followers on instagram, one attempt at snapchat, fifty pinterest followers and so on. Sometimes I think of this all as an audience, which is quite insane to me as a concept but I doubt I’m the only one that feels like they have an audience online. I’ve done speaking engagements at conferences, I’ve written 8 years of blogs, somehow I authored several whitepapers, I think I have a patent or three, I’ve performed in 40+ live theater shows, I built hundreds of websites and mobile apps with between 1 and 50 million users a month…. WHAT THE F*** DOES IT ALL ADD UP TO? WHO AM I? and WHY IS THAT EVEN A QUESTION?

It’s a question because my daughters keep finding new ways to “express themselves” and “connect to others.” They “identify” with my wife or myself by saying “oh, I’m so like mom!” They intellectually get the ideas of genetics and art and fashion and learning and the delineation between it all.  They are very keen on telling me I don’t “get” them…. I keep waiting for the day when the TSA finally declares they are full human identities and requires proof of the fact (driver’s license/passport).

It’s also a question because everyday the Western world bombards each other in ways such as:
“what am I worth?”
“tell me about your past.”
“are you this ism or that ism?”
“what party are you?”

and every other variation of class, job history, race, culture, language, outward appearance…

Anchors are my best guess at identities. We limited, pattern-creating and pattern-recognizing beings find ways to lay anchors and say THIS IS ENOUGH – THIS IS WHERE I’M DROPPING ANCHOR and REMEMBER THIS. We drop these anchors – complex patterns we simplify – and label them as classes, races, job titles, cultures, state lines, political parties, etc. We drop anchors to save energy. That is, we hope the anchors keep us from having to remember all of the context and history that led us here when we are in the heat of the moment of making a decision. We want to save time when working out whom we hire, with whom we partner, with whom we commune, with whom we war…


Identity is an illusion.

We are not the isms, the races, the classes, nor the anchors we drop. We all are ever evolving changing masses of organs, cells, and atoms that respond to the changes around them. We are connected – to each other, to the Web, to the world, to nature, to everything that passes gamma rays into us – EVERYTHING.

And this isn’t a ZEN kind of thinking I’m talking about. It’s a very simple, real concept that *WE* don’t EXIST, and the idea that WE EXIST is a major reason why “we” all end up fighting and destroying and gloating and taking credit and paying dues and every other manner of paying homage to an illusion. We do this because the delusion of singular identity is efficient in many respects. Capital markets reward identities. Democracies, despite their conceptual idea of the masses, reward identities. Social media and the internet reward identities.

And in all this efficiency created by identities we actually end up destroying things. Identities are the most efficient destructive concepts we’ve collectively devised. They shut everything down. They allow entire populations to be ignored. They tune our attention out. They tune our own senses out.

It makes sense this is so and that it persists.

Can it be resisted? *I* don’t know. Can we live without it?  I don’t know.


The Point

Everything is a pattern and connected to other patterns.   The variety of struggles, wars, businesses, animal evolution, ecology, cosmological change – all are encompassed by the passive and active identification and exploitation of changes in patterns.

What is Pattern

Patterns are thought of in a variety of ways – a collection of data points, pictures, bits and bytes, tiling.   All of the common-sense notions can be mapped to the abstract notion of a graph or network of nodes and their connections, edges.   It is not important, for the sake of the early points of this essay, to worry too much about the concept of a graph or network or its mathematical or epistemological construction.   The common-sense ideas that come to mind should suffice – everything is a pattern connected to other patterns. E.g. cells are connected to other cells, sometimes grouped into organs connected to other organs, sometimes grouped into creatures connected to other creatures.
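That common-sense chain can be written down directly as nodes and edges. A minimal sketch in Python – the node names are just the cells-to-creatures example above, not any formal ontology:

```python
from collections import defaultdict

def add_edge(graph, a, b):
    """Connect two patterns; edges here are symmetric."""
    graph[a].add(b)
    graph[b].add(a)

def reachable(graph, start):
    """Every pattern connected, directly or indirectly, to a starting one."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

graph = defaultdict(set)
for a, b in [("cell", "organ"), ("organ", "creature"), ("creature", "ecosystem")]:
    add_edge(graph, a, b)

print(reachable(graph, "cell"))
```

Nothing but relation is encoded, yet connectivity – a whole emergent property – falls out of the traversal.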


As can be imagined, the universe has a practically infinite number of methods of pattern identification and exploitation. Darwinian evolution is one such example of a passive pattern identification and exploitation method. The basic idea behind it is generational variance with selection by consequences. Genetics combined with behavior within environments encompasses the various strategies emergent within organisms which either hinder or improve the strategy’s chance of survival. Broken down, perhaps too simplistically: an organism (or collection of organisms or raw genetic material) must be able to identify threats, energy sources and replication opportunities, and exploit these identifications better than the competition.   This is a passive process overall because the source of identification and exploitation is not built into the pattern selected; it is emergent from the process of evolution. On the other hand, sub-processes within the organism (the pattern object we’re considering here) can be active – such as the processing of an energy source (eating and digestion and metabolism).

Other passive pattern processes include the effects of gravity on solar systems and celestial bodies, on down to their effects on planetary ocean tides and other phenomena.   Here it is harder to spot the identification aspect.   One must abandon the Newtonian concept and focus on relativity, where gravity is the name for changes to the geometry of spacetime.   What is identified is the geometry, and different phenomena exploit different aspects of the resulting geometry.   Orbits form around a sun because of the sun’s dominant effect on the geometry, and the result can be exploited by planets that form with the right materials and fall into just the right orbit to be heated just right to create oceans gurgling up organisms and so on.   It is all completely passive – at least with our current notion of how life may have formed on this planet. It is not hard to imagine, based on our current technology, how we might create organic life forms by exploiting identified patterns of chemistry and physics.

In similar ways the trajectory of artistic movements can be painted within this patterned theory.   Painting is an active process of identifying form, light, composition, materials and exploiting their interplay to represent, misrepresent or simply present pattern.   The art market is an active process of identifying valuable concepts or artists or ideas and exploiting them before mimicry or other processes over exploit them until the value of novelty or prestige is nullified.

Language and linguistics are the identification and exploitation of symbols (sounds, letters, words, grammars) that carry meaning (the meaning being built up through association (pattern matching) with other patterns in the world (behavior, reinforcers, etc.)).   Religion, by the organizers, is the active identification and exploitation of imagery, language, story, tradition, and habits that maintain devotional and evangelical patterns. Religion, by the practitioner, can be active and passive maintenance of those patterns. Business and commerce is the active (sometimes passive) identification and exploitation of efficient and inefficient patterns of resource availability, behavior and rules (asset movement, current social values, natural resources, laws, communication mediums, etc.).

There is not a category of inquiry or phenomena that can escape this analysis.   Not because the analysis is so comprehensive but because pattern is all there is. Even the definition and articulation of this pattern theory is simply a pattern itself which only carries meaning (and value) because of the connection to other patterns (linear literary form, English, grammar, word processing programs, blogging, the Web, dictionaries).

Mathematics and Computation

It should be of little surprise that mathematics and computation forms the basis of so much of our experience now.   If pattern is everything and all patterns are in a competition it does make some common sense that efficient pattern translation and processing would arise as a dominant concept, at least in some localized regions of existence.

Mathematics’ effectiveness in a variety of situations/contexts (pattern processing) is likely tied to its more general, albeit often obtuse and very abstracted, ability to identify and exploit patterns across a great many categories.   And yet, we’ve found that mathematics is likely NOT THE END GAME. As if anything could be the end game.   Mathematics’ own generalness (which we could read as reductionism and a lack of full fidelity to patterns) does it in – the proof of incompleteness showed that mathematics itself is a pattern of patterns that cannot encode all patterns. Said differently – mathematics’ incompleteness necessarily means that some patterns cannot be discovered nor encoded by the process of mathematics.   This is not a hard metaphysical concept. Incompleteness merely means that even for formal systems such as regular old arithmetic there are statements (theorems) whose logical truth or falsity cannot be established within the system. Proofs are also patterns to be identified and exploited (is this not what pure mathematics is!), and yet we know, because of proof, that we will always have patterns, called theorems, that will not have a proof.   Lacking a proof for a theorem doesn’t mean we can’t use the theorem; it just means we can’t count on the theorem to prove another theorem, i.e. we won’t be doing mathematics with it.   It is still a pattern, like any sentence or painting or concept.


The effectiveness of mathematics is its ROBUSTNESS. Robustness (a term I borrow from William Wimsatt) is the feature of a pattern whereby, when it is processed from multiple other perspectives (patterns), the inspected pattern maintains its overall shape.   Some patterns maintain their shape only within a single or limited perspective – all second-order and higher effects are like this. That is, anything that isn’t fundamental is some order of magnitude less robust than things that are.   Spacetime geometry seems to be highly robust as a pattern of existential organization.   The effect-carrying ether, as proposed more than 100 years ago, is not.   Individual artworks are not robust – they appear different from any different perspective. Color as commonly described is not robust.   Wavelength is.

While much of mathematics is highly robust or rather describes very robust patterns it is not the most robust pattern of patterns of all. We do not and likely won’t ever know the most robust pattern of all but we do have a framework for identifying and exploiting patterns more and more efficiently – COMPUTATION.

Computation, by itself. 

What is computation?

It has meant many things over the last 150 years.   Here it is defined simply as patterns interacting with other patterns.   By that definition it probably seems like a bit of a cheat to define the most robust pattern of patterns we’ve found as patterns interacting with other patterns. However, it cannot be otherwise. Only a completely non-reductive concept would fit the necessity of robustness.   The nuance of computation is that there are more or less universal computations.   The ultimate robust pattern of patterns would be a truly universal-universal computer that could compute anything, not just what is computable.   The real numbers, taken as a whole, are not computable; the integers are.   A “universal computer” as described by today’s computer science is a program/computer that can compute all computable things. So a universal computer can compute the integers but cannot compute the full continuum of real numbers – almost all reals are uncomputable, and even computable irrationals like pi, e, and the square root of 2 can only ever be approximated to finite precision. We can prove this and have (the halting problem, incompleteness, set theory…).   So we’re not at a complete loss in interpreting patterns of real numbers (irrational numbers in particular). We can and do compute with pi and e and square roots millions of times a second.   In fact, this is the key point.   Computation, as informed by mathematics, allows us to identify and exploit patterns far more than any other apparatus humans have devised.   However, as one would expect, the universe itself computes and computes itself.   It also has no problem identifying and exploiting patterns of all infinitude of types.
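The practical upshot – computing with irrationals millions of times a second while never holding the whole number – can be sketched with Python’s standard decimal module. Newton’s method yields any requested prefix of the square root of 2, and only ever a prefix:

```python
from decimal import Decimal, getcontext

def sqrt2(digits):
    """Approximate the square root of 2 to roughly `digits` digits.
    Any finite prefix is computable; the full irrational number never is."""
    getcontext().prec = digits + 5  # working precision, with guard digits
    guess = Decimal(1)
    for _ in range(digits.bit_length() + 10):
        # Newton's iteration roughly doubles the correct digits each step
        guess = (guess + Decimal(2) / guess) / 2
    return +guess  # unary plus rounds to the current precision

print(sqrt2(20))
```

Ask for 20 digits or 20,000 – either way the result is a finite pattern standing in for an infinite one.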

Universal Computation

So is the universe using different computation than we are? Yes and no.   We haven’t discovered all the techniques of computation at play. We never will – it’s a deep well and new approaches are created constantly by the universe. But we now have unlocked the strange loopiness of it all.   We have uncovered Turing machines and other abstractions that allow us to use English-like constructs to write programs that get translated into bits for logic gates in parallel to compute and generate solutions to math problems, create visualizations, search endless data, write other programs, produce self replicating machines, figure out interesting 3D printer designs, simulate markets, generate virtual and mixed realities and anything else we or the machines think up.
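The strange loopiness bottoms out in something shockingly small. A toy Turing machine fits in a dozen lines of Python – the rule-table format and the bit-flipping machine below are invented illustrations, not any standard encoding:

```python
def run_tm(rules, tape, state="start", blank="_", budget=1000):
    """Run a Turing machine. `rules` maps (state, symbol) to
    (new_state, written_symbol, head_move); reaching 'halt' stops it.
    The budget exists because, per the halting problem, we cannot
    know in advance whether an arbitrary machine stops."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(budget):
        if state == "halt":
            break
        state, symbol, move = rules[(state, cells.get(pos, blank))]
        cells[pos] = symbol
        pos += move
    return [v for _, v in sorted(cells.items()) if v != blank]

# A machine that flips every bit, then halts at the first blank cell.
flip = {
    ("start", 0):   ("start", 1, +1),
    ("start", 1):   ("start", 0, +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm(flip, [1, 0, 1]))
```

Everything from the web index to the self-driving car is, in principle, elaboration on this table-of-rules-plus-tape pattern.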

What lies beneath this all, though, is the very abstract yet simple concept of networks.   Nodes and edges. The mathematics and algorithms of networks.   Pure relation between things. Out of the simple connection of things to things arise all the other phenomena we experience.   The network is limitless – it imposes no guardrails on what can or can’t happen. That it is a network does explain why all possibilities exhibit as they do, and it shapes the relative emergent levels of phenomena and experience.

The computation of pure relation is ideal.   It only supersedes (only really makes sense to consider over) the value of reductionist modes of analysis, creation and pattern processing when the alternative pattern processing is not sufficiently accurate and/or has become too inefficient to provide relative value for its reduction.   That is, a model of the world or a given situation is only as valuable as the accuracy it retains for the efficiency it gains.   It turns out that for most day-to-day situations Newtonian physics suffices.

What Next

We’ve arrived at a point in discovery and creation where machines and machine-human-earth combinations are venturing into virtual, mixed and alternate realities for which the current typical modes of investigation (pattern recognition and exploitation) are not sufficient. The Large Hadron Collider is an example, and less an extreme example than it used to be. The patterns we want to understand and exploit – the quantum, the near-speed-of-light, and the unimaginably large (the entire web index with self-driving cars, etc.) – are of such a different magnitude and kind.   And when we’ve barely scratched the surface there, we get holograms and mixed reality, which will create their own web and their own physical systems as rich and confusing as anything we have now. Who can even keep track of the variety of culture and being and commerce and knowledge in something such as Minecraft? (And if we can’t keep track (pattern identify), how can we exploit (control, use, attach to other concepts…)?)

The pace of creation and discovery will never be less in this local region of spacetime.   While it may not be our goal, it is our unavoidable fate (yes, that’s a scary word) to continue to compute and to take an ever more computational approach to existence. The identification and exploitation of patterns by other patterns carries a self-reinforcing loop of recursion: ever more clarifying tools of inspection need ever more impressive means of inspecting themselves…   Everything in existence replicates, passively or actively, and at a critical level of interconnectivity (complexity, patterns connected to patterns) self-inspection (reasoning, introspection, analysis, recursion) becomes necessary to advance to the next generation (to explore exploitation strategies).

Beyond robotics, 3D printing, and self-replicating and evolutionary programs, the key pattern-processing concept humans will need is a biological approach to reasoning about programs and computation.   Biology is a way of reasoning that attempts to classify patterns by similar behaviors, configurations and features, and in those similarities to find ways to relate things (sex = replication, metabolism = energy processing, etc.).   It is necessarily both reductionist, in its drive to categorize, and anti-reductionist, in its willingness to look at everything anew. Programs and computers escape our human (and theoretical) ability to understand them, and yet we need some way to make progress if we, ourselves, are to persist alongside them.

And So.

It’s quite possible this entire train of synthesis is a justification for my own approach to life and my existence. And this would be consistent with my above claims.   I can’t do anything about the fact that my view is entirely biased by my own existence as a pattern made of patterns of patterns all in the lineage of humans emerged from hominids and so on all the way down to whatever ignited patterns of life on earth.

I could be completely wrong. Perhaps some other way of synthesizing existence all the way up and down is right. Perhaps there’s no universal way of looking at it. Though it seems highly unlikely, and very strange, to me that patterns at one level or in one perspective couldn’t be analyzed abstractly and applied across, up and down.   And the very idea that patterns of pattern synthesis are fundamental strikes me as much more sensible, useful and worth pursuing than anything else we’ve uncovered and cataloged to date.

Read Full Post »

Yesterday someone told me that science was contingent and logic necessary.

I struck the stone – at first lightly
Then I dug in and drew blood.

Fractured assumptions flared their flimsy premise
Crumbling before less than mighty blows

This someone warned me if you crack too hard
The stone carries impact damage
Scarring the surface
Forcing you to sand and polish

That is if what you care about is something smooth and approachable.

Will this stone yield to me?

Or am I yielding to it?

My logic battering it tink after tink
Forces my theory that no matter what I do
This stone will be what it is
And it is up to me, flawed and frayed, to ask
It questions

The response is Wittgensteinian. Silent and yet understood
A brooding proposition of certain doubt
That nothing yields everything.

Read Full Post »

Previously I made this set of statements:

Computational irreducibility, the principle (unproven), suggests the best we are going to be able to do to understand EVERYTHING is just to keep computing and observing. Everything is unfolding in front of us and it’s “ahead” of us in ways that aren’t compressible. This suggests, to me, that our best source of figuring things out is to CREATE. Let things evolve, and because we created them we understand exactly what went into them, and after we’re dead we will have machines we made that can also understand what went into them.

This is a rather bulky, ambiguous idea without some details behind it. What I am suggesting is that the endless zoological approach to observing and categorizing “the natural world” isn’t going to reveal a path forward on many of the lingering big questions. For instance, there’s only so far back into the Big Bang we can look. A less costly effort is what is happening at the LHC, where fundamental interactions are being “created” experimentally. Or, in the case of the origin of life, there’s only so much mining of the clues of earth and exoplanets we can do. A likely more fruitful approach in our lifetime will be to create life – in a lab, with computers, and by shipping genetic material and biomass out into space. And so on.
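Computational irreducibility has a standard toy illustration (my example, borrowed from Wolfram’s “rule 30” cellular automaton, not from the original post): to learn the state at step N there is no known shortcut formula – you simply run all N steps and observe what unfolds, which is exactly the “keep computing and observing” stance above.

```python
def rule30_step(cells):
    """One update of rule 30 on a tuple of 0/1 cells (fixed boundary of 0s).
    New cell = left XOR (center OR right)."""
    padded = (0,) + cells + (0,)
    return tuple(
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    )

state = (0, 0, 0, 1, 0, 0, 0)
for _ in range(3):          # the only way "forward" is to compute each step
    state = rule30_step(state)
print(state)  # (1, 1, 0, 1, 1, 1, 1)
```

Three steps from a single live cell already produce the irregular pattern rule 30 is famous for; no closed-form expression for step N is known.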

This logic carries on into the pure abstraction layers too. The study of computational complexity is about creating ever-new complex systems and then observing their properties and behaviors. Mathematics has always been this way… we extend mathematics by “creating” all sorts of new structures – first we did this geometrically, then logically/axiomatically, and now computationally. (I could probably argue successfully that these are equivalent.)

All that said, we cannot abandon observation of the world around us. We lack the universal scale to create all that is around us. And we are very far from exhausting all the knowledge that can come from observation of what exists right now. The approaches of observation and creation go hand in hand, and for the most important questions both are required to be anywhere close to certain we’re on the right path to what might actually be going on. The reality is, our ability to know is quite limited. We will always lack some level of detail. Constant revision of the observational record, and the attempt to recreate or create anew the things we see, often reveals small but critical details we missed in our initial assessments.

Examples that come to mind are Bertrand Russell’s and Whitehead’s attempt to fully articulate all of mathematics in Principia Mathematica. Gödel undid that one rather handily with his incompleteness theorems. More dramatic examples from history include the destruction of the idea of an earth-centered universe, the spacetime curvature revelations of Einstein and Minkowski, and, of course, evolutionary genetics’ unraveling of a whole host of long-standing theories.

In all those examples there’s a dance between observation and creation. Of course it’s way too clean to maintain there’s a clear distinction between observing the natural world and creating something new. Really these are not different activities. It’s just a matter of perspective on where we’re honing our questions. The overall logical perspective I hold is that everything is a search through the space of possibilities. “Creation” is really just a repackaging of patterns. I tend to maintain it as a different observational approach rather than lump it in, because something happens to people when they think they are creating – they are more open to different possibilities. When we think we are purely observing, we are more inclined to associate what we observe with previously observed phenomena. When we “create” we’ve already primed ourselves to look for “new.”

It is the combination of the likely reality of computational irreducibility and the psychological effect of “creating” – of seeing things in a new light – that makes me so strongly suggest “creating” more if we want to ask better questions, debunk false answers and increase our knowledge.


Read Full Post »

A friend recently sent me this nifty article.

Here are some of my favorite snippets.

On “knowledge”:

“Knowing is not an activity of the brain but of human beings, and knowledge is not contained in the brain but in books and computers, and is possessed by human beings, but not by their brains. It makes no sense and explains nothing to divide the brain up into bits that contain different kinds of knowledge and know different sorts of things, because the brain does not contain knowledge or know…”

On “consciousness”:

“Dispositional consciousness is a general tendency to be conscious of certain things—money-conscious, for example. Such a generalized tendency is indicated by various sorts of behavior—money-conscious people are likely to save their money, spend it carefully, talk about it and think about it more than others, and so forth. Such a tendency almost certainly is learned, and therefore one can be ‘better’ or ‘worse’ at it depending on one’s experience, if ‘better’ and ‘worse’ refer to a greater or lesser probability of behaving in ways consistent with the disposition. So the authors’ assertion that consciousness is not something we can become ‘good at’ may be argued with, both in its dispositional sense and in its occurrent transitive sense (a current consciousness of some thing or state of affairs). I may not become conscious of the subtle French horn part in a piece of music until after I have read about the composer’s penchant for using the French horn in subtle ways—has my learning not enhanced my ability to be conscious of the French horn in the composer’s music? More broadly, is there no sense in which the common Californian pastime of ‘expanding’ or ‘developing’ consciousness is true?”

On “strange loopness” of human biology:

“Far more difficult to achieve, I believe, will be an understanding of the fundamental nestedness of the brain, the rest of the body, and the person in the world, each entity executing processes that overlap and turn back on themselves and each other in time and space.”

On metaphors as a tool for communication, not analysis:

“The point is that it may be the ability of metaphors and analogies to help researchers accomplish their theoretical goals, and not how well they stand up to connective analysis relative to their conventional counterparts, that is the better basis for approving or disapproving of them.”

Language always lacks fidelity. One can only put into words some subset of what we experience. What we “experience” is only a subset of what is happening around us. What happens around us in a way that could affect us is only a subset of what there is.

Folks have a tendency, in all science (and non-science), to analyze and report at our “level” of experience. No, it’s not possible to apply an analysis of single-cell behavior to a scene study of Shakespeare, though we often talk of “motivation” in both studies. It’s a terribly inaccurate description in both cases, but it does, oftentimes, communicate something of value.

For an alternative, but equal, misapplication of language from the “human experience” level, let’s consider quantum physics.  We experience things in three spatial dimensions and one temporal dimension. We have NO WAY to experience the world in any other context. Thus it is incredibly hard to conceptualize and explain what happens at a quantum level (where things don’t follow space and time as we experience them). It is NONsense to describe, diagram, or otherwise model the quantum world on our “human” level with any expectation of accuracy. Our description of quantum mechanics is a very gross description.

Where this all gets counterproductive to the progress of knowledge is in mistaking a description (model, report…) of something (a system, situation, behavior…) for the thing itself.  The use of psychological “Freudian” terms can sometimes be useful for shortcutting long-winded discussions, but one must be disciplined enough to recognize that such high-level concepts cannot be applied to what’s actually going on.

I think there’s another reason we accept gross descriptions of the world: they work for all practical purposes. You don’t need a perfect description of the world to be successful in achieving whatever it is you might be doing. In fact, WE HAVE TO MAKE THIS TRADE-OFF. If we didn’t shortcut and take on gross descriptions of the world, few of us would be able to operate. At the very least, few scientists would be able to publish if they actually had to drill down and tie up the loose ends without these gross misrepresentations.

Oh, and for those who care, I don’t think there is something like “consciousness”. We are more or less affected by things happening around and in us. We are not “aware” of our experiences in some binary way (the lightbulb never really just flips on). The linked article gets at some of this, and there are other syntheses that argue this point better than I can at this stage.  A further implication is that “thought” isn’t really a THING by itself either. We don’t THINK THOUGHTS. And yes, I lack the syntax to describe my synthesis any further at this time 😉

For more insight you might turn to this very recent Edge talk.  In particular, read the responses from Sam Harris and others.  Kinda embodies everything in this post…. from baggage terms to metaphors as description to just how far away we are from reasonably deep insight.

Read Full Post »

Business Week has a really great article about the value of basic research in R&D Labs to future economies.

Many of the classic scientific research labs, such as Bell Labs and RCA Labs (now Sarnoff Corp.), were started and funded by companies with virtual monopolies and very strong, predictable cash flows. They were able to embrace the uncertainty and serendipity of pure research in the context of their business. But such companies don’t exist today. With the increasing focus on shareholder value that began in the 1990s as global competition heated up, Fortune 500 companies could no longer justify open-ended research that might not directly impact their bottom line. Today, corporate research is almost exclusively engineering R&D, tending more toward applied research with a 3- to 5-year time horizon (or shorter). IBM, Microsoft, and Hewlett-Packard, for example, collectively spend $17 billion a year on R&D but only 3% to 5% of that is for basic science.

The End of Labs


It’s not just a shame, it’s actually a very bad strategy, in play right now and for the future.  I once remarked at a company retreat that often a company or industry matures so much that its only strategy is to invent just for the sake of inventing, with the idea that completely new revenue streams might evolve.  I was quickly slapped down by a major executive: “We need to work on things that can be commercialized now.”  I knew then that the fate of that company would be mostly an arbitrage of Wall Street expectations.  And that’s exactly what it, and thousands of other companies, have become.  This is also why this particular recession is so painful – most companies have no institutional ability to innovate.  Two decades of chasing the near-term exit and the 30% stock-market rocket shot have left industry stagnant.

No one knows what the next big idea is.  And no one will figure it out without basic research.  And by big ideas, I mean things like the printing press, the Internet, germ theory, genetics, the Wheel.  You know – THE BIG STUFF that powers generations of commerce.

Read Full Post »

So it’s not that will doesn’t exist; it’s that the free part is problematic — a lot of people see free will and say, “Well, you’re showing there’s no free will; therefore, people have no intentions or will.” No. There is will, and will can be shaped by a host of factors: your genetic background, your early experience with your home and your family, your caretakers, your playmates, cultural influences bombarding us through the media and through socializing with your peers (and thus what they like and what they think and what they believe from their parents). All this is being soaked up like a sponge by little kids.

John Bargh, Conversation on EDGE.org

and more zingers…

we’re much more accurate about predicting other people than we are at predicting ourselves. All these things going on inside of us get in the way, and especially the positive illusions about ourselves.

It’s a great read.  If I put a link right here, I bet you’d read it (you’re expecting the link, but it’s here instead!)

Read Full Post »

In the 6th edition of “On the Origin of Species,” Charles Darwin lamented the power of “steady misrepresentation” of the facts and observations of his work 150 years ago. Those were days when God’s grace meant you could be hanged for opposing what everyone knew was the “WORD.”

While there has been a steady diet of multidisciplinary science that continues to support, extend and find nuances in his findings on natural selection, genetic drift, mutation and speciation, there are, and will always be, groups that obfuscate the information in favor of their own approach to the origins of life, and of man in particular.

As authors Glenn Branch and Eugenie C. Scott lay out in their review in the recent Scientific American, these miscreants of misinformation – groups and people with no science, no peer review, no database of exceptions, no body of anecdotal evidence to support their views – also have no conflicting data points they can point to in support of those views. In fact, their approach is not about science, evidence, methodology or technology. It is about “faith in dogma,” and it is shared by millions of people around the globe.

The real pariah in the whole mess is the body of people who take a “live and let live” approach. You know who they are… “Hey, as long as they don’t make me kiss a ring, they can do what they want in Rome.” These are the people who traffic in ambivalence. They too will always be with us. They sit on a fence, not necessarily supporting dogma, and yet the view that man is a kin of other primates, that our hiccup reflex is a remnant of our fish history, or that we have to deal with almost two dozen versions of extinct humans (Viktor Deak) is just upsetting enough, if not outright disturbing, that they prefer to ignore it. (As if prayer for soldiers being shot at isn’t disturbing, or as if holy wars where millions have died are somehow, in comparison, OK.)

Remember Galileo, who was convicted of suspicion of heresy for following the position of Copernicus, which ran contrary to that laid down by the Roman Catholic Church’s authority of Holy Scripture.  All of this today is still about the dogma of faith vs. the data of science. Same stuff, different year.

There have been crusades, ethnic cleansings and the other stuff that made up the Dark Ages. And here we are in the spring of 2009, reviewing our civilization, thwarted by those who don’t want people to figure out what the heck is going on out there.

Enter Governor Bobby Jindal, a potential presidential hopeful of the party currently out of favor in US politics. In 2008 he signed the Louisiana Science Education Act into law.

Marketed as supporting critical thinking in classrooms, the law threatens to open the door for the teaching of creationism and for scientifically unwarranted critiques of evolution in public school science classes [in Louisiana].

(Branch and Scott, 2009)

Does it sometimes seem to you that, while we may have evolved, some didn’t get the memo? Next, FOX News will be telling me that Mike Huckabee, former presidential hopeful (who believes in the literal, biblical interpretation of Genesis), will administer the plan.


Read Full Post »

Older Posts »