A variety of thinkers and resources seem to converge on some fundamental ideas around existence, knowledge, perception, learning, and computation. (Perhaps I have confirmation bias and have only found what I was primed to find.)

Kurt Gödel articulated and proved what I believe to be the most fundamental idea of all: the Incompleteness Theorem. This theorem, along with analogous results like the Halting Problem and other parts of complexity theory, gives us the notion that there is a formal limit to what we can know. And by “to know” I mean it in the Leibnizian sense of perfect knowledge (scientific fact with logical proof, total knowledge). Incompleteness tells us that in any consistent formal system rich enough to express arithmetic, there will always be some statement WITHIN that system that is true but cannot be proved. This is fundamental.
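
To make the Halting Problem side of this concrete, here is a minimal Python sketch of the diagonal argument. The `halts` oracle is hypothetical, which is the whole point: no such function can exist.

```python
# A sketch of Turing's diagonal argument, the computational cousin of
# incompleteness. `halts(program)` is a hypothetical oracle that claims to
# decide whether program(program) eventually halts.

def contrary_of(halts):
    def contrary(program):
        if halts(program):   # oracle predicts program(program) halts...
            while True:      # ...so do the opposite: loop forever
                pass
        return None          # oracle predicts looping, so halt immediately
    return contrary

# Now feed `contrary` to itself. If halts(contrary) returns True, then
# contrary(contrary) loops forever; if it returns False, it halts. Either
# answer is wrong, so no total, correct `halts` can ever be written.
```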

It means that no matter how much mathematical or computational or systematic logic we work out in the world, there are just some statements/facts/ideas that are true but cannot be proven to be true. As the name of the theorem suggests (though its mathematical meaning isn’t quite this), our effort in formalizing knowledge will remain incomplete. There’s always something just out of reach.

It is also a strange fact that one can prove the incompleteness of a formal system and yet be unable to prove seemingly trivial statements within it.

Gödel’s proof and approach to figuring this out is based on a very clever re-encoding of the formal system laid out by Bertrand Russell and Alfred North Whitehead in Principia Mathematica. This re-encoding of the symbols of math and language is another fundamental thread we find throughout human history. One of the more modern thinkers who goes very deep into this symbolic aspect of thinking is Douglas Hofstadter, a great writer and gifted computer and cognitive scientist. It should come as no surprise that Hofstadter found inspiration in Gödel, as so many have. Hofstadter has spent a great many words on the idea of strange loops/self-reference and re-encodings of self-referential systems/ideas.
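
To make the re-encoding idea concrete, here is a toy Gödel numbering in Python. The symbol codes are arbitrary illustrative choices, not Gödel’s actual assignments; what matters is that unique prime factorization makes the encoding reversible.

```python
# Toy Gödel numbering: turn a string of formal symbols into a single integer
# by raising successive primes to the symbols' code numbers. Since prime
# factorization is unique, the formula is recoverable from the number, so
# statements about numbers can encode statements about formulas.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}  # arbitrary codes

def primes():
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(formula):
    g, ps = 1, primes()
    for symbol in formula:
        g *= next(ps) ** SYMBOLS[symbol]
    return g

print(godel_number("S(0)=S(0)"))  # one integer encoding the whole formula
```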

But before the 20th century, Leibniz and many other philosophical, artistic, and mathematical thinkers had already started laying the groundwork for the idea that thinking (and computation) is a building up of symbols and associations between symbols. Most famous of all, of course, was Descartes in coining “I think, therefore I am.” This is a deliciously self-referential, symbolic expression that you could spend centuries on. (And we have!)

Art’s “progression” has shown that we do indeed tend to express ourselves symbolically. It was only in more modern times, when “abstract art” became popular, that artists began to specifically avoid overt representation via more or less realistic symbols. This obsession with abstraction turns out to be damn near impossible to pull off, as Robert Irwin demonstrated from 1960 on with his conditional art. In his more prominent works he made almost the minimal gesture to an environment (a wall, a room, a canvas) and found that, almost no matter what, human perception still sought and found symbols within the slightest gesture. He continues to this day to produce conditional art that seeks pure perception without symbolic overtones. Finding that this is impossible seems, to me, to be in line with Gödel and Leibniz and so many other thinkers.

Wittgenstein is probably the most extreme example of finding that we simply can’t make sense of many things, in a philosophical or logical sense, by saying or writing ideas. Literally, “one must be silent.” This is a very crude reading of Wittgenstein, and not necessarily a thread he carries throughout his works, but again it strikes me as being in line with the idea of incompleteness, and certainly in line with Robert Irwin. Irwin, again no surprise, spent a good deal of time studying Wittgenstein and even composed many thoughts about where he agreed or disagreed with him. My personal interpretation is that Irwin has done a very good empirical job of demonstrating a lot of Wittgensteinian ideas. Whether that certifies any of it as the truth is an open question. Though I would argue that saying/writing things is also symbolic and picture-driven, so I don’t think there’s as clear a line as Wittgenstein drew. As an example, Tupper’s self-referential formula is an insanely loopy mathematical function that draws a graph of itself.
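
Tupper’s formula fits in a few lines of Python. I deliberately omit the published constant k (it runs to hundreds of digits and I won’t reproduce it from memory); with the right k, plotting the inequality over a 106 x 17 window draws a bitmap of the formula itself.

```python
# A sketch of Tupper's self-referential formula:
#   1/2 < floor(mod(floor(y/17) * 2**(-17*floor(x) - mod(floor(y), 17)), 2))
# In integer arithmetic this just reads off one bit of y // 17.

def tupper(x, y):
    return ((y // 17) >> (17 * x + y % 17)) & 1 == 1

def render(k):
    rows = []
    for dy in range(17):
        # x runs high-to-low; the plot comes out mirrored otherwise
        rows.append("".join("#" if tupper(x, k + dy) else " "
                            for x in range(105, -1, -1)))
    return "\n".join(reversed(rows))

# print(render(K))  # K = the self-referential constant, or any bitmap's encoding
```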

Wolfram brings us a more modern slant with the Principle of Computational Irreducibility. Basically, it’s the idea that any system with more than very simple behavior cannot be reduced to some theory, formula, or program that predicts it. The best we can do in trying to fully know a complex system is to watch it evolve in all its aspects. This is sort of a reformulation of the Halting Problem in a way that lets us more easily imagine other systems beholden to this reality. The odd facet of such a principle is that one cannot really prove with any reliability which systems are computationally irreducible. (Problems in computer science like P vs. NP are akin to this.)
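
A concrete way to feel computational irreducibility is Wolfram’s Rule 30: a one-line update rule whose long-run pattern, as far as anyone knows, cannot be predicted faster than by just running it.

```python
# Rule 30: each cell's next value is a fixed function of its 3-cell
# neighborhood. The rule is trivial to state, yet the pattern it produces
# passes randomness tests; the only known way to get row N is to compute
# rows 1 through N-1.

RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31   # a single live cell in the middle
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```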

Chaitin, Claude Shannon, Scott Aaronson, Philip Glass, Max Richter, Brian Eno, and many others also link into this train of thought…

Why do I think these threads of thought above (and many others I omit right now) matter at all?

Nothing less than everything. The incompleteness or irreducibility or undecidability of complex systems (and even seemingly very simple things are often far more complex than we imagine!) is the fundamental feature of existence that suggests why there is something rather than nothing. For there to be ANYTHING, there must be something outside of full description. This is the struggle. If existence were reducible to a full description, there would be no end to that reduction until there literally was nothing.

Weirder still, perhaps, is the Principle of Computational Equivalence and computational universality. Basically, any system that can compute universally can emulate any other universal computer. There are metaphysical implications here which, if I’m being incredibly brash, suggest that anything complex enough effectively can be, and is, anything else that is complex. Tied to the previous paragraph, I suggest that if there’s anything at all, everything is everything else. This is NOT an original thought, nor is it as easily dismissed as wacky weirdo thinking. (Here’s a biological account of this thinking from someone who isn’t an old dead philosopher…)
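
A small sketch of what universality means in practice: Python (one universal system) emulating an arbitrary Turing machine given nothing but its transition table. The bit-flipping machine below is a toy stand-in; any table would do.

```python
# One universal computer emulating another. `table` maps
# (state, symbol) -> (new_state, new_symbol, head_move).

from collections import defaultdict

def run(table, tape, state="start", steps=100):
    cells = defaultdict(int, enumerate(tape))   # blank cells default to 0
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        state, cells[pos], move = table[(state, cells[pos])]
        pos += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells)]

flip = {  # a toy machine that inverts every bit it walks over
    ("start", 0): ("start", 1, "R"),
    ("start", 1): ("start", 0, "R"),
}
print(run(flip, [1, 0, 1, 1], steps=4))  # -> [0, 1, 0, 0]
```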

On a more pragmatic level, I believe the consequences of irreducibility suggest why computers and animals (any complex systems) learn the way they learn. Because there is no possible way to have perfect knowledge, complex systems can only learn via versions of Probably Approximately Correct learning. (Operant conditioning, neural networks, supervised learning, etc. are all analytic and/or empirical models of learning that suggest complex systems learn through associations rather than by executing systematic, formalized, complete knowledge.) Our use of symbolics to think is a result of irreducibility. Lacking infinite energy to chase the irreducible, symbolics (probably approximately correct representations) must be used by complex systems to learn anything at all. (This essay is NOT a proof of this; these are just some thoughts, unoriginal ones, that I’m putting out to prime myself to actually draw out empirical or theoretical evidence that this is right…)
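
Here is a toy illustration of the “probably approximately correct” flavor of learning: estimating a hidden threshold on [0, 1] from random labeled samples. The error shrinks as samples accumulate, but certainty is never reached, which is the whole point.

```python
# PAC-style learning of a hidden threshold concept. The learner never gets
# the exact answer; it gets probably (over the random sample) approximately
# (small error) correct, with error shrinking as data grows.

import random

TRUE_THRESHOLD = 0.62   # the hidden concept: label is 1 iff x >= threshold

def sample(n):
    xs = [random.random() for _ in range(n)]
    return [(x, int(x >= TRUE_THRESHOLD)) for x in xs]

def learn(data):
    positives = [x for x, label in data if label == 1]
    return min(positives) if positives else 1.0   # smallest positive example

for n in (10, 100, 1000, 10000):
    estimate = learn(sample(n))
    print(f"n={n:6d}  estimate={estimate:.4f}  error={estimate - TRUE_THRESHOLD:.4f}")
```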

A final implication to draw out is that of languages, and specifically of computer languages. To solve ever more interesting and useful problems, and to acquire more knowledge (of an endlessly growing reservoir of knowledge), our computer languages (languages of thought) must become richer and richer symbolically. Our computers, though we already make them emulate our richer symbolic thinking, need to have symbolics more deeply embedded in their basic operations. This is already the trend in all the large clusters powering the internet and the most popular software.

As a delightful concluding, yet open and unoriginal, thought, this book by Flusser comes to mind… Does Writing Have a Future? suggests that ever richer symbolics than the centuries-old mode of writing and reading will be not only desired but inevitable as we attempt to communicate in ever vaster networks. (Which, not surprisingly, is very self-referential if you extend the thought to an idea of “computing with pictures,” which really isn’t different from computing with words or other representations of bits that represent other representations of bits…) I suppose all of this comes down to seeing which symbolics prove to be more efficient in the total scope of computation. And whatever interpretation we assign to “efficient” is, by the very theme of this essay, at best an approximation.

Observation and Creation

Previously I made this set of statements:

Computational irreducibility, the principle (unproven), suggests the best we are going to be able to do to understand EVERYTHING is just to keep computing and observing. Everything is unfolding in front of us, and it’s “ahead” of us in ways that aren’t compressible. This suggests, to me, that our best source of figuring things out is to CREATE. Let things evolve: because we created them, we understand exactly what went into them, and after we’re dead we will have machines we made that can also understand what went into them.

This is a rather bulky, ambiguous idea without some details behind it. What I am suggesting is that the endless zoological approach of observing and categorizing “the natural world” isn’t going to reveal a path forward on many of the lingering big questions. For instance, there’s only so far back into the Big Bang we can look. A less costly effort is what is happening at the LHC, where fundamental interactions are being “created” experimentally. Or, in the case of the origin of life, there’s only so much mining of the clues on Earth and exoplanets we can do. A more fruitful approach in our lifetime will likely be to create life: in a lab, with computers, and by shipping genetic material and biomass out into space. And so on.

This logic carries on into the pure abstraction layers too. The study of computational complexity is about creating ever new complex systems and then observing their properties and behaviors. Mathematics has always been this way… we extend mathematics by “creating” all sorts of new structures: first we did this geometrically, then logically/axiomatically, and now computationally. (I could probably argue successfully that these are equivalent.)

All that said, we cannot abandon observation of the world around us. We lack the universal scale to create all that is around us, and we are very far from exhausting all the knowledge that can come from observing what exists right now. The approaches of observation and creation go hand in hand, and for the most important questions both are required to be anywhere close to certain we’re on the right path to what might actually be going on. The reality is, our ability to know is quite limited. We will always lack some level of detail. Constant revision of the observational record, and the attempt to recreate or create anew the things we see, often reveals small but critical details we missed in our initial assessments.

Examples that come to mind include Bertrand Russell’s and Whitehead’s attempt to fully articulate all of mathematics in Principia Mathematica; Gödel undid that one rather handily with his incompleteness theorem. More dramatic examples from history include the destruction of the idea of an Earth-centered universe, the spacetime-curvature revelations of Einstein and Minkowski, and, of course, evolutionary genetics’ unraveling of a whole host of long-standing theories.

In all those examples there’s a dance between observation and creation. Of course, it’s far too clean to maintain a clear distinction between observing the natural world and creating something new. Really, these are not different activities; it’s just a matter of perspective on where we’re honing our questions. The overall logical perspective I hold is that everything is a search through the space of possibilities. “Creation” is really just a repackaging of patterns. I tend to maintain it as a different observational approach rather than lump it in, because something happens to people when they think they are creating: they are more open to different possibilities. When we think we are purely observing, we are more inclined to associate what we observe with previously observed phenomena. When we “create,” we’ve already primed ourselves to look for the “new.”

It is because of this combination, the likely reality of computational irreducibility and the psychological effect of “creating” and seeing things in a new light, that I so strongly suggest “creating” more if we want to ask better questions, debunk false answers, and increase our knowledge.

Time equals money is a truism. It is true in the sense that both concepts are simply agreements between people in relation to something else. In the case of time it is an agreement between people about clocks or other cyclic mechanisms and usually in relation to synchronizing activities. In the case of money it is the agreement of credit and debts as representations of trustworthiness. The day, minute, hour, and second are merely efficient conventions we use to compress information about synchronizing our activities in relation to other things. Dollars, cents, bitcoins and notes are all conventions we use to liquidate and exchange trust.

In fact, other than food, water, shelter, sleep, and reproduction, there’s nothing in human existence that isn’t in this same class of concepts as time and money. All of our cultural conventions and social constructs are, at root, based upon the need to survive. All of our social existence is derived from a value-association network built up to help us obtain the basic necessities of our personal survival. This value-association network can become quite complicated, and it certainly extends beyond our own individual lifespan and influence. We have traditions, works of literature and art, history, religion, politics, and so on, all due to an extremely complex evolution of learned associative and genetic strategies for the survival of our individual genes.

The goal in this essay is not to reduce everything we experience to the survival of genes and suggest that anthropology, the social sciences, psychology, and so forth aren’t worth investigating. The various complex systems investigated in all these disciplines exist and emerge as standalone things to study and figure out. Because politics and economies and social networks actually do exist, we must study them and understand their effects and causes. Also, we cannot effectively research all the way down from these emergent concepts to the fundamentals, for a variety of reasons, not least of which is simply computational irreducibility.

Computational irreducibility, the principle (unproven), suggests the best we are going to be able to do to understand EVERYTHING is just to keep computing and observing. Everything is unfolding in front of us, and it’s “ahead” of us in ways that aren’t compressible. This suggests, to me, that our best source of figuring things out is to CREATE. Let things evolve: because we created them, we understand exactly what went into them, and after we’re dead we will have machines we made that can also understand what went into them.

In a sense there’s only so much behavior (evolution of information) we can observe with the current resources available to us. We need to set forth new creations that evolve in our lifetimes (genetic and computational lifetimes). Let us see if cultures and social structures and politics and money evolve from our creations!

However, until that’s more feasible than it is now, we have history and anthropology and sociology… and yet! While new patterns emerge at various levels of reduction, these emergent patterns often share common abstract structures and behaviors. For example, the Fibonacci sequence shows up in a wide variety of patterns at different levels of abstraction. Another example is fractal behavior, found in economic markets, in the growth of trees, and obviously within various computational systems. It is this remarkable phenomenon that leads to my forthcoming hypothesis.
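
As a tiny illustration of one such shared structure, the Fibonacci recurrence below converges on the golden ratio, the same constant that shows up in sunflower spirals, branching patterns, and plenty of purely computational settings.

```python
# The Fibonacci recurrence, and the convergence of successive ratios to the
# golden ratio (1 + sqrt(5)) / 2 ~= 1.6180.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (5, 10, 20, 40):
    print(n, fib(n + 1) / fib(n))   # approaches 1.6180...
```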

The fundamental aspect of existence is information.

Bits. Bits interacting with bits to form, deform and reform patterns. These patterns able to interpret, reinterpret and replicate. These patterns can be interpreted as networks. Networks, described by bits, made of bits, able to understand streams of bits.

[For understandable examples of this in everyday life, think of the computer you’re reading this on. It is made of atoms (bits) and materials like silicon (bits of bits) fashioned into chips and memory banks (a network of bits of bits that process and store bits) that understand programs (bits about other bits) and interact with humans (who type bits from their own networked being [fingers, brains, eyes…]).]

Information has no end and no beginning. It doesn’t need a physical substrate, as in a particular substrate. It becomes the substrate. It substantiates all substrates. Anything and everything that exists follows the structures of pure information and pure computation – our physical world is simply a subset of this pure abstraction.

These high-level (or what we call high-level) phenomena like social networks and politics and economies are all phenomena of information and information processing. The theories of Claude Shannon, Kurt Gödel, Church, Turing, Chaitin, Mandelbrot, Wolfram, and so on all show signs that at every level of “how things work” there is a fundamental information-theoretic basis.
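
For one concrete instance of that basis, Shannon entropy measures the average number of bits per symbol a message actually carries. Anything that can be serialized into symbols (prices, votes, tree rings) can be measured this way. A minimal sketch:

```python
# Shannon entropy: average bits per symbol under the message's own
# empirical symbol frequencies.

from collections import Counter
from math import log2

def entropy(message):
    counts = Counter(message)
    total = len(message)
    return sum((c / total) * log2(total / c) for c in counts.values())

print(entropy("aaaaaaaa"))                     # 0.0 bits: fully predictable
print(entropy("abababab"))                     # 1.0 bit per symbol
print(entropy("the economy is information"))   # somewhere in between
```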

A strange thing is happening nowadays. Well, strange to the many who grew up working the land and manipulating the world directly with their own hands… The majority of “advanced” societies are going digital. By digital I very clearly mean not the stuff taken directly from the ground of the earth (I don’t mean digital in the sense of digital vs. analog, continuous vs. discrete). The economies are 90%+ digital, the majority of the most valuable companies don’t produce physical products, the politics are digital, the dominant mode of communication and social interaction is digital, and so forth. It’s almost impossible at this point to think our existence will end up as anything but informatic. But it’s a bit misleading to think we’re moving away from one mode into another. The fact is it’s ALL INFORMATION, and we’re just arguing about the representation of that information in physical form.

So what does any of the last set of paragraphs have to do with the opening? Well, everything. Time and money are simply exchanges of information. We will find traces of their basic ideas (synchronization and “trust”) in all sorts of complex information exchanges. Time and money are compressions of information that allow us, finite yet universal computers, to do things mostly within our computational lifetime. Time and money are NOT fundamental objects of existence. They are emergent abstractions that will emerge EVERY TIME sufficiently complex information structures start interacting and, assuredly, develop associative value networks.

Are they real? Sure.

Should we obsess over them? It all depends on what you, as an information packet, learn to value. If your basic means of survival as the information packet you are depends on the various associations they provide, then yes. If not, then no. Or perhaps you should deal with them very differently than you do today.

Software that scales

Certain activities are fundamental to the human endeavor.

The list:
Acquisition of food, shelter and water
Mate attraction and selection
Procreation
Acquisition or foraging of information regarding the first three
Acquisition of goods and services that may make it more efficient to get the first three
Exchange of the above three

Only software that deals directly with these activities flourishes into a profitable, long-term business.

I define software here as computer programs running on servers and personal computers and devices.

The race to the bottom in the pricing of information, software, and the hardware that runs that software ensures that only software businesses that scale beyond all competition can last. Scale means massive and efficient data centers, massive support functions, and a steady stream of talented people to keep it all together. And the only activities in the human world that scale enough are those that are fundamental to us all.

Sure, there’s a place for boutique and specialist software, but typically firms like that are swallowed by the more fundamental firms, who bake the function directly into their ecosystem and then give it away. This is also why the boutique struggles long term: the fundamental software builders are always driving costs down. So even if a boutique is doing OK now, it is not sustainable if it is at all relevant. It will be swallowed.

Open source software only reinforces this. In fact, it takes the idea to the extreme. The really successful open source projects are always fundamental software (OS, browser, web server, data processing), and they further drive the price of software toward zero.

Scale is the only way to survive in software.

This goes for websites, phone apps, etc. All the sites and apps that focus on niche interests and don’t deal directly with the fundamental activities above either get assimilated into larger apps and sites with broad function, or they wither and die, unable to be sustained by a developer.

What is art?

Art isn’t anything.  It’s everything and nothing.  Art is pattern.  It’s narrative.  It’s expression.  Anything we do someone else could consider as art.

What’s most important is that someone else can experience us, this art, the source of art. And art doesn’t suppose an audience in its creation. We all create art, constantly, regardless of whether we think someone else will view it, watch it, hear it, taste it. And everyone we interact with, directly or indirectly (through culture, rules, laws), influences our personal art.

Is there ART? That is, is there something that we’d all agree is a clear expression of art? No. No there isn’t. Even when people start out with the intent to make art, and make it clear they are making ART, it’s no more or less art than anything else anyone does all day. Art is simply that which we do that becomes noticed.

The key is: Does Someone Notice? Even if that someone is YOURSELF. Art is that thing that makes you notice, that makes you change your perspective. That’s it. And that’s everything.

Art’s role in life is to be noticed. Art is about creating an audience. The more entities that notice a new perspective because of it, the more relevant the art (in American cultural terms).

What is progress?

The idea of progress is a flimsy concept. Nothing in the universe comes for free. So when some system or entity “progresses,” it comes at the expense of energy somewhere. It’s not necessarily a wholly destructive expense, but it is an expense nonetheless. We commonly talk about society, civilization, and the human race in terms of progress: we’re progressing from a barbaric or unenlightened state to a state of self-reliance, control, and technologically enhanced awareness. But this progress is mostly an illusion. It comes at a great expense to other species, the planet, and even ourselves.

Some conflicting reports:

http://humanprogress.org/ (there’s progress!)

http://en.wikipedia.org/wiki/Social_progress (there’s progress!)

http://reason.com/archives/2013/10/30/human-progress-not-inevitable-uneven-and (there is a thing called progress but we’re not always on it!)

http://people.wku.edu/charles.smith/wallace/S445.htm (there’s progress!)

http://www.alternet.org/environment/myth-human-progress (progress is an illusion!)

http://www.vice.com/read/john-gray-interview-atheism (there is no progress!)

(Another way to think about this is that everything is competing to exist against other things that also are fighting to exist.  The better we compete the more we extract from the ecosystems.)

Certainly we’ve increased our life expectancy on the whole and reduced violence and physical suffering in the human race. We have invented computers, figured out space flight, eradicated some diseases, and taught billions to read and write. All progress, right?

To what end? Where is all this progress going? How is this progress measured? Does a longer life mean a better life? Does a less violent life lead somewhere different than a more violent one?

Perhaps even more challenging is figuring out whether we have a choice in the matter. Are we even biologically, physically capable of not trying to progress along these dimensions and exert our competitive advantages upon our environment? If we had some definition of how best to live in a philosophic sense, and it differed materially from the progressive ways we’ve chased, could we actually change? Could we choose less technology and a culture more in balance with the environment? And no, there’s no “hippie” justification needed for this thinking. The question is: is there a way of life that is more sustainable and less extractive from the world than the way we currently live? Or is our survival inexorably tied to dominating everything we can?

To make this very clear, consider the species that have become extinct at the hands of humankind’s hunting: http://en.wikipedia.org/wiki/Lists_of_extinct_animals
Our “progress” has led in many cases directly to their complete decline. Who are we to say whether our progress is worth it – was worth their demise?

I’m directly asking everyone: what is the point of our focus on progress? Certainly in America we are all put on a course to progress through life. The goal is clear: get through high school, go through college, and begin to produce. One’s production should lead to ever more important positions in this progressive society, with ever-increasing economic output. We measure all facets of our culture against GDP and endowments and ROI. We do not recognize that growth in these respects must be paid for in others.

So the question remains: What is progress? And what’s it worth to you?

there isn’t one.

or

every GUI will be available.

I receive a lot of resistance when I suggest that the future of graphical user interfaces doesn’t exist. Here’s why I think this. The graphical interface is soon to become a completely personalized experience. That is, the machine will optimize the interface for you. Sure, there will need to be some uber designer who sets up some initial styles and maybe some basic parameters, but ultimately the machines are going to decide WITH YOU the best way to integrate.

This should be an unsurprising prediction, considering this is how we humans interact. We constantly adjust ourselves to each other. We change language, fashion, body language, cultural norms, etc. to improve our understanding. So as machines increase in sophistication, we will ask that the interfaces change with our desires (values/ideals/patterns).
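
To sketch what “deciding with you” could look like, here is a toy epsilon-greedy bandit that learns which of a few hypothetical layout variants a particular user responds to best. Everything here (the variants, the engagement signal) is an illustrative assumption, not a real UI framework.

```python
# A toy adaptive interface: an epsilon-greedy bandit over hypothetical
# layout variants, rewarded by whatever engagement signal the system uses.

import random

VARIANTS = ["dense", "spacious", "minimal"]   # hypothetical layout options
EPSILON = 0.1                                  # fraction of time spent exploring

counts = {v: 0 for v in VARIANTS}
wins = {v: 0.0 for v in VARIANTS}

def choose():
    if random.random() < EPSILON:
        return random.choice(VARIANTS)         # keep exploring occasionally
    return max(VARIANTS, key=lambda v: wins[v] / counts[v] if counts[v] else 0.0)

def feedback(variant, engaged):
    counts[variant] += 1
    wins[variant] += 1.0 if engaged else 0.0

# Simulate a user who secretly prefers "minimal":
prefs = {"dense": 0.2, "spacious": 0.4, "minimal": 0.7}
for _ in range(1000):
    v = choose()
    feedback(v, engaged=random.random() < prefs[v])

print(max(counts, key=counts.get))  # usually "minimal" after enough rounds
```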

Beyond GUIs, though, the technical world moves ever toward an intelligent web of computational services instead of a hyperlinked HTML web of linked presentations. That means things are moving quickly to a semantic, computational approach where everything (interfaces and data!) is an object for computation, able to be fed into whatever connects to it.

The problem with user interface design now is that it happens mostly in art and production circles instead of through real collaboration among artists, behaviorists, and computer scientists. What’s ideal is a group of people who all understand the arts, behaviorism, and computational theory. You can’t really do solid UI design without at least those types.

In the very near future we will not obsess over and talk about GUIs. We will talk about experiences in the world and with concepts and art and characters, all very naturally with machines and their “senses.” This interaction with the machines will inform the presentation.
