
Posts Tagged ‘existence’

“IS” creates. Its mere utterance, inscription, trace imbues existence. This IS. That IS. 1 IS. 2 IS 1 and 1. This IS different than that.

Fight to Exist

IS, the wordform, is the trivial sign of the miraculous act of creation. The giver of existence is merely Making It So. Made so by a finger pointing, a twitch of the eyes, a sentence declaring something is, an action potential in the neuron, a bit flipped, a sum of numbers, movement in spacetime. IS. IS existence IS?

IS supersedes the classic BEING and EVENT philosophy. There is no separation of BEING from EVENT, just as light is neither a wave nor a particle. Splitting IS into BEING and EVENT is a valid creation, as all creations are, but phrasing BEING and EVENT as the two actual, distinct creative gestures does not make them so.

IS in all its guises is the singular gesture, ex nihilo. And yet, really, the act IS NOT something from nothing. For nothing is a something. It is something from something. Nothing, redefined, is SOMETHING undistinguished from SOMETHING ELSE. There IS NOT some thing!

A word game? Hardly. IS can be experimented with and falsified. In fact, IS requires it. The completion of the IS is NOT. Negate it with another IS. This IS IS not THIS. Ad infinitum. Do it without words, without thoughts. Merely observe, in any perceptive medium and with any perceptive tool. What “happens”? IS happens.

This IS not satisfactory, though. The meaning or import of IS for practical understanding should be established. Only through another series of IS can this be carried out. Paradoxically, the truth of the IS cannot be established without an IS.

In fact.

Truth IS. Truth is the only concept that resists the IS; truth cannot be IS-ified. Truth IS true. This IS true. There is no basis outside of the truth that can objectify that IS statement. The true is primary to the IS. Or IS it? IS anything true before the IS establishes it for evaluation? Truth is. IS truth? IS TRUTH? IS TRUTH. TRUTH IS.

In a Newtonian world (and that of his associates Kant, Descartes, Lovelace, Darwin, Boole, Laplace, Jesus, Muhammad, Zeus, Curie) where IS and TRUTH are mere approximations, does it matter if we really know? It does. IS and TRUTH matter with more and more specificity depending on the relative stakes. To land humans in a rocket on the moon, the recursion of physical mathematics and physical engineering needs a much more robust IS TRUE than two humans playing catch with a ball. The near-infinite regress of IS TRUE in rocket physics pales in comparison to the IS TRUE of ALL OF KNOWLEDGE. In fact – if fact IS TRUE – ALL OF KNOWLEDGE cannot be established, because IS TRUE goes on beyond all cardinal infinity. And yet, here we are. Something IS. Something is TRUE. Some things are true. TRUE is. FALSE is not. FALSE IS not. FALSE IS NOT.

How much IS and how much TRUE one needs for existence… NOW THAT IS THE QUESTION.


The Point

Everything is a pattern and connected to other patterns.   The variety of struggles, wars, businesses, animal evolution, ecology, cosmological change – all are encompassed by the passive and active identification and exploitation of changes in patterns.

What is Pattern

Patterns are thought of in a variety of ways – a collection of data points, pictures, bits and bytes, tiling. All of the common sense notions can be mapped to the abstract notion of a graph or network of nodes and their connections, edges. It is not important, for the sake of the early points of this essay, to worry too much about the concept of a graph or network or its mathematical or epistemological construction. The common sense ideas that might come to mind should suffice – everything is a pattern connected to other patterns. E.g. cells are connected to other cells, sometimes grouped into organs connected to other organs, sometimes grouped into creatures connected to other creatures.
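To make the graph framing concrete, here is a minimal sketch in Python (the node names are invented purely for illustration) of patterns as nodes and their connections as edges, with the cell/organ/creature nesting above expressed as nothing more than further edges in the same network.

```python
# A pattern-network as a plain adjacency list: nodes are patterns,
# edges are the connections between them. Names are illustrative only.
network = {
    "cell_a": ["cell_b", "heart"],        # cells connect to cells and to the organ they form
    "cell_b": ["cell_a", "heart"],
    "heart": ["circulatory_system"],      # organs connect into larger organ systems
    "circulatory_system": ["creature_1"],
    "creature_1": ["creature_2"],         # creatures connect to other creatures
    "creature_2": [],
}

def neighbors(node):
    """Every pattern is known only through its connections to other patterns."""
    return network.get(node, [])

print(neighbors("cell_a"))  # ['cell_b', 'heart']
```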

Examples

As can be imagined, the universe has a practically infinite number of methods of pattern identification and exploitation. Darwinian evolution is one such example of a passive pattern identification and exploitation method. The basic idea behind it is generational variance with selection by consequences. Genetics combined with behavior within environments encompasses the various strategies emergent within organisms which either hinder or improve the strategies' chances of survival. Broken down, and perhaps too simplistically, an organism (or collection of organisms or raw genetic material) must be able to identify threats, energy sources and replication opportunities and exploit these identifications better than the competition. This is a passive process overall because the source of identification and exploitation is not built into the pattern selected; it is emergent from the process of evolution. On the other hand, sub-processes within the organism (the pattern object we are considering here) can be active – such as in the case of the processing of an energy source (eating and digestion and metabolism).

Other passive pattern processes include the effects of gravity on solar systems and celestial bodies, on down to their effects on planetary ocean tides and other phenomena. Here it is harder to spot the identification aspect. One must abandon the Newtonian concept and focus on relativity, where gravity is the name for changes to the geometry of spacetime. What is identified is the geometry, and different phenomena exploit different aspects of the resulting geometry. Orbits form around a sun because of the sun's dominant effect on the geometry, and the result can be exploited by planets that form with the right materials and fall into just the right orbit to be heated just right to create oceans gurgling up organisms and so on. It is all completely passive – at least with our current notion of how life may have formed on this planet. It is not hard to imagine, based on our current technology, how we might create organic life forms by exploiting identified patterns of chemistry and physics.

In similar ways the trajectory of artistic movements can be painted within this pattern theory. Painting is an active process of identifying form, light, composition and materials and exploiting their interplay to represent, misrepresent or simply present pattern. The art market is an active process of identifying valuable concepts or artists or ideas and exploiting them before mimicry or other processes over-exploit them until the value of novelty or prestige is nullified.

Language and linguistics are the identification and exploitation of symbols (sounds, letters, words, grammars) that carry meaning (the meaning being built up through association (pattern matching) to other patterns in the world (behavior, reinforcers, etc.)). Religion, by the organizers, is the active identification and exploitation of imagery, language, story, tradition, and habits that maintain devotional and evangelical patterns. Religion, by the practitioner, can be active and passive maintenance of those patterns. Business and commerce is the active (sometimes passive) identification and exploitation of efficient and inefficient patterns of resource availability, behavior and rules (asset movement, current social values, natural resources, laws, communication media, etc.).

There is not a category of inquiry or phenomena that can escape this analysis.   Not because the analysis is so comprehensive but because pattern is all there is. Even the definition and articulation of this pattern theory is simply a pattern itself which only carries meaning (and value) because of the connection to other patterns (linear literary form, English, grammar, word processing programs, blogging, the Web, dictionaries).

Mathematics and Computation

It should be of little surprise that mathematics and computation form the basis of so much of our experience now. If pattern is everything and all patterns are in competition, it makes some common sense that efficient pattern translation and processing would arise as a dominant concept, at least in some localized regions of existence.

Mathematics' effectiveness in a variety of situations/contexts (pattern processing) is likely tied to its more general, albeit often obtuse and very abstracted, ability to identify and exploit patterns across a great many categories. And yet, we've found that mathematics is likely NOT THE END GAME. As if anything could be the end game. Mathematics' own generality (which we could read as reductionism and a lack of full fidelity to patterns) does it in – the proof of incompleteness showed that mathematics itself is a pattern of patterns that cannot encode all patterns. Said differently – mathematics' incompleteness necessarily means that some patterns cannot be discovered nor encoded by the process of mathematics. This is not a hard metaphysical concept. Incompleteness merely means that even for formal systems such as regular old arithmetic there are statements that are true but whose truth or falsity cannot be established within the system. Proofs are also patterns to be identified and exploited (is this not what pure mathematics is!) and yet we know, because of proof, that we will always have patterns – true statements – that will not have a proof. Lacking a proof for a statement doesn't mean we can't use it; it just means we can't count on it to prove another statement, i.e. we won't be doing mathematics with it. It is still a pattern, like any sentence or painting or concept.

Robustness

The effectiveness of mathematics is its ROBUSTNESS. Robustness (a term I borrow from William Wimsatt) is the feature of a pattern whereby, when it is processed from multiple other perspectives (patterns), the inspected pattern maintains its overall shape. Some patterns maintain their shape only within a single or limited perspective – all second-order and higher effects are like this. That is, anything that isn't fundamental is some order of magnitude less robust than things that are. Spacetime geometry seems to be highly robust as a pattern of existential organization. The effect-carrying ether, as proposed more than 100 years ago, is not. Individual artworks are not robust – they appear different from every different perspective. Color as commonly described is not robust. Wavelength is.

While much of mathematics is highly robust, or rather describes very robust patterns, it is not the most robust pattern of patterns of all. We do not, and likely won't ever, know the most robust pattern of all, but we do have a framework for identifying and exploiting patterns more and more efficiently – COMPUTATION.

Computation, by itself. 

What is computation?

It has meant many things over the last 150 years. Here it is defined simply as patterns interacting with other patterns. By that definition it probably seems like a bit of a cheat to define the most robust pattern of patterns we've found as patterns interacting with other patterns. However, it cannot be otherwise. Only a completely non-reductive concept would fit the necessity of robustness. The nuance of computation is that there are more or less universal computations. The ultimate robust pattern of patterns would be a truly universal-universal computer that could compute anything, not just what is computable. The real numbers are not computable; the integers are. A “universal computer” as described by today's computer science is a program/computer that can compute all computable things. So a universal computer can compute the integers but cannot fully compute the real numbers – of numbers like pi, e, or the square root of 2 it can only ever produce finite approximations, and almost all other reals it cannot compute at all. We can prove this and have (the halting problem, incompleteness, set theory….). So we're not at a complete loss in interpreting patterns of real numbers (irrational numbers in particular). We can and do compute with pi and e and square roots millions of times a second. In fact, this is the key point. Computation, as informed by mathematics, allows us to identify and exploit patterns far more than any other apparatus humans have devised. However, as one would expect, the universe itself computes and computes itself. It also has no problem identifying and exploiting patterns of an infinitude of types.
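As a small, hedged illustration of that point: we never hold pi, e or the square root of 2 themselves, only finite approximations computed to whatever precision a task demands. The sketch below uses standard methods (Newton's iteration for the square root and the series for e); the 50-digit precision is an arbitrary choice.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # an arbitrary finite precision; the numbers themselves never end

def sqrt2(iterations=10):
    """Newton's method for the square root of 2: a finite process approximating an irrational."""
    x = Decimal(1)
    for _ in range(iterations):
        x = (x + Decimal(2) / x) / 2
    return x

def approx_e(terms=50):
    """Partial sum of the series e = sum over n of 1/n!: again, only ever an approximation."""
    total, factorial = Decimal(0), Decimal(1)
    for n in range(terms):
        if n > 0:
            factorial *= n
        total += Decimal(1) / factorial
    return total

print(sqrt2())    # 1.4142135623730950488...
print(approx_e()) # 2.7182818284590452353...
```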

Universal Computation

So is the universe using different computation than we are? Yes and no. We haven't discovered all the techniques of computation at play. We never will – it's a deep well, and new approaches are created constantly by the universe. But we have now unlocked the strange loopiness of it all. We have uncovered Turing machines and other abstractions that allow us to use English-like constructs to write programs that get translated into bits for logic gates in parallel to compute and generate solutions to math problems, create visualizations, search endless data, write other programs, produce self-replicating machines, figure out interesting 3D printer designs, simulate markets, generate virtual and mixed realities and anything else we or the machines think up.
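As a minimal sketch of the sort of abstraction meant here, this is a toy Turing-machine simulator in Python; the rule table (a machine that increments a binary number) is invented purely for illustration, not any particular historical machine.

```python
def run_turing_machine(rules, tape, state="right", blank="_"):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rule table for incrementing a binary number:
# scan right to the end of the input, then propagate the carry back to the left.
increment = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1",  0),
    ("carry", "_"): ("halt",  "1",  0),
}

print(run_turing_machine(increment, "1011"))  # '1100'  (11 + 1 = 12)
```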

What lies beneath all this, though, is the very abstract yet simple concept of networks. Nodes and edges. The mathematics and algorithms of networks. Pure relation between things. Out of the simple connection of things to things arise all the other phenomena we experience. The network is limitless – it imposes no guardrails on what can or can't happen. Yet that it is a network does explain why all possibilities exhibit as they do, and why phenomena and experience emerge at the relative levels they do.

The computation of pure relation is ideal. It only supersedes (only makes sense to really consider over) the value of reductionist modes of analysis, creation and pattern processing when the alternative pattern processing is not sufficient in accuracy and/or has become sufficiently inefficient to provide relative value for its reduction. That is, a model of the world or of a given situation is only as valuable as it doesn't sacrifice too much accuracy for efficiency. It turns out that for most day-to-day situations Newtonian physics suffices.

What Next

We've arrived at a point in discovery and creation where the machines and machine-human-earth combinations are venturing into virtual, mixed and alternate realities for which current typical modes of investigation (pattern recognition and exploitation) are not sufficient. The Large Hadron Collider is an example, and less an extreme example than it was before. The patterns we want to understand and exploit – the quantum, the near-speed-of-light and the unimaginably large (the entire web index with self-driving cars, etc.) – are of such a different magnitude and kind. Then, when we've barely scratched the surface there, we get holograms and mixed reality, which will create their own web and their own physical systems as rich and confusing as anything we have now. Who can even keep track of the variety of culture and being and commerce and knowledge in something such as Minecraft? (And if we can't keep track (pattern identify), how can we exploit (control, use, attach to other concepts…)?)

The pace of creation and discovery will never be less in this local region of spacetime. While it may not be our goal, it is our unavoidable fate (yes, that's a scary word) to continue to compute and to take a more computational approach to existence – the identification and exploitation of patterns by other patterns seems to carry this self-reinforcing loop of recursion and the need for ever more clarifying tools of inspection that need more impressive means of inspecting themselves… Everything in existence replicates, passively or actively, and at a critical level/amount of interconnectivity (complexity, patterns connected to patterns) self-inspection (reasoning, introspection, analysis, recursion) becomes necessary to advance to the next generation (to explore exploitation strategies).

Beyond robotics and 3D printing and self-replicating and evolutionary programs, the key pattern processing concept humans will need is a biological approach to reasoning about programs/computation. Biology is a way of reasoning that attempts to classify patterns by similar behavior/configurations/features, and in those similarities find ways to relate things (sexual reproduction = replication, metabolism = energy processing, etc.). It is necessarily both reductionist, in its approach to categorize, and anti-reductionist, in its approach to look at everything anew. Programs/computers escape our human (and theoretical) ability to understand them, and yet we need some way to make progress if we, ourselves, are to persist alongside them.

And So.

It’s quite possible this entire train of synthesis is a justification for my own approach to life and my existence. And this would be consistent with my above claims.   I can’t do anything about the fact that my view is entirely biased by my own existence as a pattern made of patterns of patterns all in the lineage of humans emerged from hominids and so on all the way down to whatever ignited patterns of life on earth.

I could be completely wrong. Perhaps some other way of synthesizing existence all the way up and down is right. Perhaps there's no universal way of looking at it. Though it seems highly unlikely, and very strange, to me that patterns at one level or in one perspective couldn't be analyzed abstractly and applied across and up and down. And the very idea that patterns of pattern synthesis are fundamental strikes me as much more sensible, useful and worth pursuing than anything else we've uncovered and cataloged to date.


Name. Label. Category. Property. Feature. Function.

These are the objects of existence. These are not fundamentally ontological. These are secondary-effect objects. Objects of an awareness that requires difference to exist.

Naming. Labeling. Categorizing. Identifying. Testing. Sensing.

These are the activities of this difference awareness.

All that we are aware of in existence is wholly contained in our recognition of something being different – set apart – from something else. We announce the differences by giving a thing a name – we put it in a category – we slap a label on it – we define its properties – we recognize its features and functions.

Without a name or definition – it does not exist. This thing’s specific existence depends on its acknowledged relations to other things.

Our science is a process of identification and naming of differences. Rates of change. Taxonomy of species. Periodic tables. Phase changes. Quantum states. Direction of Field. Size, color, speed, charge, strength.

Our civilization and societies are recognitions and groupings of differences between people. Classes, races, parties, families.

Our language is differences in symbols and sounds.

The question at hand: are these differences real? Do the things we name have an existence, any at all, without our naming?

No. The name we give it, and the associations that name carries, determine the full extent of the thing. This is a direct result of the connectedness of everything and nothing. There is no cut point at which a series of points isn't a line. There is no point at which humans break from the animal kingdom. There is no discrete point at which a particle is just a particle and not also a wave. Pi never resolves. Our mathematics is never complete. We decide, in our finitude, when to recognize the existence of something. And that recognition is always incomplete, because it seeks to disconnect that which is connected.

And so it is with our own identification of the self. We say I, and we have no true idea of who I am. We just decide what details fall under the name of I, but that is not all that our particular bodies and nervous systems and memories ever are. Our ideas are not our own. Our DNA is not self-generated. Our sense organs do not sense I the way others sense us. And so on. The definition is incomplete. Forever. But there is an existence of I because we name it, though it is always changing. The I right now is not the I in the next configuration of the universe.
And in that regard… existence, as we name it, is a secondary effect. The totality of everything cannot be named. The totality of existence does not exist. It is beyond existence; it never existed. It is contained within the infinite regress of finite awareness naming things.

We, as finite awareness machines, and the more aware but still finite awareness machines we create, will never reach the end of naming. The very existence of awareness is in fact simply the product of naming. To be aware is to name. This is this. That is that. This is not that. I am aware of that. This is not true. That is false.

Here I label us machines to conjure up a sort of existence. I can refine that name and conjure up a different existence. But does that change the existence of awareness? Yes! It would be a slightly different awareness. And yet it would still be a finite awareness. Total awareness implies total everything. To be aware of all differences would require awareness of all connection and relation. Which would admit of all absurdities, logical falsehoods, dark energies, the opposite of everything, total entropy… nothing. It would resolve into nothing. Total awareness is total nothing.

For there to be anything at all, anything less than infinity, there must be a loss in fidelity. A categorization of those things with sense, that can be sensed, versus those things that more sternly refuse awareness and sense – non-sense.

Many will label the above words as esoteric meandering. A label itself. And perhaps it fits within the category of nonsense.

And yet, there are things we all deal with whose non-sense, whose own esoterics, we fail to recognize because we are less aware of them.

Consider currency. What is it? What is its existence? Paper? backed by a government? backed by a commodity? backed by a military? backed by a nation state?

All a taxonomy of names. Nothing other than a building up of related names.

What is a price? What is its existence? It’s what people agree to in an exchange… an exchange of named things.

Consider art. What is art? a painting? colors?

What is geography? When does the mountain give way to the valley? When does a river become a lake?

This exercise can and will go on forever. It is an out of time and space process. It is. Existence.

Just a name.


Preface:
This essay lays out an extremely brief account of what I believe (and see evidence toward) is the source of existence, and thus of perception, thought and knowledge. I want to say I have some overarching practical use for this truth-seeking, but I do not. Once you dig deep enough into questions of truth and knowledge, they become both the means and the ends of themselves. I cannot claim truthfully that knowledge of existence would help me better live my life, make more money, live more happily or whatever it is people prefer the ends of effort to be. So often I find that the truth (as close as we can get to it) of a matter is that the matter itself doesn't matter. So if I had to tie up what I do in some nice little package, I'd say I'm on a journey to figure out what doesn't matter. And the side effect of that might often be less worry, because I do not think even worry matters. That is, worrying doesn't provide any use in matters of truth and knowledge, nor does it matter in getting through the day.

I should also note that this essay and previous essays are by no means complete, fully consistent and without some leaps in logic and/or unexplained connecting of the dots. My intention is always to produce ever tighter arguments and deeper questions, but both I and any readers probably have limited attention at any given moment to go all the way “there.” It has taken, and will continue to take, more and more of my life to piece everything together, and there will come a spacetime context in which I can more fully devote my energies to communicating much more coherently to those with the patience to muck through whatever that communication becomes (paintings, books, essays, videos, discussions…).

The basic thesis

No finite thing is able to contemplate, recognize, think of, use, make sense of, perceive any other thing without reference to or a mapping to other things. A most basic activity of even the slightest awareness or perception requires difference. Perception is response to differences – a sliding scale of identity, sameness, similarity, dissimilarity, difference, opposite, negation, inverse.

Perception is reality. Becoming is reality. Difference is truth, the only truth, it makes the truth, becomes the truth. The whole of perceived existence – and what other existence could there be? – is difference.

Two key topics or theses arise:
Difference as fundamental source of existence
The role of the infinite pulling apart into the finite

A very brief description of Difference as the fundamental source of existence

0 is 0. A tautology of non-existence. But 0 as 1 - 1 is no longer a tautology; it is existence. 0 becomes something: it becomes 1 - 1, 1 plus negative 1.

Is this a word game? Certainly, and so much more than that. The whole of reality is an infinite game of differences, a play of this is not that. Go deeper and forget the words; think in physical or abstract terms. A proton is not an electron; it is their difference that imbues them with existence. For if a proton equaled an electron, in word and in physical terms, there would just be protons and positive charge only, which isn't charge at all.

Carry this all the way up from primitive mathematical and computational constructs to biology, chemistry, linguistic, artistic, social, political constructs. Everything is its difference from everything else. Not just in language or thought – how we conceive and perceive of difference, but in full actuality. Difference actualizes everything at every level.

What is a thought if not an exploration of differences? What is politics if not differences about laws and policies? What is genetic material if not differences in proteins? What are computer programs if not differences in the interpretation of 1s and 0s? What are YOU if not differences from others, from changes in the environment, from previous YOUs in other contexts?

It makes no difference to this reasoning if one denies concepts such as YOU/I or politics or anything else. If those concepts are to exist, they will come from differences. The only things possible that could not come from a difference would be NO THING (nothing) or EVERY THING (everything). And of those I will claim: nothing is everything, everything is nothing. Nothing = everything. Having no difference between them, they do not exist except through an infinite descent into nothing or an infinite ascent into everything. The becoming of reality is a finite experience of difference between nothing and everything.

To test this reasoning consider a few simple questions. What would it mean to represent every political view? Or what sort of behavior could we measure if all behaviors were present? What sort of mathematics would we have if every number were the same? What sort of identity would you have if you had all identities?

An even briefer description of the role of the finite
Consider a fractal. It is the same throughout – or is it? It is an infinite loop of similarity and sameness where differences arise out of finite awareness – we cannot experience the whole of a fractal. Or can we? Is the finite description of a fractal – its program or generating formula – enough to experience its infinite self-similarity throughout? Even if it is, it is in identifying EXACTLY THOSE DETAILS WHICH CREATE THE DIFFERENCES (this is what a formula is!) that one fractal gains its distinction from another fractal.
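To make the “generating formula” point concrete, here is a minimal sketch: the whole Mandelbrot set is pinned down by the finite rule z → z² + c, and what distinguishes one fractal from another is exactly such a rule and its parameters. The resolution, sampled window and iteration cutoff below are arbitrary finite choices.

```python
def escapes(c, max_iter=40):
    """Iterate the finite generating rule z -> z*z + c; report how quickly it escapes."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # once |z| exceeds 2 the orbit is guaranteed to escape
            return n
    return max_iter          # treated as "inside the set" at this finite cutoff

# A coarse ASCII rendering: the infinite object, experienced only through a finite sample.
for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(64):
        x = -2.0 + col * 0.05
        line += "#" if escapes(complex(x, y)) == 40 else " "
    print(line)
```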

Universal computers are all equivalent in their abstract ability to compute anything that can be computed. They become different in use. They take on their existence through difference. My laptop is equivalent in every way to your laptop except through our different uses – our finite explorations of that universality. If you and I, with our laptops, computed everything that could be computed, our laptops would lose all difference, and in doing so would cease to exist as unique entities. That is, if this were possible, and we both pushed our laptops toward an infinite ascent to everything, they would become nothing relative to anything else that was also universal.

Drop into something perhaps more abstract, like the transcendental numbers (pi, e, etc.). They contain an infinitude. They are only different in finitude. They become something through a finitude. Were infinite computation and infinite perception possible, these numbers would be equivalent – interchangeable. In their finite application they are distinct somethings. Only through an infinite exploration can we fully experience these numbers, and in full exploration they cease to have a unique existence.

Perhaps making it personal illustrates the point – if all of us were infinite in our lives and could experience everything and could be transformed through all genetic and epigenetic and nature/nurture contexts, we'd all be the same and cease to have any unique existence. It is through our finite biology and finite contexts that we become anything at all.

Tying together, briefly.
That nothing and everything can be “pulled apart” into differences is the ontological basis of existence. It shouldn't come as some surprise that a conclusion of this reasoning is that the pulling apart of ALL DIFFERENCES is beyond our resources, and will forever be beyond our resources. But this finite, thing-by-thing pulling apart – this becoming – is exactly and only what can fuel existence. The universe is the infinite becoming finite over and over and over and over, into an infinite perceived collection of differences.

An incomplete but useful set of three links:

[a somewhat useful discussion on difference and information: http://plato.acadiau.ca/courses/educ/reid/papers/PME25-WS4/SEM.html]
[a useful categorization of information: http://plato.stanford.edu/entries/information-semantic/#1]
[transcendental numbers: http://en.wikipedia.org/wiki/Transcendental_number#Numbers_proven_to_be_transcendental]


A variety of thinkers and resources seem to converge on some fundamental ideas around existence, knowledge, perception, learning and computation.   (Perhaps I have a confirmation bias and have only found what I was primed to find).


Kurt Gödel articulated and proved what I believe to be the most fundamental idea of all, the Incompleteness Theorem. This theorem, along with analogous results such as the Halting Problem and other aspects of complexity theory, provides us the notion that there is a formal limit to what we can know. And by “to know” I mean it in the Leibnizian sense of perfect knowledge (scientific fact with logical proof, total knowledge). Incompleteness tells us that even within highly abstract, specialized formal systems there will always be some statement WITHIN that system that is true but cannot be proved. This is fundamental.


It means that no matter how much mathematical or computational or systematic logic we work out in the world, there are just some statements/facts/ideas that are true but cannot be proven to be true. As the name of the theorem suggests, though its mathematical meaning isn't quite this, our effort at formalizing knowledge will remain incomplete. There's always something just out of reach.


It is also a strange fact that one can prove the incompleteness of a system and yet be unable to prove seemingly trivial statements within these incomplete formal systems.


Gödel's proof and approach to figuring this out is based on a very clever re-encoding of the formal systems laid out by Bertrand Russell and Alfred North Whitehead. This re-encoding of the symbols of math and language has been another fundamental thread we find throughout human history. One of the more modern thinkers who goes very deep into this symbolic aspect of thinking is Douglas Hofstadter, a great writer and gifted computer and cognitive scientist. It should come as no surprise that Hofstadter found inspiration in Gödel, as so many have. Hofstadter has spent a great many words on the idea of strange loops/self-reference and re-encodings of self-referential systems/ideas.
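A minimal sketch of the flavor of that re-encoding (this shows only the arithmetic trick, with a made-up three-symbol alphabet, not Gödel's actual system): assign each symbol a number, then pack a whole formula into a single integer as a product of prime powers, so that statements about formulas become statements about numbers.

```python
def primes(n):
    """First n primes, by trial division (fine at this toy scale)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(symbol_codes):
    """Encode a sequence of symbol codes as 2^c1 * 3^c2 * 5^c3 * ..."""
    number = 1
    for p, code in zip(primes(len(symbol_codes)), symbol_codes):
        number *= p ** code
    return number

def decode(number, length):
    """Recover the symbol codes by counting how many times each prime divides the number."""
    codes = []
    for p in primes(length):
        count = 0
        while number % p == 0:
            number //= p
            count += 1
        codes.append(count)
    return codes

# Hypothetical alphabet: 1 = '0', 2 = '=', 3 = 'S' (successor). The formula "0 = 0" -> [1, 2, 1].
n = godel_number([1, 2, 1])
print(n, decode(n, 3))   # 90 [1, 2, 1]
```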


But before the 20th century, Leibniz and many other philosophical, artistic, and mathematical thinkers had already started laying the groundwork for the idea that thinking (and computation) is a building up of symbols and associations between symbols. Probably the most famous expression of this was Descartes' coining of “I think, therefore I am.” This is a deliciously self-referential, symbolic expression that you could spend centuries on. (And we have!)


Art's “progression” has shown that we do indeed tend to express ourselves symbolically. It was only in more modern times, when “abstract art” became popular, that artists began to specifically avoid overt representation via more or less realistic symbols. Though this obsession with abstraction turns out to be damn near impossible to pull off, as Robert Irwin demonstrated from 1960 on with his conditional art. In his more prominent works he made almost the minimal gesture to an environment (a wall, room, canvas) and found that, almost no matter what, human perception still sought and found symbols within the slightest gesture. He continues to this day to produce conditional art that seeks pure perception without symbolic overtones at the core of what he does. That it turns out to be impossible seems, to me, to be in line with Gödel and Leibniz and so many other thinkers.


Wittgenstein is probably the most extreme example of finding that we simply can't make sense of many things, really, in a philosophical or logical sense by saying or writing ideas. Literally, “one must be silent.” This is a very crude reading and interpretation of Wittgenstein, and not necessarily a thread he carries throughout his works, but again it strikes me as being in line with the idea of incompleteness and certainly in line with Robert Irwin. Irwin, again no surprise, spent a good deal of time studying Wittgenstein and even composed many thoughts about where he agreed or disagreed with Wittgenstein. My personal interpretation is that Irwin has done a very good empirical job of demonstrating a lot of Wittgensteinian ideas. Whether that certifies any of it as the truth is an open question. Though I would argue that saying/writing things is also symbolic and picture-driven, so I don't think there's as clear a line as Wittgenstein drew. As an example, Tupper's formula is an insanely loopy mathematical function that draws a graph of itself.
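Tupper's inequality really can be written down and evaluated in a few lines. A sketch, with one caveat: K below is a stand-in, since the published constant (a very large integer) is omitted here; substituting it reproduces the formula's own graph, while any other value of K simply draws some other 106×17 bitmap.

```python
# Tupper's self-referential inequality:
#   1/2 < floor( mod( floor(y/17) * 2^(-17*floor(x) - mod(floor(y),17)), 2 ) )
# For integer x and y this reduces to reading bit (17*x + y % 17) of (y // 17).
def inked(x, y):
    return ((y // 17) >> (17 * x + y % 17)) & 1 == 1

K = 0  # stand-in; Tupper's published constant reproduces the formula's own graph

for dy in range(16, -1, -1):              # the plot occupies k <= y < k + 17
    row = ""
    for x in range(105, -1, -1):          # 106 columns, conventionally rendered mirrored
        row += "#" if inked(x, K + dy) else " "
    print(row)
```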


Wolfram brings us a more modern slant in the Principle of Computational Irreducibility. Basically it's the idea that any system with more than very simple behavior is not reducible to some theory, formula or program that can predict it faster than simply running it. The best we could do in trying to fully know a complex system is to watch it evolve in all its aspects. This is sort of a reformulation of the halting problem in such a way that we might more easily imagine other systems beholden to this reality. The odd facet of such a principle is that one cannot really prove with any reliability which systems are computationally irreducible. (P vs. NP and related problems in computer science are akin to this.)
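Wolfram's standard illustration of this is the Rule 30 cellular automaton: a rule you can state in one line whose center column looks random enough that, as far as anyone knows, there is no shortcut to simply running it. A minimal sketch (the window size and step count are arbitrary finite choices):

```python
def rule30_step(cells):
    """Rule 30: each new cell is left XOR (center OR right), on a wrap-around row."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 79, 40          # an arbitrary finite window onto the process
row = [0] * width
row[width // 2] = 1            # a single black cell as the initial condition

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = rule30_step(row)     # no known formula predicts row t without computing rows 1..t-1
```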


Chaitin, C. Shannon, Aaronson, Philip Glass, Max Richter, Brian Eno and many others also link into this train of thought….


Why do I think these threads of thought above (and many others I omit right now) matter at all?


Nothing less than everything. The incompleteness or irreducibility or undecidability of complex systems (and even seemingly very simple things are often far more complex than we imagine!) is the fundamental feature of existence that suggests why there is something rather than nothing. For there to be ANYTHING, there must be something outside of full description. This is the struggle. If existence were reducible to a full description, there would be no end to that reduction until there literally was nothing.


Weirder still, perhaps, is the Principle of Computational Equivalence and Computational Universality. Basically, any system that can compute universally can emulate any other universal computer. There are metaphysical implications here that, if I'm being incredibly brash, suggest that anything complex enough can be, and effectively is, anything else that is complex. Again, tied to the previous paragraph of thought, I suggest that if there's anything at all, everything is everything else. This is NOT an original thought, nor is it as easily dismissed as wacky weirdo thinking. (Here's a biological account of this thinking from someone who isn't an old dead philosopher…)


On a more pragmatic level, I believe the consequences of irreducibility suggest why computers and animals (any complex systems) learn the way they learn. Because there is no possible way to have perfect knowledge, complex systems can only learn via versions of Probably Approximately Correct learning (operant conditioning, neural networks, supervised learning, etc. are all analytic and/or empirical models suggesting that complex systems learn through associations rather than by executing systematic, formalized, complete knowledge). Our use of symbolics to think is a result of irreducibility. Lacking infinite energy to chase the irreducible, symbolics (probably approximately correct representations) must be used by complex systems to learn anything at all. (This essay is NOT a proof of this; these are just some thoughts, unoriginal ones, that I'm putting out to prime myself to actually draw out empirical or theoretical evidence that this is right…)
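A minimal sketch of that idea of learning by association rather than by proof: a perceptron that never derives the rule it is learning, it just nudges weights toward examples until its guesses are, probably and approximately, correct. The data (boolean OR) and learning rate are invented for illustration.

```python
# Learn the boolean OR function purely from examples, with no formal description of OR anywhere.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

for _ in range(20):                       # a few passes over the data, not a proof of correctness
    for x, target in examples:
        error = target - predict(x)       # association-driven correction, nothing more
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1] -- approximately correct behavior, learned
```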


A final implication to draw out is that of languages, and specifically of computer languages. To solve ever more interesting and useful problems and acquire more knowledge (of an endlessly growing reservoir of knowledge), our computer languages (languages of thought) must become more and more rich symbolically. Our computers, while we already make them emulate our richer symbolic thinking, need to have symbolics more deeply embedded in their basic operations. This is already the trend in the large clusters powering the internet and the most popular software.


As a delightful concluding, yet open and unoriginal, thought, this book by Flusser comes to mind… Does Writing Have a Future? suggests that ever richer symbolics than the centuries-old mode of writing and reading will not only be desired but inevitable as we attempt to communicate across ever vaster networks (which, not surprisingly, is very self-referential if you extend the thought to the idea of “computing with pictures,” which really isn't different from computing with words or other representations of bits that represent other representations of bits…). I suppose all of this comes down to seeing which symbolics prove to be more efficient in the total scope of computation. And whatever interpretation we assign to “efficient” is, by the very theme of this essay, at best an approximation.
