Posts Tagged ‘epistemology’

There’s a great deal of confusion about what is meant by the concept “computational knowledge.”

Stephen Wolfram put out a nice blog post on the question of computable knowledge.  Early in the post he loosely defines the concept:

So what do I mean by “computable knowledge”? There’s pure knowledge—in a sense just facts we know. And then there’s computable knowledge. Things we can work out—compute—somehow. Somehow we have to organize—systematize—knowledge to the point that we can build on it—compute from it. And we have to know methods and models for the world that let us do that computation.

Knowledge

Trying to define it any more rigorously than that is somewhat dubious.  Let’s dissect the concept a bit to see why.  Here we’ll discuss knowledge without getting too philosophical.  Knowledge consists of concepts we have found to be true and whose context, use, and function we at least somewhat understand – facts, “laws” of nature, physical constants.  Just recording those facts without understanding their context, use, and function would be pretty worthless – a bit like listening to a language you’ve never heard before.  It’s essentially just data.

In that frame of reference, not everything is “knowledge,” much less computational knowledge.  How to define what is and isn’t knowledge… well, it’s contextual in many cases and gets into a far bigger discussion of epistemology and all that jive.  A good discussion to have, for sure, but it would muddy this one.

Computation

What I suspect is more challenging for folks is the idea of “computational” knowledge.  That’s knowledge we can work out – generate, in a sense – from other things we already know or assume (pure knowledge – axioms, physical constants…).  Computation is a very broad concept that refers to far more than “computer” programs.  Plants, people, planets, the universe – all of these compute: they take information in one form (energy, matter) as input and convert it to other forms as output.  And yes, calculators and computers compute… and those objects are made from things (silicon, copper, plastic…) that you don’t normally think of as “computational”… but when configured appropriately they make a “computer.”

Now, to get things to compute particular things, they need instructions – we need to systematize, or program, them.  Sometimes these programs are open ended (or appear to be!).  Sometimes they are very specific and closed.  Again, don’t think of a program here as something written in Java.  DNA is an instruction set; so are various other chemical structures, and arithmetic, and employee handbooks… basically anything that can tell something else how to use or do something with input.  Some programs, like DNA, can generate themselves.  These are very useful programs.

The point is: you transform input into some output.  That’s computation, put in a very basic, non-technical way.  It becomes knowledge when the output has an understandable context, use, and function.
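To make the distinction a bit more concrete, here’s a minimal sketch in Python (the stored facts and the helper function are made up for illustration, not anything from Wolfram’s post) of the difference between “pure” knowledge we simply record and knowledge we compute from it:

```python
# Pure knowledge: facts we simply record (illustrative values).
FACTS = {
    "earth_radius_km": 6371.0,       # a recorded physical constant
    "pi": 3.141592653589793,
}

def earth_circumference_km() -> float:
    """Computable knowledge: worked out from stored facts via a model (C = 2 * pi * r)."""
    return 2 * FACTS["pi"] * FACTS["earth_radius_km"]

print(earth_circumference_km())  # ~40,030 km -- never stored, always computable
```

The circumference was never written down anywhere; it becomes knowledge the moment we have a method (the formula) to compute it from what we already know.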

Categorizing what is computational knowledge and what is not can be a tricky task.  Yet for a big chunk of knowledge it’s very clear.

Implications and Uses

The follow-on question, once this is grokked: what’s computational knowledge good for?

The value of the end result, the computed knowledge, is determined by its use.  However, the method of computing knowledge is valuable because in many cases it is much more efficient (faster and cheaper) than waiting around for the “discovery” of the knowledge by other methods.  For example, you can run through millions of structural designs using formal computational methods very quickly, versus trying to architect / design / test those structures by more traditional means.  The same could be said for computing rewarding financial portfolios, AdWords campaigns, optimal restaurant locations, logo designs, and so on.  Also, computational generation of knowledge sometimes surfaces knowledge that might otherwise never have been found by other methods (many drugs are now designed computationally, for example).
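As a toy illustration of that kind of exhaustive exploration, here’s a hedged sketch – the design parameters and the scoring function are entirely made up, not a real engineering model – of scanning a million candidate designs and keeping the best one:

```python
from itertools import product

def score(width: int, depth: int, thickness: int) -> float:
    """Hypothetical figure of merit: a stiffness proxy minus a material-cost proxy."""
    stiffness = width * depth ** 3 * thickness
    cost = width * depth * thickness * 10.0
    return stiffness - cost

# 100 x 100 x 100 = 1,000,000 candidate designs, scanned in seconds.
candidates = product(range(1, 101), range(1, 101), range(1, 101))
best = max(candidates, key=lambda design: score(*design))
print("best (width, depth, thickness):", best)
```

Building and testing even a tiny fraction of those candidates physically would take years; the computational pass is trivial by comparison.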

Web Search

These concepts and methods have implications in a variety of disciplines.   The first major one is the idea of “web search.”  The continuing challenge of web search is making sense of the corpus of web pages, data snippets, and streams of info put out every day.  A typical search engine must hunt through this VERY BIG corpus to answer a query.  That is an extremely efficient method for many search tasks – especially when the fidelity of the answer is not such a big deal.  It’s a less efficient method when the search is really a very small needle in a big haystack and/or when precision and accuracy are imperative to the overall task.  (Side note: web search may not have been designed with that in mind; however, users increasingly expect a web search to really answer a query, and they often miss the fact that it is the landing page – the page that was indexed – that is doing the answering.)  Computational knowledge can very quickly compute answers to very detailed queries.  A web search completely breaks down when the query is about something never before published to the web.  There are more of these queries than you might think – in fact, an infinite number of them!
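A rough sketch of the contrast – the tiny “index” and the date query are invented for illustration – between retrieving a published answer and computing one that has almost certainly never been published anywhere:

```python
from datetime import date

index = {  # a tiny stand-in for a web index: query -> a published answer
    "capital of france": "Paris",
}

def retrieve(query: str):
    """Web-search style: can only return answers somebody already published."""
    return index.get(query.lower())

def days_between(a: date, b: date) -> int:
    """Computed on demand from a method, not looked up."""
    return abs((b - a).days)

print(retrieve("capital of france"))                         # found in the corpus
print(retrieve("days between 1823-03-07 and 2041-11-19"))    # None -- never indexed
print(days_between(date(1823, 3, 7), date(2041, 11, 19)))    # computed instantly
```

The retrieval approach fails on the second query not because the answer is hard, but because nobody ever wrote it down; the computational approach doesn’t care.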

Experimentation

Another important implication is that computational knowledge is a method for experimentation and research.  Because it is a generative activity, one can unearth new patterns, new laws, new relationships, new questions, new views….  This is a very big deal.  (Not that this hasn’t been possible before now… of course, computation and knowledge are not new!  The universe has been doing it for ~14 billion years.  Now we have coherent and tangible systems that make it easier and more useful to apply formal computation to more and more tasks.)

P.S.

There are a great many challenges, unsolved issues, and potentially negative aspects of computational knowledge.  Formal computational systems are by no means the most efficient, most elegant, or most fun ways to do some things.  My FAVORITE example – and what I want to propose one day as the evolution of the Turing Test – is HUMOR.  Computers and formal computation suck at humor.  And I do believe that humor can be generated formally.  It’s just really, really, really hard to figure out.  So for now, it’s still just easier and more efficient to get a laugh by hitting a wiffle ball at your dad and putting it on YouTube.


This post is my interpretation.  Other thinkers, philosophers and researchers have other (more technical) approaches regarding this subject.

Statement: There are no models that completely explain the “how and why” of sufficiently complex phenomena.

Clarifications:

Explain – Accurately represents the causes, context, behavior, and consequences of a phenomenon and presents that representation in a usable form (we can apply this knowledge beyond just explaining).

Completely – Represents 100% (or very nearly 100%) of the cases of the phenomenon.  In particular, there are no “exceptions,” and it is not simply a “rule of thumb.”

“How and why” – The actual behavior, makeup, and structure of the phenomenon.

In other words:

All our scientific efforts produce models, not explanations.  Models help us improve our methods and provide insight into phenomena, but they are not the “thing” and they do not explain the “thing.”  Our explanations, built on models and on the incomplete information those models always rest on (computational irreducibility, uncertainty), are forever incomplete and always open to revision (i.e., inaccurate).

Math is the ultimate model language.  It is a way to describe relationships when you strip away the gnarly details of the real world.  It sometimes has beautiful results but never produces an explanation of the real world.

Computer science sits in between the real world and math.  It’s a great way to simulate things and build new computational models, but because a simulation is not made of the stuff it simulates, it can’t possibly be completely accurate.

Biology and other specialized disciplines tend to rely more on observations than on abstract models.  The result is a nearly infinite record of exception cases, making conceptual models that span multiple phenomena very difficult (well, that’s because you mostly can’t do it).

Though I’m giving a very truncated account of everything, hopefully the point is clear.  Explanations are always our judgment, our subjective synthesis of the inaccurate data we have.  This does not imply we don’t know anything.  Nor does it imply we don’t have explanations.  For simple, or relatively simple, phenomena we have accurate explanations and good working knowledge.

Specifically, as related to this blog: economics, behaviorism, and social models are all useful models.  None of the “laws” presented in these disciplines are foolproof.  Rational choice theory, supply and demand, matching laws… these are good tools, but not full explanations of the how and why of behavior, media, and social activity.

Proof: Is left as an exercise to the reader.

Proof Part Deux:  This is an intractable problem.  There’s no way to formally prove these statements.  They are hunches.  I do believe the proof is somehow along these lines: to determine whether an explanation is complete and accurate, I’d have to be able to reduce the phenomenon down somehow, which is impossible for sufficiently complex phenomena.  (Think along the lines of the halting problem: I can’t determine whether a program is going to halt any more quickly than by running the program and seeing if it halts….)
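A small sketch of that intuition, using the Collatz iteration as a stand-in (my example, not anything from the post above): as far as anyone knows, the only way to find out how long the process runs is to actually run it.

```python
from typing import Optional

def collatz_steps(n: int, limit: int = 10_000) -> Optional[int]:
    """Count the steps needed to reach 1, or give up after `limit` steps."""
    steps = 0
    while n != 1:
        if steps >= limit:
            return None  # we couldn't tell in advance; we simply ran out of patience
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- discovered only by running the process itself
```

No shortcut is known that predicts the answer without doing roughly the work of running it, which is the flavor of irreducibility the halting-problem analogy is pointing at.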

For a more formal treatment of scientific explanation head here.

There are many more resources and I’ll post them as I surface them.
