Posts Tagged ‘computational intelligence’

There’s a great deal of confusion about what is meant by the concept “computational knowledge.”

Stephen Wolfram put out a nice blog post on the question of computable knowledge.  Early in the post he loosely defines the concept:

So what do I mean by “computable knowledge”? There’s pure knowledge—in a sense just facts we know. And then there’s computable knowledge. Things we can work out—compute—somehow. Somehow we have to organize—systematize—knowledge to the point that we can build on it—compute from it. And we have to know methods and models for the world that let us do that computation.

Knowledge

Trying to define it any more rigorously than that is somewhat dubious.  Let’s dissect the concept a bit to see why.  Here we’ll discuss knowledge without getting too philosophical.  Knowledge is the set of concepts we have found to be true and whose context, use, and function we at least somewhat understand – facts, “laws” of nature, physical constants.  Just recording those facts without understanding their context, use, and function would be pretty worthless – a bit like listening to a language you’ve never heard before.  It’s essentially just data.

In that frame of reference, not everything is “knowledge,” much less computational knowledge.  How to define what is and isn’t knowledge… well, it’s contextual in many cases and gets into a far bigger discussion of epistemology and all that jive.  A good discussion to have, for sure, but it would muddy this one.

Computation

What I suspect is more challenging for folks is the idea of “computational” knowledge.  That’s knowledge we can work out – generate, in a sense – from other things we already know or assume (pure knowledge – axioms, physical constants…).  Computation is a very broad concept that refers to far more than “computer” programs.  Plants, people, planets, the universe – all of these compute: they take information in one form (energy, matter) as input and convert it to other forms as output.  And yes, calculators and computers compute… and those objects are made from things (silicon, copper, plastic…) that you don’t normally think of as “computational”… but when configured appropriately they make a “computer.”

Now, to get things to compute particular things, they need instructions – we need to systematize, or program, them.  Sometimes these programs are open ended (or appear to be!).  Sometimes they are very specific and closed.  Again, don’t think of a program only as something written in Java.  DNA is an instruction set; so are various other chemical structures, and arithmetic, and employee handbooks… basically anything that can tell something else how to use or do something with input.  Some programs, like DNA, can generate themselves.  These are very useful programs.  The point is: you transform input into some output.  That’s computation, put in a very basic, non-technical way.  It becomes knowledge when the output has an understandable context, use, and function.
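To make the “instructions transform input into output” idea concrete, here’s a minimal sketch in Python.  The rewrite rules are invented for illustration (nothing here comes from Wolfram’s post); the point is just that a small instruction set plus an input yields a computed output.

```python
# A tiny "program": an instruction set that rewrites its input into an output.
# The rules are arbitrary, chosen only to illustrate input -> program -> output.
RULES = {"AB": "BA", "BA": "AAB"}  # hypothetical rewrite rules

def compute(state: str, steps: int = 5) -> str:
    """Apply the first matching rule at each step, transforming input to output."""
    for _ in range(steps):
        for pattern, replacement in RULES.items():
            if pattern in state:
                state = state.replace(pattern, replacement, 1)
                break
    return state

print(compute("AAB"))  # the output "computed" from the input and the rules
```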

Categorizing what is computational knowledge and what is not can be a tricky task.  Yet for a big chunk of knowledge it’s very clear.

Implications and Uses

The follow-on question, once this is grokked: what’s computational knowledge good for?

The value of the end result, the computed knowledge, is determined by its use.  However, the method of computing knowledge is valuable because in many cases it is much more efficient (faster and cheaper) than waiting around for the “discovery” of the knowledge by other methods.  For example, you can run through millions of structure designs using formal computational methods very quickly, versus trying to architect / design / test those structures by more traditional means.  The same could be said for computing rewarding financial portfolios, AdWords campaigns, optimal restaurant locations, logo designs, and so on.  Also, computational generation of knowledge sometimes surfaces knowledge that may otherwise never have been found with other methods (many drugs are now designed computationally, for example).
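As a toy illustration of that kind of design-by-search, here’s a sketch that sweeps through thousands of candidate beam dimensions and keeps the cheapest one that meets a requirement.  Everything in it – the stiffness proxy, the parameter ranges, the threshold – is a made-up assumption, not real engineering, but it shows why a machine can explore a design space far faster than hand iteration.

```python
import itertools

# Hypothetical "design" problem: pick beam dimensions that minimize material
# used while meeting a made-up stiffness requirement. The formulas are
# placeholders for whatever real model you would actually compute with.
def stiffness(width_cm: float, depth_cm: float) -> float:
    return width_cm * depth_cm ** 3 / 12.0   # toy proxy, not real engineering

def material(width_cm: float, depth_cm: float) -> float:
    return width_cm * depth_cm               # cross-sectional area as cost proxy

best = None
for w, d in itertools.product(range(5, 100), range(5, 100)):  # ~9,000 candidates
    if stiffness(w, d) >= 50_000:            # arbitrary requirement
        if best is None or material(w, d) < material(*best):
            best = (w, d)

print("best width/depth (cm):", best)
```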

Web Search

These concepts and methods have implications in a variety of disciplines.   The first major one is web search.  The continuing challenge of web search is making sense of the corpus of web pages, data snippets, and streams of info put out every day.  A typical search engine must hunt through this VERY BIG corpus to answer a query.  That’s an extremely efficient method for many search tasks – especially when the fidelity of the answer is not such a big deal.  It’s a less efficient method when the search is really a very small needle in a big haystack and/or when precision and accuracy are imperative to the overall task.  (Side note: web search may not have been designed with that in mind… however, users more and more expect a web search to really answer a query – and often overlook that it is the landing page, the page that was indexed, that is doing the answering.)  Computational knowledge can very quickly compute answers to very detailed queries.  A web search completely breaks down when the user query is about something never before published to the web.  There are more of these queries than you might think!  In fact, an infinite number of them!
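A toy way to see the difference (the tiny “index” and the queries are invented for this example): a lookup engine can only return what someone already published, while a computational engine works an answer out on demand from facts plus a method.

```python
from datetime import date

# Retrieval: can only return documents that already exist in the index.
INDEX = {  # hypothetical, tiny "web"
    "population of france": "About 68 million people live in France.",
}

def search(query: str) -> str:
    """Look the query up in the indexed corpus; fail if no page exists."""
    return INDEX.get(query.lower(), "No page found for this query.")

# Computation: work the answer out from known facts plus a method.
def days_between(date_a: str, date_b: str) -> int:
    return abs((date.fromisoformat(date_b) - date.fromisoformat(date_a)).days)

print(search("population of france"))                    # found: the page exists
print(search("days between 1776-07-04 and 1969-07-20"))  # fails: never published
print(days_between("1776-07-04", "1969-07-20"))          # computed on demand
```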

Experimentation

Another important implication is that computational knowledge is a method for experimentation and research.  Because it is a generative activity, one can unearth new patterns, new laws, new relationships, new questions, new views….  This is a very big deal.  (Not that this wasn’t possible before now… of course, computation and knowledge are not new!  The universe has been doing it for ~14 billion years.  Now we have coherent and tangible systems that make it easier and more useful to apply formal computation to more and more tasks.)
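A small, made-up illustration of that generative style of experimentation: enumerate candidate “laws” and check which ones reproduce some observed data.  The data points and candidate formulas below are invented for the example; real systems just do this at enormously larger scale.

```python
# Invented "observations": pairs of (input, measured output).
observations = [(1, 1), (2, 4), (3, 9), (4, 16)]

# A space of candidate "laws" to search through.
candidates = {
    "y = 2x":    lambda x: 2 * x,
    "y = x^2":   lambda x: x ** 2,
    "y = x + 3": lambda x: x + 3,
}

# Generative experiment: test every candidate against every observation.
for name, law in candidates.items():
    if all(law(x) == y for x, y in observations):
        print("Candidate law fits the data:", name)
```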

P.S.

There are a great many challenges, unsolved issues, and potentially negative aspects of computational knowledge.  Formal computation systems are by no means the most efficient, most elegant, or most fun ways to do some things.  My FAVORITE example, and what I want to propose one day as the evolution of the Turing Test, is HUMOR.  Computers and formal computation suck at humor.  And I do believe that humor can be generated formally.  It’s just really, really, really hard to figure this out.  So for now, it’s still just easier and more efficient to get a laugh by hitting a wiffle ball at your dad and putting it on YouTube.

Read Full Post »

In the area of algorithmic trading alone, industry estimates by the Aite Group predict that by 2010, 50% of U.S., 28% of European and 16% of Asian order flow will be executed automatically via trading algorithms [1]. With about 8.5 billion shares currently being traded daily in the US, this would equate to the automatic trading of $120 billion of stock in current money terms.

* [1] “Algos 3.0: Developments in Algorithmic Trading,” Traders Magazine Special Report, SourceMedia’s Custom Publishing Group, 2007.
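A quick back-of-the-envelope check of that dollar figure.  The average share price below is my own assumption, chosen only to show how a number of that size can be reached; the report doesn’t say what price it used.

```python
# Back-of-the-envelope check of the dollar figure above.
# The average share price is an assumption for illustration only.
daily_shares = 8.5e9   # shares traded daily in the US (from the text)
algo_fraction = 0.50   # projected US algorithmic share of order flow
avg_price = 28.0       # assumed average price per share (hypothetical)

algo_dollar_volume = daily_shares * algo_fraction * avg_price
print(f"~${algo_dollar_volume / 1e9:.0f} billion traded algorithmically per day")
```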

Computational Intelligence Magazine, from IEEE

For a very general overview of Algorithmic Trading, hit this Wikipedia entry.

If you want sort of an “insider’s look” at all this, head over to Advanced Trader magazine.

To see the very limited amount of research on the impacts, check out these various book resources:

Google Books on Algo Trading

Amazon

I’ve seen almost no analysis of how algo trading impacts economies.  With 30-50% of the money changing hands in the stock market every day based on these algos, it would seem highly improbable that algo trading isn’t a significant departure from traditional/human-based trading behavior.

How many of these wild stock swings are the result of velocity algos going nutso?

How do we verify the algos do what we want?

Also note that the average trade size has dropped from about 1,200 shares to about 300 – this matters a lot.  Large-volume trades typically are indicators to human traders.  With large trades vanishing, the data stream is different than it used to be.
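A toy sketch of why that shift matters to anyone reading the tape.  The 1,000-share “block” threshold and the exponential size distribution are invented assumptions, but they show how a size-based signal that used to fire constantly nearly disappears when the average trade shrinks.

```python
import random

random.seed(0)

# Hypothetical trade-size streams: a "block trade" signal fires above a threshold.
BLOCK_THRESHOLD = 1_000  # shares; arbitrary cutoff for a "large" trade

def signal_rate(avg_size: int, n_trades: int = 100_000) -> float:
    """Fraction of trades a size-based signal would flag, assuming an
    exponential distribution of trade sizes around the given average."""
    trades = (random.expovariate(1 / avg_size) for _ in range(n_trades))
    return sum(size >= BLOCK_THRESHOLD for size in trades) / n_trades

print("signal rate at ~1,200-share average:", signal_rate(1_200))
print("signal rate at ~300-share average:  ", signal_rate(300))
```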

If someone has some clear, understandable research (not HowTos) on algo trading theory, send it along.

Read Full Post »

I propose one hard test for the progress of comp sci.  I’ve laid the groundwork for a computational engine that can write late-night talk show monologues as well as the human writers do.

Do you think it’s possible?

Here’s my basic idea… code forthcoming.

—GENERATIVE JOKE ENGINE——

Some Basic Info
http://en.wikipedia.org/wiki/Computational_humor

Mathematics and Humor, a book by John Allen Paulos

Philosophy of Humor/Theories of Humor
http://en.wikipedia.org/wiki/Philosophy_of_humor
http://www.iep.utm.edu/h/humor.htm

Some useful mathematical theory
http://en.wikipedia.org/wiki/Catastrophe_theory

Linguistics
http://www.tomveatch.com/else/humor/paper/humor.html

Joke Generator
http://grok-code.com/12/how-to-write-original-jokes-or-have-a-computer-do-it-for-you/

Potential Ideas
Simple Program based on Replacement rules of Subjects, Relationships, Events

Simple Program of puns, word combinations, definition crossing

Simple programs and then a rich interface that uses an avatar or on-screen talent to “tell” the selected jokes.  I’d prefer it all be computer-based, as we want to find out whether the “telling” of a joke contains a lot (most?) of the humor.
How to do this:

Prep: Create a database of common objects, slang terms, relationship descriptions

a) parse the news each night for subjects, relationships, objects, events

b) enumerate all candidate jokes (basically sentence combinations) by replacing subjects, objects, and relationships with entries from the prep database (see the sketch after this list)

c) run training algo against real monologues (what jokes are likely to be used based on past jokes)

d) tune it

e) create inflection and pausing algorithm that “tells the joke better”
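Here’s a minimal sketch of step (b), the replacement-rule generator.  The word lists stand in for the prep database and the nightly news parse described above; the templates and entries are invented for illustration.

```python
import itertools

# Stand-ins for the prep database and the nightly news parse described above.
SUBJECTS = ["the mayor", "my cat", "a tech billionaire"]          # hypothetical
EVENTS = ["got stuck in an elevator", "launched a podcast"]       # hypothetical
TWISTS = ["and blamed the intern", "which explains the glitter"]  # hypothetical

# Joke templates with slots filled by replacement rules.
TEMPLATES = [
    "Did you hear? {subject} {event} {twist}.",
    "{subject} {event}... {twist}.",
]

def generate_jokes():
    """Enumerate every subject/event/twist combination for every template."""
    for template in TEMPLATES:
        for subject, event, twist in itertools.product(SUBJECTS, EVENTS, TWISTS):
            yield template.format(subject=subject, event=event, twist=twist)

for joke in list(generate_jokes())[:5]:
    print(joke)
```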

We can exclude the use of existing monologues to train the algorithms and instead use an audience (internet visitors) to rate the jokes and monologues.  The algo can then learn what replacements, what structures, and what styles work best.  Though I think using existing monologues is realistic, as most writers and comedians borrow from successful previous work, which saves a long, boring training period.

Exhaust all possibilities of jokes using replacement rules.  Then run this model against actual jokes used on late night television.

Analyze how many of the actual jokes we found.  Feed this analysis back in to weight the generated jokes and predict late-night monologues.
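A hedged sketch of that feedback step.  The “actual jokes” set and the exact-match rule are placeholders; a real version would need fuzzy matching against transcribed monologues rather than exact string comparison.

```python
from collections import Counter

# Placeholder corpus of jokes that actually aired; in practice this would be
# transcribed monologues, and matching would be fuzzy rather than exact.
ACTUAL_JOKES = {
    "Did you hear? the mayor got stuck in an elevator and blamed the intern.",
}

def score_templates(generated_by_template):
    """Count hits per template against the real monologues, then normalize
    the counts into weights for future joke generation."""
    hits = Counter()
    for template, jokes in generated_by_template.items():
        hits[template] = sum(joke in ACTUAL_JOKES for joke in jokes)
    total = sum(hits.values()) or 1
    return {template: count / total for template, count in hits.items()}

# Example: one template produced a joke that matched a real monologue.
generated = {
    "Did you hear? {subject} {event} {twist}.": [
        "Did you hear? the mayor got stuck in an elevator and blamed the intern.",
        "Did you hear? my cat launched a podcast which explains the glitter.",
    ],
}
print(score_templates(generated))
```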

Can we ever replace monologue writers?

Read Full Post »