
Archive for the ‘cellular automata’ Category

In 1991, a list was drawn up by an assortment of heavyweight problem solvers to capture the important social and scientific topics receiving prominent play in the media over the prior years.  Neither behavior, psychology, nor any of their related subfields was mentioned.

Other areas were listed… molecular biology, artificial intelligence, chaos theory, massive parallelism, neural nets, fractals, complex adaptive systems, superstrings, biodiversity, nanotechnology, the human genome, expert systems, punctuated equilibrium, cellular automata, fuzzy logic, space biospheres, the Gaia hypothesis, virtual reality, and cyberspace, to mention a significant few, but no psychology…  Other important disciplines besides psychology were also absent: 3D printed body parts, immunology, pluripotent stem cells, chemistry, epigenetics, climate change, internet of everything, etc.

Things have changed since 1991…

The world is rocking in a way not envisioned by Led Zeppelin or Van Halen.  The “rocking” I am referring to involves core changes that touch every aspect of our existence.  Over the last twenty-five years or so, all the rules, ideals, principles, codes, and the like have been changing faster and faster, and we are now experiencing the collective impact of those changes.

For many, that is a very good thing.

For the world, because all those rules, mores, traditions, ideals, and values are ALL changing, AND all at the same time, it is more than an unsettling variation.  No one has acceptable ways to understand, predict, or control the changes, their paths, their consequences, or their implications.

More than metaphorically, we have a world out of balance that is worse off than it might otherwise be if we collectively understood it was, indeed, out of whack.  Most of the world doesn’t understand, or they double down so they don’t have to deal with it.  Of course, they are clueless about how to deal with it.  Thus, entities keep digging in deeper to keep the old rules, ’cause that is mostly how it worked in the past in times of uncertainty.  It is hard to give up on making buggy whips when the horse carriages have gone away.

You can observe it everywhere.  People, groups, and agencies hanging on to the last vestiges of the past by their mental fingernails in efforts to hold on to what was once comforting.  The carcasses of ideals, dichotomies, castes, and simplistic explanations are hard to ignore.  But many keep trying to do just that.  No one wants to say out loud in front of the lords of celebrity and the kings of political unions that the jobs of 1990 aren’t coming back (different ones are emerging but…), that equality is available if one values it, that aristocracy over citizens is weak, and that “we’ve always done it that way” pronouncements straight out of Shirley Jackson’s “The Lottery” are more impotent than ever.

Today, in 2016, we want to understand ISIS, the rulers of Iran, North Korea, and Washington, DC, as well as teachers, parents, babies, and ourselves.

A more objective objective is needed. An objective that is liberated enough to abandon the almost endless marginal disputes of quarrelsome mundane dogmas in order to affect the survival of everybody on the planet, all on the way to figuring out what the heck is going on. We might want to study behavior. We might be ready.

Unlike some smokestack disciplines still protecting ancient edifices or intellectual self-indulgence, the empirical study of behavior, viewed as a horizontal set of endeavors, has solutions rather than the regurgitations of irrelevant quackery.  This proposal is based on a very pragmatic understanding that there is no time left to dally, and that psychology’s past has run out of runway to contribute to even the simplest solutions necessary to be of value to Earth.

Some think another and perhaps bigger gun, a lightsaber, a deity with new superpowers, yoga schools, another pill, repression of the weird ones, stricter laws, the election of a benevolent bully, or a return to the fundamental values of another era would bring back order, old forms of rule, hierarchies, and such.

Haven’t we heard all that before?  Hello…!

Who knows how to change behavior?

JHBryant – Lone Star College – Conroe, Texas

Read Full Post »

 

It’s nearly, if not totally, impossible to remove our deeply held biases, values, and contextual history from our raw, sensory perceptions of the world. The difficulty of sensing more objectively is what perpetuates so many non-truths about the world. Nearly everything we sense and think is distorted by the biological patterns shaped within us by the world and by our interaction within the world. It is within this frame of reference that I seek to put down, in obviously flawed ways, what I think to be at least less non-truth than other theories and thoughts floating about out there.

Our most basic means of communication – the words, sounds, gestures, and pictures we use – are so filled with bias that it’s impossible to commit to their use in an objective way. The best hope I have is to present as many variations across mediums as I can, so that what emerges from these communications is, if not objective, at least representative of enough perspectives that the trap of obvious one-sided subjectivity is avoided.

And with that warning, let us proceed.

A first exercise is one of definitions and the clarifying of terms.

Everything is information. From the most basic particles of existence to governments to rocket ships to the abstractions of mathematics – everything is information. Information looped and entangled within other information. Information trapped within patterns of information by other patterns of information. Particles trapped into behaviors dictated by the laws of physics. Proteins and chemicals replicating into biological entities by the encodings of genetic instruction. Objects of pure quantity expressed in combinations dictated by rules of provable logical inference. Symbols imbued with meaning combined to form words and sentences and stories that stick in the brains of people and come out of their mouths to be reinterpreted over the eons. Faintly remembered events strung together by stories to form history, and imagined events of some time that has not come to pass forming a future hope.

More fundamentally… space and time and causality and logic and being itself. All are matters of information. The causal ordering of events, in relation to what is different based on the difference of another entity, forms the conception of time and space. The coherence, within a frame of reference, of words strung together with symbols for equal, not equal, and for all coalesces into logic. What is and what isn’t, in reference to what’s logically or causally sensible to us, becomes the notion of being.

But this is not quite enough.

Recently, various categories of research, science, and/or philosophic discussion have added ‘emergence’ and ‘complexity’ to the pantheon of fundamental concepts from which we can chart our maps of existence and meaning. The unseen in the parts that only shows itself in the collective – the multitude, the interactive – this is the notion of emergence.

All in – meaning. Meaning is a vague notion of symbolics and representation within the ontological dimensions of space, time, cause, logic, emergence, and being. Meaning is a proximal, local phenomenon of pattern. In totality, all things considered – that is, across all of infinity – there is no meaning, because there is no pattern. That is, all patterns at play amount to pure entropy, and no meaning is possible on a universal, infinite scale (as if we could even imagine such a concept). Within a local, limited frame of reference, meaning emerges from patterns (people, computers, plants, etc.) pattern matching (sensing, perceiving, transforming, encoding, processing).

I propose a phrase: existential equivalence. Every investigative thought, every scientific gesture, every act of art, every attempt to send a message, every ritual, every interaction with the world at any level – all are the same kind of thing: the encoding and decoding of information within information. This is not a reduction or a reductionist exercise. Quite the contrary. The variety of symbolic expression in all of existence is REAL; it is a thing. That existence is expressible in an infinite variety is necessary, and it can only be known, even in a limited way, by an actual variety of expression. If anything is to exist, it must exist in infinite variety and multiplicity. Everything that exists has existential equivalence. The entirety of existence is relational.

For instance, if there is such a concept and sensation as color, it must have expression in physical and artistic and literary terms. It exists at all the levels implicated there. If a wavelength of light is able to generate a visual and neuronal concept we call red, then red isn’t just the wavelength, nor the wave of light, nor the eye, nor the brain, nor the word… it is all of those things, and all of the things we do not yet think or talk or gesture about.

Or consider a computer program. Its existence is a string of words and phrases transcoded into 1s and 0s and into physical logic gates transmitting electrons, and back around and onto itself into monitor LEDs, into human eyes and brains, into motor movements of mouse and keyboard, and so on. A computer program is the interaction of all of that information.

But surely there are such simple things that do not have a universal relationship – an existential equivalence? What is the simplest thing we can think or speak of? A boson? The number 1? A dot? Just an abstract 1? It is impossible to wipe the complexity of existence from even these pure abstractions. We only conceive of their simplicity in relation to other concepts we find complex. Their simplicity must be weighed against everything that isn’t simple.

And so here we have a colossal contradiction. Patterns are a local phenomenon. They aren’t the entirety. And yet I’ve suggested that patterns are existence – all that exists. Unraveling this, I am directly saying that patterns interpreting/transcoding/sensing patterns is what exists – what creates the world – at all levels. Pure relation, which is only possible at a local level, is existence. Particles only exist in relation to other particles – a gradient. Humans to other humans, to animals, to the planet, to particles. Planets to other massive bodies… and so on, and on, up and down, left to right, back and forward, in and out….

Herein lies a beautiful thing – mathematics and computation are wonderfully efficient symbolic translation methods. This is why computers and mathematics always creep their way into our efforts to make things and make sense of the world. It is why our brains are so damn useful: complex abstract patterns recognizing patterns – these networks of neurons. It is why DNA is so proficient at replication: a “simple”, resilient substrate carrying everything necessary to generate and regenerate those networks of neurons, which can then make synthetic networks of pure relation. Whether particles or quantum or digital or biological or chemical, there is pure relation, pure patterns among patterns – there is math. It matters not, and is completely the point, that math and computation can be done in any substrate – between proteins, with pen and paper, on a calculator, in a quantum computer.

AND

Why is that? WHY?

In a feat of complete and utter stupid philosophy and unlogic… because it cannot be any other way. Positing a god doesn’t escape this. Positing a multiverse doesn’t escape this. If any of those things are to exist, they must still exist in relation – they are relation! It’s borderline mystical. Of course it is!

And why does any of this matter? Is this just another sound of one hand clapping? If a tree falls in a forest, does it make a sound? Yes. Yes indeed. Those questions, while usually used to dismiss the inquiry from the outset, actually call attention to the entirety of the situation. What we conceive of as existence and existing is usually reductively done in by our discrete categorization and our failure to continuously review and revise our categories. The practical implication of this adherence to categories (zoology, isms, religion, gender, nations, science disciplines, etc.) is what stunts our path towards knowledge and keeps us in fear.

If we don’t lean into the idea that everything has an existential equivalence, we are simply deciding to be ignorant. And in that ignorance we trend towards non-existence. In everyday terms: if we see the human population only by the color of skin, we diminish human existence. If we say and take for truth all of the -isms, reductions, and arbitrary definitions, we snuff out relation. If we make any assumptions at all and refuse to question those assumptions, even the ones we think are so obvious and so simple, we move closer to entropy. If we want to exist at all, we must be mystical and fanatical about sensing relation, resensing it, re-interpreting it. This is not a moral argument. Existence is no more moral than non-existence – except as a local conception.

It really does come down to this (and this is very Camus-like):

If you care at all to exist as you, you must question/express/relate to everything as much as you can before your pattern is fully transcoded into something not you. (we are just food for worms…)

So yes, ask yourself and answer it in infinite variety over and over “if a tree falls in a forest does it make a sound?” This is life – it is your existential equivalence to everything else. You relate, therefore, you are. I relate, therefore I am. X is, therefore X relates.

Read Full Post »

Now that both the iPad and the Wolfram|Alpha iPad app are available, it’s time to really evaluate the capabilities of these platforms.

Wolfram|Alpha on the iPad

[disclaimer: last year I was part of the launch team for Wolfram|Alpha – on the business/outreach end.]

Obviously I know a great deal about the Wolfram|Alpha platform… what it does today and what it could do in the near future and in the hands of great developers all over the world.  I’m not shy in saying that computational knowledge available on mobile devices IS a very important development in computing.  Understanding computable knowledge is the key to understanding why I believe mobile computable knowledge matters.   Unfortunately it’s not the easiest of concepts to describe.

Consider what most mobile utilities do… they retrieve information and display it.  The information is mostly pre-computed (meaning it was transformed before your request), and it’s generally in a “static” form.  You cannot operate on the data in a meaningful way.  You can’t query most mobile utilities with questions that have never been asked before and expect a functional response.  Even the really cool augmented-reality apps are basically just static data.  You can’t do anything with the data being presented back to you… it’s simply an information overlay on a 3D view of the world.

The only popular applications that currently employ what I consider computable knowledge are navigation apps, which very much are computing in real time based on your requests (locations, directions, searches).  Before nav apps you had to learn routes by driving them, walking them, etc., really spending time associating a map, road signs, and your own sense of direction.  GPS navigation helps us all explore the world and get around much more efficiently.  However, navigation is only one of the thousands of tasks we perform that benefit from computable knowledge.

Wolfram|Alpha has a much larger scope!  It can compute so many things against your current real-world conditions and the objects in the world that you might be interacting with.  For instance, you might be a location scout for a movie: you want to know not only how far away the locations you’re considering are, you also want to compute ambient sunlight, typical weather patterns, wind conditions, the likelihood your equipment might be in danger, and so forth.  You even need to consider optics for your various shots.  You can get at all of that right now with Wolfram|Alpha.  This is just one tiny, very specific use case.  I can work through thousands of these.

The trouble people cite with Wolfram|Alpha (in its incarnations to date) is that it can be tough to wrangle the right query.  The challenge is that people still think about it as a search engine.  The plain and simple fact is that it isn’t a web search engine, and you should not use it as one.  Wolfram|Alpha is best used to get things done.  It isn’t the tool you use to get an overview of what’s out there – it’s the system you use to compute, to combine, to design.

The iPad is going to dramatically demonstrate the value of Wolfram|Alpha’s capabilities (and vice versa!).  The form factor has enough fidelity and mobility to show why having computable knowledge literally at your fingertips is so damn useful.  The iPhone is simply too small, and you don’t perform enough intensive computing tasks on it to take full advantage.  The other thing the iPad and similar platforms will demonstrate is that retrieving information isn’t going to be enough for people.  They want to operate on the world.  They want to manipulate.  The iPad’s major design feature is that you physically manipulate things with your hands.  The iPhone does that too, but again, it’s too small for many operations.  Touch-screen PCs aren’t new, but they are usually not mobile.  Thus, here we are on the cusp of direct manipulation of on-screen objects.  This UI will matter a great deal to users.  They won’t want to just sort, filter, and search again.  They will demand that things respond in meaningful ways to their touches and gestures.

So how will Wolfram|Alpha take advantage of this?  It’s already VISUAL!  And the visuals aren’t static images.  Damn near every visualization in Wolfram|Alpha is computed in real time, specifically for your query.  The visuals can respond to your manipulations.  In the web version of Wolfram|Alpha this didn’t make as much sense, because the keyboard and mouse aren’t at all the same as your own two hands on top of a map, graph, 3D protein, etc.

Early on there was a critical review of Wolfram|Alpha’s interface – how you actually interact with the system.  It was dead on in many respects.

WA is two things: a set of specialized, hand-built databases and data visualization apps, each of which would be cool, the set of which almost deserves the hype; and an intelligent UI, which translates an unstructured natural-language query into a call to one of these tools. The apps are useful and fine and good. The natural-language UI is a monstrous encumbrance…

In an iPad world, natural language will take a back seat to hands-on manipulation.  Wolfram|Alpha will really shine when people manipulate the visuals, the data display, and the various shortcuts.  People’s interaction with browsers is almost all link- or text-based, so the language issues with Wolfram|Alpha and other systems are always major challenges.  Now, what will be interesting is how many popular browser services will be able to successfully move over to a touch interface.  I don’t think that many will make it.  A new type of service will have to crop up, as iPad apps will not simply be add-ons to a web app, like they usually are for the iPhone.  These services will have to be great at handling direct manipulation, will need to get actual tasks accomplished, and will need to be highly visual.

My iPad arrives tomorrow.  Wolfram|Alpha is the first app getting loaded.  And yes, I’m biased.  You will be too.

Read Full Post »

There’s a great deal of confusion about what is meant by the concept “computational knowledge.”

Stephen Wolfram put out a nice blog post on the question of computable knowledge.  In the beginning he loosely defines the concept:

So what do I mean by “computable knowledge”? There’s pure knowledge—in a sense just facts we know. And then there’s computable knowledge. Things we can work out—compute—somehow. Somehow we have to organize—systematize—knowledge to the point that we can build on it—compute from it. And we have to know methods and models for the world that let us do that computation.

Knowledge

Trying to define it any more rigorously than above is somewhat dubious.  Let’s dissect the concept a bit to see why.  Here we’ll discuss knowledge without getting too philosophical.  Knowledge is concepts we have found to be true and whose context, use, and function we somewhat understand – facts, “laws” of nature, physical constants.  Just recording those facts without understanding context, use, and function would be pretty worthless – a bit like listening to a language you’ve never heard before.  It’s essentially just data.

In that frame of reference, not everything is “knowledge,” much less computational knowledge.  How to define what is and isn’t knowledge… well, it’s contextual in many cases and gets into a far bigger discussion of epistemology and all that jive.  A good discussion to have, for sure, but it would muddy this one.

Computation

What I suspect is more challenging for folks is the idea of “computational” knowledge.  That’s knowledge we can work out – generate, in a sense – from other things we already know or assume (pure knowledge – axioms, physical constants…).  Computation is a very broad concept that refers to far more than “computer” programs.  Plants, people, planets, the universe – they all compute; all these things take information in (input) in one form (energy, matter) and convert it to other forms (output).  And yes, calculators and computers compute… and those objects are made from things (silicon, copper, plastic…) that you don’t normally think of as “computational”… but when configured appropriately they make a “computer.”  Now, to get things to compute particular things, they need instructions (we need to systematize… to program them).  Sometimes these programs are open-ended (or appear to be!).  Sometimes they are very specific and closed.  Again, don’t think of a program as something written in Java.  DNA is an instruction set; so are various other chemical structures, and arithmetic, and employee handbooks… basically anything that can tell something else how to use or do something with input.  Some programs, like DNA, can generate themselves.  These are very useful programs.  The point is… you transform input to some output.  That’s computation, put in a very basic, non-technical way.  It becomes knowledge when the output has an understandable context, use, and function.  A minimal sketch of this input-instructions-output idea follows.
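To make that concrete, here is a tiny Mathematica sketch (the rewrite rules are arbitrary, chosen only for illustration): an instruction set repeatedly transforms an input string into new output, loosely the way genetic instructions generate structure from a simple seed.

    (* an instruction set: rewrite rules applied over and over *)
    rules = {"A" -> "AB", "B" -> "A"};
    (* NestList repeatedly applies the rules, keeping each intermediate state *)
    NestList[StringReplace[#, rules] &, "A", 6]
    (* -> {"A", "AB", "ABA", "ABAAB", "ABAABABA", ...} each output computed from the last *)

The substrate doesn’t matter – the same transformation could be carried out with pen and paper – which is exactly the point made above.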

Categorizing what is computational knowledge and what is not can be a tricky task.  Yet for a big chunk of knowledge it’s very clear.

Implications and Uses

The follow-on question, once this is grokked: what’s computational knowledge good for?

The value of the end result, the computed knowledge, is determined by its use.  However, the method of computing knowledge is valuable because in many cases it is much more efficient (faster and cheaper) than waiting around for the “discovery” of the knowledge by other methods.  For example, you can run through millions of structure designs using formal computational methods very quickly, versus trying to architect / design / test those structures by more traditional means.  The same could be said for computing rewarding financial portfolios, AdWords campaigns, optimal restaurant locations, logo designs, and so on.  Also, computational generation of knowledge sometimes surfaces knowledge that might otherwise never have been found with other methods (many drugs are now designed computationally, for example).
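As a toy illustration of that exhaustive-search style of discovery (a hedged sketch – the merit function here is invented purely for the example):

    (* a hypothetical merit score for a candidate design {w, h} *)
    score[{w_, h_}] := w h - 0.5 (w + h)^1.2;
    (* enumerate 10,000 candidate designs and compute the best ones directly *)
    candidates = Tuples[Range[100], 2];
    best = MaximalBy[candidates, score]

Testing ten thousand physical prototypes would take months; computing through them takes a fraction of a second.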

Web Search

These concepts and methods have implications in a variety of disciplines.  The first major one is the idea of “web search.”  The continuing challenge of web search is making sense of the corpus of web pages, data snippets, and streams of info put out every day.  A typical search engine must hunt through this VERY BIG corpus to answer a query.  This is an extremely efficient method for many search tasks – especially when the fidelity of the answer is not such a big deal.  It’s a less efficient method when the search is a very small needle in a big haystack, and/or when precision and accuracy are imperative to the overall task.  Side note: web search may not have been designed with that in mind… however, users more and more expect a web search to really answer a query – often users miss the fact that it is the landing page, the page that was indexed, that is doing the actual answering.  Computational knowledge can very quickly compute answers to very detailed queries.  A web search completely breaks down when the user’s query is about something never before published to the web.  There are more of these queries than you might think!  In fact, an infinite number of them!
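For instance, later versions of Mathematica can hand a free-form query to Wolfram|Alpha directly (the specific query below is just an example of a question unlikely to appear verbatim on any indexed page):

    (* a composed, never-before-published question answered by computation, not retrieval *)
    WolframAlpha["distance from Earth to Mars divided by the height of Mount Everest"]

No page ranking can help here; the answer has to be computed from known quantities at query time.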

Experimentation

Another important implication is that computational knowledge is a method for experimentation and research.  Because it is a generative activity, one can unearth new patterns, new laws, new relationships, new questions, new views….  This is a very big deal.  (Not that this wasn’t possible before now… of course, computation and knowledge are not new!  The universe has been doing it for ~14 billion years.  Now we have coherent and tangible systems that make it easier and more useful to apply formal computation to more and more tasks.)

P.S.

There are a great many challenges, unsolved issues, and potentially negative aspects of computational knowledge.  Formal computation systems are by no means the most efficient, most elegant, or most fun ways to do some things.  My FAVORITE example, and what I want to propose one day as the evolution of the Turing Test, is HUMOR.  Computers and formal computation suck at humor.  And I do believe that humor can be generated formally.  It’s just really, really, really hard to figure out.  So for now, it’s still easier and more efficient to get a laugh by hitting a wiffle ball at your dad and putting it on YouTube.

Read Full Post »

Wolfram Mathematica Home Edition is available.  It’s a $295 fully functional version of Mathematica 7.

Everyone should consider getting a copy.  No, really, everyone.  

What Mathematica can help you do is as useful as word processing.  I know, that sounds crazy.  How could scientific computing be for everyone?

Consider the amount of math, data mining, and research one already does just to get through the day.  Do you check the stock market?  Do you look up information on Wikipedia?  Do you use the tools on your online banking site?  Do you watch the weather report?

Much of this data is available in Mathematica, where it is immediately interactive.  A couple of examples are sketched below.
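Here is a hedged sketch using Mathematica’s built-in data functions (the functions are real; the particular ticker and city are arbitrary examples):

    FinancialData["AAPL"]                           (* latest stock quote *)
    WeatherData["Chicago", "Temperature"]           (* current conditions *)
    (* plot a price history computed on demand, no spreadsheet required *)
    DateListPlot[FinancialData["AAPL", "Jan. 1, 2008"]]

Each call fetches curated data and returns it as a live, computable object rather than a static page.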

OK, still not convinced?  Just do the math.  Mathematica can replace Visio, your calculator (graphing calculator included), Excel, your batch photo editor, and most common programming environments.

If you’re a developer, even just a dabbler, you must get Mathematica.  It’s easy to pick up, and the more you learn, the more amazing things you find.  Beyond that, though, Mathematica’s symbolic programming is a progressive approach.  In a world of multi-core, multi-threaded apps, OOP and procedural programming are becoming increasingly complicated and bug-prone.  Mathematica’s approach avoids the pitfalls of lost threads and memory leaks because the paradigm itself doesn’t let you make those mistakes (for the most part).

I’ll let you in on another secret that almost no literature covers: Mathematica has the best web parsers out there.  It is insanely easy to bring data in from some 200 different file formats, including HTML.  Anyone who has ever built a web service, a scraper, a spider, or a crawler knows how painful these are to build in most languages, not to mention maintain.  Why no one promotes this feature is beyond me, considering the mashup nature of the Web now.  It’s super fun to mash the various APIs out there with some cool Mathematica visualizations.  (Oh, and for the search engine nutz out there, the linguistic engine in Mathematica is insanely easy to use versus raw WordNet and various spelling engines.  You can create a really neat search-suggestion tool within an hour.)  A sketch of both ideas follows.
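Here is a minimal sketch (Import and its element names are real Mathematica features; the URL is a placeholder and the example words are arbitrary):

    (* pull structured pieces straight out of a web page - no hand-rolled parser *)
    links  = Import["http://example.com", "Hyperlinks"];   (* every link on the page *)
    tables = Import["http://example.com", "Data"];         (* parsed tables and lists *)

    (* the linguistic engine, e.g. toward a search-suggestion tool *)
    WordData["search", "Synonyms"]          (* WordNet-backed synonyms *)
    SpellingCorrectionList["serach"]        (* ranked spelling corrections *)

Select over the imported links with StringMatchQ gets you most of the way to a crawler from there.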

(E.g., I made a visual search engine of shoes and women’s tops that crushes like.com.  It took me 1.5 hours.  I used the image manipulation tools in Mathematica to analyze the shapes and colors of products via the built-in similarity algorithms.  Post a comment if you want that code.)

So, yes, web industry people/media workers, you can get way ahead with this software.

BI people: give up those lame copies of SAS and SPSS.  Seriously, those products are expensive for somewhat limited use.  I’ll still install R, because it’s FREE and extensible, but those other two have got to go if you are a stats and BI person.  Get a home copy of Mathematica, learn it, and then get a pro copy at work.  Don’t trust me on this, just try it.  Let me know if you really can’t kick your SPSS habit.

I really could go on forever.  The scope of use for this software is pretty insane.  Hell, the documentation alone is a great teaching aid.  Sometimes I just browse the documentation to learn new math or programming or to explore the data.  What few people know is that the documentation itself is interactive and computable.  You don’t just get a book of examples, you can actually “run the program” within the documentation and see it live.  For the home user, this means you can use the documentation to get going very quickly and start to modify the examples to suit your task.

Call me a FanBoy.  That’s fine.  You will be too if you invest $295 and 2 hours of your time.  Methinks you’ll feel what I feel about this – how can I possibly be given this much power without paying 10x this much?  There must be a catch!  There isn’t.  This is the best deal in software. (just think of how much you paid for MS Office and Photoshop… and those only do a handful of functions)

Read Full Post »

IEEE’s Spectrum magazine has an excellent article about memristors and their history.  It’s a great overview piece written for a wider audience, clearly explaining how the memristor came to be, why it matters, and what exactly it is.

It’s worth noting that memristors are related to cellular automata and neural networks, at least through their originators.  Leon Chua is one of the main researchers behind cellular neural networks and predicted the existence of the memristor.  Also of note is the typical path through MIT, the University of Illinois at Urbana-Champaign, and Berkeley – shared by many others working in similar disciplines.

Methinks the relationship between memristors and other cellular-automaton-like theoretical models is much deeper than just the research institutions.  Wolfram mentioned the possibility of a parallel computing architecture based on CAs; perhaps the memristor plays some role in all of this.

Anyhoo, read the article.  Memristors will be significant in computing design.

Read Full Post »

It appears likely that cellular automata, even elementary CAs, can model Fixed Action Patterns.  This is a potential area of study for me.  However, my gut suggests this won’t be all that interesting in and of itself.  But by cobbling together a handful of Fixed Action Patterns in the form of a CA model, we might get to something very dynamic.  Will this be a useful model?  Will it be accurate?

For automata to be useful in the study of human behavior, though, we’re going to need to identify more complicated implementations of automata.

Please note that though I think automata can make valuable models for understanding relationships between behaviors, I do not suggest that automata ARE THE MECHANISM.  I am simply looking for a reliable way to computationally represent animal and human behavior for the purposes of building a bigger story about learning, conditioning, and social dynamics.  A first, crude sketch of the kind of elementary-CA experiment I have in mind is below.
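Here is a minimal Mathematica sketch (the rule number and initial condition are arbitrary choices for illustration, not a worked-out behavioral model): a single localized “stimulus” cell triggers a stereotyped, self-propagating sequence that runs to completion on its own – loosely the character of a fixed action pattern.

    (* rule 110, started from a single "stimulus" cell on a blank background *)
    evolution = CellularAutomaton[110, {{1}, 0}, 60];   (* 60 update steps *)
    ArrayPlot[evolution]    (* each row is one time step of the unfolding pattern *)

Chaining several such rules, or feeding one pattern’s output into another’s initial condition, is the “cobbling together” step mentioned above.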

 

Read Full Post »

Older Posts »