Archive for October, 2008

Sure beats the “Man in the Mirror” knock-off I did in 7th grade.

Kick ass.

Read Full Post »

I present to you a principle of mine relevant to releasing websites and web software.

You Don’t Really Know Until It’s Live Principle

Basic Idea:

No one looks at anything until it’s publicly released.  Then it’s a frenzy of real feedback.

The implications:

  • There is no better QA than real users on real software on real hardware in the real world
  • You can’t experience software unless you get the full experience in the real world, with the real setup (and I mean EXPERIENCE, not test or click or review)
  • The Halting Problem applies big time – that is, you won’t know what breaks a site/service/software until something breaks it.  Even the best unit tests and XP efforts won’t uncover all the halts

It’s been true for the last 78 pieces of software I’ve worked on…

Why does this principle hold?

a) the consequences (the stakes!) are very high when a piece of software is LIVE.  Thus it is very reinforcing for people to give feedback and really dig in. (oh shit, it’s live!)

b) Technology obstacles and lots of caveats usually hold in prototypes, mocks and dev sites (oh, ignore that link, we didn’t get to that yet)

c) Websites are very complicated, especially ones with lots of mixed media, a complete design overhaul, a new backend, aggregation, and so forth.  Mocks can never showcase the full experience, and experiential bugs are impossible to uncover unless you are in the flow.

So spare yourself the agony of deciding when to release or trying to be perfect on public release.  Just release.  You’ll get on with the fixing and improvement cycle sooner.

Read Full Post »

http://www.nytimes.com/2008/10/28/opinion/28brooks.html?_r=1&scp=2&sq=&st=nyt&oref=slogin (free registration to read if not registered…)

As postulated in this NYT.com article, there are four steps to every decision…

  1. you perceive a situation
  2. you think of possible courses of action
  3. you calculate which course is in your best interest
  4. you take the action


If only it were that simple.

Over the past few centuries, public policy pundits, talking heads and some academicians have presumed that step three was the most important. Social science disciplines are premised on that presumption as well; despite the ink used to propagate altruism at every opportunity, people calculate and behave in their own self-interest.

Greenspan, quoted in the above article, made that clear for his reign and for the country. His comments aside, none of the steps above are worth a lot without the others.

Most of the processing takes place without literal awareness. We behave and when pressed for why, we generate a story that fits that situation and puts us in a virtuous light. We don’t really perceive all that well. Thus, the step that seems most simple is the most complex. Looking at and perceiving the world is an active process of symbol meaning-making that shapes and biases the rest of the decision-making chain.

Psychologists have been exploring our biases for four decades with the work of Amos Tversky and Daniel Kahneman, and also with work by people like Richard Thaler, Robert Shiller, John Bargh and Dan Ariely. Now Brooks would have it that it is time for the economists to contribute. Gasp!

The desperation of the day may mean a new wave of behavioral economists and others who are next in line to bring pop psychology to the realm of public policy. These are the same pundits who used their antiquated assumptions to provide plausible explanations for why so many others are wrong about risk behaviors and the implications of globalization.

Nassim Nicholas Taleb, for instance. In his books “Fooled by Randomness” and “The Black Swan” he explains it all in a manner as simplistic as the four rules above. As an astute colleague bluntly pointed out, we are asking the guy who coined the notion of “black swans” to predict black swans. The irony is laughable. What gives a black swan example its value is that it is not obvious [read: predictable]. If Taleb saw this one coming, as stated in the above article, that precludes it from being an example of a “black swan” phenomenon. Irony for sure.

When Taleb gets on the philosophical diving board to spring into evolutionary causation, decreeing that humanoid brains evolved to fit a less complex world, I find myself gagging instead of gasping. His examples of the perceptual biases that distort our thinking are themselves century-old prejudices.

1. Our tendency to see data that confirm our prejudices more vividly than data that contradict them

a. We recognize information through its relation to existing cues in our repertoire. We don’t see what we haven’t been reinforced to see; it is no more self-deception than it is self-enlightenment when we see what turns out to be correct. In that set of circumstances the correctness is not based on enlightenment but on relationships that were there all along but not focused on, recognized or reinforced by the environment.

b. That environment is the same one where superstition, myth, magic, mind and phenomenalism are considered valuable parts of our “humanity” and, knock on wood, we sometimes guess right despite the reasons behind the guess.

2. Our tendency to overvalue recent events when anticipating future possibilities

a. The last six months are more like the next six months than the last 1,000 years are like the next six months

3. Our tendency to spin concurring facts into a single causal narrative

a. If for no other reason, this site is the mainstay of the defeat of monocausality, which haunts our culture, bolsters our superstitions and keeps us surprised at regular intervals

4. Our tendency to applaud our own supposed skill in circumstances when we’ve actually benefited from dumb luck.

a. We benefit from historical uniqueness and education that is more than smattered with scientific skepticism as opposed to boorish cynicism that ignores our strengths and panders to the voodoo in the caves.

b. See 1.-b above.

Errors of perception are everywhere when experimental analysis is NOT involved. Clearly, getting to our moon and beyond was due to experimental analysis and NOT to the interpreted perceptions of pundits.

Without experimental analysis we’ll continually fail to perceive “what’s going on out there.” The relationships between a zillion things and another zillion things are too complex. While a four-point decision tree helps us walk across the street in a small town, it is not the way to figure out how to navigate this year’s tax code or interpret the Patriot Act I or II on any given Sunday. Who knew, and who still knows, which small events are linked to big disasters? Who knew that the mechanical Newtonian links were there, as well as the selected consequences of a billion factors coming together to [pick one] (cause – contribute – accompany) a social-political-economic unraveling? Experimental analysis was not involved. Interpretation of biases was.

Faulty perceptions are not the only reason for, or application of, an experimental analysis. Relationships are complex, not caused by a single small or enormous event as you have been trained to think. We don’t have much training to recognize or understand what our own self-interests are in anything but localized strings of spatial-temporal events. Brooks’ toying with trusting government to become engaged in the process is folly. Just how much “help” can a country endure? What’s worse, it is lazy. Separating government and business is impossible, but collusion is asking for our own demise, handed to us as a coupon toward irrelevance. While we regularly make poor decisions, the government is insensitive to making the correct ones, or those needing to be made, in a timely fashion.

If you doubt that, don’t look in the rear-view mirror as some would suggest. Follow the consequences of a potential decision and determine for yourself whether you or an agent of an ideology is better suited to care for what is in your best interests. Government information feedback mechanisms are limited, broadly myopic, and mechanical, not timely. The very thing that took them away from the citizenry to become politicians has let ideology numb them, contributing to an end to pragmatism. This bias, to be sure, is no better or worse than any other bias. They all can be replaced with an experimental analysis from science rather than the pop-pap solutions we are offered.

As we’ve seen from the crashes before the latest one, this set of economic biases just keeps on giving. It keeps on giving us the problems that government is content to continue to administer to: mindfulness, equality of everything not equal, brinkmanship over leadership and, above all, saying whatever works to get re-elected. As stated, this meltdown is a cultural event reminding us that we are perceptive beings, seeing things that aren’t there and not perceiving things that are there. (See previous blogs.)

Read Full Post »

This post is my interpretation.  Other thinkers, philosophers and researchers have other (more technical) approaches regarding this subject.

Statement: There are no models that completely explain the “how or why” of sufficiently complex phenomena.


Explain – Accurately represents the causes, context, behavior and consequences of a phenomenon and presents such representation in a usable form (we can apply this knowledge outside of just explaining)

Completely – 100% (or very nearly 100%) represent all cases of the phenomenon.  In particular, there are no “exceptions” nor is there simply a “rule of thumb.”

“How and Why” –  The actual behavior, makeup, and structure of the phenomenon.

In other words:

All our scientific efforts produce models, not explanations.  Models help us improve our methods and provide insight into phenomena, but they are not the “thing” and they do not explain the “thing”.  Our explanations, based on models and/or the incomplete information models are always built on (computational irreducibility, uncertainty), are forever incomplete and always subject to revision (inaccurate).

Math is the ultimate model language.  It is a way to describe relationships when you strip away the gnarly details of the real world.  It sometimes has beautiful results but never produces an explanation of the real world.

Computer science sits in between the real world and math.  It’s a great way to simulate things and build new computational models, but because it’s not made of the stuff we’re simulating, it can’t possibly be completely accurate.

Biology and other specialized disciplines tend to rely more on observations than abstract models.  The result is a nearly infinite record of exception cases, making conceptual models that span multiple phenomena very difficult (well, that’s because you mostly can’t do it).

Though I’m giving a very truncated account of everything, hopefully the point is clear.  Explanations are always our judgment, our subjective synthesis of the inaccurate data we have.  This does not imply we don’t know anything.  Nor does it imply we don’t have explanations.  For simple, or relatively simple, phenomena we have accurate explanations and good working knowledge.

Specifically, as related to this blog, economics, behaviorism, and social models are all useful models.  None of the “laws” presented in these disciplines is foolproof.  Rational Choice theory, supply and demand, matching laws…  these are good tools, but not full explanations of the how and why of behavior, media and social activity.

Proof: Is left as an exercise to the reader.

Proof Part Deux:  This is an intractable problem.  There’s no way to formally prove these statements.  They are hunches.  I do believe the proof is somehow along these lines: to determine whether an explanation is complete and accurate, I’d have to be able to reduce the phenomenon down somehow, which is impossible for sufficiently complex phenomena. (Think along the lines of the halting problem: I can’t determine if a program is going to halt any more quickly than by running the program and seeing if it halts….)
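The halting-problem analogy can be made concrete with a short sketch. This is a toy of my own choosing (the Collatz iteration here is just a famous example of a process nobody has proven always halts, not anything from the post): the only general way to learn whether the process stops is to run it, and any finite step budget can only ever answer “halted” or “unknown.”

```python
def collatz_halts_within(n, max_steps):
    """Run the Collatz iteration on n for at most max_steps steps.

    Returns ("halted", steps_used) if the sequence reaches 1 in time,
    otherwise ("unknown", max_steps).  Note we can never answer
    "runs forever": simulating is, in general, the only way to find
    out, which is the irreducibility the post is gesturing at.
    """
    steps = 0
    while n != 1 and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2  # one step of the iteration
        steps += 1
    return ("halted", steps) if n == 1 else ("unknown", steps)

print(collatz_halts_within(6, 100))  # reaches 1 quickly
print(collatz_halts_within(27, 10))  # budget too small, so: unknown
```

The same structure applies to the release principle from the earlier post: you can bound how long you watch a system, but no amount of pre-release analysis tells you it can never break.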

For a more formal treatment of scientific explanation head here.

There are many more resources and I’ll post them as I surface them.

Read Full Post »

No matter how you slice this election, old school political media plans just got schooled.

The Obama campaign altered national campaigning in a major way during this election.  Forget whether he wins or loses, or whether you are of this political party or that one; the fact is that fundraising and political marketing are forever different.

Some key insights:

  • Logos will no longer be just stars or flags or letter marks – campaigns will spend the money on a real logo
  • The candidate’s name will be backseat to the key word or phrase
  • Media plans will have digital/online as the centerpiece
  • Real time editing of video (live + highlight) is essential
  • New creative every day is required
  • Network TV can be bought, cheaply
  • Video game advertising will grow in importance
  • YouTube (and other online video sources) are more important than they seem
  • Spamming is effective
  • Robocalls are less effective than humans
  • Cell phones should be issued to all campaigners (or pay for their data plan)
  • The network effect can raise a lot of money
  • HD TV requires good looking people
  • Nothing can be hidden, everyone is an investigative journalist. Plan for exposure.
  • Campaigns will never be shorter than 13 months
  • Polls get press
  • dotcoms generate real traffic
  • programming is a required skill of some of the campaign staff

I can’t wait to see the final tally on campaign spending and the breakdown by media type.  Also, the voter turnout and its correlation to media should be fairly interesting.

Again, stand where you want on the issues, the media plans and marketing campaigns involved are bigger than the biggest brands on earth.

I see a lot of RFPs, IOs and agency plans in my line of work.  None of them come close to the brand integration of the campaigns, and Obama’s in particular.  None of these RFPs have near the coverage or depth or scope of these campaigns.

The coordination required to pull off these campaigns is beyond what you can imagine, even if you work in advertising.  And raising the budget as you need it makes it more impressive… or maybe that’s what makes it possible at all.

Welcome to the digital, on demand age.  It’s here.

Read Full Post »

CHREST Computational Model

Chunking theory is a theory of how we learn and remember things: essentially in combinatorial “chunks” stored in long-term memory.  Recent work on receptor webs and in other areas of cognitive science and biology is following similar concepts.

If this sounds like I’m not explaining it well… I’m not.  I’m absorbing it too.  Read the source material for more info.

What I’m mostly excited by is how well this chunking seems to mesh with other “network” computational models.  And it should, as I think it’s the same researchers who’ve branched out.

There’s actually a code base called CHREST modeling the theory.

Here’s great information on CHREST that leads to a variety of other great resources.

CHREST homepage

Some of the original work by Chase and Simon.

Read Full Post »

I asked my five-year-old daughter this on the way to school yesterday, “What do you think you’ll do today at school?”

She skipped along and replied, “I don’t know.  I’m not there yet doing something.”

Talk about existing in the now.

Read Full Post »

Older Posts »