
Archive for the ‘computation’ Category

The aim of most businesses is to create wealth for those working at them. Generally it is preferred to do this in a sustainable, scalable fashion so that wealth may continue to be generated for a long time. The specific methods may involve seeking public valuation in the markets, selling more and more product directly and profitably, private valuation and investment, and more. The aim of most technology-based companies is to make the primary activity and product of the business involve technology. The common understanding of “technology” here refers to information technology, biotechnology, advanced hardware and so forth – i.e. tools or methods that go beyond long-established ways of doing things and/or analog approaches. So the aims of a technology company are to create and maintain sustainable, scalable wealth generation through technological invention and execution.

Perhaps there are better definitions of terms and clearer articulations of the aims of business, but this will suffice to draw out an argument for how technology companies could fully embrace the idea of a platform and, specifically, a technological platform. Too often the technology in a technology company exists solely in the end product sold to the market. It is a rare technology company that embraces technological thinking everywhere – witness big internet media companies still managing advertising contracts through paper and faxes, expense reports through papers stapled to static Excel spreadsheets, and so on. There are even “search” engine companies that are unable to search over all of their own internal documentation and knowledge.

The gains of technology are significant when applied everywhere in a company. A technological product produced by primitive and inefficient means is usually unable to sustain its competitive edge, as those with technology in their veins quickly catch up to any early lead by a first, non-technical mover. Often what the world sees on the outside of a technology company is The Wizard of Oz: a clever and powerful façade of technology – a vision of smoking machines doing unthinkable things – when in reality it is the clunky hubbub of a duct-taped factory of humans pulling levers and making machine noises. If the end result is the same, who cares? No one – if the result can be maintained. But it never scales to grow the human factory of tech-façade making. Nor does it scale to turn everything over to the machines.

What’s contemplated here is a clever and emergent interaction of human and machine technology, and how a company goes from merely using technology to becoming a platform. Consider an example of a company that produces exquisite financial market analysis for major brokerage firms. It may be that human analysts are far better than algorithms at making the brilliant and challenging pattern-recognition observations about an upcoming swing in the markets. There is still a technology to employ here. Such a company should supply the human analysts with every enhancing tool and method available to increase the rate at which they can spot patterns, reduce the cost of spreading the knowledge where it needs to go, and complete the feedback loop on hits and misses. There is no limit to how deeply a company should look at enhancing the humans’ abilities. For instance, how many keystrokes does it take for the analyst to key in their findings? How many hops does a synthesized report go through before hitting the end recipient? How does the temperature of the working space impact pattern recognition ability? Perhaps all those details have far more impact on sustainable profit than tuning a minute facet of some analytic algorithm.

The point here is that there should be no facet of a business left untouched by technology enhancement. Too often technology companies waste millions upon millions of dollars updating their main technology product only to see modest gains or none at all. The most successful technology companies of the last 25 years have all found efficiencies through technology mostly unseen by end users, and these became their competitive advantages. Dell – ordering and build process. Microsoft – product pre-installations. Google – efficient power sources for data centers. Facebook – rapid internal code releases. Apple – a very efficient supply chain. Walmart – intelligent restocking. Amazon – everything beyond the core “ecommerce”.

In a sense, these companies recognized their underlying “platform” soon after recognizing their main value proposition. They learned quickly enough to scale that proposition – and to spend a solid blend of energy on the scale and on product innovation. A quick aside – scale here is taken to mean how efficiently a business can provide its core proposition to the widest, deepest customer base. It does not refer solely to hardware or supply chain infrastructure, though often that is a critical part of it.

One of many interesting examples of such platform thinking is the Coors Brewing Company back in its heyday. Most people would not consider Coors a “technology” company. In the 1950s, though, it changed many “industries” with the introduction of the modern aluminum can. This non-beer-related technology reduced the cost of operations, created a recycling sub-industry, reduced the problem of tin cans damaging the beer’s taste, and so on. It also made it challenging for several competitors to compete on distribution, taste and production costs. This wasn’t the first time the Coors company put technology to use in surprising ways. It used to build and operate its own power plants to reduce reliance on suboptimal resources and to have better control over its production.

Examples like this abound. One might conclude that any company delivering product at scale can be classified as a technology company – they all will have a significant platform orientation. However, this does not make them a platform company.

What distinguishes a platform technology company from simply a technology company is that the platform is provided to partners and customers to scale their businesses as well. These are the types of companies whose product itself becomes scale. These are the rare, super valuable companies: Google, Apple, Intel, Facebook, Microsoft, Salesforce.com, Amazon and so on. These companies often start by becoming highly efficient technically in the production of their core offering and then license that scale to others. The value generation gets attributed to the scale provider, appropriately, in that it becomes a self-realizing cycle. The ecosystem built upon the platform of such companies demands that the platform operator continue to build the platform so they too may scale. The platform operator only scales by giving more scale innovation back to the ecosystem. Think Google producing Android and offering Google Analytics for free. Think Facebook and Open Graph and how brands rely on their Facebook pages to connect and collect data. Think Amazon and its marketplace and cloud computing services. Think Microsoft and MSDN, developer resources and cloud computing. Think Apple and iTunes, the App Store and so on.

It’s not all that easy though! There seems to come a time with all such platform companies when a critical decision must be made before it’s obvious that it’s going to work. To Open The Platform Up To Others Or Not? Will the ecosystem adopt it? How will they pay for it? Can we deal with what is created? Are we truly at scale to handle this? Are we open enough to embrace the opportunities that come out of it? Are we ready to cede control? Are we ready to create our own competitors?

That last question is the big one. But it’s the one to embrace in order to be a super valuable, rare platform at the heart of a significant ecosystem. And it happens to be the way to create a path to sustainable wealth generation that isn’t a short-lived parlor trick.

Read Full Post »

As I watched some of the Republican National Convention, geared up for the DNC, got through my own daily work, read essays, strategized about business, talked to friends and family and synthesized all the data, I kept coming back to this question: What Are We So Afraid Of?

I decided to write this post today specifically because I saw this ridiculous commercial yesterday for ADT Pulse: http://www.adtpulse.com/. The commercial made it clear that if you aren’t monitoring your home in real time with video, all the time, everything you know and love is in grave danger! So I’ve decided to figure out just how afraid of everything I should be.

Here’s some of what we seem to be afraid of as a culture.

Our jobs: 

http://www.pewsocialtrends.org/2012/08/31/public-says-a-secure-job-is-the-ticket-to-the-middle-class/

http://www.cnbc.com/id/29275784/People_Fear_Losing_Job_the_Most_Poll

 

Our economy: 

http://www.conference-board.org/data/?CFID=20758670&CFTOKEN=9d689c13bda4ed14-4C556B63-968C-7A5F-C9BBEBCC03AA5B5E

http://pewresearch.org/pubs/2306/global-attitudes-economic-glum-crisis-capitalism-european-union-united-states-china-brazil-outlook-work-ethic-recession-satisfaction-gloomy

 

Our government: 

http://www.realclearpolitics.com/epolls/other/direction_of_country-902.html

http://www.people-press.org/2012/06/04/partisan-polarization-surges-in-bush-obama-years/

 

People different than us: 

http://www.sciencemag.org/content/336/6083/853.short

http://www.nyclu.org/news/nyclu-analysis-reveals-nypd-street-stops-soar-600-over-course-of-bloomberg-administration

http://www.winnipegfreepress.com/arts-and-life/entertainment/books/unfounded-fears-167413105.html

 

Murder:

http://www.psychologytoday.com/blog/the-narcissus-in-all-us/200903/mass-murder-is-nothing-fear

 

Food:

http://www.amazon.com/Fear-Food-History-Worry-about/dp/0226473740

http://shop.forksoverknives.com/Forks_Over_Knives_The_DVD_p/5000.htm

 

Technology and Media:

http://richardlouv.com/books/last-child/

http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647

http://www.theatlantic.com/technology/archive/2011/12/5-things-we-fear-new-technologies-will-replace/250545/

 

Cancer, Disease:

http://www.lancet.com/journals/lancet/article/PIIS0140-6736%2810%2960610-1/fulltext

 

Medicine, Shots, Vaccines:

http://www.wired.com/magazine/2009/10/ff_waronscience/

 

God, Heaven and Hell:

http://www.plosone.org/article/info:doi/10.1371/journal.pone.0039048?imageURI=info:doi/10.1371/journal.pone.0039048.t001

http://www.usatoday.com/news/opinion/forum/2011-08-07-love-wins-afterlife-hell_n.htm

 

Terrorism:

http://blogs.tribune.com.pk/story/13262/london-olympics-2012-the-odds-of-dying-in-a-terrorist-attack/

 

Our Children’s Safety:

http://articles.chicagotribune.com/2011-07-17/news/ct-met-walk-alone-20110717_1_free-range-kids-abductions-york-writer-lenore-skenazy

http://www.denverpost.com/ci_16725742

 

Tattoos:

http://www.dailymail.co.uk/health/article-2032696/Now-tattoos-cancer-U-S-regulator-probes-fears-inks-contain-carcinogenic-chemicals.html

http://professional.wsj.com/article/SB10001424052702303933404577505192265987100.html?mg=reno64-wsj

http://cityroom.blogs.nytimes.com/2009/04/01/when-tattoos-hurt-job-prospects/

 

Large Hadron Collider:

http://www.time.com/time/health/article/0,8599,1838947,00.html

 

Everything else:

 

Nothing to Fear?

So is there anything to fear? Are the fears valid? Well, I guess they are valid fears if you don’t have information. So here’s some information.

 

Most fears drilled into us aren’t founded on evidence – at least not at the level we fear them:

http://www.amazon.com/False-Alarm-Truth-About-Epidemic/dp/0471678694

http://www.amazon.com/The-Science-Fear-Culture-Manipulates/dp/0452295467/ref=pd_sim_sbs_b_2

 

Unemployment isn’t really that high in this country (or most western countries), especially if you get an education:

http://www.wolframalpha.com/input/?i=unemployment+rate+USA%2C+England

 

You’ll probably have 5-10 employers in your working lifetime so assume you’ll get laid off, fired or go out of business.  There will be other businesses to hire you or you can just make something yourself:

http://online.wsj.com/article/SB10001424052748704206804575468162805877990.html

 

The economy will have short-term blips but ultimately continues to churn ahead:

http://www.wolframalpha.com/input/?i=gdp+usa

 

You’re unlikely to be murdered

http://www.wolframalpha.com/input/?i=crime+rates+in+austin%2Ctx

 

Children aren’t taken very often (at least in Colorado)

http://www.denverpost.com/portlet/article/html/imageDisplay.jsp?contentItemRelationshipId=3433817

 

In fact, violence has long been on the decline:

http://edge.org/conversation/mc2011-history-violence-pinker

 

It’s ok if you forget to pray, chances are it probably doesn’t change outcomes:

http://www.washingtonpost.com/wp-dyn/content/article/2006/03/23/AR2006032302177.html

 

And humans have been getting tattoos for a long time and the world hasn’t ended:

http://www.smithsonianmag.com/history-archaeology/tattoo.html

 

Oh, and humans aren’t that different from bonobos or chimps, much less from other humans. So maybe we should rethink worrying about people who aren’t just like us:

http://www.dailymail.co.uk/sciencetech/article-2159027/Humans-share-genetic-code-endangered-ape-species-bonobo.html

 

Almost every one of these common fears is unwound through perspective changes, aka education, aka realizing it’s not black and white. Again, see the S. Pinker History of Violence link above to get an idea of the real impact of just literacy and access to information and what it does to fear.

Is it a big deal that people fear the wrong things?   Yes!   Especially if it leads to suicide bombing, racial profiling, not getting an education and so on.

 

But, c’mon, aren’t there some things we should fear?

Maybe…

http://www.huffingtonpost.com/david-ropeik/fear-of-climate-change-ma_b_1665019.html

and maybe this too

http://www.foxbusiness.com/personal-finance/2010/09/20/student-loan-debt-surpasses-credit-card-debt/

well maybe this too

http://www.cbsnews.com/2100-201_162-628194.html

 

In the end, methinks fearing too much is a waste of time because we just don’t know what’s going to happen, right?

http://en.wikipedia.org/wiki/Black_swan_theory

Knowing you can’t predict it all (and thus can’t prevent it all), what’s the point in worrying to the point of being truly scared?

http://mathworld.wolfram.com/ComputationalIrreducibility.html

 

So, no, ADT, I won’t be buying your Pulse product.

 

 

Read Full Post »

Anyone who has worked with me is tired of me suggesting that everyone in business should know how to program. This thought is met with a variety of rebuttals that have only a slight shred of validity.

Everyone programs. If you get out of bed in the morning and go through any sort of routine (everything is pretty much a routine), you are programming. This is not semantics. Programming is nothing more than organizing things in such a way that they transform into other things. Everyday life is programming; it’s just not the uber-formal (re: very restrictive) programming we think computer programmers do.

When people reject my statement that everyone programs and should get better at it, what they are actually rejecting is the specific implementations of computer programming – the syntax, the formalities, the tools, the long hours in front of a headache-inducing screen.

If you speak, write, draw or communicate at all, you have learned a set of rules that you apply to various inputs to produce various outputs. If you work in spreadsheets, at a cash register, with a paint brush, in a lecture hall, in a lab, on a stage, you are programming. If you make yourself a sandwich, eat it and go for a jog, you are programming. Everything you do is taking inputs and transforming them into outputs using the rules of some system. The system is more or less formal, more or less open.
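To make the analogy concrete, here is a minimal sketch (Python, purely illustrative – the routine, ingredients and rules are all made up) of an everyday routine written as a program: inputs go in, a few rules transform them, outputs come out.

```python
# A morning routine expressed as a program: inputs -> rules -> outputs.
# Everything here is invented for illustration; the point is the shape, not the details.

def make_breakfast(ingredients):
    # rule: turn whatever is on hand into a meal
    return "sandwich" if "bread" in ingredients else "cereal"

def plan_morning(weather, meal):
    # rule: transform current conditions into a decision
    if weather == "sunny":
        return f"ate a {meal}, going for a jog"
    return f"ate a {meal}, staying in to read"

inputs = {"ingredients": ["bread", "cheese"], "weather": "sunny"}
print(plan_morning(inputs["weather"], make_breakfast(inputs["ingredients"])))
# -> ate a sandwich, going for a jog
```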

I don’t see there being any room for dispute on this observation – or rather, this definition or axiom.

With that basic assumption as a starting point, let me make the case that honing your more formal, strict and, yes, traditional “computer” programming skill is a must-do for anyone participating in modern society. (Yes, if you do not participate in modern society and do not wish to do so, you don’t need formal programming skill, but you will always be programming within the universe…)

Without getting too out there – our lives will never have fewer computers, fewer programs, fewer gadgets, fewer controllers monitoring, regulating, exposing data, recommending, and behaving on our behalf. Cell phone penetration is near ubiquitous, every car has computers, trains run on computerized schedules, more than 50% of stock trades are made algorithmically, your money is banked electronically, the government spends your taxes electronically and so on. So in some sense, not being able to program formally leaves you without any knowledge of how these systems work or misbehave. Some will make the argument that “I don’t need to know how my car works to use it/benefit from it.” This is true. But computers and programming are so much more fundamental than your car. Not being able to program is akin, at this point, to not being able to read or write. You are 100% dependent on others in the world. You can function without a working car.

Before you reject my claim outright, consider the idea that learning to program is quite natural and, dare I say, easy. It requires no special knowledge or skill. It requires only language-acquisition skills and concentration, two basic capabilities every human I’ve read about or know has (before we go on destroying them in college).

Why do I make this claim of ease?

Programming languages, and making programs that work, rely on a very small language. Very simple rules. Very simple syntax. Frustratingly simple! The English language (or any spoken language) is so much more ridiculously complicated.

It does not surprise me that people think it’s hard. It’s frustrating. It’s the practice, and the simplification of your thoughts into simpler languages and syntax, that’s hard. And so is writing a speech others will understand, or painting a masterpiece, or correctly building a financial accounting book, or pretty much anything you do for a living that requires someone else to understand and use your output.
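One way to see just how small these formal languages really are: Python (used here only as an example) ships with a module that lists every reserved word in the language – a vocabulary of a few dozen words, versus the hundreds of thousands in English.

```python
import keyword

# The entire reserved vocabulary of the Python language fits in one short list.
print(len(keyword.kwlist))   # a few dozen, depending on the Python version
print(keyword.kwlist[:8])    # ['False', 'None', 'True', 'and', 'as', 'assert', ...]
```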

I firmly believe each person’s ability to translate their lives into useful programs is a differentiator between those who have freedom and identity and those who do not. Either you are programming and able to keep watch over the programs you use, or you are programmed.

Sure, companies and people are busy at work making easier and easier tools to “program,” but that doesn’t change the fundamental problem. The more programs you layer on top of other programs (web page builder GUIs to HTML to browser parsers to web servers…), the more chance of transcription problems (miscommunication), unnoticed malicious use and so forth.

Beyond the issue of freedom, it is fun and invigorating to create, to mold your world. This is the part that’s hard for adults. Having spent probably from age 10 to whatever age we all are following rules (others’ programs) and being rewarded (program feedback loops), we don’t really do a great job molding our world. Kids are so good at experimenting (playing). And playing is essential to really great programming. Programming that will fill you up and make your life better is the kind that generates wonderfully unexpected but useful results. It’s not always about getting it right or spitting out the answer (though for simple programs that might be the point). It’s about creating, exploring, and finding connections in this world.

I can replace the word programmer (and programming) in this post with Artist, Mathematician, Reader, Writer, Actor, etc. and it will be essentially the same piece with the same reasoning. All of these “occupations” and their activities are programming – the only things that differ are the implementations of language (syntax, medium, tools).

When people reject my argument that everyone should learn to program, they are rejecting the notion of sitting down in front of a blinking cursor on a screen and having a piece of software say “error.” Reject that! I hate that too! For me, correcting grammar in my posts or emails or journals is just as painful! (But it doesn’t prevent me from wanting to write better or write at all; I *need* to in order to survive and be free!)

Don’t reject the notion that you should always be trying to communicate or understand better – taking inputs from the world and transforming them into useful outputs. To reject that is essentially to reject everything. (And that is now the annoying over-reaching philosophical close!)

Read Full Post »

The iPad, like the iPod, iPhone, and iMac, isn’t a revolution in computer science, design interface, consumer packaging or UI. It’s a revolution in the economics of those things. Now that there’s a device on the market at 500 bucks with an unlimited data plan for 30 bucks a month, it’s almost assured that the iPad type of computing and media platform will be popularized – and maybe not even by Apple. The hype of the technology will surely drown out the economic story for some time, but in the long run the implications of the price of this technology will be the big story.

Sure, we have sub-500-dollar computers and media devices. They have never been this functional or this easy. Apple has just shown what is possible, so now the other competitors will have to follow suit. It really doesn’t matter in the grand scheme whether it’s Apple or HTC or Google or Microsoft or Sony who wins the bragging wars each quarter – the cat is out of the bag: cost-effective, easy-to-use, and fun computing for everyone is possible in a mass-producible construction.

There are some interesting side effects coming out of this. If a business can’t make huge profits from the hardware or the connection or the applications, where will the profit come from? (I’m not saying companies won’t make good profits; I just don’t think they will be sustainable – especially for companies used to big margins.)

Obviously the sale of content matters. Books, movies, games, music and so on. This computing interface makes it far easier to buy content and get a sense that it was worth buying. If the primary access channel is through a browser, I think people aren’t inclined to pay – we are all too used to just freely browsing. On a tablet, the browser isn’t the primary content access channel.

The challenge for content providers is that the quality of the content has to be great. This new interface requires great interactivity and hi-fi experiences. Cutting corners will be very obvious to users. There’s also no easy search engine to trick into sending users to a subpar experience. That only works when the primary channel is the browser.

If advertising is going to work well on this platform, boy does there have to be a content and interaction shift in the industry. Banners and search ads will just kill an experience on this device. Perhaps more old-school, magazine-style ads will work, because once you’re in an app you can’t really do some end-around or get distracted. Users might be willing to consume beautiful hi-fi ads. Perhaps the bigger problem is that sending people to a browser to take action on an ad will be quite weird.

Clicks can’t be the billable action anymore. Clicks aren’t the same on a tablet! (In fact, most Internet ads won’t work on the iPad. Literally. Flash and click-based ads won’t function.)

Perhaps the apps approach to making money will work. To date the numbers don’t add up. Unless users are willing to pay more for apps than they do on the iPhone, only a handful of shops will be able to handle the economics of low-margin, mass software. So far, iPad apps do seem to be higher priced. More users coming in may change that, though.

In a somewhat different vein…. Social computers will be a good source of cold and flu transmission. If we’re really all going to be leaving these lying about and passing them between each other, the germs will spread. Doesn’t bother me, but some people might consider that.

Will users still need to learn a mouse in the future?

Should we create new programming interfaces that are easier to manipulate with a touch screen? LabVIEW-style products come to mind.

What of bedroom manners? The iPhone and blackberries are at least small…

And, of course, the porn industry. The iPhone wasn’t really viable as a platform. This touch based experience with big screens… Use your imagination and I’m sure you can think up some use cases…

I do think this way of interacting with computers is here to stay. It’s probably a good idea to think through how it changes approaches to making money and how we interact with each other. I’d rather shape our interactions than be pushed around unknowingly….

Happy Monday!

Read Full Post »

Now that both the iPad and the Wolfram|Alpha iPad app are available, it’s time to really evaluate the capabilities of these platforms.

Wolfram|Alpha on the iPad


[disclaimer: last year I was part of the launch team for Wolfram|Alpha – on the business/outreach end.]

Obviously I know a great deal about the Wolfram|Alpha platform – what it does today and what it could do in the near future in the hands of great developers all over the world. I’m not shy in saying that computational knowledge available on mobile devices IS a very important development in computing. Understanding computable knowledge is the key to understanding why I believe mobile computable knowledge matters. Unfortunately it’s not the easiest of concepts to describe.

Consider what most mobile utilities do: they retrieve information and display it. The information is mostly pre-computed (meaning it has been transformed before your request) and it’s generally in a “static” form. You cannot operate on the data in a meaningful way. You can’t query most mobile utilities with questions that have never been asked before and expect a functional response. Even the really cool augmented reality apps are basically just static data. You can’t do anything with the data being presented back to you; it’s simply an information overlay on a 3D view of the world.

The only popular applications that currently employ what I consider computable knowledge are navigation apps, which very much are computing in real time based on your requests (locations, directions, searches). Before nav apps you had to learn routes by driving them, walking them, etc., really spending time associating a map, road signs and your own sense of direction. GPS navigation helps us all explore the world and get around much more efficiently. However, navigation is only one of the thousands of tasks we perform that would benefit from computable knowledge.

Wolfram|Alpha has a much larger scope! It can compute so many things against your current real-world conditions and the objects in the world you might be interacting with. For instance, you might be a location scout for a movie: you want to know not only how far away the locations you’re considering are, but also to compute ambient sunlight, typical weather patterns, wind conditions, the likelihood your equipment might be in danger and so forth. You even need to consider optics for your various shots. You can get at all of that right now with Wolfram|Alpha. This is just one tiny, very specific use case. I can work through thousands of these.
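As a rough sketch of the kind of computation involved (not how Wolfram|Alpha actually does it – the formula below is just the standard textbook approximation, and the location and date are arbitrary), here is a back-of-the-envelope estimate of daylight hours for a shooting location:

```python
import math

def daylight_hours(latitude_deg, day_of_year):
    """Approximate daylight hours from latitude and day of year.
    Uses the simple Cooper declination formula; illustration only, not ephemeris-grade."""
    decl = 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))                        # clamp for polar day / polar night
    return 2.0 * math.degrees(math.acos(x)) / 15.0    # 15 degrees of rotation per hour

# e.g. a location at 40°N near the summer solstice (day ~172)
print(round(daylight_hours(40.0, 172), 1))   # ~14.8 hours with this simple model
```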

The trouble people cite with Wolfram|Alpha (in its incarnations to date) is that it can be tough to wrangle the right query. The challenge is that people still think about it as a search engine. The plain and simple fact is that it isn’t a web search engine. You should not use it as a search engine. Wolfram|Alpha is best used to get things done. It isn’t the tool you use to get an overview of what’s out there – it’s the system you use to compute, to design, to combine concepts.

The iPad is going to dramatically demonstrate the value of Wolfram|Alpha’s capabilities (and vice versa!). The form factor has enough fidelity and mobility to show why having computable knowledge literally at your fingertips is so damn useful. The iPhone is simply too small, and you don’t perform enough intensive computing tasks on it to take full advantage. The other thing the iPad and similar platforms will demonstrate is that retrieving information isn’t going to be enough for people. They want to operate on the world. They want to manipulate. The iPad’s major design feature is that you physically manipulate things with your hands. The iPhone does that too, but again, it’s too small for many operations. Touch-screen PCs aren’t new, but they are usually not mobile. Thus, here we are on the cusp of direct manipulation of on-screen objects. This UI will matter a great deal to the user. They won’t want to just sort, filter, and search again. They will demand that things respond in meaningful ways to their touches and gestures.

So how will Wolfram|Alpha take advantage of this? It’s already VISUAL! And the visuals aren’t static images. Damn near every visualization in Wolfram|Alpha is computed in real time specifically for your query. The visuals can respond to your manipulations. In the web version of Wolfram|Alpha this didn’t make as much sense, because the keyboard and mouse aren’t at all the same as your own two hands on top of a map, graph, 3D protein, etc.

Early on there was a critical review of Wolfram|Alpha’s interface – how you actually interact with the system.  It was dead on in many respects.

WA is two things: a set of specialized, hand-built databases and data visualization apps, each of which would be cool, the set of which almost deserves the hype; and an intelligent UI, which translates an unstructured natural-language query into a call to one of these tools. The apps are useful and fine and good. The natural-language UI is a monstrous encumbrance…

In an iPad world, natural language will take a back seat to hands-on manipulation. Wolfram|Alpha will really shine when people manipulate the visuals, the data display and the various shortcuts. People’s interaction with browsers is almost all link or text based, so the language issues with Wolfram|Alpha and other systems are always major challenges. What will be interesting is how many popular browser services will be able to successfully move over to a touch interface. I don’t think that many will make it. A new type of service will have to crop up, as iPad apps will not be simply add-ons to a web app, like they usually are for the iPhone. These services will have to be great at handling direct manipulation, getting actual tasks accomplished, and will need to be highly visual.

My iPad arrives tomorrow. Wolfram|Alpha is the first app getting loaded. And yes, I’m biased. You will be too.

Read Full Post »

I’ve been asked many times about the size of Facebook’s infrastructure. Folks love to get a gauge of how much hardware/bandwidth is required to run highly trafficked sites.

Here’s a recent report of the set up. Read the details there.  In short, 30,000 or so servers with tons of optimizations to networking, mysql, PHP, web server, and lots and lots of caching.

There’s an interesting point here. 30,000 servers handle 300 million registered users and their 200 billion pageviews a month. That puts about 7 million pageviews per server per month. Almost every company I have worked with has WAY over-built its hardware and infrastructure. I’ve seen people deploy new servers for every 100,000 pageviews per month. Modern web servers and DBs, with the right setup, can handle far more load than most webmasters and IT folks realize.
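A quick back-of-the-envelope check on those reported numbers (the figures are the ones cited above, nothing official):

```python
# Rough arithmetic on the reported Facebook numbers above.
pageviews_per_month = 200e9
servers = 30_000

per_server_month = pageviews_per_month / servers
per_server_second = per_server_month / (30 * 24 * 3600)

print(f"{per_server_month:,.0f} pageviews per server per month")    # ~6,700,000
print(f"{per_server_second:.1f} pageviews per server per second")   # ~2.6 on average
```

Even if peak load runs many times that average, a couple of requests per second per box is a long way from one new server per 100,000 monthly pageviews.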

One subtle point that’s hard to figure out from this data: the amount of compute/CPU time/power required to parse the metrics for this site. Beyond serving the site up, there’s a considerable amount of business intelligence to work through. Logging and log parsing, without even the analysis part, has got to be a major effort not accounted for in these infrastructure details.

Read Full Post »

A friend recently sent me this nifty article.

Here are some of my favorite snippets.

On “knowledge”:

“Knowing is not an activity of the brain but of human beings, and knowledge is not contained in the brain but in books and computers, and is possessed by human beings, but not by their brains. It makes no sense and explains nothing to divide the brain up into bits that contain different kinds of knowledge and know different sorts of things, because the brain does not contain knowledge or know anything.”

On “consciousness”:

“Dispositional consciousness is a general tendency to be conscious of certain things—money-conscious, for example. Such a generalized tendency is indicated by various sorts of behavior—money-conscious people are likely to save their money, spend it carefully, talk about it and think about it more than others, and so forth. Such a tendency almost certainly is learned, and therefore one can be ‘better’ or ‘worse’ at it depending on one’s experience, if ‘better’ and ‘worse’ refer to a greater or lesser probability of behaving in ways consistent with the disposition. So the authors’ assertion that consciousness is not something we can become ‘good at’ may be argued with, both in its dispositional sense and in its occurrent transitive sense (a current consciousness of some thing or state of affairs). I may not become conscious of the subtle French horn part in a piece of music until after I have read about the composer’s penchant for using the French horn in subtle ways—has my learning not enhanced my ability to be conscious of the French horn in the composer’s music? More broadly, is there no sense in which the common Californian pastime of ‘expanding’ or ‘developing’ consciousness is true?”

On “strange loopness” of human biology:

“Far more difficult to achieve, I believe, will be an understanding of the fundamental nestedness of the brain, the rest of the body, and the person in the world, each entity executing processes that overlap and turn back on themselves and each other in time and space.”

On metaphors as a tool for communication, not analysis:

“The point is that it may be the ability of metaphors and analogies to help researchers accomplish their theoretical goals, and not how well they stand up to connective analysis relative to their conventional counterparts, that is the better basis for approving or disapproving of them.”

Language always lacks fidelity. One can only put into words some subset of what we experience. What we “experience” is only a subset of what is happening around us. What happens around us in a way that could affect us is only a subset of what there is.

Folks have a tendency in all science (and non-science) to analyze and report at our “level” of experience. No, it’s not possible to apply an analysis of single-cell behavior to a scene study of Shakespeare, though we often talk of “motivation” in both. It’s a terribly inaccurate description in both cases, but it does, often, communicate something of value.

For an alternative but equal misapplication of language from the “human experience” level, let’s consider quantum physics. We experience things in 3 spatial and 1 temporal dimensions. We have NO WAY to experience the world in any other context. Thus it is incredibly hard to conceptualize and explain what happens at a quantum level (where things don’t follow space and time as we experience them). It is NONsense to describe, diagram, or otherwise model the quantum world at our “human” level with any expectation of accuracy. Our description of quantum mechanics is a very gross description.

Where this all gets counter-productive to the progress of knowledge is in mistaking a description (model, report…) of something (a system, situation, behavior…) for the thing itself. The use of psychological “Freudian” terms can sometimes be useful for short-cutting long-winded discussions, but one must be disciplined enough to recognize that high-level concepts cannot be applied directly to what’s actually going on.

I think there’s another reason we accept gross descriptions of the world: they work for all practical purposes. You don’t need a perfect description of the world to be successful in achieving whatever it is you might be doing. In fact, WE HAVE TO MAKE THIS TRADE-OFF. If we didn’t take shortcuts and accept gross descriptions of the world, few of us would be able to operate. At the very least, few scientists would be able to publish if they actually had to drill down and tie up the loose ends without these gross misrepresentations.

Oh, and for those that care, I don’t think there is something like “consciousness.” We are more or less affected by things happening around and in us. We are not “aware” of our experiences in some binary way (the lightbulb never really just flips on). The linked article gets at some of this, and there are other syntheses that argue this point better than I can at this stage. A further implication is that “thought” isn’t really a THING by itself either. We don’t THINK THOUGHTS. And yes, I lack the syntax to describe my synthesis any further at this time 😉

For more insight you might turn to this very recent Edge talk. In particular, read the responses from Sam Harris and others. It kinda embodies everything in this post… from baggage terms to metaphors as description to just how far away we are from reasonably deep insight.

Read Full Post »

There’s a great deal of confusion about what is meant by the concept “computational knowledge.”

Stephen Wolfram put out a nice blog post on the question of computable knowledge. In the beginning he loosely defines the concept:

So what do I mean by “computable knowledge”? There’s pure knowledge—in a sense just facts we know. And then there’s computable knowledge. Things we can work out—compute—somehow. Somehow we have to organize—systematize—knowledge to the point that we can build on it—compute from it. And we have to know methods and models for the world that let us do that computation.

Knowledge

Trying to define it any more rigorously than that is somewhat dubious. Let’s dissect the concept a bit to see why. Here we’ll discuss knowledge without getting too philosophical. Knowledge is concepts we have found to be true and whose context, use and function we somewhat understand – facts, “laws” of nature, physical constants. Just recording those facts without understanding context, use, and function would be pretty worthless – a bit like listening to a language you’ve never heard before. It’s essentially just data.

In that frame of reference, not everything is “knowledge,” much less computational knowledge. How to define what is and isn’t knowledge… well, it’s contextual in many cases and gets into a far bigger discussion of epistemology and all that jive. A good discussion to have, for sure, but it will muddy this one.

Computation

What I suspect is more challenging for folks is the idea of “computational” knowledge. That’s knowledge we can work out – generate, in a sense – from other things we already know or assume (pure knowledge – axioms, physical constants…). Computation is a very broad concept that refers to far more than “computer” programs. Plants, people, planets, the universe compute – all these things take information in one form (input: energy, matter) and convert it to other forms (output). And yes, calculators and computers compute… and those objects are made from things (silicon, copper, plastic…) that you don’t normally think of as “computational,” but when configured appropriately they make a “computer.” Now, to get things to compute particular things they need instructions (we need to systematize… or program them). Sometimes these programs are open-ended (or appear to be!). Sometimes they are very specific and closed. Again, don’t think of a program as something written in Java. DNA is an instruction set; so are various other chemical structures, and arithmetic, and employee handbooks… basically anything that can tell something else how to use or do something with input. Some programs, like DNA, can generate themselves. These are very useful programs. The point is: you transform input into some output. That’s computation, put in a very basic, non-technical way. It becomes knowledge when the output has an understandable context, use and function.

Categorizing what is computational knowledge and what is not can be a tricky task.  Yet for a big chunk of knowledge it’s very clear.

Implications and Uses

The follow-on question once this is grokked: what’s computational knowledge good for?

The value of the end result, the computed knowledge, is determined by its use. However, the method of computing knowledge is valuable because in many cases it is much more efficient (faster and cheaper) than waiting around for the “discovery” of the knowledge by other methods. For example, you can run through millions of structure designs using formal computational methods very quickly, versus trying to architect / design / test those structures by more traditional means. The same could be said for computing rewarding financial portfolios, AdWords campaigns, optimal restaurant locations, logo designs and so on. Also, computational generation of knowledge sometimes surfaces knowledge that may otherwise never have been found with other methods (many drugs are now designed computationally, for example).
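A toy sketch of what “running through millions of designs” looks like in practice (the parameters and the scoring function are entirely invented; a real search would use real engineering models and a far larger space):

```python
from itertools import product

# Toy "design search": enumerate parameter combinations and score each one.
def score(span, depth, thickness):
    stiffness = depth ** 3 * thickness          # invented stand-in metrics
    weight = span * depth * thickness
    return stiffness / weight                   # higher is better

# A small grid for illustration (~37,000 designs); real searches run into the millions.
candidates = product(range(10, 60), range(1, 40), range(1, 20))
best = max(candidates, key=lambda design: score(*design))
print("best (span, depth, thickness):", best)
```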

Web Search

These concepts and methods have implications in a variety of disciplines. The first major one is the idea of “web search.” The continuing challenge of web search is making sense of the corpus of web pages, data snippets and streams of info put out every day. A typical search engine must hunt through this VERY BIG corpus to answer a query. This is an extremely efficient method for many search tasks – especially when the fidelity of the answer is not such a big deal. It’s a less efficient method when the search is really a very small needle in a big haystack and/or when precision and accuracy are imperative to the overall task. Side note: web search may not have been designed with that in mind; however, users come more and more to expect a web search to really answer a query – often forgetting that it is the landing page, the page that was indexed, that is doing the answering. Computational knowledge can very quickly compute answers to very detailed queries. A web search completely breaks down when the user query is about something never before published to the web. There are more of these queries than you might think! In fact, an infinite number of them!
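A trivial illustration: queries like the two below have almost certainly never been published as a page anywhere, yet they are instantly computable from a small amount of structured knowledge (the dates are arbitrary examples).

```python
from datetime import date, timedelta

# "How many days between the first Moon landing and the London 2012 opening ceremony?"
print((date(2012, 7, 27) - date(1969, 7, 20)).days)          # 15713

# "What weekday does this (arbitrary) person's 10,000th day alive fall on?"
birthday = date(1980, 3, 14)
print((birthday + timedelta(days=10_000)).strftime("%A, %B %d, %Y"))
# -> Tuesday, July 31, 2007
```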

Experimentation

Another important implication is that computational knowledge is a method for experimentation and research. Because it is a generative activity, one can unearth new patterns, new laws, new relationships, new questions, new views…. This is a very big deal. (Not that this hasn’t been possible before now… of course, computation and knowledge are not new! The universe has been doing it for ~14 billion years. Now we have coherent and tangible systems that make it easier and more useful to apply formal computation to more and more tasks.)

P.S.

There are a great many challenges, unsolved issues and potentially negative aspects of computational knowledge. Formal computation systems are by no means the most efficient, most elegant, or most fun ways to do some things. My FAVORITE example, and what I want to propose one day as the evolution of the Turing Test, is HUMOR. Computers and formal computation suck at humor. And I do believe that humor can be generated formally. It’s just really, really, really hard to figure out. So for now, it’s still just easier and more efficient to get a laugh by hitting a wiffle ball at your dad and putting it on YouTube.

Read Full Post »

BBC reports on simulations run by astronomers suggesting we could see some planets collide in a billion years or so.

What’s fun is that you can actually ATTEMPT to run these computations in Wolfram|Alpha. Here’s Mercury in 1 billion years. Unfortunately, the one thing I want to be able to show – the orbits of the planets – is pushing W|A to its heuristic timing limit.

I can put this into Mathematica and work it out using more local CPU power.  Then again, I like just playing with numbers to see where I can take this.  Here’s Mercury at 199,999 years.  Things get gnarly.

Read Full Post »

Update 2/17/09: Here’s a fun piece on CNN about MDs using Twitter from the OR. Again, this is NOT particularly useful data being generated. It is, however, an excellent BROADCAST tool. Surgeons pushing out updates is useful to families and friends. In the grand scheme of useful information unto itself, this content will have no reuse outside of that surgical operation’s context. Perhaps an aggregation and synthesis (not real time) would be useful for trending operations, but there are other, more efficient ways of computing and comparing data from operations.

OK, so perhaps this is why VCs, media pundits and internet geeks gush over Twitter: the idea that it represents some collective thought stream/collective brain.

The most common statement about why this collective stream of drivel has value comes in this excerpt from the TechCrunch post:

Twitter may just be a collection of inane thoughts, but in aggregate that is a valuable thing. In aggregate, what you get is a direct view into consumer sentiment, political sentiment, any kind of sentiment. For companies trying to figure out what people are thinking about their brands, searching Twitter is a good place to start. To get a sense of what I’m talking about, try searching for “iPhone,” “Zune,” or “Volvo wagon”.

Viewing the proposed examples SEEMS to validate the claim. However, online discussion and online “tweets” are NOT the same as the behavior you’re actually trying to gain insight into. Whether people are into a brand is not accurately assessed by viewing what they SAY about it — it’s what they DO about it. Do people BUY the brand? Do they SHOW the brand/products to others? Do they consume the brand?

These examples are not predictive in any way. They are reflective. Twitter can’t do much better than Google, blogs, and news outlets at ferreting out important events, people, products, and places before they are important. Twitter in some respects gets in its own way, because the amount of “tweet” activity is not always a great indicator of importance. In fact, some of the most mundane events, people and places get a ton of Twitter activity versus really important stuff.

Twitter is also highly biased. It is predominantly used by the technical/digital elite. Yes, it’s growing quickly, but it still doesn’t reflect more than perhaps 1-2% of the US population. Heck, even Google traffic is highly biased, as only 50% of the US population uses search every day. You say: so what, it will get there! No, it won’t. Consider the following examples.

Twitter can’t tell you ANYTHING about the real stuff of life like baby food, the peanut recall, or your local hospital. (I leave it as an exercise for the reader to try these searches on Google and compare the results.) With more usage, it only gets harder to find the real information. New tools to parse and organize tweets must be created. This implies you’ll need computational time to parse it all, thus destroying the “real time” part the TechCrunch authors and this quoted blogger adore. Beyond just filtering and categorizing, an engine needs some method to find the “accurate” and “authoritative” data stream. Twitter provides no mechanism for this, and doing so would destroy its general user value (you don’t want to have to compete with more authoritative twitterers, do you?). Twitter search would need to become more “Googly” to matter at all in some bigger world or commerce sense.

TechCrunch correctly identifies this problem:

An undifferentiated thought stream of the masses at some point becomes unwieldy. In order to truly mine that data, Twitter needs to figure out how to extract the common sentiments from the noise (something which Summize was originally designed to do, by the way, but it was putting the cart before the horse—you need to be able to do simple searches before you start looking for patterns).

So where does Twitter really sit, and does it have value? It is a replacement for the newsgroup, the chatroom and some IM functions. It has value, obviously, because people use it. Users replace their other forms of conversation with twittering. Broadcasters and publishers are also replacing other forms of broadcasting/pushing messages with Twitter. This, too, has value in that Twitter better fits the toolsets more and more of us sit in front of all day long. It’s somewhat of a “natural” evolution to find a new mechanism of broadcasting when a medium (terminals attached to the network) reaches critical mass. The Hudson River landing example is a better example of this shift in broadcasting method than it is of some crack in the value of Google and others, or of the need for “real time search.” If that logic were sound, CNN would have been hailed as a “Google slayer,” as they are more real time than Twitter is (yes, they use Twitter and iReport and citizen journalism…). In fact, CNN is the human-powered analytic filter required to make sense of real-time streams of data. News journalists capture all that incoming data, find the useful and accurate information, and summarize and rebroadcast it.

If I were an operator of IM networks or a business that relied on chatrooms and forums, I’d be worried. Google, news outlets and other portals should not be worried. They don’t need more contextless content to sift through; they do just fine without yet another source of 99% throw-away thoughts.

I, myself, am not a Twitter hater. It is a great media success. It probably can make money. However, it doesn’t represent some shift in social networking, high tech, or communications, much less in how we interact. Anyone who claims that must be delusional or hoping to make a buck or two, which is fine too.

TechCrunch concludes with the real question here:

But what is the best way to rank real-time search results—by number of followers, retweets, some other variable? It is not exactly clear. But if Twitter doesn’t solve this problem, someone else will and they will make a lot of money if they do it right.

Is there a possibility to generate a collective thoughtstream? A big Internet brain? Sure, in some loose sense that’s already happened. Twitter (and other tools) is just a piece of the puzzle. The human brain doesn’t have just one piece you can claim as the main part – the CPU that makes sense of everything. Why should we think something less complicated (the Internet has far fewer nodes and interconnections, and far higher energy demands, than just one human brain!) would have a central core (service) providing some dominant executive function? There are several reasons this physically can’t happen. The main one, as I mentioned earlier, is that making sense of random streams of data requires computational time. The more inputs a system takes in, the more computation it requires to make sense of them (or to filter them in the first place). New information, or new types of information, must first be identified as potentially useful before it can even be included for summarization. And so on. The more useful you need to make entropic (random) data, the more energy you must expend. Raw data streams trend toward entropy, in both an informatic and a thermodynamic sense.

In other words, no one company is going to figure out how to rank real-time search results – it can’t be done. Perhaps more damning: it doesn’t need to be done. There’s no actual value in searching real time. The idea of searching is that there is some order (filter) to be applied. When something happens, John Borthwick correctly claims, “relevancy is driven mostly by time.” So Twitter already has the main ordinal, time, as its organizing principle. Perhaps TC and John Borthwick desire an “authority” metric on tweet search; however, you can’t physically do this without destroying the value of real time. No algorithm accounting for authority will be completely accurate – there’s a trade-off between real time and authority. (PageRank has a similar problem with authority and raw relevancy, as no-name authors and pages often have EXACTLY what you want but you can’t find them. This is a more damaging problem in “real time” scenarios, where you want the RIGHT data at the RIGHT TIME.)
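To make that trade-off concrete, here is a hedged sketch (made-up weights, fields and half-life – not anything Twitter or Google actually does) of what blending recency with an authority signal looks like; the more weight authority gets, the less “real time” the ranking becomes.

```python
import math, time

# Toy blended ranking: recency decays exponentially, authority is a static score.
def rank(tweets, now, authority_weight=0.3, half_life_s=900):
    def score(t):
        recency = math.exp(-math.log(2) * (now - t["posted_at"]) / half_life_s)
        return (1 - authority_weight) * recency + authority_weight * t["authority"]
    return sorted(tweets, key=score, reverse=True)

now = time.time()
tweets = [
    {"text": "random bystander, 1 min ago",  "posted_at": now - 60,   "authority": 0.1},
    {"text": "newswire account, 30 min ago", "posted_at": now - 1800, "authority": 0.9},
]
for t in rank(tweets, now):
    print(t["text"])   # which one "wins" depends entirely on the chosen weight
```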

If Twitter could plant an authoritative twitterer at every important event and place, real time twitter search might become real.

Oh wait, that’s called Journalism – we already have 1000s of sources of that.

Read Full Post »
