
Archive for the ‘search’ Category

There’s an amazing thing going on.  It’s a small product release that few folks outside the tech world are covering: the release of, and tech uproar over, Google Chrome Frame.

Why do I say this?  Oh, well, some folks at Google woke up and realized there are such things as platform dependencies, and you pick the platform that makes it efficient to produce and distribute your product.  So… Google produced a WRAPPER for the most widely distributed platform (Windows/IE) AND reduced its dev costs (produce a runtime that runs on anything).

“We could continue in this fashion, but using Google Chrome Frame instead lets us invest all that engineering time in more features for all our users, without leaving Internet Explorer users behind,” argued Lars Rasmussen and Adam Schuck of Google’s Wave team last week.

Beyond Google making such an aggressive move to stash Chrome inside IE as a stab at Microsoft, this move demonstrates that BROWSERS determine a BIG PART of business on the Internet.  Netscape was right, just way too early.  The browser is the new OS – both in user function and business line.  All the players are pitching users on various propositions.  Do you care about security? Compatibility? Native software? Cool features?  A browser can be bought, sold, and managed just like any other piece of commercial software.  The browsers are not immune to real business.  They require real capital to build and real support to maintain.  Firefox is hanging on… but how long does it have with its main benefactors producing competitive products and forging competitive alliances?

Basically, the browser as a community project – Free Software Thing – is losing ground to browser as front door to lots of revenue.

It’s well known, and extremely frustrating, to many software vendors that whatever ships with the computer wins, and trying to get a mass of users to install a platform is a losing battle.  As a result, Google is trying very hard to make Android and Chrome OS default shipping systems, but it’s not there yet.  If Google is ever to grow as big as MSFT, it MUST own the default software on the majority of systems.

I predict that eventually Google has to ship hardware – perhaps in deep partnerships (the T-Mobile myTouch with Google is just the beginning).  It will definitely start shipping Google-branded hardware with a Google OS, Google search, and Google apps capable of doing real work and real entertainment.  Apple, PC makers, cell carriers and others will divorce Google slowly over time as Google takes more and more of their core business.

As a very interesting side note… the biggest eyeball engine ever created still doesn’t have enough advertising revenue growth to power long-term business growth.  That’s right… SELLING ACTUAL STUFF IS STILL WHERE BUSINESS LIES.  Just hawking someone else’s stuff isn’t enough… and so it goes.

Welcome, Internet, to long term business.  Reality bites.

OR

Maybe I’m completely wrong, and this helter-skelter game of strategically pushing open source and community projects can disrupt competitors enough to keep growing and further distribute the Google eyeball engine… hmmm…

Read Full Post »

Methinks the best experience will end up combining real-time search with regular web search.  Yes, it’s nice to have unfiltered, immediate information in certain situations like breaking news or emergencies.  Outside of that, synthesis is essential to keep the noise-to-signal ratio down.

I don’t so much mind the metaphor used on TechCrunch today of consciousness and memory.

Imagine having just memory or just real-time consciousness – it somehow wouldn’t be very efficient for processing information into action.  TC brings this up.  Yesterday’s Michael Jackson and celebrity death coverage, and the malware issues, showcase that without some non-real-time synthesis things get pretty messed up.

Thinking through this is not that hard.  Though you can’t use citation analysis to filter results the way PageRank does, you can do similar things to get some confidence measure on real-time results.  However, the more accurate you make that, the more processing time it takes and, thus, the less real-time it becomes.  I think some hybrid of rapid filtering with a real-time presentation of streams – carrying a big note that says UNFILTERED or UNVERIFIED – should do just fine at the top of regular web results.
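To make the hybrid concrete, here is a minimal sketch of the idea: cheaply score real-time items, tag the low-confidence ones UNVERIFIED, and stack them on top of the already-filtered web results.  Everything here is invented for illustration – the `Result` type, the `quick_confidence` heuristic, and the threshold are all assumptions, not anyone’s actual ranking system.

```python
# Hypothetical sketch of a hybrid real-time + web results presentation.
# All names and the scoring heuristic are made up for illustration.
from dataclasses import dataclass

@dataclass
class Result:
    text: str
    source_count: int  # independent sources repeating this item

def quick_confidence(result: Result) -> float:
    """Cheap stand-in for a real scoring model: more independent
    sources -> higher confidence, capped at 1.0."""
    return min(result.source_count / 5.0, 1.0)

def merge_results(realtime: list[Result], web: list[Result],
                  threshold: float = 0.6) -> list[str]:
    """Show real-time items first, tagging low-confidence ones as
    UNVERIFIED, followed by the regular (already filtered) web results."""
    merged = []
    for r in realtime:
        tag = "" if quick_confidence(r) >= threshold else "[UNVERIFIED] "
        merged.append(tag + r.text)
    merged.extend(r.text for r in web)
    return merged
```

The point of the cheap heuristic is exactly the trade-off above: a fast, rough score keeps the presentation real-time, while the expensive filtering stays on the web-results side.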

I’d use that kind of experience, for what it’s worth…

Read Full Post »

Whether it’s “valid” or not, humans (and probably most animals) associate new, unknown things with similar-seeming known things.  In fact, this is the basis of communication.

In the case of discussing new websites/services/devices like Wolfram|Alpha, Bing, Kindle, iPhone, Twitter and so on, it’s perfectly reasonable to associate them with their forebears.  Until users and society get comfortable with the new thing and have a way of usefully talking about it, making comparisons to known things is effective in forming shared knowledge.

My favorite example of this is Wikipedia and wikis.  What the heck is a wiki?  And what the heck is Wikipedia, based on this wiki?  Don’t get me wrong – I know what a wiki is. But to someone who doesn’t, hasn’t used one, and hasn’t contributed to one, it’s pretty hard to describe without giving them anchors based on stuff they do know.  “Online encyclopedia”, “like a blog but more open”…  (for fun, read how media used to talk about Wikipedia, more here)

A more recent example is Twitter.  What is it like?  A chat room? A social network? A simpler blog? IM?  Right… it’s all that and yet something different: it’s Twitter.  You know it when you use it.

Just as in nature, new forms are always evolving in technology.  Often new tech greatly resembles its ancestors.  Other times it doesn’t.

In the specific case of Wolfram|Alpha and Bing/Google… they share a common interface in the form of the browser and an HTML text field.  They share a similar foundation in trying to make information easy to access.  The twist is that Wolfram|Alpha computes over retrieved information and can actually synthesize (combine, plot, correlate) it into new information.  Search engines retrieve information and synthesize ways to navigate it.  Very different end uses, often very complementary.  Wikipedia uses humans to synthesize information into new information, so it shares some concepts with Wolfram|Alpha.  Answers.com and other answer sites are typically a mash-up of databases, and share with web search engines the concept of synthesizing ways to navigate data.

All of these are USEFUL tools and they ARE INTERCONNECTED.  None of them will replace each other.  Likely they will all co-evolve. And we will evolve our ways of talking about them.

Read Full Post »

Here is one of the best blog posts on putting Wolfram|Alpha into perspective:

Asking which result is “right” misses the point. Google is a search engine; it did exactly what it’s supposed to do. It isn’t making any assumptions about what you’re looking for, and will give you everything the cat dragged in. If you’re an elementary school teacher or a flat-earther, you can find the result you want somewhere in the big, messy pile. If you want accurate data from a known and reliable source, and you want to use that data in other computations, you don’t want Google’s answer; you want Alpha’s. (BTW, the Earth’s circumference is .1024 of the distance to the Moon.)

When is this important? Imagine we were asking a more politically charged question, like the correlation between childhood vaccinations and autism, or the number of civilians killed in the six-day war. Google will (and should) give you a wide range of answers, from every part of the spectrum. It’s up to you to figure out where the data actually came from. Alpha doesn’t yet have data about autism or six-day war casualties, and even when it does, no one should blindly assume that all data that’s “curated” is valid; but Wolfram does its homework, and when data like this is available, it will provide the source. Without knowing the source, you can’t even ask the question.

Read Full Post »

No. Not really.

This is the main reason most new search engines fail.  This is also why refreshes to existing search engines with radical features don’t work particularly well either.

It’s really a misconception that search engines can be made fundamentally better.  Common discussion suggests that one day search engines will magically find what we need, if only someone writes the perfect semantic algorithms or ranks pages better.  Other common attempts to improve search involve improving the search interface via cleaner design, more links, fewer links, categorization and so on.  It is all a big fat waste of time.

The web is messy.  It’s mostly unstructured.  Structure is buried in noise, and the noise grows very fast. Amid this ever-growing mess, the search engines do exactly what we all want them to do.  They help users source possibilities.  They take this mass of web pages, databases and media and make it navigable.  The idea of one engine to rule them all is a bit unrealistic, and probably intractable.  Just do a little thought experiment – can you imagine data that is not effectively accessed and navigated via a little search box? I can – maps.  So we have Google Earth, Google Maps and other ways of moving through that data.  Images.  Images are better navigated visually (for a variety of reasons, not least of which is the difficulty of characterizing an image in words…).  There are many other examples.

Another way to think of it… search is just a first layer of discovery.  Yes, of course, we can go deeper than a first effort in some search activities, but generally speaking it can only give you a rough cut.  Where is the boundary?  No one knows, and it changes all the time, but it is generally a thin layer compared to the depth one needs to really dig into a data set or subject matter.  This limit arises from the noise on the web, the loose structure of hyperlinks, folksonomies and presentation layers.  The limit is also a result of the difficulty of forming short statements that fit in a search box, properly characterize what one is looking for, and filter out what one doesn’t want to see.

Again, these are not problems.  Search is what it is, and it works.  Just as tables of contents, indices, bibliographies, reference librarians, bookmarks, dog-ears and Post-it notes all do what they should and do it well.  We’re never going to need or want fewer ways to navigate and take notes.  The variety is where efficiency lies.

So if you’re waiting around for a Google killer in web search, move on.  It ain’t going to happen.  There’s no big enough reason that it would.  What would that even mean? [Many great search engines exist that are at least as capable as Google…]

Sure, you might have someone who competes with Google for ad dollars, but no one is going to compete with it in indexing the web and doing your first-layer search. There is definitely room to innovate and compete with Google in delivering highly targeted, high-performance advertising.  There is definitely a way to compete for audience, as Twitter, Facebook, MySpace and others demonstrate.

Finally, the cost of indexing and mining the web will never get cheaper.  Even though hardware and bandwidth prices go down, the algorithmic methods, spam fighting, and the raw work of keeping up with the web continue to grow.  Perhaps the most important point here is that advertising budgets have nothing to do with these costs.  That is, improvements in the technology, at this point, don’t necessarily equate to growth in advertising revenue.  This is one reason why it’s probably not feasible to compete in web search, and my hypothesis is that growing search ad revenue enough to keep up with the costs is going to be almost impossible.  Add to this the idea that there are no more users for search engines to entice: everyone that uses search is already using Google or the others.  The search companies have to go outside of web search to gain audience.  At some point the existing model for search and search advertising is going to flatten (it might already be doing that).  This further destroys any motivation to innovate in pure web search.

Read Full Post »

Update 3/29/09: Danny Sullivan correctly pointed out to me that he is a publisher and an advertiser.  I’ll disagree on the idea that he is a “real user”, by which I meant “regular user”, because he is not, nor am I.  We study websites, traffic and human behavior – we notice, ignore and react to things very differently than a user just flying by to get the latest news and views.  I do agree with Danny that my argument mostly matches his… thus, I’m only calling out Clemons’ argument.

Update 3/28/09: TechCrunch keeps stirring this up.  Now Danny Sullivan replies…

The most damaging part of both of their arguments is that neither one engages Clemons’ original argument, and that original argument and rebuttal mostly fail to support his claims about the death of Internet advertising.  His conclusions don’t match actual data and experience from the perspective of an advertiser, a publisher, or a regular user.

These points are not defensible without real data:

Users don’t trust ads
Users don’t want to view ads
Users don’t need ads
Ads cannot be the sole source of funding for the internet
Ad revenue will diminish because of brutal competition brought on by an oversupply of inventory, and it will be replaced in many instances by micropayments and subscription payments for content.
There are numerous other business models that will work on the net, that will be tried, and that will succeed.

In fact, let’s consider some counter examples:

Someone sold 4 million Snuggies based on ads.  Did the people who responded to those ads not trust the ads?  Their behavior shows they trusted them enough to fork over $15 for a blanket with holes in it.  The better statement is that some users don’t trust some ads.

Users do want to view ads.  Millions of people love Super Bowl ads and actually seek them out online and on their TiVos.  Online-only ads that people do want to view include the millions of mini games they play, YouTube videos they watch, and contests they enter.  A better statement is that some users do want to view some ads, especially when the ads are engaging, useful or catchy.

Users do need ads.  Search engines and social graphs can only show you information about things that are already popular or have reached a tipping point.  They cannot show you stuff just coming out of the labs.  Users need ads to learn about new and different products and services.  And the only way to introduce people to new things is to put new things alongside already-known things.

Ads are not the sole source of funding for the internet. Anyone claiming this is what web companies think clearly has not really studied the industry or worked at a web company or a company that extensively uses the web in its business model.

Ad revenue will continue to grow in the long run.  As long as businesses need to sell more product, more ad revenue will go into the market.  The difference is that the ad spend is spread among more and more entities, so each individual business will get less ad revenue.

Many other business models already work, and more will be created.  Selling apps, selling computer time, renting server space, selling subscriptions, donor models, barters, licensing, premium access… I mean, gosh.  I don’t think we lack for business models that work.  The media is simply pointing to the high-profile failures of big media companies that haven’t figured out how to shoehorn their model into the internet way of doing things.

Once again we see that pundits rarely represent the real story.  They don’t know the price of milk. Just talking to people in the industry and summarizing the conversation is not enough to predict the end of online advertising.

See below for rest of my original response.

—————

Despite its impressive length, a recent TechCrunch guest feature on the failure of internet advertising fails to reveal what’s really hurting the ad model online.  Clemons neither states clearly what he claims is failing nor really proves that it is. Alas, I will still attempt to refute the possible implications of his claim.

It is not a particularly insightful observation that “The problem is not the medium, the problem is the message, and the fact that it is not trusted, not wanted, and not needed”. Of course people don’t like being distracted by ad messages.  That’s always been the case; that’s why marketers have to pay for ad placement.  Nothing new here.

Advertising itself is not broken, nor will it ever go away.  As long as companies have products they need to push into market, they have to advertise, regardless of the nature of the medium.  Play with the language and redefine terms all you want – advertising will always be a part of our lives and media experiences.

What’s wrong with the business models of sites that rely on advertising is the pricing, not the actual idea of advertising.  Spending in dollar terms is down in all media, certainly.  However, the amount of advertising we’re exposed to is likely still growing.  I have a long post with all sorts of data points on this topic here.  The short of it: marketers have a growing number of advertising impressions out there, everyone knows how well they perform, and thus pricing is coming way down from the relatively overpriced “older” advertising models in print, radio and TV.  This shrinking pricing model puts pressure on the business from a margin standpoint, and so the less efficient businesses fail.
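The pricing squeeze is simple arithmetic: if total ad spend is roughly flat while impression inventory keeps growing, the effective price per thousand impressions (CPM) must fall.  A toy sketch, with entirely made-up numbers:

```python
# Toy arithmetic for the pricing argument: flat spend chasing a
# growing pool of impressions implies a falling effective CPM.
# All figures below are invented for illustration.

def effective_cpm(total_spend: float, impressions: float) -> float:
    """Price per thousand impressions implied by spend / inventory."""
    return total_spend / impressions * 1000

# The same $10M of spend spread over more and more inventory:
spend = 10_000_000
for impressions in (1e9, 2e9, 4e9):
    cpm = effective_cpm(spend, impressions)
    print(f"{impressions:.0e} impressions -> ${cpm:.2f} CPM")
```

Doubling inventory at constant spend halves the effective CPM, which is the margin pressure described above: each publisher’s slice of the same pie keeps shrinking.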

Yes, I generally hate banner ads, text ads, billboards and neon signs like everyone else. Except when I don’t.  And when I don’t, that’s valuable to the company that paid for the placement, and it’s valuable to me to be notified of something I might have missed.  We’re just arguing price.

Read Full Post »
