No. Not really.
This is the main reason most new search engines fail. It’s also why refreshes of existing search engines with radical features don’t work particularly well either.
It’s a misconception that search engines can be made meaningfully better. Common discussion suggests that one day search engines will magically find what we need, if only someone would write the perfect semantic algorithms or rank pages better. Other common attempts to improve search focus on the interface: cleaner design, more links, fewer links, categorization and so on. It is all a big fat waste of time.
The web is messy. It’s mostly unstructured. Structure is buried in noise, and the noise grows very fast. With this ever-growing mess, the search engines do exactly what we all want them to do: they help users source possibilities. They take this mass of web pages, databases and media and make it navigable. The idea of one engine to rule them all is a bit unrealistic, and probably intractable. Just do a little thought experiment – can you imagine data that is not effectively accessed and navigated via a little search box? I can – maps. So we have Google Earth, Google Maps and other ways of moving through that data. Images, too. Images are better navigated visually (for a variety of reasons, not least of which is the difficulty of characterizing an image in words). There are many other examples.
Another way to think of it: search is just a first layer of discovery. Yes, of course, we can go deeper than a first effort in some search activities, but generally speaking it can only give you a rough cut. Where is the boundary? No one knows, and it changes all the time, but it is generally a thin layer compared to the depth one needs to really dig into a data set or subject matter. This limit arises from the noise on the web and the loose structure of hyperlinks, folksonomies and presentation layers. It is also a result of how hard it is to form a short statement that fits in a search box, properly characterizes what one is looking for, and filters out what one doesn’t want to see.
Again, these are not problems. Search is what it is, and it works. Just as tables of contents, indices, bibliographies, reference librarians, bookmarks, dog-ears and Post-it notes all do what they should and do it well. We’re never going to need or want fewer ways to navigate and take notes. The variety is where the efficiency lies.
So if you’re waiting around for a Google killer in web search, move on. It ain’t going to happen. There’s no big enough reason that it would. What would that mean anyway? [Many great search engines already exist that are at least as capable as Google…]
Sure, you might have someone that competes with Google for ad dollars, but no one is going to compete with it in indexing the web and doing your first-layer search. There is definitely room to innovate and compete with Google in delivering highly targeted, high-performance advertising. There is definitely a way to compete for audience, as Twitter, Facebook, MySpace and others demonstrate.
Finally, the cost of indexing and mining the web is never going to come down. Even though hardware and bandwidth prices drop, the algorithmic methods, the spam fighting, and the raw “keeping up with the web” continue to grow. Perhaps the most important point here is that advertising budgets have nothing to do with these costs. That is, improvements in the technology, at this point, don’t necessarily equate to growth in advertising revenue. This is one reason it’s probably not feasible to compete in web search, and my hypothesis is that growing search ad revenue fast enough to keep up with the costs is going to be almost impossible.

Add to this the fact that there are no more users for search engines to entice. Everyone who uses search is already using Google or the others, so the search companies have to go outside of web search to gain audience. At some point the existing model for search and search advertising is going to flatten (it might already be doing that). This further destroys any motivation to innovate in pure web search.