
Posts Tagged ‘search’

I got into a discussion this week with a fellow technology-focused person about whether CREATION is a thing. That is, do we actually create anything? My position is that there is no creative act, no source of creativity. There is search and selection by consequences. Everything in existence, and in particular our lives from our genetic code to our behavior to the software we write, is a search through the space of possibilities and a relentless selection by consequences that preserves the possibilities that survive further consequences. The creative act is a notion that’s perhaps useful for communicating unfamiliar or rarely occurring possibilities, but it’s not a fundamental thing in itself.

Why do I care about this argument? Assuming CREATION is a thing leads to all sorts of false notions that someone or something is responsible, or should get credit, for what happens in the universe. Played out practically, this is the source of much of our inequality and ensures we don’t actually explore possibilities without bias. Exalting the “creative” and the “creators” blinds us to the infinitude of possibilities, most lurking right under our very confused senses and conditioned context.

Read Full Post »

First, we will bring ourselves to computers. The small- and large-scale convenience and efficiency of storing more and more parts of our lives online will increase the hold that formal ontologies have on us. They will be constructed by governments, by corporations, and by us in unequal measure, and there will be both implicit and explicit battles over how these ontologies are managed. The fight over how test scores should be used to measure student and teacher performance is nothing compared to what we will see once every aspect of our lives from health to artistic effort to personal relationships is formalized and quantified.

 

[…]

There is good news and bad news. The good news is that, because computers cannot and will not “understand” us the way we understand each other, they will not be able to take over the world and enslave us (at least not for a while). The bad news is that, because computers cannot come to us and meet us in our world, we must continue to adjust our world and bring ourselves to them. We will define and regiment our lives, including our social lives and our perceptions of our selves, in ways that are conducive to what a computer can “understand.” Their dumbness will become ours.

 

from: David Auerbach, N+1.  read it all.   

 

I love this piece.  Brilliant synthesis.  Hard to prove… just have to watch it all unfold.

Read Full Post »

Here is one of the best blog posts on putting Wolfram|Alpha into perspective:

Asking which result is “right” misses the point. Google is a search engine; it did exactly what it’s supposed to do. It isn’t making any assumptions about what you’re looking for, and will give you everything the cat dragged in. If you’re an elementary school teacher or a flat-earther, you can find the result you want somewhere in the big, messy pile. If you want accurate data from a known and reliable source, and you want to use that data in other computations, you don’t want Google’s answer; you want Alpha’s. (BTW, the Earth’s circumference is .1024 of the distance to the Moon.)

When is this important? Imagine we were asking a more politically charged question, like the correlation between childhood vaccinations and autism, or the number of civilians killed in the six-day war. Google will (and should) give you a wide range of answers, from every part of the spectrum. It’s up to you to figure out where the data actually came from. Alpha doesn’t yet have data about autism or six-day war casualties, and even when it does, no one should blindly assume that all data that’s “curated” is valid; but Wolfram does its homework, and when data like this is available, it will provide the source. Without knowing the source, you can’t even ask the question.
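As an aside, the ratio quoted above is easy to sanity-check. Here is a minimal sketch using rough mean constants; with these figures the result comes out near 0.104 rather than .1024, which presumably reflects Alpha using the Moon’s distance at the time of the query (the Earth–Moon distance varies by roughly ten percent over an orbit):

```python
# Sanity check: Earth's circumference as a fraction of the Earth-Moon distance.
# Both constants below are approximate mean values, not Alpha's live data.
EARTH_CIRCUMFERENCE_KM = 40_075        # equatorial circumference, approximate
MEAN_EARTH_MOON_DISTANCE_KM = 384_400  # mean center-to-center distance

ratio = EARTH_CIRCUMFERENCE_KM / MEAN_EARTH_MOON_DISTANCE_KM
print(f"{ratio:.4f}")  # prints 0.1043
```

The point stands either way: a computational engine gives you one number from a stated source, while a search engine gives you the pile.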

Read Full Post »

No. Not really.

This is the main reason most new search engines fail.  It is also why radical-feature refreshes to existing search engines don’t work particularly well.

It’s really a misconception that search engines can be made dramatically better.  Common discussion suggests that one day search engines will magically find what we need, if only someone would write the perfect semantic algorithms or rank pages better.  Other common attempts to improve search involve the interface: cleaner design, more links, fewer links, categorization and so on.  It is all a big fat waste of time.

The web is messy.  It’s mostly unstructured.  Structure is buried in noise, and the noise grows very fast.  Given this ever-growing mess, the search engines do exactly what we all want them to do.  They help users source possibilities.  They take this mass of web pages, databases and media and make it navigable.  The idea of one engine to rule them all is a bit unrealistic, and probably intractable.  Just do a little thought experiment – can you imagine data that is not effectively accessed and navigated via a little search box?  I can – Maps.  So we have Google Earth, Maps and other ways of moving through that data.  Images, too.  Images are better navigated visually (for a variety of reasons, not least of which is the difficulty of characterizing an image in words…).  There are many other examples.

Another way to think of it: search is just a first layer of discovery.  Yes, of course, we can go deeper than a first effort in some search activities, but generally speaking it can only give you a rough cut.  Where is the boundary on that?  No one knows, and it changes all the time, but it is generally a thin layer compared to the depth one needs to reach to really dig into a data set or subject matter.  This limit arises from the noise on the web, the loose structure of hyperlinks, folksonomies and presentation layers.  The limit is also a result of the difficulty of forming short statements that fit in a search box, properly characterize what one is looking for, and filter out what one doesn’t want to see.

Again, these are not problems.  Search is what it is, and it works.  Just as tables of contents, indices, bibliographies, reference librarians, bookmarks, dog ears and post-it notes all do what they should and do it well.  We’re never going to need or want fewer ways to navigate and take notes.  The variety is where the efficiency lies.

So if you’re waiting around for a Google killer in web search, move on.  It ain’t going to happen.  There’s no reason big enough that it would.  What would that even mean? [Many great search engines exist that are at least as capable as Google…]

Sure, you might have someone who competes with Google for ad dollars, but no one is going to compete with it in indexing the web and doing your first-layer search.  There is definitely room to innovate and compete with Google in delivering highly targeted, high-performance advertising.  There is definitely a way to compete for audience, as Twitter, Facebook, MySpace and others demonstrate.

Finally, the cost of indexing and mining the web is never going to get cheaper.  Even though hardware and bandwidth prices go down, the algorithmic methods, the spam fighting, and the raw “keeping up with the web” continue to grow.  Perhaps the most important point here is that advertising budgets have nothing to do with these costs.  That is, improvements in the technology, at this point, don’t necessarily equate to growth in advertising revenue.  This is one reason why it’s probably not feasible to compete in web search, and my hypothesis is that growing search ad revenue fast enough to keep up with the costs is going to be almost impossible.  Add to this the fact that there are almost no new users left for search engines to entice.  Everyone who uses search is already using Google or the others.  The search companies have to go outside of web search to gain audience.  At some point the existing model for search and search advertising is going to flatten (it might already be doing that).  This further destroys any motivation to innovate in pure web search.

Read Full Post »