We're in the process of doing a search engine evaluation for a large customer. That, by itself, isn't news: we do these quite a bit for companies large and small. No, what makes this project particularly interesting is that we are doing side-by-side comparisons of three leading search technologies using industry-standard data sets.
Our assumption going in was that, for out-of-the-box simple searches, all three engines would return pretty much the same set of results: after all, if TF/IDF (term frequency/inverse document frequency) is at the core of these technologies, they should be producing roughly the same result sets. Much to our surprise, when we look at the top 10 search results from each engine for a simple search, we get only about 15% overlap.
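For readers who haven't worked with the weighting directly, here's a minimal sketch of classic TF/IDF scoring in Python. The function and the exact normalization are illustrative assumptions on my part; each commercial engine layers its own variants and boosts on top of the basic idea.

```python
import math
from collections import Counter

def tfidf_scores(query_terms, documents):
    """Score every document (a list of tokens) against the query
    using basic TF/IDF. A simplified sketch, not any particular
    engine's ranking formula."""
    n_docs = len(documents)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for term in query_terms:
            if df[term] == 0 or not doc:
                continue  # term absent from the corpus, or empty doc
            idf = math.log(n_docs / df[term])
            score += (tf[term] / len(doc)) * idf
        scores.append(score)
    return scores

# Toy corpus: documents sharing terms with the query score highest.
docs = [
    ["search", "engine", "evaluation"],
    ["engine", "tuning", "engine"],
    ["industry", "standard", "data", "sets"],
]
print(tfidf_scores(["engine", "evaluation"], docs))
```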
Let me explain it this way: take the top ten results for a specific query from one engine and compare them against the twenty results returned by the other two engines combined; only about three of those twenty - 15% - were also found by the first engine. Put another way, in a typical list of 10 results, only about 3 show up in more than one engine. We were especially surprised because we have gone out of our way to use default settings as much as possible: no entity extraction, no search tuning, no special synonyms or thesaurus terms.
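To make the comparison concrete, here's roughly how that kind of overlap could be measured. The helper below is a hypothetical, simplified sketch for illustration, not our actual evaluation harness.

```python
def overlap_counts(engine_results):
    """Given each engine's top-10 list of document IDs, report how
    many of each engine's results also appear in at least one of
    the other engines' lists. Hypothetical illustration only."""
    sets = {name: set(ids) for name, ids in engine_results.items()}
    counts = {}
    for name, ids in sets.items():
        # Union of every other engine's result set.
        others = set().union(*(s for n, s in sets.items() if n != name))
        counts[name] = len(ids & others)
    return counts

# Toy example: engine A shares three of its ten results with B or C.
results = {
    "A": ["d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8", "d9", "d10"],
    "B": ["d1", "d2", "d11", "d12", "d13", "d14", "d15", "d16", "d17", "d18"],
    "C": ["d3", "d21", "d22", "d23", "d24", "d25", "d26", "d27", "d28", "d29"],
}
print(overlap_counts(results))  # {'A': 3, 'B': 2, 'C': 1}
```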
It's still too early in the process to understand what's behind this surprising result: the numbers may be too tentative to support any judgments yet, or we may still find an error in our methodology. We're working on it, and we'll get back with any findings we can share. If you have any explanations, leave a comment - we'd love to hear what you think.
/s/Miles