Years ago, what we now call ‘enterprise search’ came with pretty basic functionality. Products arrived with the ability to create an index; ‘filters’ to enable indexing to work properly on non-text content like WordPerfect and WordStar files; and an application that delivered a basic search experience, so inexperienced users could find the content they wanted. A few advanced products included basic reporting, but everything was primarily focused on the indexing process, and very rarely on user query activity. Some companies – I’m thinking of organizations like Verity and Fulcrum Technologies (both former employers of mine) – actually included some limited reporting, focused primarily on user queries.
Nowadays, the technology has expanded everywhere. And with the popularity and success of the leading names in ‘internet search’ – Google, Bing, Yahoo, and others – more and more users have come to expect similar results from their corporate search capabilities, for both internal and public-facing content. The catch is that the intranet is not the internet, and search is frequently an orphan product managed by an IT team short on time and often with little experience with search. I’ve summarized this problem before by confessing that search is not ‘fire and forget’ technology.
I’ve written about the challenges of enterprise search before. To summarize: enterprise queries generally have a ‘right answer’; document-level security applies to many – if not most – documents; content comes in a variety of different file formats; and, most importantly, there is the need for someone whose job is to ensure content and search quality.
But we’ve seen a major enhancement in enterprise search lately, one that promises quantum leaps in search quality, precision, and personalization: machine learning or ML. And just to set the record straight, ML is not the same as ‘artificial intelligence’; technically, ML is an implementation of AI.
ML does not free you from managing your search platform. The “L” in ML is about learning, but sadly, the “M” is not for ‘magic’. As with us humans, machine learning happens through repetition and by observing behavior over time. And few enterprises have enough content and query volume to get anywhere near as good as Google. Even though we are well into the 21st century, we don’t have the technology of the HAL 9000 computer – at least not yet. Come back next year!
What can you do to at least start seeing some benefits from the ML technology now integrated with commercial and open-source search platforms? Regardless of what advanced technologies you use, you still have to pay attention to the basics. I’ve written about nearly all of these in the past, but they all deserve repeating. As one of my teachers in college used to say, "I ask you to put a hand over one of your ears, because what I’m about to say is too important to go in one ear and out the other." Good old Coach Gollnick.
First, make it a habit to watch your search platform. Most likely, your IT folks have tools that track the behavior of critical software – and yes, search is critical, even if you’re not using it for eCommerce. Talk to those folks and have them set up monitoring of search: if possible, have them track queries, presented results, viewed content, and, most importantly, ‘no hits’. If a user performs a search, presumably he or she expects to see content relevant to the query; if a query returns nothing, either the user is looking for content you don’t have, or your content is not tagged properly. When in doubt, ask your users to confirm what they were looking for.
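As a concrete starting point, here is a minimal sketch of the kind of per-search event record that makes this monitoring possible. The field names, the JSONL file, and the logging function are illustrative assumptions, not any particular product’s API; most commercial platforms will capture something equivalent for you.

```python
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class SearchEvent:
    query: str                      # raw query text as the user typed it
    user_department: str            # e.g. "Marketing", "Sales", "IT"
    result_count: int               # 0 here means a 'no hits' event
    presented_results: List[str] = field(default_factory=list)  # doc IDs shown
    clicked_results: List[str] = field(default_factory=list)    # doc IDs the user viewed
    timestamp: float = field(default_factory=time.time)

def log_search_event(event: SearchEvent, path: str = "search_events.jsonl") -> None:
    """Append one event as a JSON line; any real log pipeline could replace this."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Example: a query that returned nothing, the case most worth noticing.
log_search_event(SearchEvent(query="travel reimbursement policy",
                             user_department="Sales",
                             result_count=0))
```

Even a simple append-only log like this is enough to drive the reports discussed below.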
What Metrics to Track
Most search technologies now offer at least some metrics; ideally, you’d have at least these basic statistics (a small reporting sketch follows the list):
Top Queries: Simply put, the most common queries users submit. You also want to understand queries across departments; for example, Marketing will have different interests and queries than Sales or IT staff. Understanding queries gives the search team the data it needs to keep content relevant.
No Hits: When users perform a search – especially with a relevant search term – and find nothing, they become frustrated. Tracking these queries provides critical information to the teams that create and manage content, and addressing what are often misspellings or incorrect terms leads to happier end users.
Rare terms frequently seen in queries: This is a very interesting metric, since it reveals topics users are searching for but for which you may not have much content.
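Pulling these three reports out of a query log does not require much machinery. Below is a sketch under the assumption that search events are stored one JSON object per line with ‘query’ and ‘result_count’ fields, as in the hypothetical logging sketch earlier; the top-20 cutoff and the rarity threshold are arbitrary choices you would tune.

```python
import json
from collections import Counter

def load_events(path="search_events.jsonl"):
    """Read one JSON object per line; skips blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def basic_search_metrics(events, top_n=20, rare_threshold=3):
    queries = Counter(e["query"].strip().lower() for e in events)
    no_hits = Counter(e["query"].strip().lower() for e in events
                      if e.get("result_count", 0) == 0)
    # 'Rare terms': individual words that appear only a handful of times across
    # all queries, a starting point for spotting thin or missing content.
    words = Counter(w for q in queries.elements() for w in q.split())
    rare_terms = sorted(w for w, n in words.items() if n <= rare_threshold)
    return {
        "top_queries": queries.most_common(top_n),
        "top_no_hit_queries": no_hits.most_common(top_n),
        "rare_terms": rare_terms,
    }

if __name__ == "__main__":
    report = basic_search_metrics(load_events())
    for name, values in report.items():
        print(name, values[:10])
```

The point is not the code itself but the habit: run something like this on a schedule and put the output in front of the people who own the content.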
Additional Must-Do Tasks
What about topics that appear throughout your content and indices, but where you can identify and promote the ‘right answer’? Sometimes this means talking to folks in other departments who have a better understanding of the query intent. One thing we’ve seen prove worthwhile is to form an ‘Enterprise Search Team’ or a ‘Search Center of Excellence’. These groups, made up of users from across the organization and all levels of the company, from individual contributors to senior management, meet periodically – quarterly seems to be the most common frequency – to discuss user feedback on the search platform.
In addition to the above reporting, there are a few more things to track. It’s a good idea to track queries by department: we’ve often seen instances where one or two departments use search heavily, while others use it rarely. Talk to folks in those departments to understand users’ intent, and if you are missing content, address the problem. Keep an eye out for rare but potentially critical queries; believe it or not, they can be important.
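A per-department breakdown can reuse the same hypothetical event log, assuming each event records the searcher’s department; the sketch below is just one way to slice it.

```python
from collections import Counter

def queries_by_department(events):
    """Count searches per department; very low counts can be a sign that a
    department never adopted search, or has quietly given up on it."""
    return Counter(e.get("user_department", "unknown") for e in events)

# Tiny illustrative sample in lieu of a real log file.
events = [
    {"query": "travel policy", "user_department": "Sales", "result_count": 4},
    {"query": "expense report", "user_department": "Sales", "result_count": 7},
    {"query": "sprint backlog", "user_department": "IT", "result_count": 0},
]
print(queries_by_department(events))   # Counter({'Sales': 2, 'IT': 1})
```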
One extreme, but hopefully rare, case: sensitive terms. A query for ‘sexual harassment policy’ is a symptom of potential legal vulnerability, and if that query occurs, it should be escalated to HR. I’m certainly not an expert in the law, but I’ve been told that, in cases where companies should have known there were issues, there may be liability. Talk to HR (and perhaps Corporate Legal), investigate the issue, and address the problem. As a potential bonus, you may fend off lawsuits.
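To make the escalation concrete, here is a minimal, hedged sketch of flagging such queries for review; the watch-list terms and the notification hook are placeholders that HR and Legal would need to define, not a recommendation of any specific list.

```python
# Watch-list terms are placeholders; HR and Legal would own the real list.
SENSITIVE_TERMS = {"harassment", "discrimination", "retaliation"}

def needs_escalation(query: str) -> bool:
    """True if any watch-list term appears in the query text."""
    return bool(set(query.lower().split()) & SENSITIVE_TERMS)

def review_queries(events, notify=print):
    """Flag sensitive queries; in practice 'notify' would route to HR, not print."""
    for e in events:
        if needs_escalation(e["query"]):
            notify(f"Escalate for review: {e['query']!r}")
```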
Search is not ‘fire and forget’ technology, and it takes time and resources to keep user satisfaction high. The squeaky wheel may get the grease, but constant improvement will help address any dissatisfaction, and that’s always a good deal.
A Final Word
A lack of feedback from users is not always a good thing: combined with unexpectedly low query activity, it may mean your users have given up on search. Communicate with your users, gather their input, and keep communicating.