32 posts categorized "GSA"

June 28, 2017

Poor data quality gives search a bad rap

If you’re involved in managing the enterprise search instance at your company, there’s a good chance you’ve heard at least some users complain about the poor results they see.

The common lament search teams hear is “Why didn’t we use Google?” Yet we’ve seen the same complaints at sites that implemented the GSA but don’t use the Google logo and look.

We're often asked to come in and recommend a solution. Sometimes the problem is simply the wrong search platform: not every platform handles every use case and requirement equally well. Occasionally, the problem is a poorly configured search, or simply an instance that hasn’t been managed properly. Even the renowned Google public search engine doesn’t happen by itself - though it has become a poor comparison: in recent years, Google search has become less of a search platform and more of a big data analytics engine.

Over the years, we’ve helped clients select, implement, and manage intranet search. In my opinion, the real problem with search lies elsewhere: poor data quality.

Enterprise data isn’t created with search in mind. There is little incentive for content authors to attach quality metadata in the properties fields of Adobe PDF Maker, Microsoft Office, and other document publishing tools. To make matters worse, there may be several versions of a given document as it goes through creation, editing, review, and updates; and often the early drafts, as well as the final version, sit in the same directory or file share. Public-facing web site content rarely has such issues.
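To see how widespread the problem is, you can audit your own repositories. Here's a minimal sketch, assuming .docx files (which are ZIP packages containing a `docProps/core.xml` part); the function name and the choice of which properties to check are mine, purely for illustration:

```python
# Sketch: flag Office documents whose core properties lack a title or author.
# Assumes .docx input (a ZIP archive with a docProps/core.xml part).
import zipfile
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace used in core.xml

def missing_core_properties(docx_path):
    """Return the list of core properties ('title', 'creator') that are empty."""
    missing = []
    with zipfile.ZipFile(docx_path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))
        for prop in ("title", "creator"):
            node = root.find(DC + prop)
            if node is None or not (node.text or "").strip():
                missing.append(prop)
    return missing
```

Run something like this against a file share and see how many documents are missing the very metadata a search platform would use for ranking and display.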

Some content management systems make it easy to implement what is really ‘search engine optimization’, or SEO; but all too often that optimization is simply left for the enterprise search platform to work out.

We have an updated two-part series on data quality and search, starting here. We hope you find it helpful; let us know if you have any questions!

November 16, 2016

What features do your search users really want?

What features and capabilities do corporate end-users need from their search platform? Here's a radical concept: ask stakeholders what they want - and what they need - and make a list. No surprise: you'll have too much to do.

Try this: meet with stakeholders from each functional area of the organization. During each interview, ask people to tell you what internet search sites they use for personal browsing, and what capabilities of those sites they like best. As they name the desired features, write them on a white board.

Repeat this with representatives from every department, whether marketing, IT, support, documentation, sales, finance, shipping or others - really every group that will use the platform for a substantial part of their days. 

Once you have the list, ask for a little more help. Tell your users they each have $100 in "Dev Dollars" to invest in new features, and ask them to allocate whatever portion they want to each feature - but all they have is 100 DD.

Now the dynamics get interesting. The really important features get the big bucks; the outliers get a pittance, if anything. Typically, the top two or three features each draw between 40 DD and 50 DD, and it trails off quickly from there.

I know - it sounds odd. These Dev Dollars have no true value - but people give a great deal of thought to assigning relative value to a list of capabilities - and it gives you a feature list with real priorities.
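If it helps to make the tally concrete, here's a minimal sketch of the bookkeeping; the feature names and allocations below are invented for illustration:

```python
# A minimal sketch of the "Dev Dollars" tally: each stakeholder allocates
# up to 100 DD across features, and we rank features by total investment.
from collections import Counter

def rank_features(ballots):
    """Sum each stakeholder's allocations and return features, highest first."""
    totals = Counter()
    for ballot in ballots:
        assert sum(ballot.values()) <= 100, "each stakeholder has only 100 DD"
        totals.update(ballot)
    return totals.most_common()

ballots = [
    {"spelling suggestions": 50, "faceted results": 30, "people search": 20},
    {"faceted results": 60, "spelling suggestions": 25, "saved searches": 15},
]
print(rank_features(ballots))  # 'faceted results' (90 DD) comes out on top
```

The ranked list you get back is exactly the prioritized feature list described above.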

How do you discover what users really want?

January 20, 2015

Your enterprise search is like your teenager

During a seminar a while back, I made this spontaneous claim. Recently, I made the comment again, and decided to back up my claim - which I’ll do here.

No, really – it’s true. Consider:

You can give your search platform detailed instructions, but it may or may not do things the way you meant:

Modern search platforms provide a console where you, as the person responsible for search, can enter all of the information needed to index content and serve up results. You tell it which repositories to index, what security applies to each, and how you want the results to look. But did it do what you asked? Does it give you a full report of what it did, what it was unable to do, and why?

You really have no idea what it’s doing – especially on weekends:

Search platforms are notorious for the lack of operational information they provide.

Does your platform give you a useful report of which content was indexed successfully and which was not - and why? Some platforms stop indexing files when they reach a certain size: do you know which content was not completely indexed?

When it does tell you, sometimes the information is incomplete: 

Your crawler tells you there were a bunch of ‘404’ errors because of a bad or missing URL; but will it tell you which page(s) had the bad link? Chances are it does not. 

They can be moody, and malfunction without any notice:

You schedule a full update of your index every weekend, and it has always worked flawlessly - as far as you know. Then, usually on a 3-day weekend, it fails. Why? See above.

When you talk to others who have search, theirs always sounds much better than yours:

As a conscientious search manager, you read about search, you attend webinars and conferences, and you always want to learn more. But you wonder why other search managers seem to describe their platforms in glowing terms, and never seem to have any of the behavioral issues you live with every day. It kind of makes you wonder what you’re doing wrong with yours.

It costs more to maintain than you thought and it always needs updates:

When you first got the platform you knew there were ongoing expenses you’d have to budget - support, training, updates, consulting. But just like your kid who needs books, a computer, soccer coaching, and tuition, it’s always more than you budgeted. Sometimes way more!

You can buy insurance, but it never seems to cover what you really need:

Bear with me here: you get insurance for your kids in case they get sick or cause an accident, and you buy support and maintenance for your search platform.  But in the same way that you end up surprised that orthodontics are not fully covered, you may find out that help tuning the search platform, or making it work better, isn’t covered by the plan you purchased – in fact, it wasn’t even offered. QED.

It speaks a different vocabulary:

You want to talk with your kid and understand what’s going on; you certainly don’t want to look uncool. But like your kid, your search platform has a vocabulary that only barely makes sense to you. You know rows and columns, and thought you understood ‘fields’; but the search platform uses familiar words whose definitions don’t match what you’ve known from databases or CMS systems.

It's hard for one person to manage, especially when it's new:

Many surveys show that most companies have one (or fewer) full-time staff responsible for running the search engine - while the same companies claim search is ‘critical’ to their mission. Search is hard to run, especially in the first few years when everything needs attention. You can always get outside help - not unlike day care and babysitters - but it would be so much better to have a team helping you manage and maintain search so it behaves.

How it behaves reflects on you:

You’re the search manager and you’ve got the job to make search work “just like Google”.  You spent more than $250K to get this search engine, and the fact that it just doesn’t work well reflects badly on you and your career. You may be worried about a divorce.

It doesn’t behave like the last one:

People tend to be nostalgic, as are many search managers I know. They learned how to take care of the previous one, but this new one - well, it’s NOTHING like the earlier one. You need to learn its habits and behaviors, and often adjust your own behavior to ensure peace at work.

You know if it messes up badly late at night, even on a weekend or a holiday, you’ll hear about it:

If customers or employees around the world use your search platform, there is no ‘down time’: when it’s having an issue, you’ll hear about it, and will be expected to solve the issue – NOW. You may even have IT staff monitoring the platform; but when it breaks in some odd and unanticipated way, you get the call. (And when does search ever fail in an expected way?)

You may be legally responsible if it messes up:

Depending on what your search application is used for, you may find yourself legally responsible for a problem. Fortunately, the chances of you personally being at fault are slim, but if your company takes a hit for a problem that you hadn’t anticipated, you may have some ‘career risk’ of your own. Was secure content about the upcoming merger accidentally made public? Was content meant to be served only to your Swiss employees searching from Switzerland exposed outside the country? And you can’t even buy liability insurance for that kind of error.

When it’s good, you rarely hear about it; when it's bad, you’ll hear about it:

Seriously, how many of you have gotten a call from your CIO to tell you what a great experience he or she had on the new search platform? Do people want to take you to lunch because search works so well? If you answered ‘yes’ to either of these, I’d like to hear from you!

In my experience, people only go out of their way to give feedback on search when it’s not working well. It’s not “like Google” - even though Google has hundreds of people and ‘bots’ examining every search query to try to make the results better, and you have only yourself and an IT guy.

You’ll hear. 

The work of managing it is never done:

The wonderful southern writer Ferrol Sams wrote: “He's a good boy… I just can't think of enough things to tell him not to do.” Sound like your search platform? It will misbehave (or fail outright) in ways you never considered, and your search vendor will tell you “We’ve never seen a problem like that before.” Who has to get it fixed? You have to ask?

Once it moves away, you sometimes feel nostalgic:

Either you toss it out, or a major upgrade from your vendor comes along and the old search platform gets replaced. Soon, you’re wishing for the “good old days” when you knew how cute and quirky the old one was, and you find yourself feeling nostalgic, wishing it didn’t have to move out.

Do you agree with my premise? What have I missed?

August 21, 2014

More on the Gartner MQ: Fact or fiction?

There is a lively discussion going on in the LinkedIn ‘Enterprise Search Engine Professionals’ group about the recent Gartner Magic Quadrant report on Enterprise Search. Whit Andrews, a Gartner Research VP, has replied that the Gartner MQ is not ‘pay to play’. I confess to being the one who brought the topic up in those threads, and I certainly thank Whit for clarifying the misunderstanding directly.

That said, two of my colleagues who are true search experts have raised some questions I thought should be addressed.

Charlie Hull of UK-based Flax says he's “unconvinced of the value of the MQ to anyone wanting a comprehensive … view of the options available in the search market”. And Otis Gospodnetić of New York-based Sematext asks “why (would) anyone bother with Gartner's reports. We all know they don't necessarily match the reality”. I want to try to address those two very good points.

First, I'm not sure Gartner claims to offer a comprehensive overview of the search market. Perhaps there are more thorough lists: my friends and colleagues Avi Rappoport and Steve Arnold both have more complete coverage. Avi, now at Search Technologies, still maintains www.searchtools.com with a list that is as much a history of search as a list of vendors. And Steve Arnold has a great deal of free content on his site, as well as high-quality technology overviews by subscription. Find links to both at arnoldit.com.

Nonetheless, Gartner does have published criteria, and being a paid subscriber is not one of them. Whit's fellow Gartner analyst French Caldwell calls that out on his blog. By the way, I have first-hand experience that Gartner is willing to cut some slack to companies that don't quite meet all of their guidelines for inclusion, and I think that adds credence to the claim that the MQ is not ‘pay to play’.

A more interesting question is one that Otis raises: “why would anyone bother with Gartner's reports”?

To answer that, let me paraphrase a well-known quote from the early days of computers: "No one ever got fired for following Gartner's advice". They are well known for having good if not perfect advice - and I'd suspect that in the fine print, Gartner even acknowledges the fallibility of their recommendations. And all of us know that in real life, you can't select software as complex as an enterprise search platform without a proof of concept in your environment and on your content.

The industry is full of examples where the *best* technology loses pretty consistently to ‘pretty good’ stuff backed by a major firm/analyst/expert. Otis, I know you're an expert, and I'd take what you say as gospel. A VP at a big corporation who is not familiar with search (or his company's detailed search requirements) may not do so. And anyone on that VP's staff who picks a platform based solely on what someone like you or me says probably faces some amount of career risk. That said, I think I speak for Otis and Charlie and others when I say I am glad that a number of folks have listened to our advice and are still fully employed!

So - in summary, I think we're all right. Whit Andrews and Gartner provide advice that large organizations trust because of the overall methodology of their evaluation. Everyone knows it's not infallible, so a smart company will use the ‘trust but verify’ approach. And they continue to trust you and me, but more so when Gartner or Forrester or one of the large national consulting firms confirms our recommendation. And if not, we have to provide a compelling reason why something else is better for them. The longer we're successful with our clients, the more credible we become.

December 18, 2012

Last call for submitting papers to ESS NY

This Friday, December 21, is the last day to submit papers and workshops for ESS in NY, May 21-22. See the information site at the Enterprise Search Summit Call for Speakers page.

If you work with enterprise search technologies (or supporting technologies), chances are the things you've learned would be valuable to others. If you have an in-depth topic, write it up as a 3-hour workshop; if you have a success story or lessons learned you can share, submit a talk for a 30-45 minute session.

I have to say, this conference has enjoyed a multi-year run in terms of quality of talks and excellent spring weather. See you in May?

August 21, 2012

Mind the gap

A few weeks ago, a former client asked me about the 'lay of the land' in enterprise search - which companies were the ones to consider for evaluation. It's something I'm frequently asked, and one big reason I strive to stay current with all of the leading commercial and open source vendors in the market.

As I pulled together the list, it occurred to me that recent consolidation has led to an odd situation: there is no longer a 'mid-market' in enterprise search.

Under $25,000(US), there are a number of options from free and low-cost open source (SearchBlox and my employer LucidWorks come to mind). 

Google has discontinued its low-cost (blue) search appliance and raised the cost of its regular (yellow) one to, apparently, well above $25K.

We also have the old-school major commercial vendors - FAST (now Microsoft SharePoint Search), Autonomy (now HP), Endeca (now Oracle), and finally Vivisimo (now IBM). Trend or not, these enterprise search products command a high initial outlay, often significant implementation costs, and high ongoing 'support' fees once you've rolled them out. It looks like the mid-market is gone.

So now the question is: what do you get for the difference in price? I'd suggest not much in the way of capability; nothing in terms of scalability; and very, very little in the way of flexibility. I guess it's 'caveat emptor' - buyer beware!

What about some products/projects I haven't mentioned? Well, the focus of my article here is enterprise search. Great candidates like Coveo are Windows-only, which disqualifies them from my list. I suppose you could consider the GSA as not enterprise-ready, but I think appliances make the OS issue irrelevant. I've also omitted projects that have not yet shipped a 'Version 1.0' release - that's testware, no matter who it's from. And I'm sure there are open source projects where a single person is making all the calls - I don't consider that enterprise-ready either.

I’ll be looking for the day when the big guys start value pricing their software licenses and help bring the market into line with today’s reality.

If you think I've unfairly represented the market, let me know - I'm not shy about posting comments that differ from my viewpoint.

s/Miles

January 04, 2012

My search platform ate my homework

In a recent article on infoworld.com, Peter Wayner wrote a nifty piece discussing 11 programming trends to watch. It's interesting in general, but one trend really rang true for me with respect to enterprise search.

He calls his 9th trend "Accuracy fades as scalability trumps all". He points out that most applications are fine with close approximations, based mainly on the assumption that at internet scale, if we miss an instance of something today, we'll probably see it again tomorrow. That brought to mind something I'm working on right now for a customer who needs 100% confidence in their search platform to meet some very stringent requirements. The InfoWorld article reminded me of a dirty little secret of nearly all enterprise search platforms - a secret you may not know (yet), but which could be important to you.

Search platform developers make assumptions about your data, and most search platforms do not index all of your content... by design! Don't get me wrong: these assumptions let them produce pretty good accuracy every time; and even 100% accuracy sometimes. And pretty good is fine most of the time. In fact, as a friend told me years ago, sometimes 'marginally acceptable' is just fine.

The theory seems to be that a search index might miss a particular term in a few documents, but any really important use of the term will surely be indexed somewhere else, and our users will get results from those other documents. In fact, some search platforms have picked an arbitrary size limit and won't index any content past it, even if that means missing major sections of large documents. Google is one of the few who actually document this: once the GSA has indexed 2 MB of text or 2.5 MB of HTML in a file, it stops indexing that file and 'discards' the rest. This curious behavior works most of the time for most data (although there is an odd twist that will bite you if you feed the GSA a large list of URLs or ODBC records). To be honest, most search platforms do this sort of trimming as well; they just don't mention it too often during the sales process.
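If you're worried about this, a quick inventory pass can at least tell you which documents are at risk of truncation. A minimal sketch, assuming a 2 MB cap like the GSA's documented text limit; your platform's threshold, if it has one, may well differ:

```python
# Sketch: flag files a search platform may silently truncate at indexing time.
# The 2 MB cap here is illustrative; substitute your platform's documented limit.
import os

TEXT_CAP_BYTES = 2 * 1024 * 1024  # 2 MB, mirroring the GSA text limit

def files_at_risk(paths, cap=TEXT_CAP_BYTES):
    """Return (path, size) pairs for files larger than the indexer's cap."""
    return [(p, os.path.getsize(p)) for p in paths if os.path.getsize(p) > cap]
```

Anything this flags is a candidate for splitting, summarizing, or at least a note in your search documentation.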

Now, in legal markets like eDiscovery, it's pretty darned critical to find every document that contains a particular term. It's not OK to go to court and report that you missed one or more critical documents because your search engine truncates or ignores some terms or some documents. That excuse might have worked in elementary school or even in high school, but it just doesn't cut it in demanding enterprise search environments.

It may not be a problem for you; just be sure that, if complete indexing is a requirement, you include it in your RFI/RFQ documents.

November 08, 2011

Are you spending too much on enterprise search?

If your organization uses enterprise search, or if you are in the market for a new search platform, you may want to attend our webinar next week "Are you spending too much for search?". The one hour session will address:

  • What do users expect?
  • Why not just use Google?
  • How much search do you need?
  • Is an RFI a waste of time?   

Date: Wednesday, November 16 2011

Time: 11AM Pacific Standard Time / 1900 UTC

Register today!

August 22, 2011

Searching for Sarah at SharePoint Conference 2011

Just noticed one of the most interesting sessions at last May's Enterprise Search Summit is coming to the October Microsoft SharePoint Conference! We blogged about it back in May.

Basically, Booz & Company did an evaluation of SharePoint 2010 search - FAST Search for SharePoint, as I recall - versus the Google Search Appliance they had been using. At one point, the search business owner was trying to find the last name of a woman she had met in the firm; when she searched for 'Sarah', hoping to find her in the directory, the GSA returned 60 men in the result list. Can you guess why? A hint: metadata (check the earlier article, or come to SPC 2011 to find out).

Now, in fact, we think the GSA could have been tuned to emulate SharePoint's out-of-the-box behavior; but this is a reminder that not every search platform works great in every environment. Buyer beware!

Ever had a similar experience? Let us know about it!

May 31, 2011

It's not Google unless it says it's Google

A few years back, one of our customers told us that if he could just license the 'Powered by Google' icon, he was sure most of his users would stop complaining. Not long after that, we heard that our friend Andy Feit, then VP of search at Verity, hired a marketing research team to compare the perceived quality of two search engines, one labeled "Powered by Verity" and the other "Powered by Google". Andy found that people thought the Google results were significantly better - even though both test cases were, in fact, powered by Verity. The mere presence of the Google icon seemed to make people think the results were better.

At the recent ESS, a woman from Booz & Company talked about their previous enterprise search experience involving Google. A few years back, Booz used FAST ESP on SharePoint 2003, and it simply sucked. Users asked for Google by name. When they upgraded to SharePoint 2007, Booz gave the users what they wanted: they went with a Google Search Appliance. The trouble was that they built a custom interface with a generic search button. Users' response? "Search still sucks - why don't we just use Google?" - even as they were using Google!

This can teach us a number of lessons:

1. Analyze what you need search to do for you before you buy it.

2. Understand how your content and search platform play together.

3. It ain't Google unless you tell your users it's Google.

By the way: in 2010 Booz rolled out FAST Search for SharePoint, and it seems that the results are a bit better now that they understand their search requirements and the nature of their content and metadata.