5 posts categorized "Enterprise Search"

August 09, 2018

Fake Search?

Enterprise search was once simple. It was often bad - but understanding the results was easy. If the query term(s) appeared in a document, that document showed up in the results. Period. The more times the terms appeared, the higher the document ranked. And when the user typed a multi-term query, documents containing all of the terms ranked higher in the result list than those with only some of them.

And some search platforms could 'explain' why a particular document was ranked where it was. Those of us who have been in the business a while may remember the Verity Topic and Verity K2 product lines. One of their most advanced capabilities was the 'explain' function: it would reverse engineer the score for an individual document and report exactly why that document was ranked as it was.
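For illustration only - this is a toy sketch, not Verity's actual algorithm - here's what that classic term-frequency ranking looks like, with a minimal 'explain' in the same spirit:

```python
from collections import Counter

def score(query_terms, doc_text):
    """Classic term-frequency scoring: more occurrences rank higher,
    and documents matching all query terms outrank partial matches."""
    words = Counter(doc_text.lower().split())
    hits = {t: words[t] for t in query_terms if words[t] > 0}
    # Rank first by how many distinct terms matched, then by total frequency.
    return (len(hits), sum(hits.values()), hits)

def explain(query_terms, doc_text):
    """Report the 'why' behind a document's rank, Verity-style."""
    matched, total, hits = score(query_terms, doc_text)
    lines = [f"matched {matched}/{len(query_terms)} query terms, total frequency {total}"]
    for term, freq in hits.items():
        lines.append(f"  '{term}' occurred {freq} time(s)")
    return "\n".join(lines)

docs = {
    "a": "summer clothes and more summer clothes",
    "b": "clothes for winter",
}
ranked = sorted(docs, key=lambda d: score(["summer", "clothes"], docs[d]),
                reverse=True)
print(ranked)  # doc "a" matches both terms, and more often
print(explain(["summer", "clothes"], docs["a"]))
```

Modern relevance scoring (TF-IDF, BM25, learned ranking) is far more sophisticated, but the transparency of the old model is exactly what this toy preserves.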

"People Like You"

Now, search results are generally better, but every now and then, you’ll see a result that surprises you, especially on sites that are enhanced with machine learning technologies. A friend of mine tells of a query she did on Google while she was looking for summer clothes, but the top result was a pair of shoes. She related her surprise: "I asked for summer clothes, and Google shows me SHOES?" But, she admitted, "Those shoes ARE pretty nice!"

How did that happen? Somewhere, deep in the data Google maintains on her search history, it concluded that "people like her" purchased that pair of shoes.

In the enterprise, we don't have the volume of content and query activity of the large Internet players, but we do tend to have more focused content and a narrower query vocabulary. ML/AI libraries like Spark's MLlib (or Apache Mahout) can help our search platforms generate such odd yet often relevant results; but these technologies are still limited when it comes to explaining the 'why' behind a given result. And for those of us who are still a bit skeptical of computers, that capability would be nice.

Are you using (or planning to implement) ML in search? A skeptic? Whichever camp you're in, let me hear from you! miles.kehoe@ideaeng.com.

May 03, 2018

Lucidworks expands focus in new funding round

Lucidworks, the commercial organization with the largest pool of Solr committers, announced today a new funding round of $50M US from venture firms Top Tier Capital Partners and Silver Lake Waterman, as well as additional participation from existing investors Shasta Ventures, Granite Ventures, and Allegis Capital.

While a big funding round for a privately held company isn't uncommon here in 'the valley', what really caught my attention is where and how Lucidworks will use the new capital. Will Hayes, Lucidworks' CEO, intends to focus the investment on what he calls "smart data experiences" that go beyond artificial intelligence and machine learning alone. The challenge is to provide useful and relevant results by addressing what he calls "the last mile" problem in current AI: enabling mere mortals to find useful insights in search without having to understand the black art of data science and big data analysis. The end goal is to drive better customer experiences and improved employee productivity.

A number of well-known companies already use Lucidworks Fusion, many alongside AI and ML tools and technologies. I've long thought that to take advantage of 'big data' the way Google, Amazon, and others do, you needed huge numbers of users and queries to confidently provide meaningful suggestions in search results. While that helps, Hayes explained that smaller organizations will be able to benefit from the technology in Fusion because their data sets are smaller and more focused, even with a smaller pool of queries. With that combination, Lucidworks expects to deliver many of the benefits of traditional machine learning and AI-like results to enterprise-sized content. It will be interesting to see what Lucidworks does in the next several releases of Fusion!

April 23, 2018

Poor Data Quality gives Enterprise Search a Bad Rap

If you’re involved in managing the enterprise search instance at your company, there’s a good chance that you’ve experienced at least some users complaining about the poor results they see. A common lament search teams hear is “Why didn’t we use Google?” Even more telling is that many organizations that used the Google Search Appliance on their sites heard the same lament.

We're often asked to help a client improve results on an internal search platform; and sometimes, the problem is the platform. Not every platform handles every use case equally well, and sometimes that shows. Occasionally, the problem is a poorly chosen or misconfigured search, or simply an instance that hasn’t been managed properly. But the renowned Google public search engine does well not because it is a great search platform; in fact, Google has become less of a search platform and more of a big data analytics engine.

Our business is helping clients select, implement, and manage Intranet search. Frequently, the problem is not the search platform. Rather, the culprit is poor data quality. 

Enterprise data isn’t created with search in mind. There is little incentive for authors to attach quality metadata in the properties fields supported by Adobe PDF Maker, Microsoft Office, and other document publishing tools. To make matters worse, there may be several versions of a given document as it goes through creation, editing, and updating; and often the early drafts, as well as the final version, sit in the same directory or file share. A public-facing website rarely has such issues.
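As a quick illustration (the file names and the suffix pattern are hypothetical), a few lines of Python can flag that draft clutter in a file share before it ever reaches the index:

```python
import re
from collections import defaultdict

# Hypothetical listing from a shared drive.
files = [
    "Q3_report_draft1.docx", "Q3_report_draft2.docx", "Q3_report_final.docx",
    "onboarding_guide.docx",
]

def base_name(fname):
    """Strip common draft/version suffixes so variants group together."""
    stem = fname.rsplit(".", 1)[0].lower()
    return re.sub(r"[_\- ]?(draft|final|v)\d*$", "", stem)

groups = defaultdict(list)
for f in files:
    groups[base_name(f)].append(f)

# Any group with more than one file is a candidate pile of drafts.
duplicates = {k: v for k, v in groups.items() if len(v) > 1}
print(duplicates)  # {'q3_report': [...the three Q3 variants...]}
```

In practice you'd also compare content checksums, since naming conventions vary wildly across teams - but even this crude pass surfaces a surprising amount of duplication.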

We have an updated two-part series on data quality and search, starting here. We hope you find it helpful; let us know if you have any questions!

February 22, 2018

Search Is the User Experience, not the kernel

In the early days of what we now call 'enterprise search', there was no distinction between the search product and the underlying technology. Verity Topic ran on the Verity kernel and Fulcrum ran on the Fulcrum kernel, and that's the way it was - until recently.

In reality, writing the core of an enterprise search product is tough. It has to efficiently index all the words in virtually any kind of file; it has to scale to millions of documents; and it has to respect document-level security using a variety of protocols. And all of this has to deliver results in well under a second. And now, machine learning is becoming an expected capability as well. All in code that no user will ever see.

Hosted search vendor Swiftype provides a rich search experience for administrators and for users, but Elastic is the technology under the covers. And yesterday, Coveo announced that their popular enterprise search product will also be available with the Elastic engine rather than only with the existing Coveo proprietary kernel. This marks the start of a trend that I think may become ubiquitous.

Lucidworks, for example, is synonymous with Solr; but conceptually there is no reason their Fusion product couldn't run on a different search kernel - even on Elastic. However, with their investment in Solr, that does seem unlikely, especially with their ability to federate results from Elastic and other kernels with their App Studio, part of the recent Twigkit acquisition.

Nonetheless, enterprise search is not the kernel: it's the capabilities exposed for the operation, management, and search experience of the product.

Of course, there are differences between Elastic and Coveo, for example, as well as with other kernels. But in reality, as long as the administrative and user experiences get the work done, what technology is doing the work under the covers matters only in a few fringe cases. And ironically, Elastic, like many other platforms, has its own potentially serious fringe conditions. At the UI level, solving those cases on multiple kernels is probably far less work than managing and maintaining a proprietary kernel.

And this may be an opportunity for Coveo: until now, it's been a Cloud and Windows-only platform. This may mark their entry into multiple-platform environments.

February 20, 2018

Search, the Enterprise Orphan

It seems that everywhere I go, I hear how bad enterprise search is. Users, IT staff, and management complain, and eventually organizations decide that replacing their existing vendor is the best solution. I’d wager that companies switch their search platforms more frequently than any other mission-critical application.

While the situation is frustrating for organizations that use search, the current state isn’t as bad for the actual search vendors: if prospects are universally unhappy with a competing product, it’s easier to sell a replacement technology that promises to be everything the current platform is not. It may seem that the only loser is the current vendor; and they are often too busy converting new customers to the platform to worry much.

But in fact, switching search vendors every few years is a real problem for the organization that simply wants its employees and users to find the right content accurately, quickly and without any significant user training. After all, employees are born with the ability to use Google!


Higher-level story

Why is enterprise search so bad? In my experience, search implemented and managed properly is pretty darned good. As I see it, the problem is that at most organizations, search doesn’t have an owner.  On LinkedIn, a recent search for “vice president database” jobs shows over 1500 results. Searching for “vice president enterprise search”? Zero hits.

This means that search, recognized as mission-critical by senior management, often doesn’t have an owner outside of IT, whose objective is to keep enterprise applications up and running. Search may be one of the few enterprise applications where “up and running” is just not good enough.

Sadly, there is often no “search owner”; no “search quality team”; and likely no budget for measuring and maintaining result quality.

Search Data Quality

We’ve all heard the expression “Garbage In, Garbage Out”. What is data quality when it comes to search? And how can you measure it?

Ironically, enterprise content authors have an easy way to impact search data quality; but few use it. The trick? Document Properties – also known as ‘metadata’.

When you create any document, there is always data about the document - metadata. Some of the metadata ‘just happens’: the file date, its size, and the file name and path. Other metadata depends on author-provided properties like the title, subject, and other fielded data maintained in the Office ‘Properties’ tab. And there are tools like the Stanford Named Entity Recognizer (licensed under the GNU General Public License) that can extract advanced metadata from the full text of a document.

Other document properties require author effort. In Microsoft Office, for example, the Properties form provides a way to define field values including the author name, company, and other fields. The problem is, few people go to the effort of filling in the property fields correctly, so you end up with bad metadata. And bad metadata is arguably worse than no metadata.
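A .docx file is just a zip archive, so you can audit these properties with nothing but the Python standard library. Here's a sketch using a typical docProps/core.xml fragment (the sample values are made up; with a real file you'd pull the XML via zipfile):

```python
import xml.etree.ElementTree as ET

# A typical docProps/core.xml fragment from inside a .docx (which is a zip;
# zipfile.ZipFile(path).read("docProps/core.xml") yields XML like this).
CORE_XML = """<cp:coreProperties
  xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
  xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Q3 Expense Report</dc:title>
  <dc:creator>Office Assistant</dc:creator>
  <dc:subject></dc:subject>
</cp:coreProperties>"""

NS = {"dc": "http://purl.org/dc/elements/1.1/"}

def audit_metadata(core_xml):
    """Flag empty or missing Dublin Core fields that search relies on."""
    root = ET.fromstring(core_xml)
    report = {}
    for field in ("title", "creator", "subject"):
        el = root.find(f"dc:{field}", NS)
        report[field] = el.text if el is not None and el.text else "(missing)"
    return report

print(audit_metadata(CORE_XML))
# {'title': 'Q3 Expense Report', 'creator': 'Office Assistant', 'subject': '(missing)'}
```

Run across a repository, a report like this shows exactly how much of your content is going into the index with empty or junk fields.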

On the enterprise side, I heard about an organization that wanted to reward employees who authored popular content for the intranet. The theory was that recognizing and rewarding useful content creation would help improve the overall quality and utility of the corporate intranet.

An organization we did a project for a few years ago was curious about poor metadata in its intranet document repository, so it ran a test. After examining their Microsoft Office documents, they discovered that one employee had apparently authored nearly half of all their intranet content! It turned out that this employee, an office assistant, had authored the document that everyone in the organization used as the starting point for their common standard reports.
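That kind of audit is simple once the 'creator' property has been extracted from each document. A back-of-the-envelope sketch (the names and the threshold are hypothetical):

```python
from collections import Counter

# Hypothetical audit: 'creator' values pulled from each document's properties.
creators = ["Office Assistant", "Office Assistant", "J. Smith",
            "Office Assistant", "M. Jones", "Office Assistant"]

counts = Counter(creators)
author, n = counts.most_common(1)[0]
share = n / len(creators)
if share > 0.4:  # arbitrary threshold for "suspiciously dominant"
    print(f"{author} appears as author of {share:.0%} of documents - "
          "likely copied template metadata, not real authorship")
```

A suspiciously dominant 'author' is usually a template whose metadata was never updated - exactly the kind of bad data that quietly poisons search facets and relevance.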

Solving the Problem

Enterprise search technology has advanced to an amazing level. A number of search vendors have even integrated machine learning tools like Spark to surface popular content for frequent queries. And search-related reporting has become a standard part of nearly all search product offerings, so metrics such as top queries and zero hits are available and increasingly actionable.

To really take advantage of these new technological solutions, you need a team of folks who actively participate in making your enterprise search a success, so you can break the loop of “buy-replace”.

Start by identifying an executive owner, and then pull together a team of co-conspirators who can help. Sometimes just looking at the reports you already have and taking action can go a long way.

Review the queries with no results and see if there are synonyms that can find the right content without even changing the content. Identify the right page for your most popular queries and define one or two “best bets”. And if some frequent queries don’t really have relevant content? Work with your web team to create it.
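A sketch of the synonym idea (the map and the query syntax here are hypothetical - most search platforms have their own synonym configuration, but the principle is the same):

```python
# Hypothetical synonym map built from the zero-results report.
SYNONYMS = {
    "pto": ["paid time off", "vacation"],
    "401k": ["retirement plan"],
}

def expand(query):
    """Rewrite a query into an OR of its synonyms so existing content
    can match - without touching the content itself."""
    terms = [query.lower()] + SYNONYMS.get(query.lower(), [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand("PTO"))  # "pto" OR "paid time off" OR "vacation"
```

The zero-results report tells you which entries the map needs; a handful of well-chosen synonyms often fixes the most-complained-about queries in an afternoon.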

Funding? Find the right person in your organization to convince that spending a little money on fixing the problems now will break the “buy-replace” cycle and save significant, needlessly recurring expenses.

Like so many things, a little ongoing effort can solve the problem.