November 06, 2019

Big Money in Enterprise Search

VC firms seem attracted to the Enterprise Search space

Just today, it was announced that Canada-based Coveo closed a ~$170M US round, following Lucidworks' recent $100M US round. Earlier this year we saw Algolia come in with $110M of funding, and of course there was Elastic's recent IPO – it sure looks like 2019 will have been a good year for the leading technologies.

Stay tuned as we learn more about the trend!

November 14, 2018

Do you have big data? Or just lots of it?

Of course, I’m a search nerd. I've been involved in enterprise search for over 20 years. I see search and big data as related technologies, but in most cases, I do not see them as synonymous.

And I'd also say that, while most enterprises have a lot of data, the term ‘big data’ is not applicable to most organizations.

Consider that Google (and others) define ‘big data’ as “extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions”.

Yes, the data that Amazon, Google, Facebook, and others collect qualifies as big data. These companies mine everything you do when you're using their sites. Amazon wants to be able to report “people like you bought …” to sell more product; Google wants to know what ‘people like you’ look at after a query so they can suggest it to the next person like you; Facebook... well, they want to know what to try to sell you as you chat with and about your friends. Is search involved? Maybe; but more often some strong machine learning and internal analytics are key.

Do consulting firms like Ernst & Young or PWC have big data? Well, my bet is they have a lot of information about their clients, business practices, accounting, and so on – but is it ‘big data’? Probably not.

Solr, Elastic, and other search technologies can search-enable huge sets of data, so often big data is indexed to be searchable by humans. And both Solr and Elastic come with some great analytical tools: Kibana on Elastic, and Banana, the port of Kibana for Solr-based engines.

But again, is that big data? Or just lots of it?

I’d vote the latter.


August 09, 2018

Fake Search?

Enterprise search was once easy. It was often bad - but understanding the results was pretty easy. If the query term(s) appeared in the document, the document was there in the results. Period. The more times the terms appeared, the higher the document appeared in the result list. And when the user typed a multi-term query, documents with all of the terms displayed higher in the result list than those with only some of the terms.
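As a toy illustration of that old-style ranking - not any particular vendor's algorithm, just the idea of raw term counts plus an all-terms boost - something like this minimal Python sketch captures the behavior:

```python
from collections import Counter

def classic_score(query_terms, doc_text):
    """Toy version of early enterprise-search ranking: raw term frequency,
    with documents containing every query term sorted above partial matches."""
    counts = Counter(doc_text.lower().split())
    term_freq = sum(counts[t] for t in query_terms)        # more occurrences, higher score
    has_all_terms = all(counts[t] > 0 for t in query_terms)
    return (1 if has_all_terms else 0, term_freq)

docs = {
    "travel-policy.txt": "travel policy travel expenses policy",
    "offsite-memo.txt": "travel plans for the offsite",
}
query = ["travel", "policy"]
for name in sorted(docs, key=lambda n: classic_score(query, docs[n]), reverse=True):
    print(name, classic_score(query, docs[name]))
```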

And some search platforms could 'explain' why a particular document was ranked where it was. Those of us who have been in the business a while may remember the Verity Topic and Verity K2 product lines. One of the most advanced capabilities in these was the 'explain' function. It would reverse engineer the score for an individual document and report exactly why the selected document was ranked as it was.
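Lucene-based engines still offer something similar. Solr, for instance, will return a per-document scoring explanation if you pass debugQuery=true. A minimal sketch (the local host, the core name "docs", and the query are placeholders for illustration):

```python
import requests

# Ask a local Solr core to explain how each hit in the result set was scored.
params = {
    "q": "summer clothes",
    "debugQuery": "true",   # adds an "explain" section to the response
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/docs/select", params=params)

# Map of document id -> human-readable breakdown of its score.
explanations = resp.json()["debug"]["explain"]
for doc_id, reason in explanations.items():
    print(doc_id, reason)
```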

"People Like You"

Now, search results are generally better, but every now and then, you’ll see a result that surprises you, especially on sites that are enhanced with machine learning technologies. A friend of mine tells of a query she did on Google while she was looking for summer clothes, but the top result was a pair of shoes. She related her surprise: "I asked for summer clothes, and Google shows me SHOES?".  But, she admitted, "Those shoes ARE pretty nice!"

How did that happen? Somewhere, deep in the data Google maintains on her search history, it concluded that "people like her" purchased that pair of shoes.

In the enterprise, we don't have the volume of content and query activity of the large Internet players, but we do tend to have more focused content and a narrower query vocabulary. ML/AI libraries like Apache Mahout and Spark's MLlib can help our search platforms generate these odd yet often relevant results; but the technologies are still limited when it comes to explaining the 'why' for a given result. And for those of us who are still a bit skeptical of computers, that capability would be nice.
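To make the idea concrete, here is a minimal sketch of the kind of "people like you" recommender Spark's MLlib supports, using its ALS collaborative filtering model on made-up click data; the column names and numbers are purely illustrative, not a production recipe:

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("people-like-you").getOrCreate()

# Hypothetical click log: (user id, document id, number of clicks).
clicks = spark.createDataFrame(
    [(1, 101, 3.0), (1, 102, 1.0), (2, 101, 2.0), (2, 103, 5.0), (3, 103, 4.0)],
    ["user", "doc", "clicks"],
)

# ALS learns latent factors for users and documents from the click counts;
# implicitPrefs=True treats clicks as implicit feedback rather than ratings.
als = ALS(userCol="user", itemCol="doc", ratingCol="clicks",
          implicitPrefs=True, rank=10, maxIter=5)
model = als.fit(clicks)

# Top 3 documents to suggest to each user - the "people like you" list.
model.recommendForAllUsers(3).show(truncate=False)
```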

Are you using (or planning to implement) ML-in-search? A skeptic? Whichever camp you're in, let me hear from you! [email protected].

May 03, 2018

Lucidworks expands focus in new funding round

Lucidworks, the commercial organization with the largest pool of Solr committers, announced today a new funding round of $50M US from venture firms Top Tier Capital Partners and Silver Lake Waterman, as well as additional participation from existing investors Shasta Ventures, Granite Ventures, and Allegis Capital.

While a big funding round for a privately held company isn't uncommon here in 'the valley', what really caught my attention is where and how Lucidworks will use the new capital. Will Hayes, Lucidworks' CEO, intends to focus the investment on what he calls "smart data experiences" that go beyond artificial intelligence and machine learning alone. The challenge is to provide useful and relevant results by addressing what he calls "the last mile" problem in current AI: enabling mere mortals to find useful insights in search without having to understand the black art of data science and big data analysis. The end target is to drive better customer experiences and improved employee productivity.

A number of well-known companies utilize Lucidworks Fusion already, many along with AI and ML tools and technologies. I've long thought that to take advantage of 'big data' like Google,  Amazon, and others do, you needed huge numbers of users and queries to confidently provide meaningful suggestions in search results.  While that helps, Hayes explained that smaller organizations will be able to benefit from the technology in Fusion because of both smaller and more focused data sets, even with a smaller pool of queries. With the combination of these two characteristics, Lucidworks expects to deliver many of the benefits of traditional machine learning and AI-like results to enterprise-sized content. It will be interesting to see what Lucidworks does in the next several releases of Fusion!

April 23, 2018

Poor Data Quality gives Enterprise Search a Bad Rap

If you’re involved in managing the enterprise search instance at your company, there’s a good chance that you’ve experienced at least some users complaining about the poor results they see. A common lament search teams hear is “Why didn’t we use Google?” Even more telling is that many organizations that used the Google Search Appliance on their sites heard the same lament.

We're often asked to help a client improve results on an internal search platform; and sometimes, the problem is the platform. Not every platform handles every use case equally well, and sometimes that mismatch shows up in results. Occasionally, the problem is a poorly chosen or misconfigured search, or simply an instance that hasn't been managed properly. The renowned Google public search engine does well not just because it is a great search platform. In fact, Google has become less of a search platform and more of a big data analytics engine.

Our business is helping clients select, implement, and manage Intranet search. Frequently, the problem is not the search platform. Rather, the culprit is poor data quality. 

Enterprise data isn’t created with search in mind. There is little incentive for authors to attach quality metadata in the properties fields that Adobe PDF Maker, Microsoft Office, and other document publishing tools support. To make matters worse, there may be several versions of a given document as it goes through creation, editing, and updating; and often the early drafts, as well as the final version, sit in the same directory or file share. Very rarely will a public-facing website have such issues.

We have an updated two-part series on data quality and search, starting here. We hope you find it helpful; let us know if you have any questions!

March 06, 2018

Lucidworks Announces Search as a Service

Not Your Grandfather's Site Search

Some of you may know that New Idea Engineering spun off a company called SearchButton.com in the mid-90s, offering Verity-powered site search for thousands of clients. Sadly, our investors insisted we violate the cardinal rule of business I learned at HP - "be profitable" - so when the "dot-com" bubble exploded, we were back to being New Idea Engineering again!

I remain a fan of hosted search to this day and have been pleasantly surprised to see companies like Algolia, Swiftype (now part of Elastic), and a few other "search as a service" organizations reinventing the capabilities that we - along with our competitor Atomz - offered more than 20 years ago! And I include with them the cloud-based search services offered by other established enterprise search companies like Coveo, Microsoft, and, until recently, Google.

That said, we've always strived to be fully vendor neutral when it comes to recommending products and services to our clients, and we go out of our way to understand and work with all of the major enterprise search vendors.

Over the last several months I've had the opportunity to use early releases of a product Lucidworks announced this morning: Lucidworks Site Search. As I said, I am a fan of hosted search - or 'search as a service'; and in full disclosure, I was a Lucidworks employee a few years back and yes, a shareholder.

I had an opportunity to talk with Will Hayes, Lucidworks' CEO, about Lucidworks' entry into the hosted search market. Even in its initial release, it looks pretty impressive.

First, Lucidworks Site Search is powered by the newest release of their enterprise product, Fusion 4.0, announced just last week and available for download. One of the exciting new capabilities in Fusion 4 is the full integration with Spark to enhance search with machine learning. It's not quite Google's "people like you" out of the box, but it's a giant step towards AI in the enterprise.

Fusion 4 also provides the ability to create, test, and move into production custom 'portable' search applications. When I first looked at the product last week, I confess to not having the vision to see just how powerful that capability is. It seems that the Fusion Site Search announced this morning is an example of a powerful, custom search app written specifically for site search.

But Lucid has great plans for their Site Search product. It can be run in the cloud, initially on AWS but soon expanding to other cloud services including Azure and Google. And for reliability, you can elect to have Lucidworks Site Search span multiple data centers and even multiple cloud services. As you'd expect in an enterprise product, it supports a wide variety of document formats, security, faceted navigation, and a full management console. Finally, I understand that plans are in the works for Lucidworks Site Search to be installed "on-prem" and even to federate results (respecting document security) from the cloud and from your local instance at the same time.

Over the coming weeks and months I'll be writing more about Fusion 4, Lucid Site Search, and search apps. Stay tuned!

February 22, 2018

Search Is the User Experience, not the kernel

In the early days of what we now call 'enterprise search', there was no distinction between the search product and the underlying technology. Verity Topic ran on the Verity kernel and Fulcrum ran on the Fulcrum kernel, and that's the way it was - until recently.

In reality, writing the core of an enterprise search product is tough. It has to efficiently create an index of all the words in virtually any kind of file; it has to provide scalability to index millions of documents; and it has to respect document-level security using a variety of protocols. And all of this has to deliver results in well under a second. And now, machine learning is becoming an expected capability as well. All for code that no user will ever see.

Hosted search vendor Swiftype provides a rich search experience for administrators and for users, but Elastic is the technology under the covers. And yesterday, Coveo announced that their popular enterprise search product will also be available with the Elastic engine rather than only with the existing Coveo proprietary kernel. This marks the start of a trend that I think may become ubiquitous.

Lucidworks, for example, is synonymous with Solr; but conceptually there is no reason their Fusion product couldn't run on a different search kernel - even on Elastic. However, with their investment in Solr, that does seem unlikely, especially with their ability to federate results from Elastic and other kernels with their App Studio, part of the recent Twigkit acquisition.

Nonetheless, enterprise search is not the kernel: it's the capabilities exposed for the operation, management, and search experience of the product.

Of course, there are differences between Elastic and Coveo, for example, as well as with other kernels. But in reality, as long as the administrative and user experiences get the work done, what technology is doing the work under the covers matters only in a few fringe cases. And ironically, Elastic, like many other platforms, has its own potentially serious fringe conditions. At the UI level, solving those cases on multiple kernels is probably a lot less intense than managing and maintaining a proprietary kernel.

And this may be an opportunity for Coveo: until now, it's been a Cloud and Windows-only platform. This may mark their entry into multiple-platform environments.

February 20, 2018

Search, the Enterprise Orphan

It seems that everywhere I go, I hear how bad enterprise search is. Users, IT staff, and management complain, and eventually organizations decide that replacing their existing vendor is the best solution. I’d wager that companies switch their search platforms more frequently than any other mission-critical application.

While the situation is frustrating for organizations that use search, the current state isn’t as bad for the actual search vendors: if prospects are universally unhappy with a competing product, it’s easier to sell a replacement technology that promises to be everything the current platform is not. It may seem that the only loser is the current vendor; and they are often too busy converting new customers to the platform to worry much.

But in fact, switching search vendors every few years is a real problem for the organization that simply wants its employees and users to find the right content accurately, quickly and without any significant user training. After all, employees are born with the ability to use Google!


Higher level story

Why is enterprise search so bad? In my experience, search implemented and managed properly is pretty darned good. As I see it, the problem is that at most organizations, search doesn’t have an owner.  On LinkedIn, a recent search for “vice president database” jobs shows over 1500 results. Searching for “vice president enterprise search”? Zero hits.

This means that search, recognized as mission-critical by senior management, often doesn’t have an owner outside of IT, whose objective is to keep enterprise applications up and running. Search may be one of the few enterprise applications where “up and running” is just not good enough.

Sadly, there is often no “search owner”; no “search quality team”; and likely no budget for measuring and maintaining result quality.

Search Data Quality

We’ve all heard the expression “Garbage In, Garbage Out”. What is data quality when it comes to search? And how can you measure it?

Ironically, enterprise content authors have an easy way to impact search data quality; but few use it. The trick? Document Properties – also known as ‘metadata’.

When you create any document, there is always data about the document – metadata. Some of the metadata ‘just happens’: the file date, its size, and the file name and path. Other metadata depends on author-provided properties like a title, subject, and other fielded data like that maintained in the Office ‘Properties’ tab. And there are tools like the Stanford Named Entity Recognition tool (licensed under the GNU General Public License) that can perform advanced metadata extraction from the full text of a document.
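As an illustration, here is a minimal sketch of pulling person, organization, and location entities out of document text with the Stanford NER tool via NLTK's wrapper; the jar and classifier paths are placeholders you would point at a local Stanford NER download:

```python
from nltk import word_tokenize
from nltk.tag import StanfordNERTagger

# Placeholder paths to a local Stanford NER installation.
tagger = StanfordNERTagger(
    "stanford-ner/classifiers/english.all.3class.distsim.crf.ser.gz",
    "stanford-ner/stanford-ner.jar",
)

text = "The audit was prepared by Jane Smith of Ernst & Young in New York."
tagged = tagger.tag(word_tokenize(text))

# Keep only the tokens the model labeled as entities; these could be stored
# as extra metadata fields alongside the document in the search index.
entities = [(token, label) for token, label in tagged if label != "O"]
print(entities)
```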

Some document properties happen automatically. In Microsoft Office, for example, the Properties form provides a way to define field values including the author name, company, and other fields. The problem is, few people go to the effort of filling in the property fields correctly, so you end up with bad metadata. And bad metadata is arguably worse than no metadata.

On the enterprise side, I heard about an organization that wanted to reward employees who authored popular content for the intranet. The theory was that recognizing and rewarding useful content creation would help improve the overall quality and utility of the corporate intranet.

An organization we did a project for a few years ago was curious about poor metadata in their intranet document repository, so they did a test. After examining their Microsoft Office documents, they discovered that one employee had apparently authored nearly half of all their intranet content! It turned out that this employee, an office assistant, had authored the document that everyone in the organization used as the starting point for many of their common standard reports.
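A quick audit like that is easy to script. Here is a hedged sketch that tallies the Word 'author' property across a file share using python-docx; the share path is a placeholder, and similar libraries exist for PDF and other formats:

```python
from collections import Counter
from pathlib import Path

from docx import Document  # python-docx

share = Path("/mnt/intranet-share")  # placeholder path to the document repository

# Count how often each 'author' value appears in the Word documents' core properties.
author_counts = Counter()
for path in share.rglob("*.docx"):
    author = Document(str(path)).core_properties.author
    author_counts[author or "(no author set)"] += 1

# The top of this list often reveals template authors or bulk-converted content.
for author, count in author_counts.most_common(5):
    print(f"{count:5d}  {author}")
```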

Solving the Problem

Enterprise search technology has advanced to an amazing level. A number of search vendors have even integrated machine learning tools like Spark to surface popular content for frequent queries. And search-related reporting has become a standard part of nearly all search product offerings, so metrics such as top queries and zero hits are available and increasingly actionable.

To really take advantage of these new technological solutions, you need a team of folks who actively participate in making your enterprise search a success, so you can break the loop of “buy-replace”.

Start by identifying an executive owner, and then pull together a team of co-conspirators who can help. Sometimes just looking at the reports you already have and taking action can go a long way.

Review the queries with no results and see if there are synonyms that can find the right content without even changing the content; a sketch of how that looks in practice follows below. Identify the right page for your most popular queries and define one or two “best bets”. If you find that some frequent queries don’t really have relevant content, work with your web team to create appropriate content.
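Synonyms, for example, are usually just configuration. Here is a minimal sketch of wiring zero-result vocabulary into an Elasticsearch index with a synonym token filter; the index name, field, and synonym pairs are made up for illustration, the exact client call varies by elasticsearch-py version, and Solr has an equivalent synonym filter in its analysis chain:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# An analyzer that maps the words users type onto the words the content actually uses.
settings = {
    "settings": {
        "analysis": {
            "filter": {
                "intranet_synonyms": {
                    "type": "synonym",
                    "synonyms": [
                        "pto, paid time off, vacation",
                        "comp, compensation, salary",
                    ],
                }
            },
            "analyzer": {
                "intranet_text": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "intranet_synonyms"],
                }
            },
        }
    },
    "mappings": {
        "properties": {"body": {"type": "text", "analyzer": "intranet_text"}}
    },
}

es.indices.create(index="intranet", body=settings)
```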

Funding? Find the right person in your organization to convince that spending a little money on fixing the problems now will break the “buy-replace” cycle and save some significant but needlessly recurring expenses.

Like so many things, a little ongoing effort can solve the problem.

December 04, 2017

Search Indices are Not Content Repositories

Recently on Quora, someone asked for help with a corrupt Elasticsearch index. A number of folks responded, all recommending that he simply rebuild the search index and move on.

The bad news: it turns out this person didn't have any source documents. He was so impressed with what Elasticsearch did that he had been using it as his primary storage for content. When it crashed, his content was gone. This is not an indictment of Elasticsearch: it can happen with any complex software product, whether Elastic, Solr, or SharePoint.

In my reply, I told him how sorry I was for his loss, and suggested he get to work restoring or recreating his content. I even offered to call and tell him how sorry I was for his loss. 

Then I launched into what I really felt I needed to say - there on his behalf, and here for yours. I suggested - no, actually I insisted - that you NEVER use ANY search index as your primary store for content. Let me be more specific: NEVER. EVER.

Some platforms, such as Solr and commercial software based on Solr (e.g., Lucidworks), have a reasonably robust ability to replicate the index over multiple servers or nodes, which provides some safety (I’m thinking of SolrCloud here); others do not. But the replication is a copy of the INDEX, which is NOT your documents.
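For the curious, replication in SolrCloud is typically set up when a collection is created. A minimal sketch using the Collections API - host, collection name, and shard/replica counts are illustrative only - and note that what gets copied across nodes is the index, not your source files:

```python
import requests

# Create a SolrCloud collection whose index is split into two shards,
# with each shard kept on two nodes for redundancy.
params = {
    "action": "CREATE",
    "name": "intranet",
    "numShards": 2,
    "replicationFactor": 2,
    "collection.configName": "_default",
}
resp = requests.get("http://localhost:8983/solr/admin/collections", params=params)
print(resp.json())
```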

The search index is optimized for retrieval. Databases, CMS, file systems and other tech are for storage.

For one, I’m not sure any search engine stores the entire document, of any type. Conceptually, most search indices have two ‘logical’ (if not physical) files:

One of these files you can think of as a database table with one row per document and its field values (Title, Author, etc.). This file generally stores the URL, file name, or database row as well - basically ‘where do I go to find the full document?’ - and maybe a few other field values.

The second file is a list of all the words (minus stop words) in all of your documents. Each word is stored once, along with a list of byte offsets marking where the word appears (multiple byte offsets, one for each instance of the word). It also has a pointer to all the documents that contain that word. Again: stop words are generally NOT indexed, so they are usually not in the index.
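Here is a deliberately tiny sketch of those two ‘logical files’ - a document table plus a postings list keyed by word - just to make the structure concrete; it is not how any particular engine lays out its index on disk:

```python
from collections import defaultdict

STOP_WORDS = {"the", "a", "an", "and", "of"}

doc_table = {}                # doc_id -> stored fields: title, URL, ...
postings = defaultdict(dict)  # word -> {doc_id: [byte offsets where it appears]}

def index_doc(doc_id, title, url, text):
    doc_table[doc_id] = {"title": title, "url": url}
    offset = 0
    for token in text.split():
        word = token.lower()
        if word not in STOP_WORDS:            # stop words never make it into the index
            postings[word].setdefault(doc_id, []).append(offset)
        offset += len(token) + 1              # advance past the token and the space

index_doc(1, "Travel Policy", "https://intranet/travel-policy.html",
          "The travel policy covers travel and expenses")

print(postings["travel"])   # offsets of 'travel' within doc 1
print(doc_table[1])         # where to go for the full document
```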

(There is more detail in an older article on the New Idea Engineering website, "Relational Databases vs. Full-Text Search Engines".)

COULD you rebuild the full document? Well, it depends on the search platform. In most platforms I've seen, it would be difficult because stop words are not even stored. Recreating a document that omits ‘the’, ’a’, ‘an’, ‘and’, etc. MIGHT be human readable, but it is NOT the original document.

Secondly, not all search engine indices are replicated for redundancy. The assumption is that if you lose the file system where the content lives, you can still search; you just can't retrieve any documents until you restore the original content.

And some platforms do not give you a way to access the index, short of searching. And a search index is an index, not a repository.

Finally, some platforms are better at redundant failover of indices than others. If the platform you use is one that does not have redundancy BY DEFAULT - like some very popular platforms - and you use that index as the primary data store for your documents and the index dies... you’re what we used to call SOL - ‘sure outta luck’.

The moral of the story? DO NOT USE A SEARCH INDEX AS THE PRIMARY DATASTORE. Specific enough?

October 11, 2017

A Search Center of Excellence is still relevant for managing enterprise search

I was having a conversation with an old friend who manages enterprise search at her organization, a biotech company back east. We've worked together on search projects going back to my days at Verity - for you young'uns, that's what we call BG: 'before Google'.

Based on an engagement we did sometime after Google but before Solr - back when "Centers of Excellence", or COEs, had become very popular - we decided we could define the roles and responsibilities of a Search Center of Excellence, or SCOE: the team that manages the full breadth of operation and management for enterprise search. We began preaching the gospel of the SCOE at trade show events and on our blog, where you can find that original article.

My friend and I had a great conversation about how successful they have been managing three generations of search platforms with the SCOE, and how they still maintain the responsibilities the SCOE assumed years back, with only a few meetings a year to review how search is doing, address any concerns, and map out enhancements as they become available.

It worked then, and it works now. The SCOE is a great idea! Let me know if you'd like to talk about it.