
July 21, 2014

What does it take to qualify as 'Big Data'?

If you've been on a deserted island for a couple of decades, you may not have heard the hot new buzz phrase: Big Data. And you may not have heard of "Hadoop", the application that accidentally solved the problem of Big Data.

Hadoop was originally designed as a way for the open source Nutch crawler to store its content prior to indexing. Nutch was fine for crawling sites; but if you wanted to crawl really massive data sets – say, the Internet – you needed a better way to store the content. (Thank goodness Doug Cutting didn't work at a database giant or we'd all be speaking SQL now!) GigaOm has a great series on the history of Hadoop (http://bit.ly/1jOMHiQ) that I recommend for anyone interested in how it all began and evolved.

After a number of false starts, brick walls, and subsequent successes, Hadoop is a technology that really enables what we now call 'big data' – usually written as "Big Data". But what does this mean? After all, there are companies with a lot of data – and there are companies with limited content size that changes rapidly every day. But which of these really have data that meets the "Big" definition?

Consider a company like AT&T or Xerox PARC, which licenses its technology to companies worldwide. As part of a license agreement, PARC agrees to defend its licensees if an intellectual property lawsuit ever crosses the transom. Both companies own tens of thousands of patents going back to their founding in the early 20th century. Just the digital content to support these patents and inventions must number in the tens of millions of documents, much of it in formats no longer supported by any modern search platform. Heck, to Xerox, WordStar and Peachtext probably seem pretty recent! But about the only time they have to search that content is when they need to help defend a licensee against an IP claim. I don't know how often that is, but I'd bet less than a dozen times a year.

Now consider a retail giant like Amazon or Best Buy. In raw size, I'd bet Amazon has hundreds of millions of items to index: books, products, videos, tunes. Maybe more. But that's not what makes Amazon successful. I think it's the ability to execute billions of queries every day – again, maybe more – and return damn good results in well under a second, along with recommendations for related products. Best Buy actually has retail stores, so they have to keep not only purchase data but also buying patterns, so they know what products to stock in any given retail location.

A healthcare company like UnitedHealth must have its share of corporate intranet content. But unlike many corporations, it must process millions of medical transactions every week: doctor visits, prescriptions, test results, and more. It needs to process those transactions, but it must also keep them around for legally defined durations.

Finally, consider a global telecom company like Ericsson or Verizon. They've got the usual corporate intranet, I'm sure. They have financial transactions like Amazon and UHG. But they also have telecom transaction records that must number in the billions per month: phone calls and more. And given the politics of the world, many of these transactions have to be maintained and searchable for months, if not years.

These four companies have a number of common traits with respect to search; but each has its own specific demands. Which ones count as 'big data' as it's usually defined? And which just have 'a bunch of content'?

As it turns out, that's a tough question. At one point, there was a consensus that 'big data' required three things, known as the "Three V's of Big Data". This escalated to the "5 V's of Big Data", then the "7 V's" – and I've even seen some define the "10 V's of Big Data". Wow – and growing!

Let's take a look at the various "V's" that are commonly used to define 'Big Data'. Depending on who you ask, there are four, five, seven or more such 'requirements', and they usually include:

Volume: The scale of your data – basically, how many 'entries' or 'items' you have. For Xerox, how many patents; for a telecom company, how many phone 'transactions' there have been.

Variety: Basically, how many different types of data you have. Amazon has mouse clicks, product views, unique titles, subscribers, financial transactions and more. For UHG and Ericsson, I'd guess the majority of their content is transactional: medical claims for one, and for the other, phone call metadata (originating and receiving phone number, duration of the call, time of day, etc.). In the enterprise, variety can also mean data format and structure. Some claim that 90% of enterprise data is unstructured, which adds yet another challenge.

Veracity: This boils down to whether the data is trustworthy and meaningful. I remember a survey HP did years ago to find out what predictors were useful to know whether a person walking into a random electronics store would walk out with an HP PC. Using HP products at work or at home were the big predictors; but the fact that the most likely day was Tuesday was perhaps spurious and not very valuable.

Velocity: How fast is the data coming in and/or changing? Amazon has a pretty good idea on any given day how many transactions it can expect, and Verizon knows how much call data it can expect. But things change: a new product becomes available, or a major world event triggers many more phone calls than usual.

Viability: If you want to track trends, you need to know which data points are the most useful in predicting the future. A good friend of mine bought a router on Amazon, and Amazon reported that people who bought that router also bought... men's extra-large jeans. Now, he tells me he did think they were nice jeans, but that signal may not have had long viability.

Value: How useful or important is the data in making a prediction, or in improving business decisions. That was easy!

Variability: This often refers to how internally consistent the data is. For a data point to be an accurate predictor, it should ideally be consistent across the wide range of content. Blood pressure, for example, is generally in a small range; and for a given patient, should be relatively consistent over time. When there is a change, UHG may want to understand the cause.

Visualization: Rows and columns of data can look pretty intimidating, and it's not easy to extract meaning from them. But as they say, 'a picture is worth a thousand words', so being able to see charts or graphs can help meaning and trends jump out at you. I'd use LucidWorks' SiLK product as an example of a great visualization tool for big data, but there are many others.

Validity: This seems like another way to say the data has veracity, but it may be a subtle point. If you’re recording click-thru data, or prescriptions, or intellectual property, you have to know that the data is accurate and internally consistent. In my HP anecdote above, is the fact that more people bought HP PCs on Tuesday a valid finding? Or is it simply noise? You’ll probably need a human researcher to make these kinds of calls.

Venue: With respect to Big Data, this means where the data came from and where it will be used. Content collected from automobiles and from airplanes may look similar in a lot of ways to the novice. In the same way, data from the public Internet versus data collected from a private cloud may look almost identical. But making decisions for your intranet based on data collected from Bing or Google may prove to be a risk.

Vocabulary: What describes or defines the various items of the data? Ericsson has to know which bits of data represent a phone number and which represent the time of day. Without some idea of the schema or taxonomy, we'll be hard pressed to reach reasonable decisions from Big Data.
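
To make that concrete, here's a minimal sketch of what a vocabulary for call records might look like. The field names are hypothetical – this isn't Ericsson's actual schema, just the idea of one:

```python
from dataclasses import dataclass
from datetime import datetime

# A hypothetical vocabulary for phone call metadata. Without agreed
# definitions like these, "4085551212" is just a string - you can't
# tell a phone number from an account ID or a timestamp.
@dataclass
class CallRecord:
    originating_number: str   # caller, in E.164 form
    receiving_number: str     # callee, in E.164 form
    started_at: datetime      # time of day the call began
    duration_seconds: int     # length of the call

record = CallRecord("+14085551212", "+16505551234",
                    datetime(2014, 7, 21, 9, 30), 142)
```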

Volatility: This may sound like velocity above, but volatility in Big Data really means how long the data remains valid – how long you need to keep it around. Healthcare companies may need to keep their data far longer than, say, a retailer, because of those legally defined retention periods.

Vagueness: This final one is credited to Venkat Krishnamurthy of YarcData just last month at the Big Data Innovation Summit here in Silicon Valley.  In a way, it addresses the confidence we can have in the results suggested by the data. Are we seeing real trends, or are we witnessing a black swan?

In the application of Big Data, not all of these various V's are equally valid or valuable to the casual (or serious) observer. But as in so many things, interpreting the data is up to the person making the call. Big Data is only a tool: use it wisely!

Some resources I used in collecting data for this article include the following web sites and blogs:

IBM’s Big Data & Analytics Hub 

MapR's Blog: Top 10 Big Data Challenges – A Serious Look at 10 Big Data V’s 

See also Dr. Kirk Borne’s Top 10 List on Data Science Central   

Bernard Marr’s LinkedIn post on The 5 Vs Everyone Must Know 

 

July 07, 2014

Data quality as critical to 'big data' as it is to search

For years, we've been preaching in the wilderness about the importance of 'data quality' – the new name for 'garbage in, garbage out'. Maybe it gets so little respect because it was first cited on April Fools' Day (back in 1963, according to Wikipedia). Bad content has caused enterprise search owners headaches for years – heck, one of our most popular posts, Sixty Guys named Sarah, is really about a data quality problem.

Last Monday, Fortune Magazine posted an article called Big data's dirty problem, telling us all that:

"Inaccuracies, misspellings, and obsolete information makes achieving the big data utopia a slog for businesses and researchers"

For those of us who have worked in search for a while, it comes as no surprise. (It's also a reason why you need great search along with a big data distro to succeed).

So many companies approach enterprise search with what I call a 'fire and forget' mentality. Google on the public web makes it look so easy - how hard can it be?

At so many companies, we've seen this vicious cycle: Pick the search platform that looks best, install it, ignore it, and repeat in two to four years. No - really, ask yourself: how long have you used your current enterprise search platform?

Now ask yourself "How often do we review top queries, top misspellings, zero hit query reports?"  In my experience, there is an inverse correlation between longevity of search platform and money invested in monitoring and maintaining the search platform. In search, big data, and life, you get what you pay for.
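
If you've never run those reports, even a crude script over your search logs is a start. Here's a minimal sketch, assuming a hypothetical tab-separated log of query/hit-count pairs – adapt it to whatever your platform actually records:

```python
import csv
from collections import Counter

top_queries = Counter()
zero_hit_queries = Counter()

# Hypothetical log format: one line per search, "query<TAB>hit_count".
with open("query_log.tsv", newline="") as f:
    for query, hit_count in csv.reader(f, delimiter="\t"):
        query = query.strip().lower()
        top_queries[query] += 1
        if int(hit_count) == 0:
            zero_hit_queries[query] += 1

print("Top 10 queries:", top_queries.most_common(10))
print("Top 10 zero-hit queries:", zero_hit_queries.most_common(10))
```

Reviewing the zero-hit list even monthly is a cheap way to catch misspellings, missing synonyms, and content gaps.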

Looking to end the vicious circle of 'roll out, replace, repeat' with your enterprise OR big data apps? You might start with a data audit - we can help.

Miles Kehoe/July 6, 2014

May 14, 2013

Open Source Search Myth 5 - Total Cost of Ownership

This is part of a series addressing the misconception that open source search is too risky for companies to use. You can find the introduction to the series here; and Part 4, Features and Capabilities, here.

Part 5: Total Cost of Ownership

Total cost of ownership, or TCO, is a big deal to large users of search technology. Usually, the component of TCO people think of with respect to search is the license fee; enterprise search was historically an expensive proposition. But in fact there are other major components of TCO: implementation and operations, hardware cost, and ongoing support come to mind.

Walter Underwood, one of the key developers at Ultraseek and later the guy who did the Netflix relevancy contest, once explained the difference between commercial and open source search. Let me paraphrase: 

"With commercial search, you spend a lot of money to license it; then you spend a lot of money to implement it.

With open source search, you download the software for free; then you spend a lot of money implementing it."

But there is another big element: how much iron do you need? A few years ago we helped a company switch search platforms. Their business was search-enabling small-town newspaper archives going back to the 1890s, via OCR'd content. They add tens of thousands of documents – historical newspaper articles – every day.

The commercial platform they replaced required major expense in new servers as their content grew. Every year.

As it turns out, the ROI for swapping out their old search engine was easy: they needed less new hardware every year than with the old engine. And so much less that the ROI period was less than a year.

A different project we did, back when we were still doing business as New Idea Engineering, involved a comparison between Microsoft SharePoint 2010 search and Solr. Our customer wanted to know if the switch would, indeed, require fewer servers to do the job. It turned out that it was quite reasonable to replace the 12 servers Microsoft FAST required with 6 or fewer servers running Solr. Half the cost of servers; half the cost of energy; half the cost of maintenance. Like the concept?

Now, I'll agree that LucidWorks - my employer - markets a proprietary search platform based on Solr. And we do not license the product for free. But compared to most commercial platforms, LucidWorks Search is pretty darned reasonable. And you still get the cost savings in energy, iron, and scalability.

Less hardware. Better search. How is the TCO of open source a liability compared to most commercial search platforms?

 

 

March 20, 2013

Open Source Search Myth 3: Skills Required In-House

This is part of a series addressing the misconception that open source search is too risky for companies to use. You can find the introduction to the series here; this is Part 3 of the series; for Part 2 click Potentially Expensive Customizations.

Part 3: Skills Required In-House

One of the hallmarks of enterprise software in general is that it is complex. People in large organizations who manage instances of enterprise search are no less likely than their non-technical peers to believe that "if Google can make search so good on the internet, enterprise search must be trivial". Sadly, that is the killer myth of search.

Google on the internet – or Bing or Baidu or whichever site you use and love – is good because of the supporting technology, NOT simply because of search. I'd wager that most of what people like about Google et al has very little to do with search and a great deal to do with constant monitoring and tweaking of the platform.

Consider: at the Google 'command line' (the search box), you can type in an arithmetic expression such as "2+3" and get 5. You can enter a FedEx tracking number and get a suggestion to link to FedEx for information. It's cool that Google provides those capabilities and others; but those features are there because Google has programs looking at search behavior for all of its users every day in order to understand user intent. When something unusual comes up, humans get involved and make judgments. When it makes sense, Google implements another capability – in front of the search engine, not within it.
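
To illustrate that "in front of, not within" point – and this is just a toy sketch, not how Google actually does it – a front end can intercept recognizable query patterns before they ever reach the index:

```python
import operator
import re

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
ARITHMETIC = re.compile(r"^\s*(\d+(?:\.\d+)?)\s*([-+*/])\s*(\d+(?:\.\d+)?)\s*$")

def run_search(q):
    # Stand-in for the real engine call (Solr, FAST, whatever you run).
    return ["results for %r ..." % q]

def handle_query(q):
    # Intercept simple arithmetic in front of the engine, not within it.
    m = ARITHMETIC.match(q)
    if m:
        a, op, b = m.groups()
        return {"type": "calculator", "answer": OPS[op](float(a), float(b))}
    # Everything else falls through to the actual search engine.
    return {"type": "search", "results": run_search(q)}

print(handle_query("2+3"))            # calculator answer: 5.0
print(handle_query("solr faceting"))  # falls through to search
```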

Enterprise search is the same - except that very few companies invest money in managing and running their search; so no matter how well you tune it at the beginning, quality deteriorates over time. Enterprise search is not 'fire and forget'.

Any company that rolls out a mission critical application and does NOT have their own skilled team in house is going to pay a consulting firm thousands of dollars a day forever. 'Nuff said.

 

March 18, 2013

Solr 4 Training 3/27 in Northern Virginia/DC area

Interrupting my series on whether open source search is a good idea in the enterprise to tell you about an opportunity to attend LucidWorks' Solr Bootcamp in Reston, Virginia on Wednesday March 27. Lucid staff and Lucene/Solr committers Erick Erickson and Erik Hatcher will be there, along with Solr pro Joel Bernstein. Heck, I'll even be there!

The link is here; for readers of our blog, use discount code SOLR4VA-5OFF for a discount.

Course Outline:

  • What's new in Solr 4
  • Solr 4 Functional Overview
  • Solr Cloud Deep Dive
  • Solr 4 Expert Panel Case Studies
  • Workshop and Open lab

And ask the guys how you can get involved in Solr as a contributor or committer!

 

March 15, 2013

Open Source Search Myth 2: Potentially Expensive Customizations

This is part of a series addressing the misconception that open source search is too risky for companies to use. You can find the introduction to the series here; this is Part 2 of the series; for Part 3 click Skills Required In House.

Part 2: Potentially Expensive Customization

Which is more expensive: open source or proprietary search platforms?

Commercial enterprise search vendors often quote man-years of effort to create and deploy what, in many cases, should be relatively straightforward site search. Sure, there are tough issues: unusual security; the need to mark up content as part of indexing; multi-language issues; and vaguely defined user requirements.

Not to single them out, but Autonomy implementations were legendary for taking years. Granted, this was usually eDiscovery search, so the sponsor – often a Chief Risk Officer – had no worries about budget. Anything that would keep the CRO and his/her fellow executives out of jail was reasonable. But even easier tasks, such as search-enabling an intranet site, took more time and effort than they needed to because no one scoped out the work. This is one reason so many IDOL projects hired large numbers of IDOL contractors for such long projects.

FAST was also famous for lengthy engagements. 

FAST once quoted a company we later worked with a one-year, $500K project to assist in moving from ESP Version 4.x to ESP Version 5.x. These were two versions that were, for all practical purposes, the same: the same user interface, the same API, the same command line tools. Really? One year?

True story: I joked with one of the FAST sales guys that FAST wanted six months to roll out web search for even a small intranet; I thought two weeks was more like it. He put me on the spot a year later and challenged me to help one of his customers, and sure enough, we took almost a month to bring up search! But we had a constraint: the new FAST search had to be callable from the existing custom CMS, which had hard-coded calls to Verity K2 – the customer did not have time to re-write the CMS.

Thus, part of our SOW was to write a front-end that would accept search requests using the Verity K2 DLL, intercept the call, and perform the search in FAST ESP. Then, intercepting the K2 result list processing calls, it would deliver the FAST results to the CMS, which thought it was still talking to Verity. And we did it in less than 20% of the time FAST wanted just to index a generic HTML-based web site.
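
If you're curious, that shim was just the classic adapter pattern. Here's a minimal sketch of its shape, with entirely hypothetical stand-ins for the K2 and ESP interfaces – both real APIs were proprietary and looked nothing like this:

```python
# Hypothetical stand-ins for two proprietary APIs; only the shape matters.
class FastEspClient:
    def query(self, query_string):
        # The real version would call FAST ESP and return engine-native hits.
        return [{"title": "Router setup guide", "url": "http://example.com/doc1"}]

class K2CompatibleSearch:
    """Accepts K2-style calls from the CMS, answers them from FAST ESP."""

    def __init__(self, esp):
        self.esp = esp

    def k2_search(self, vql):
        # The CMS thinks it's talking to Verity K2.
        hits = self.esp.query(self.translate(vql))
        return self.to_k2_result_list(hits)

    def translate(self, vql):
        # Map Verity-style query operators to the target engine's syntax.
        return vql.replace("<AND>", " AND ").replace("<OR>", " OR ")

    def to_k2_result_list(self, hits):
        # Re-shape engine hits into the result structure the CMS expects.
        return [(hit["title"], hit["url"]) for hit in hits]

search = K2CompatibleSearch(FastEspClient())
print(search.k2_search("routers <AND> wireless"))
```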

On the other hand, at LucidWorks we frequently have 5-day engagements to set up Solr and LucidWorks Search, index the user's content, and integrate results into the end user application. I think other Solr and open source implementations are comparable for most engagements.

Let me ask: which was the more "expensive" implementation?

March 13, 2013

Open Source Search Myth 1 - Enhancements by Committee

This is part of a series addressing the misconception that open source search is too risky for companies to use. You can find the introduction to the series here; this is Part 1 of the series; for Part 2, click Potentially Expensive Customizations.

Part 1: Enhancements Subject to Committee

The original article back on LinkedIn called out as one 'flaw' of open source search the belief that updates and improvements are made only on a timetable selected by the community – presumably the committers.

One of the hallmarks of Apache open source projects is that, when you make a change or an enhancement to the code, you submit the changes back to the Apache project.

My employer, LucidWorks, enhances Solr, and we push the changes we make back for consideration by the entire Solr community. Many of these changes are accepted and become part of Solr – almost all driven by demand we see from our commercial customers, which helps everyone.

Occasionally, a customer has a specific need and asks us to develop capabilities that are not part of the standard release. Sometimes the enhancements are on the Apache project plan; sometimes they are unique. In any case, we create the enhancement and submit it for consideration in the standard Solr trunk. Once we've done so, anyone can download our enhancements and use them. And, as we do at LucidWorks, anyone can write enhancements for themselves and make them available.

Compare this to commercial search vendors, who update on their own (typically unpublished) schedule, and where no customer can add a feature. The vendor decides, and the consumer can only hope. And you pay upward of 20% of the list price every year in anticipation of the change you hope for but cannot add on your own.

And no matter what happens to Solr, our customers have the source code to self-support forever - no involuntary forced conversion. 

March 11, 2013

Open source search engine - a good idea?

My transition to LucidWorks has been a busy one with little time for other interests and hobbies (like flying!). But a week or so back I spotted a November 2012 post on the LinkedIn Enterprise Search Engine Professionals group asking the question in this post's title. 

Of course, LucidWorks provides deep support for Solr, and markets a Solr-based enterprise search product; but it's my work with enterprise search technology for the last 20+ years that really drives my response. Sadly, my reply was longer than LinkedIn allows, so I posted a shorter link there and have come back here to reply in full. It's going to take a few posts, though, so bear with me if you will.

First, my response to the poster: Eight years ago open source was cool, but it was probably not 'enterprise ready'. Enterprise search is hard, but years ago the Apache projects (Lucene and Solr) began working to solve the tough issues – ones that were not commercially worthwhile for the 8 to 10 major commercial enterprise search companies.

Then a funny thing happened: Solr got better and better; and the commercial vendors started merging. Verity got sucked into Autonomy, which got sucked into HP. FAST got sucked into Microsoft. Vivisimo got sucked into IBM. And with every acquisition, the time and money enterprises had invested in commercial search was wasted – when the platform you based your search on got acquired, you had to move to the new engine. A painful, expensive and long process.

As I blogged just a few weeks ago, open source search is now the default SAFE choice for enterprises that need search. You may have to do some coding, or find a skilled expert or team to help; but you own your destiny. Lucid (my company) sells support for Solr; there are other fine companies, large and small, that do so too. We're fortunate enough to employ a good number of the committers – not a majority, which is probably best for the community.

The original poster, an employee of a proprietary search vendor, may have had his reasons. Nonetheless, he listed five reasons he felt that open source search for enterprises was a bad idea – based on a three-year-old report by my friend Hadley Reynolds, taken a bit out of context. These 'disadvantages' are listed and linked below.

* Enhancements on community timetable only

* Potentially expensive customization

* Requirement for search development skills in-house or ready-to-hand

* Production functionality may trail in specific features relative to commercial search firms

* Maintenance/system life costs can become significant

In the next several posts, I'm going to address and refute these one at a time. Stay tuned.

 

February 14, 2013

A paradigm shift in enterprise search

I've been involved in enterprise search since before the 'earthquake World Series' between the Giants and the A's in 1989. While our former company became part of LucidWorks last December, we still keep abreast of the market. But being a LucidWorks employee has brought me to a new realization: commercial enterprise search is pretty much dead.

Think back a few years: FAST ESP, Autonomy IDOL (including the then-recently acquired Verity), Exalead, and Endeca were the market. Now, every one of those companies has become part of a larger business. Some of the FAST technology lives on, buried in SharePoint 2013; Autonomy has suffered as part of HP because – well, because HP isn't what it was when Bill and Dave ran it. Current management doesn't know what they have in IDOL, and the awful deal they cut was probably based on optimistic sales numbers that may or may not have existed. Exalead, the engine I hoped would take the place of FAST ESP in the search market, is now part of Dassault and is rarely heard of in search. And Endeca, the gem of a search platform optimized for the lucrative eCommerce market, has become one of three or four search-related companies in the Oracle stable.

Microsoft is finally taking advantage of the technology acquired in the FAST acquisition for SharePoint 2013, but as long as it's tied to SharePoint – even with the ability to index external content – it's not going to be an enterprise-wide deployment, or a 'big data' solution. SharePoint and Hadoop? Only as long as you bring SQL Server. Mahout? Pig? I don't think so. There are too many companies that want or need Linux on their servers rather than Windows.

Then there is Google, the ultimate closed-box solution. As long as you use the Google search button/icon, users are happy – at least at first. If you have sixty guys named Sarah? Maybe not.

So what do we have? A few good options generally from small companies that tend to focus on hosted eCommerce - SLI Systems and Dieselpoint; and there’s Coveo, a strong Windows platform offering.

Solr is the enterprise search market now. My employer, LucidWorks, was the first, and remains the primary, commercial driver of the open source Apache project. What's interesting is the number of commercial products based on Solr and its underlying platform, Lucene.

Years ago, commercial search software was the 'safe choice'. Now I think things have changed: open source search is the safe choice for companies where search is mission critical. Do you agree?

I'll be writing more about why I believe this to be the case over the coming weeks and months: stay tuned.

/s/Miles

 

December 18, 2012

Last call for submitting papers to ESS NY

This Friday, December 21, is the last day for submitting papers and workshops to ESS NY, coming May 21-22. See the information site at the Enterprise Search Summit Call for Speakers page.

If you work with enterprise search technologies (or supporting technologies), chances are the things you've learned would be valuable to other folks. If you have an in-depth topic, write it up as a 3-hour workshop; if you have a success story or lessons learned you can share, submit a talk for a 30-45 minute session.

I have to say, this conference has enjoyed a multi-year run in terms of quality of talks and excellent spring weather... see you in May?