Asking the question: GOOD; asking it over and over: BAD – where social engagement in the workplace fails.


Using social tools within the enterprise is a valuable thing. It lets people ask questions to a bigger audience than just those sitting within hearing distance of their desk.

I’ve discussed this in earlier posts (ESS (Enterprise Social Software) – user adoption, and Let’s share!). It’s incredibly valuable to be able to draw on the knowledge of others. That’s why it’s good to be able to ask questions. The answer given helps not just the asker, but can help others, and at the same time, others can add to the answer creating even more value.

Where I feel this all falls down, though, is that there is often no real way to capture the knowledge that came about from the questions asked.

A quote from 1958

Technology, so adept in solving problems of man and his environment, must be directed to solving a gargantuan problem of its own creation. A mass of technical information has been accumulated, and at a rate that has far outstripped the means for making it available to those working in science and engineering.

Charles P. Bourne & Douglas C. Engelbart, 
SRI Journal, Vol.2, No. 1, 1958


Search – it started earlier than you think.

A very brief history of search

In this post Martin White describes the history of search. It began earlier than you think…

Intranet Focus provides information management and intranet management consulting services. They also regularly publish a Research Note packed with great stuff.

In the November issue of their Research Note, there is an interesting piece on the history of Search. Martin White, the Managing Director, has granted me permission to publish it here (see below).

By the way – Martin has recently published a book –
Enterprise Search: Enhancing Business Performance.

It’s certainly on my Christmas list this year…

A very brief history of search

Search came into prominence with the advent of web search services in the 1990s, notably Alta Vista, Google, Microsoft and Yahoo. However, the history of search technology goes back much further than this. Arguably the story starts with Douglas Engelbart, a remarkable electrical engineer whose main claim to fame is that he invented the mouse that is now a standard control device for personal computers. In 1959 Engelbart started up the Augmented Human Intellect program at the Stanford Research Institute in Menlo Park, California. One of his research students was Charles Bourne, who worked on whether it would be possible to transform the batch search retrieval technology developed in the 1950s into a service based on a large mainframe computer which users could connect to over a network.

By 1963 SRI was able to demonstrate the first ‘online’ information retrieval service using a cathode ray tube (CRT) device to interact with the computer. It is worth remembering that the computers being used for this service had 64K of core memory. Even at this early stage of development the facility to cope with spelling variants was implemented in the software.  Other pioneers included System Development Corporation, Massachusetts Institute of Technology and Lockheed. The main focus of these online systems was to provide researchers with access to large files of abstracts of scientific literature to support research into space technology and other large scale scientific and engineering projects.

These services were only able to search short text documents, such as abstracts of scientific papers. In the late 1960s two new areas of opportunity arose which prompted work into how to search the full text of documents. One was to support the work of lawyers who needed to search through case reports to find precedents. The second was also connected to the legal profession, and arose from the US Department of Justice deciding to break up what it regarded as monopolies in the computer industry (targeting IBM) and later the telecommunications industry, where AT&T was the target. These actions led IBM in particular to make a massive investment into full-text search which by 1969 led to the development of STAIRS (Storage and Information Retrieval System) which was subsequently released in 1973 as a commercial IBM application. This was the first enterprise search application and remained in the IBM product catalogue until the mid-1990s.

One of the core approaches to information retrieval is the use of the vector space model for computing relevance developed by Professor Gerald Salton of Cornell University over a period of two decades starting in 1963.  The vector space model procedure uses a cosine vector coefficient to compare the similarity of the content of the document to the query terms. This is the basis for most of the enterprise search applications with the notable exceptions of Recommind (which uses Probabilistic Latent Semantic Indexing) and Autonomy.
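To make the vector space idea concrete, here is a toy Python sketch of cosine-based relevance scoring. It uses raw term frequencies only; real engines such as Salton's SMART system added term weighting (e.g. tf-idf), so treat this as an illustration of the cosine coefficient, not a faithful implementation.

```python
import math
from collections import Counter

def cosine_similarity(query: str, document: str) -> float:
    """Cosine of the angle between raw term-frequency vectors."""
    q = Counter(query.lower().split())
    d = Counter(document.lower().split())
    # Dot product over the terms the two vectors share.
    dot = sum(q[t] * d[t] for t in q.keys() & d.keys())
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

docs = [
    "enterprise search for large document collections",
    "a history of the personal computer mouse",
]
# Rank documents by similarity to the query terms.
ranked = sorted(docs, key=lambda d: cosine_similarity("enterprise search", d),
                reverse=True)
```

Documents sharing more of the query's terms (relative to their length) score closer to 1.0, which is what "relevance" means in this model.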

In 1984 Dr. Michael Porter, at the University of Cambridge, wrote Muscat for the Cambridge University MUSeum CATaloguing project. Over the ensuing decade this software was arguably the first to use probability theory in natural language querying, focusing on the relative value of a word – either in the search expression, or in the document being indexed. Identifying links and correlations between significant words that co-exist together across the whole document collection creates a probabilistic model of concepts. Using a probabilistic approach to determining relevance dates back to research undertaken at the RAND Corporation in the late 1950s and by the late 1980s there was a substantial amount of research into the use of Bayesian probability models for information retrieval.

The history of Autonomy dates back to the formation in 1991 of Cambridge Neurodynamics by Dr. Mike Lynch. Cambridge Neurodynamics used neural network and pattern recognition approaches to fingerprint recognition. In 1996 Dr. Lynch founded Autonomy together with Richard Gaunt with $15 million in funding from investors including Apax Venture Capital, Durlacher and the English National Investment Company (ENIC). The novel step was not just the use of Bayesian statistics but the combination of these statistical approaches with non-linear adaptive signal processing (used by Cambridge Neurodynamics for analysing fingerprint images) of text. For that time the level of investment in a company with no commercial track record was quite remarkable. In 1998 the company was floated on EASDAQ which capitalised the company at around $150 million, and its shares rose quickly from $15 in October 1999 to $120 in March 2000. This valued the company at over $5 billion.

The company was floated on the London Stock Exchange in 2000, and became the only publicly-quoted search company in the world. This was important for procurements in both the corporate and public sector given that all other search companies remain privately held and do not disclose earnings and profits other than under a non-disclosure agreement with a prospective customer.

Latent Semantic Indexing dates from the late 1980s and Probabilistic Latent Semantic Indexing from the late 1990s and among other features provide solutions to the issues raised by different words having the same meaning and the same word having different meanings.
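As a rough illustration of what LSI does, here is a small NumPy sketch in which two terms that never appear in the same document still come out as related because they share context. The term-document matrix, the terms, and the rank k are invented for the example; real LSI systems work on matrices with thousands of dimensions.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# "car" and "automobile" never appear in the same document,
# but both co-occur with "engine", so the latent space relates them.
terms = ["car", "automobile", "engine", "flower"]
A = np.array([
    [1, 0, 1, 0],   # car
    [0, 1, 1, 0],   # automobile
    [1, 1, 2, 0],   # engine
    [0, 0, 0, 2],   # flower
], dtype=float)

# Truncated SVD: keep only the k strongest latent "concepts".
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
terms_latent = U[:, :k] * s[:k]   # term vectors in concept space

def term_sim(a: str, b: str) -> float:
    """Cosine similarity between two terms in the latent concept space."""
    va = terms_latent[terms.index(a)]
    vb = terms_latent[terms.index(b)]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Here `term_sim("car", "automobile")` comes out near 1.0 (synonymy captured via shared context), while `term_sim("car", "flower")` stays near zero, which is the effect described above for words with the same meaning.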

A big thanks to Martin for this information, and for bringing to my attention the names of Gerald Salton and Douglas Engelbart. I recommend that you click on the links below and read more about the fascinating work that these two have done.

I also highly recommend that you check out Intranet Focus’s site, and read some of the great stuff there.

Recommended Reading
  • Gerald Salton (Wikipedia)
  • Douglas Engelbart Institute (website)
  • Intranet Focus (website)
  • Martin White (Google hits)
  • Enterprise Search: Enhancing Business Performance.

Search in Real-life

Discovered this gem of a video…

I want Google Search (again)

I have come across this sentiment often – that is, users want “Google Search”. (See my earlier post “We want Google“.)

TSG’s blog post really captures some great ways of handling this…


  • 43% of Total Google Search Queries Are Local

Promise #14 – Beta Testing SLIKK

Refer: 14 Unfulfilled Promises


In my post “Beta Testing SLIKK” I mentioned how I had applied for an “invitation” to Beta test the SLIKK engine.


Promise Fulfilled

See my recently finished post here:


Beta Testing SLIKK – My feedback

In my earlier post “Beta testing SLIKK” I described how I requested an invitation to Beta test SLIKK – a site that was offering a new way of searching.

Well after about a week, I got my invitation, and sat down to give SLIKK a test drive.

Here are my findings…

SLIKK Search Application

The SLIKK Search application is a Search Interface that aims to provide the “new” way of searching.

SLIKK Features

On the surface, SLIKK looks like a great tool. Its features include:


SLIKK can be configured to return search results from Google, Yahoo/Bing, or SLIKK’s own index. Google results are selected by default.


SLIKK provides search results based on source material:

  • Web
  • Images
  • News
  • Video
  • Blogs
  • Twitter


With Multi-view, a split screen can be displayed to show you two different groups of results. (For example – “Web” search results on the left, and “Video” search results on the right.)


SLIKK offers the ability to open the source page that a search result points to in a small “child” window. This is not a preview, but the actual page. Further to that, you can open multiple “source pages” and have these open either in a series of tabs or “tiled”. Then you have the choice of changing it to full screen, etc.


You can choose from a selection of sites (Google Maps, Twitter, etc.), or you can enter your own, so that these appear at the top of the SLIKK page.

What I thought of SLIKK

At first glance SLIKK appears to be a great application.

However, when I looked closer at each feature, I started to think “ok…but what is the real advantage that is offered here?”

Search Engine – You can select the search engine that you want the search results from. Really? I can easily do the same by going to the Google site and executing a search there, or by going to Bing and searching there.

Content Type – This is nothing that the “legacy” search engines didn’t already offer. However – to be able to get Twitter results was definitely something I was happy with.

Multi-View – Initially I thought that this was pretty cool. But, to be honest, there wasn’t really that much advantage to this feature. The only value I saw was if you wanted to see, side-by-side, search results for something while viewing what was being tweeted about it at the same time. But then…how often do you want to do that?

Open Search Results – Note – this is not a “preview” feature similar to what Google offers. It is a “child window” with the source site in it. In these times of tabbed browsers, I was struggling to find a real advantage to this.

My Links – When I first clicked on this (and saw the screen displayed above), I thought that it would offer real value. But all it does is display the name of the site in the top of the screen which, when clicked on, will open the site in a new tab, or window. In short – bookmarks/favorites.

I found that SLIKK was not actually that slick. I certainly applaud the owners of SLIKK for what they are doing, but I feel that the big Search Engines are already able to offer so much more.

Beta Community

SLIKK has a Beta program in place, with a forum and a blog (as well as a Facebook page, etc.). They seem quite receptive to input from users and appear to be trying hard to create something that people want.

I wish them the best of luck.

  • Beta Testing SLIKK
  • Search the way you like it, with Slikk
  • New Search Engine Launches at DEMO Spring 2012
  • Search and browse simultaneously with Slikk

Running a perfect Enterprise Search project

Search experts discussing an enterprise search project

How should a perfect Enterprise Search project be run?

In this post, based on a LinkedIn discussion, I describe a meeting of some of the key players in Search who get into a great discussion about how to run an Enterprise Search project…


  • Charlie Hull
  • Martin White
    (Author of Enterprise Search: Enhancing Business Performance)
  • Ken Stolz
  • Otis Gospodnetić
  • Jan Høydahl
  • Stephanus van Schalkwyk
  • Helge Legernes
  • Gaston Gonzalez
  • Mike Green

A conversation about Running a perfect Enterprise Search project

It was Friday evening, and Charlie was meeting his friends for a drink. They all worked in IT and had, between them, years of experience, especially in enterprises and enterprise search, and liked to get together to catch up with what each was doing.

After a few pints and small talk, Charlie said “Guys, what do you all reckon would be the best way to build a large-scale enterprise search project?”

Martin, who had a lot of experience in this area, looked up and said, “The main thing is that you should never underestimate what is required to get the best from a search investment.”

Charlie nodded in agreement. “But how can we help the client understand what sort of a commitment is needed?”

Ken suggested using an Agile/Scrum approach for the analysis of what the client needed as well as the development of the search UI.

“Hear hear” called out the others. Otis took the chance to follow that up with “you need someone who really understands what search is all about”. Martin glanced at him and nodded. Otis carried on. “Someone who cares about search metrics, and knows what changes need to be made to improve them.”

Jan chimed in, “I agree with you on some points. You’ve got to make sure that you include all the stakeholders, and also educate the customer. Get everyone in the same room, and start with a big picture, narrowing it down to what is actually required. And, yes, create demos of the search system using real data. It helps the customer understand the solution better. However,” he continued, “I’m still careful about forcing a Scrum approach on a customer that might be unfamiliar with it.”

Stephanus put down his glass. “I’ve just finished a Phase I implementation at a client. The critical thing is to make sure that you set the client’s expectations and get buy-in from their technical people. Especially in security and surfacing. And I agree with Jan. There are still a lot of companies that don’t use Agile, or Scrum, at the moment.”

Sitting next to Stephanus was Helge. He began to speak. “There are a few important things. Make sure you’ve got Ambassadors – people who really care about and promote the project. And ask the important question – ‘How can the search solution support the business so that they can become more competitive?’ It might be necessary to tackle this department by department. Get the business users and content owners together, but as Stephanus just said, don’t forget IT. And make sure that the governance of the system is considered.”

Stephanus smiled. “Yes – the workshop idea is a definite must.”

Gaston, who was sitting next to Charlie, said “An Agile approach has worked for me in the past. Creating prototypes is important. Most clients don’t know what they want until they see something tangible.” “Ok,” said Charlie, “how has that worked?”

Gaston continued, “Build a small team consisting of a UI designer, a developer, a search engineer, someone from the IA team, and no more than two of the business users. Having someone there from QA is also handy. Start with a couple of day-long workshops to go over project objectives, scoping and requirements gathering. Use one-week sprints, and then aim to produce workable prototypes. At the end of the week, schedule a time where the prototype can be demoed. The point is to get feedback about what is working, and what the goal for the next sprint should be.”

Mike, the last one in the group, looked around at everyone, and then back at Charlie, and said. “Charlie – there’s a lot of great advice here. One important thing to remember is that you have to work with the client to ensure that the search solution is part of the strategy. As the others have already mentioned, work with the client and educate them. Getting all the stakeholders together for some common education, collaboration and planning can really go a long way towards getting the necessary buy-in and commitment needed for a successful project. It also is great for setting expectations and making sure everyone is on the same page.”

Charlie was impressed. He had some pretty smart friends. “Thanks guys. You’ve all had some excellent points. Let me buy you all another round”.

Key Takeaways – Running an Enterprise Search project

  • Don’t underestimate what is required to get the best from a search investment.
  • Lead the users through the process gently. Use demonstrations and an Agile approach when trying to understand what their real user requirements are. Do the same for the development of the search UI.
  • Have at least one person who really understands search, and search metrics.
  • Ensure that you have buy-in from the departments involved, and especially IT.
  • Produce workable prototypes – these help the users understand what they are getting.
  • Ensure that everyone involved is on the same journey – include educating the users.


Martin White’s book

Martin White (who was involved in this discussion) has written a book – Enterprise Search: Enhancing Business Performance. You can check it out on Amazon here.


Interesting Resources

  • Why All Search Projects Fail by Martin White (CMS Wire)
  • Designing the Search Experience: The Information Architecture of Discovery by Tony Russell-Rose
  • How to Evaluate Enterprise Search Options by James A Martin
  • Developing an enterprise search strategy by Martin White (Intranet Focus)

Disclosure – some of these links are affiliate links.

Why giving the users what they want is not enough – the Importance of communication

What follows is a post that I published on AIIM’s site as an “Expert Blogger”. (The original can be read here)


Why giving the users what they want is not enough – the Importance of communication

As you are all most likely aware, giving the users what they want is not always the right thing. Why? Because, often, the users don’t really know what they want.

Consider the following example:

A large restaurant chain has restaurants across the globe. Each restaurant needs to maintain documentation such as construction plans, recipes, procedures, methodologies, etc. The “critical” documents are kept in a legacy ECM system, and several SharePoint doclibs store the non-critical documents. These systems are located centrally, and are all globally accessible.

The business users work primarily with the legacy ECM system, but often also need to work with the documents in SharePoint. When a document was needed, a search was either done in SharePoint, or in the legacy system, using its rather complicated search feature.

Performing searches in two different places wasn’t easy, or efficient. And so, the users cried out, “Give us one central place where we can perform a search.” When asked for more details, the business users replied, “Make it like Google”.

The restaurant’s IT-people (who might have been a little too enthusiastic) swung into action, without any more questions. They found a tool that would allow SharePoint to “talk” with the legacy ECM system and crawl all the documents, indexing everything it could.

After working many weeks getting things set up, and configured, the IT-people sat and watched as SharePoint crawled through the content. Once finished, initial tests were done to ensure that a search action would actually return content. It was working perfectly. And it was “just like Google”.

A demonstration of the Search system was given to the users, who were ecstatic. They were able to easily enter search terms, and get results from the SharePoint doclibs as well as the legacy system’s repositories. It was fantastic. It was easy to use, and there was no extensive training required. There was much cheering and showering the IT-people with small gifts. After further testing, the search facility was officially moved into production.

For the first couple of months the users were keen to use the “enterprise search facility”. But then, gradually, complaints started to be heard: “The search results contain too many hits”, “Why isn’t it more like the search feature in the legacy system?”, or “The search results just show the title of the document.” Users went back to using the legacy system’s search feature for the “important” documents, and the SharePoint search was just used for the documents in the document libraries. In short, the “central” search facility was a failure.

What had gone wrong here? The business users wanted a single search facility, and they wanted it “like Google”. And that’s what the IT department had delivered – there was a single box where users could type in the words they wanted to find. And the search would return documents from all the different document repositories.

In this case, however, the users didn’t really know what they wanted. Yes, they wanted “easy”, but they also wanted something that allowed granular searches to be done (just like their “old” search tool). They also wanted to know where the search results came from. And they wanted the “important” documents to appear at the top of the search results.

The IT team should have asked more, and then they should have listened more. And then they should have repeated this process until it was understood what the business really needed. The team had followed a Waterfall approach, where requirements were gathered up front and then not allowed to change. Agile techniques could have been used instead, where a “finished” product is shown to the users several times during the project. The users could give feedback, leading to a better understanding of what they want, as well as the ability to refine the solution.

Fortunately, the IT team had the opportunity to improve the search system. They added a small button to the search results screen where users could provide immediate feedback. Working with this, as well as sending out regular “satisfaction” questionnaires, the IT team was able to identify areas of improvement. These included not only changes required to the user interface and results screen, but also places where further refinement was needed in the indexing process. Every four months, the improvements were presented to the business, and then implemented.

Now, the business users don’t use anything else.

Enterprise Search – 5 important factors to consider

Is True Enterprise Search actually possible?

The idea of “Enterprise Search” is an attractive one. It would certainly be worth its weight in gold to have a single search location where keywords can be entered and, within seconds, results displayed that include both structured and unstructured content from across the numerous repositories, silos, systems, archives, file shares, cabinets, clouds, etc.

Is true Enterprise Search possible?

But is true Enterprise Search really possible? I know there are several tools that provide “Enterprise Search” functionality, but these usually allow you to search over a fixed number of different repositories, usually containing similar data. Maybe it’s a set of defined documents, or a database, or similar. You certainly get the opportunity to make available content from disparate sources, but can you consider that “enterprise”?

If you consider what’s involved in running a search across the “Enterprise”, it should be quite easy, right?

What to think about when considering Enterprise Search

There are several factors that you should keep in mind when considering Enterprise Search…

Where is your data and content?

First off, you need to be able to identify where your structured, and unstructured, data and content is. Remember, here we are dealing with the complete enterprise, so don’t forget that this includes file shares, hard drives, database systems, ERP systems, ECM systems, etc. And what happens if new “sources” are added?

What sort of Content have you got?

Next, you need to know what sort of content you have. Can the Enterprise Search application “read”, or parse, the data and content you have? There certainly are ways to make this possible. You can install an IFilter, for example. But you’ll need one for every format that you have in your enterprise.
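Conceptually, this is a dispatch table from format to parser. The sketch below is hypothetical (the parser functions are stand-ins; real systems plug in native filter components such as IFilters, one per format), but it shows why every unregistered format is a hole in the index:

```python
from pathlib import Path

# Hypothetical per-format text extractors; a real deployment would call a
# filter library (or an IFilter plug-in) for each format it must index.
def parse_txt(data: bytes) -> str:
    return data.decode("utf-8", errors="replace")

def parse_csv(data: bytes) -> str:
    # Flatten cells into plain text suitable for indexing.
    return " ".join(data.decode("utf-8", errors="replace").replace(",", " ").split())

PARSERS = {
    ".txt": parse_txt,
    ".csv": parse_csv,
    # ".pdf": ..., ".docx": ...  -- one entry per enterprise format
}

def extract_text(path: str, data: bytes) -> str:
    """Route a document to the parser registered for its format."""
    ext = Path(path).suffix.lower()
    parser = PARSERS.get(ext)
    if parser is None:
        raise ValueError(f"no parser registered for {ext!r}; cannot index")
    return parser(data)
```

Any format missing from the table simply cannot be searched, which is the point made above.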

Can you connect to all the sources?

You need a way that your Search application can connect to all of the different “sources.” In principle, this is, again, possible. (However, I would imagine that this would require a lot of configuration).

How often is that content changing?

How frequently is your data, and content, changing? For example, in an ECM system, is the content constantly changing as new documents are added? Maybe several major and minor versions are kept of each document. Do you need to index all versions, or only the latest? What about data in your ERP system? How accurate do you want your search results to be? Do you just keep continuously indexing?
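A common compromise is incremental indexing: on each pass, re-index only what changed since the last crawl. The sketch below is illustrative only (the repository shape, fields and timestamps are invented), but it shows the trade-offs: the `latest_only` switch decides whether older versions stay searchable, and anything modified between crawls is stale until the next pass.

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    doc_id: str
    version: int
    modified: float  # e.g. a Unix timestamp from the source system
    text: str

@dataclass
class IncrementalIndexer:
    """Re-index only documents changed since the previous crawl."""
    last_crawl: float = 0.0
    index: dict = field(default_factory=dict)

    def crawl(self, repository: list, now: float, latest_only: bool = True) -> int:
        updated = 0
        for doc in repository:
            if doc.modified <= self.last_crawl:
                continue  # unchanged since the last pass, so skip it
            # Keying by doc_id alone keeps only the latest version searchable;
            # keying by (doc_id, version) would index every version.
            key = doc.doc_id if latest_only else (doc.doc_id, doc.version)
            self.index[key] = doc.text
            updated += 1
        self.last_crawl = now
        return updated
```

On the first pass everything is indexed; later passes touch only new or modified documents, which is what keeps continuous indexing affordable.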

What security is already on the content?

Do you want users to be able to see results for data, or content, that they would not have rights to in the native application? If there are disparate security systems in place, how do you translate ACLs from them into a common format? Do you use “early binding”, or “late binding”?

It’s not that simple

As you can see, it’s not that simple. The above factors need to be thought about when considering Enterprise Search.

Until we have a way to be able to “capture” all information from an undefined number of sources, with an undefined number of data, and file, formats, with disparate sets of ACLs, I return to my opening question: “Is True Enterprise Search actually possible?”

What are your thoughts on this?

This post was the first post I published on AIIM’s site as an “Expert Blogger”. It has been slightly modified. (The original can be read here.)

Related articles
  • Trends and Challenges of Enterprise Search Discussed in Online Presentation
  • Huge problems for search in the enterprise
  • How should a “Perfect” Search project be run?