Types of Information Search – Exploratory Search and Focalized Search

There are different types of information search. This post looks at two of them and the factors surrounding each.


How People Search

For a lot of people, performing a search involves keying in some words or a phrase and then hoping that what they are looking for will be returned.

If the person is lucky, this works. Often it doesn’t.


Search Approaches 

When it comes to Search, there are different search approaches.

To name a few:

  • web,
  • e-commerce,
  • enterprise,
  • desktop,
  • mobile,
  • social,
  • real-time,
  • discovery, and
  • information governance.


Search Goals

As well as the various search approaches, users have different search goals. Each of the following serves a different purpose:

  • search – “Show me what you’ve got”
  • relevance ranking – “Show me the most relevant results”
  • relevance feedback – “Show me what’s popular”
  • user interaction – “I want to search for something in a way that is most appropriate to what I’m doing”
  • result navigation – “I want to be able to navigate through the search results”
  • document viewing – “Show me the document that you have listed”

Each works in a different way, depending on the purpose of the search.


 

Different Types of Information Search

There are, essentially, two types of information search:

  1. Focalized Search
  2. Exploratory Search

 

Focalized search


Focalized search is one of the two types of information search.

With focalized search, the user knows exactly what they are looking for. They know where to find it. And they are, generally, only interested in the best document or website.

Web search engines are typical examples of focalized search. The approach is best suited to web portals, personal search, mobile search and social search.

These engines return the best results (and not all possibly relevant results). As such, they don’t have advanced navigation. And relevance feedback is based on what’s popular.

With these search engines you can’t use wildcards, or do fuzzy searches. The “find more” or “find similar” search techniques might be present, but often become very slow on larger collections of information.
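
To make this concrete, here is a minimal sketch (in Python, with an invented document list and a deliberately naive term-overlap score – not how any real engine ranks) of the “best result only” behaviour of a focalized engine:

    # Minimal sketch of "focalized" behaviour: rank a small document set
    # against a query and hand back only the single best match, rather than
    # every document that might be relevant. The term-overlap score below is
    # purely illustrative; real web engines rank far more elaborately.

    def score(query_terms, document):
        """Count how many query terms appear in the document text."""
        words = set(document.lower().split())
        return sum(1 for term in query_terms if term in words)

    def focalized_search(query, documents):
        """Return the single highest-scoring document (the 'best' result)."""
        query_terms = query.lower().split()
        return max(documents, key=lambda doc: score(query_terms, doc))

    documents = [
        "Annual report 2015 for the appliance division",
        "Store locations throughout the country",
        "Wellington branch contact details and opening hours",
    ]

    print(focalized_search("wellington opening hours", documents))
    # -> Wellington branch contact details and opening hours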

Exploratory search

Exploratory search is the other type of information search.

It covers situations where searchers:

  1. need to learn about the topic in order to understand how to achieve their goal,
  2. don’t know how to achieve their goal (either the technology or the process), or
  3. don’t even know what their goal is in the first place.

Examples of this include:

  • discovery,
  • compliance,
  • investigative,
  • intelligence and
  • information governance search applications.

Users generally combine querying and browsing strategies to foster learning and investigation.

For exploratory search to be used effectively, things such as content analytics, text-mining technology, and advanced result navigation and visualization come into play. As do document-based relevance feedback, taxonomy support and extensive metadata management.

Exploratory search makes use of faceted search. With faceted search, users are able to explore search results further by “drilling down”, or filtering, on the results that are available.

This ability to filter, or drill down, makes use of the metadata of each item that is listed. (Metadata is “extra information” that each item carries – for example, the “author”, the “publisher”, or the “department” that the information belongs to.)
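
As a rough illustration, the sketch below (plain Python, with made-up items and facet fields such as “author” and “department”) shows the two operations faceted search rests on: counting how many results fall under each metadata value, and filtering the result set when the user drills down on one of those values:

    # Rough sketch of faceted drill-down over metadata. The items and the
    # facet fields ("author", "department") are invented for illustration;
    # a real engine computes facets from its index, not from Python lists.
    from collections import Counter

    results = [
        {"title": "Leave policy",         "author": "HR Team", "department": "HR"},
        {"title": "Expense claims",       "author": "Finance", "department": "Finance"},
        {"title": "Onboarding checklist", "author": "HR Team", "department": "HR"},
        {"title": "Travel policy",        "author": "Finance", "department": "Finance"},
    ]

    def facet_counts(items, field):
        """How many results fall under each value of a metadata field."""
        return Counter(item[field] for item in items)

    def drill_down(items, field, value):
        """Filter the current result set down to a single facet value."""
        return [item for item in items if item[field] == value]

    print(facet_counts(results, "department"))   # Counter({'HR': 2, 'Finance': 2})
    filtered = drill_down(results, "department", "HR")
    print([item["title"] for item in filtered])  # ['Leave policy', 'Onboarding checklist']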


 

Were you paying attention?

Just to see if you were paying attention, see if you can match up the search approaches with the search goals and the type of search that would be best suited.

Click here to download the PDF.


 

Related Post

How Search Engines Work 2016


 

How search engines work in 2016 is vastly different from how they worked in 2004.

Search has come a long way since the days of PageRank and 10 blue links (see how Google used to work).

With an ever-expanding amount of inputs, variables, and penalties, this infographic (by SEO Book) looks at how search works today and what that means for both users and webmasters.

 


 

How Search Engines Work in 2016

Internet Marketing Graphics by SEOBook

 



Related Post

Asking the question: GOOD; asking it over and over: BAD – where social engagement in the workplace fails.


Using social tools within the enterprise is a valuable thing. It lets people ask questions to a bigger audience than just those sitting within hearing distance of their desk.

I’ve discussed this in earlier posts (ESS (Enterprise Social Software) – user adoption, and Let’s share!). It’s incredibly valuable to be able to draw on the knowledge of others. That’s why it’s good to be able to ask questions. The answer given helps not just the asker, but can help others, and at the same time, others can add to the answer creating even more value.

Where I feel this all falls down, though, is that there is often no real way to capture the knowledge that came out of the questions asked. Continue reading

Related Post

"We use Google…to find out about our own company"

Using 3rd-party tools to find what I want

You wouldn’t believe the number of times I have heard people say that when they want to find out about their own company, they use Google.

Case in point – I was at a well-known appliance store the other day, one that has branches throughout the country. I asked the girl at the checkout whether there was a store in one particular city. While she looked furtively at her screen, I took a peek over her shoulder. It was the company’s intranet. I advised her to open up a new tab in her browser, go to Google, and type in the name of the store plus the word “branches”. She obediently followed my instructions, and two minutes later she was able to give me an answer.

I won’t talk about the magic that Google performs to bring you the information that you want. I do want to talk, however, about why people are going to an outside facility rather than using the company’s own resource: findability and usability.

Findability does not just mean being able to search for something and getting results. It also means that the information on the intranet is structured in a logical way that allows people to navigate to information quickly. Often, little thought has gone into the way information should be presented:

  • What information do the users (in this case all staff, ranging from back-office workers to those at the client interface) need access to?
    Analytics will show you what is being accessed the most. Well-thought-out surveys can return valuable information. Even talking to staff members individually, or in groups, can add a lot of value.
  • How can the navigation structure be set up so that it is intuitive?
    Use the feedback you got. Perform a card sort to help build up an understanding of how the staff want information grouped. Put together a “mock navigation”, using a suitable tool such as Optimal’s Treejack, and see how easy it is for users to find what they are looking for.
  • What other ways are there that the information can be accessed quickly? Shortcuts, quick links, FAQs.
    Create a screen mock-up, and test how easy it is for staff to find the information. Use a tool that allows this to be simulated online, and set up real-life scenarios involving staff members with different functions to determine whether improvements can be made.
  • Pay attention to the questions that are often asked by staff.
    These will usually turn up the questions that get asked repeatedly: “How is xyz done?”, “Where do I find information on our widgets?”. These questions form the basis for the FAQs or a wiki.

Recommended Reading

  • What’s the Best Way to Train New Intranet Users?
  • A short history of intranets and what’s next with social, mobile and cloud
  • 5 Critical Aspects Of Your User Experience #UserExperience
  • Social Intranets, the Lemming Curve and ‘Down With People’
  • Using The Sharepoint Intranet Portal
  • 5 Views on Intranet Trends for 2014

 

Related Post

A quote from 1958

Technology, so adept in solving problems of man and his environment, must be directed to solving a gargantuan problem of its own creation. A mass of technical information has been accumulated and at a rate that has far outstripped means for making it available to those working in science and engineering.

FACETS OF THE TECHNICAL INFORMATION PROBLEM
Charles P. Bourne & Douglas C. Engelbart,
SRI Journal, Vol. 2, No. 1, 1958

 

Related Post

Search – it started earlier than you think.

A very brief history of search

In this post Martin White describes the history of search. It began earlier than you think…

Intranet Focus provides information management and intranet management consulting services. They also regularly publish a Research Note packed with great stuff.

In the November issue of their Research Note, there is an interesting piece on the history of Search. Martin White, the Managing Director, has granted me permission to publish it here (see below).

By the way – Martin has recently published a book –
Enterprise Search: Enhancing Business Performance.



It’s certainly on my Christmas list this year…

A very brief history of search

Search came into prominence with the advent of the web search services in the 1990s, notably Alta Vista, Google, Microsoft and Yahoo. However the history of search technology goes back much further than this. Arguably the story starts with Douglas Engelbart, a remarkable electrical engineer whose main claim to fame is that he invented the mouse that is now a standard control device for personal computers. In 1959 Engelbart started up the Augmented Human Intellect program at the Stanford Research Institute in Menlo Park, California. One of his research students was Charles Bourne, who worked on whether it would be possible to transform the batch search retrieval technology developed in the 1950s into a service based on a large mainframe computer which users could connect to over a network.

By 1963 SRI was able to demonstrate the first ‘online’ information retrieval service using a cathode ray tube (CRT) device to interact with the computer. It is worth remembering that the computers being used for this service had 64K of core memory. Even at this early stage of development the facility to cope with spelling variants was implemented in the software.  Other pioneers included System Development Corporation, Massachusetts Institute of Technology and Lockheed. The main focus of these online systems was to provide researchers with access to large files of abstracts of scientific literature to support research into space technology and other large scale scientific and engineering projects.

These services were only able to search short text documents, such as abstracts of scientific papers. In the late 1960s two new areas of opportunity arose which prompted work into how to search the full text of documents. One was to support the work of lawyers who needed to search through case reports to find precedents. The second was also connected to the legal profession, and arose from the US Department of Justice deciding to break up what it regarded as monopolies in the computer industry (targeting IBM) and later the telecommunications industry, where AT&T was the target. These actions led IBM in particular to make a massive investment into full-text search which by 1969 led to the development of STAIRS (Storage and Information Retrieval System) which was subsequently released in 1973 as a commercial IBM application. This was the first enterprise search application and remained in the IBM product catalogue until the mid-1990s.

One of the core approaches to information retrieval is the use of the vector space model for computing relevance, developed by Professor Gerard Salton of Cornell University over a period of two decades starting in 1963. The vector space model procedure uses a cosine vector coefficient to compare the similarity of the content of the document to the query terms. This is the basis for most of the enterprise search applications, with the notable exceptions of Recommind (which uses Probabilistic Latent Semantic Indexing) and Autonomy.
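
For readers who haven’t met the vector space model before, here is a bare-bones sketch (in Python) of the cosine comparison it relies on. The documents are invented, and raw term counts stand in for the weighted terms (e.g. tf-idf) a real system would use:

    # Bare-bones illustration of the vector space model: the query and each
    # document become term-count vectors over a shared vocabulary, and
    # relevance is the cosine of the angle between them. Real systems apply
    # term weighting (e.g. tf-idf) rather than raw counts.
    import math
    from collections import Counter

    def to_vector(text, vocabulary):
        counts = Counter(text.lower().split())
        return [counts[term] for term in vocabulary]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    documents = [
        "information retrieval on mainframe computers",
        "fingerprint recognition with neural networks",
    ]
    query = "information retrieval"

    vocabulary = sorted({w for text in documents + [query] for w in text.lower().split()})
    query_vec = to_vector(query, vocabulary)

    for doc in documents:
        print(round(cosine(query_vec, to_vector(doc, vocabulary)), 3), doc)
    # The first document scores ~0.63, the second 0.0, so the first ranks higher.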

In 1984 Dr. Michael Porter, at the University of Cambridge, wrote Muscat for the Cambridge University MUSeum CATaloguing project. Over the ensuing decade this software was arguably the first to use probability theory in natural language querying, focusing on the relative value of a word – either in the search expression, or in the document being indexed. Identifying links and correlations between significant words that co-exist together across the whole document collection creates a probabilistic model of concepts. Using a probabilistic approach to determining relevance dates back to research undertaken at the RAND Corporation in the late 1950s and by the late 1980s there was a substantial amount of research into the use of Bayesian probability models for information retrieval.

The history of Autonomy dates back to the formation in 1991 of Cambridge Neurodynamics by Dr. Mike Lynch. Cambridge Neurodynamics used neural network and pattern recognition approaches to fingerprint recognition. In 1996 Dr. Lynch founded Autonomy together with Richard Gaunt with $15 million in funding from investors including Apax Venture Capital, Durlacher and the English National Investment Company (ENIC). The novel step was not just the use of Bayesian statistics but the combination of these statistical approaches with non-linear adaptive signal processing (used by Cambridge Neurodynamics for analysing fingerprint images) of text. For that time the level of investment in a company with no commercial track record was quite remarkable. In 1998 the company was floated on EASDAQ which capitalised the company at around $150 million, and its shares rose quickly from $15 in October 1999 to $120 in March 2000. This valued the company at over $5 billion.

The company was floated on the London Stock Exchange in 2000, and became the only publicly-quoted search company in the world. This was important for procurements in both the corporate and public sector given that all other search companies remain privately held and do not disclose earnings and profits other than under a non-disclosure agreement with a prospective customer.

Latent Semantic Indexing dates from the late 1980s and Probabilistic Latent Semantic Indexing from the late 1990s; among other features, they provide solutions to the issues raised by different words having the same meaning and the same word having different meanings.
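
As a toy illustration of the latent-indexing idea (not of any specific product), the sketch below factors a small term-document count matrix with an SVD via NumPy and compares documents in a reduced “concept” space, where a document about a “car” and one about an “automobile” come out as similar even though they share only one literal word. The documents and the choice of two latent dimensions are made up for the example:

    # Toy sketch of latent semantic indexing: factor a term-document count
    # matrix with an SVD, keep the top k singular directions, and compare
    # documents in that reduced "concept" space. Documents and k are
    # purely illustrative.
    import numpy as np

    documents = [
        "car engine repair",
        "automobile engine maintenance",
        "baking bread recipe",
    ]
    vocabulary = sorted({w for d in documents for w in d.split()})

    # term-document matrix: one row per term, one column per document
    A = np.array([[d.split().count(t) for d in documents] for t in vocabulary], dtype=float)

    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    doc_vectors = (np.diag(S[:k]) @ Vt[:k]).T  # each row: a document in concept space

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(round(cosine(doc_vectors[0], doc_vectors[1]), 3))  # car vs automobile docs: high
    print(round(cosine(doc_vectors[0], doc_vectors[2]), 3))  # car vs baking doc: ~0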

A big thanks to Martin for this information, and for bringing to my attention the names of Gerard Salton and Douglas Engelbart. I recommend that you click on the links below and read more about the fascinating work that these two have done.

I also highly recommend that you check out Intranet Focus’s site, and read some of the great stuff there.

Recommended Reading
  • Gerard Salton (Wikipedia)
  • Douglas Engelbart Institute (website)
  • Intranet Focus (website)
  • Martin White (Google hits)
  • Enterprise Search: Enhancing Business Performance.

Related Post

Search in Real-life

Discovered this gem of a video…

Related Post

I want Google Search (again)

I have come across this sentiment often – that is, users want “Google Search”. (See my earlier post “We want Google”.)

TSG’s blog post really captures some great ways of handling this…

 

  • 43% of Total Google Search Queries Are Local (prweb.com)

Related Post

Promise #14 – Beta Testing SLIKK

Refer: 14 Unfulfilled Promises

Background

In my post “Beta Testing SLIKK” I mentioned how I had applied for an “invitation” to Beta test the SLIKK engine.

Verdict

Promise Fulfilled

See my recently completed post here:

——————————-


Related Post

Beta Testing SLIKK – My feedback

In my earlier post “Beta testing SLIKK” I described how I requested an invitation to Beta test SLIKK – a site that was offering a new way of searching.

Well after about a week, I got my invitation, and sat down to give SLIKK a test drive.

Here are my findings…

SLIKK Search Application

The SLIKK Search application is a Search Interface that aims to provide the “new” way of searching.

SLIKK Features

On the surface, SLIKK looks like a great tool. Its features include:

SEARCH ENGINE

SLIKK can be configured to return search results from Google, Yahoo/Bing, or SLIKK’s own engine. Google results are selected by default.

CONTENT TYPE

SLIKK provides search results based on source material:

  • Web
  • Images
  • News
  • Video
  • Blogs
  • Twitter

MULTI-VIEW

With Multi-view, a split screen can be displayed to show you two different groups of results. (For example – “Web” search results on the left, and “Video” search results on the right.)

OPEN SEARCH RESULTS

SLIKK offers the ability to open the source page that a search result points to in a small “child” window. This is not a preview, but the actual page. Further to that, you can open multiple “source pages” and have these open either in a series of tabs or “tiled”. Then you have the choice of changing a window to full screen, and so on.

MY LINKS

You can choose from a selection of sites (Google Maps, Twitter, etc.) or enter your own, so that these appear at the top of the SLIKK page.

What I thought of SLIKK

At first glance SLIKK appears to be a great application.

However, when I looked closer at each feature, I started to think “ok…but what is the real advantage that is offered here?”

Search Engine – You can select the search engine that you want the search results from. Really – I can easily do the same by going to the Google site and executing a search there, or going to the Bing site and executing the search there.

Content Type – This is nothing that the “legacy” search engines didn’t already offer. However – to be able to get Twitter results was definitely something I was happy with.

Multi-View – Initially I thought that this was pretty cool. But, to be honest, there wasn’t really that much advantage to this feature. The only value I saw was if you wanted to see, side-by-side, search results for something while viewing what was being tweeted about it at the same time. But then…how often do you want to do that?

Open Search Results – Note – this is not a “preview” feature similar to what Google offers. It is a “child window” with the source site in it. In these times of tabbed browsers, I was struggling to find a real advantage to this.

My Links – When I first clicked on this (and saw the screen displayed above), I thought that it would offer real value. But all it does is display the name of the site at the top of the screen which, when clicked on, will open the site in a new tab, or window. In short – bookmarks/favorites.

Overall…

I found that SLIKK was not actually that slick. I certainly applaud the owners of SLIKK for what they are doing, but I feel that the big search engines are already able to offer so much more.

Beta Community

SLIKK has a Beta program in place, and there is a forum and a blog (as well as a Facebook page, etc.). They do seem quite receptive to input from users and appear to be trying hard to create something that people want.

I wish them the best of luck.

  • Beta Testing SLIKK
  • Search the way you like it, with Slikk
  • New Search Engine Slikk.com Launches at DEMO Spring 2012
  • Search and browse simultaneously with Slikk
