Deep Web, Dark Internet and Darknets

Revision as of 20:39, 4 January 2011

Introduction

Deep Web, Dark Internet and Darknet are three concepts that may seem similar and are sometimes used as synonyms. In reality, they describe three very different worlds.

Terms like these may sound fascinating, perhaps because they concern subjects that are not usually talked about on the Web. More precisely, when users surf the Web, they tend to assume that all the information available in the world is also available on it. There are, however, areas of the World Wide Web that cannot be easily accessed by the public.

In the following sections, these three concepts are explained in a way that makes the differences between them clear.

The Deep Web

The Deep Web is simply the part of the Internet that is not indexed by search engines.
Before going further, we have to consider a preliminary distinction between the Surface Web and the Deep Web. [1]

Surface Web is the term used to identify the portion of the World Wide Web that is indexed by conventional search engines: in other words, it is what you can find by using general web search engines.

The Deep Web, on the contrary, is defined as the portion of the World Wide Web that is not accessible through a search performed with general search engines, and it is much bigger than the Surface Web (http://www.internettutorials.net/deepweb.asp).

Beyond the trillion pages a search engine such as Google knows, there is a vast world of hidden data. This content includes:

  • the content of databases, which is accessible only by query;
  • files such as multimedia, images and software;
  • the content of web sites protected by passwords or other kinds of restrictions;
  • the content of "full text" articles and books;
  • the content of social networks;
  • financial information;
  • medical research.

Nowadays, we also have to consider other kinds of content, such as:

  • blog postings;
  • bookmarks and citations in bookmarking sites;
  • flight schedules.

Search engines and the Deep Web

Search engines rely on crawlers, or spiders, that wander the Web following the trails of hyperlinks that tie the Web together: that is, spiders index the addresses of the pages they discover.

The negative aspect of this indiscriminate crawl approach has been mitigated by the so-called "popularity of pages" in a search engine like Google: in other words, the most popular pages, those searched for most frequently, have priority both for crawling and for displaying results.
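This idea of ranking pages by link-based popularity can be illustrated with a toy sketch in the spirit of PageRank. The graph, damping factor and iteration count below are illustrative assumptions, not Google's actual data or parameters:

```python
# Toy illustration of link-based "popularity" ranking in the spirit of
# PageRank. The site graph and the damping factor are assumptions made
# for the example only.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform score
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                        # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:                               # share rank among outlinks
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Hypothetical three-page site: every page links back to "home".
graph = {
    "home": ["about", "news"],
    "about": ["home"],
    "news": ["home", "about"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # prints "home": the most linked-to page
```

Pages that many other pages link to accumulate rank, which is why heavily linked pages get crawled and displayed first.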

In the Deep Web, spiders that find a page often do not know what to do with it: they can record these pages, but are not able to index their content. The most frequent reasons are technical barriers (database-driven content, for instance) or decisions taken by the owners of web sites (the requirement to register with a password to access the site, for instance) that make it impossible for spiders to do their work (http://websearch.about.com/od/invisibleweb/a/invisible_web.htm).

Another important reason concerns linkage: if a web document is not linked from any other, it will never be discovered.
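The crawling process described above can be sketched as a breadth-first traversal of hyperlinks over a toy in-memory "web" (the site graph is a made-up example). It shows why an unlinked page, here called orphan.html, is never discovered:

```python
# Minimal sketch of how a spider discovers pages by following hyperlinks.
# The "web" is a toy dict mapping each page to the pages it links to;
# all file names here are hypothetical.

from collections import deque

def crawl(web, start):
    """Breadth-first traversal of hyperlinks from a start page."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in web.get(page, []):
            if link not in seen:
                seen.add(link)       # record the newly discovered address
                queue.append(link)
    return seen

toy_web = {
    "index.html":   ["news.html", "contact.html"],
    "news.html":    ["index.html"],
    "contact.html": [],
    "orphan.html":  [],              # exists, but nothing links to it
}
found = crawl(toy_web, "index.html")
print("orphan.html" in found)        # prints False: unlinked pages stay hidden
```

However exhaustive the crawl, a document reachable only by typing a query into a form, or not linked from anywhere, stays outside the spider's map of the Web.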

To search the Deep Web

In order to search Deep Web content, it is necessary to use specific portals, such as CompletePlanet (http://www.completeplanet.com/). Given the thousands of databases that contain Deep Web content, CompletePlanet offers the possibility to navigate them.
The linked image [2] shows the home page of the portal, with the list of all the available dynamic searchable databases. It is in fact possible to go to various topic areas (medicine, art & design, science, politics, and so on) and find content that is not displayed by conventional search engines [3].
Other Deep Web search engines are (http://websearch.about.com/od/invisibleweb/tp/deep-web-search-engines.htm):

  • Clusty, a meta search engine able to combine results from different sources and return the best possible result;
  • SurfWax, which makes it possible to obtain results from different search engines at the same time and to create personalized sets of sources;
  • InternetArchive, which gives access to specific searchable topics such as live music, audio and printed materials;
  • Scirus, which is dedicated only to scientific material;
  • USA.gov, to access information and databases from the USA government.

How big is it?

It seems that the Deep Web is 500 times bigger than the Surface Web, containing 7,500 terabytes of data and 550 billion documents (Bergman, Michael K. (August 2001). "The Deep Web: Surfacing Hidden Value". The Journal of Electronic Publishing 7).