Deep Web, Dark Internet and Darknets

Introduction

Deep Web, Dark Internet and Darknet are three concepts that may seem similar and are sometimes used as synonyms. In reality, they describe three very different worlds.

Terms like these may sound fascinating, perhaps because they are subjects that are not usually talked about on the Web. More specifically, when users surf the Web, they tend to assume that all the information available in the world is also available on it. There are, however, areas of the World Wide Web that cannot be easily accessed by the public.

In the following sections, these three concepts are explained so that the differences between them become clear.

The Deep Web

The Deep Web is simply the part of the Internet that is not indexed by search engines.
Before going further, we have to consider a preliminary distinction between the Surface Web and the Deep Web. [1]

Surface Web is the term used to identify the portion of the World Wide Web that is indexed by conventional search engines: in other words, it is what you can find by using general web search engines.

On the contrary, the Deep Web is defined as the portion of the World Wide Web that is not accessible through a search performed with general search engines, and it is much bigger than the Surface Web (http://www.internettutorials.net/deepweb.asp).

Beyond the trillion pages a search engine such as Google knows about, there is a truly vast world of hidden data. This content may include:

  • the content of databases, which is accessible only by query (see the sketch after this list);
  • files such as multimedia content, images, and software;
  • content on web sites protected by passwords or other kinds of restrictions;
  • the "full text" content of articles and books;
  • the content of social networks;
  • financial information;
  • medical research.
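
To illustrate the first item, here is a minimal sketch (assuming Python with Flask and a hypothetical library.db SQLite database containing a books table) of database-driven content: the page below is generated only in response to a user query, so there is no static URL for a spider following links to discover.

```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def search():
    # The result page is built on the fly from the query string;
    # no static link to this content exists for a spider to follow.
    term = request.args.get("q", "")
    conn = sqlite3.connect("library.db")  # hypothetical database file
    rows = conn.execute(
        "SELECT title FROM books WHERE title LIKE ?", ("%" + term + "%",)
    ).fetchall()
    conn.close()
    return "<br>".join(title for (title,) in rows)

if __name__ == "__main__":
    app.run()
```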

Nowadays, we also have to consider other kinds of content, such as:

  • blog postings;
  • bookmarks and citations in bookmarking sites;
  • flight schedules.

Search engines and the Deep Web

Search engines rely on crawlers, or spiders, that wander the Web following the trails of hyperlinks that tie it together: this means that spiders index the addresses of the pages they discover.
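
As a rough illustration, the following sketch implements this link-following behaviour using only Python's standard library; the seed URL and page limit are illustrative assumptions, not the logic of any real search engine.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of all <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    frontier, seen = [seed], set()
    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        except Exception:
            continue  # unreachable pages are simply skipped
        parser = LinkExtractor()
        parser.feed(html)
        # Only pages reachable through hyperlinks ever enter the frontier:
        # a document not linked from anywhere can never be discovered this way.
        frontier.extend(urljoin(url, link) for link in parser.links)
    return seen

print(crawl("https://example.com"))
```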

The drawback of this indiscriminate crawl approach has been mitigated by the so-called "popularity of pages" in a search engine like Google: in other words, the most popular pages, those that are searched for most frequently, have priority both in crawling and in the display of results.

In the Deep Web, it happens that spiders, when they find a page, do not know what to do with it: they can record these pages, but are not able to index their content. The most frequent reasons are technical barriers (database-driven content, for instance) or decisions taken by the owners of web sites (the need to register with a password to access the site, for instance), which make it impossible for spiders to do their work.
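
One well-known mechanism behind such owner decisions (not named in the text above, but widely used) is the robots.txt exclusion file. The sketch below, using Python's standard urllib.robotparser with an illustrative URL and user-agent name, shows how a polite spider checks whether it is allowed to fetch a page.

```python
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

# A polite spider fetches a page only if robots.txt permits it;
# pages disallowed here stay out of the search engine's index.
if robots.can_fetch("MySpider", "https://example.com/private/page.html"):
    print("allowed to crawl")
else:
    print("blocked by robots.txt")
```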

Another important reason concerns linkage: if a web document is not linked from any other page, it will never be discovered.
