
How was the dark web created? Why is it so hard to stop?

“Dark web” is a scary-sounding name for parts of the Internet. Public search engines don’t show any results from the dark web.

Journalists use the phrase “dark web”, much like “deep web”, loosely and with little technical clarity. That is not entirely wrong, except that the name sounds as if it refers to a precise technical concept.

In fact, it is more like saying “black market”: it covers any economic activity the government doesn’t approve of. That is a major part of what makes it so difficult to “shut down”. Shutting down the dark web is not the same as closing a domain name or seizing a web server; it means shutting down a pattern of behavior that seems intrinsic to human nature.

How was the dark web created?


The first article I saw that used the term “dark web” quoted the “Internet experts” who coined the phrase. Those experts were, of course, selling something. At the time, the term simply referred to all web content that was not available to be indexed by search engines like Google, and therefore not easily visible to the public.

It included a lot of content that lived only on sites that required a login.

By that time, the popularity of the Internet was exploding.

So much content was on the Internet, and so much of it was reachable through Google, that it was hard to imagine that what you could see through Google was only a fraction of what actually existed on the web.

For example, take Oracle’s Developer Network. I remember the massive pain of having to patch and upgrade an Oracle server to a very specific version number to make some software my company ran work properly. It was painful because I simply could not Google for the information. I had to log in to Oracle’s developer network and use Oracle’s own search features, because those pages were not visible to Google.

The only way to view content on the World Wide Web is to ask for it by name.

In the beginning of the web, people sent links to each other through pre-existing channels of communication such as email or newsgroups.

As the web became more popular, you found links to other pages on pages you already knew about. In the mid-90s, the primary use of your “home page” was to link to the pages you thought were worth sharing.

Then we saw a lot of sites that were essentially larger versions of a home page: just a bigger list of links, sorted by category and perhaps with a short summary of each. Some of them became very large (Yahoo), but as the web kept growing, it became impossible to maintain them by hand.

I will skip the technical details of how domain names and IP addresses work; if you are interested in researching them yourself, there is a lot of information available.

It took some time for search engines to develop into what they are now. They gather information by following links across the World Wide Web in the same way that humans do. That’s why we call their indexing programs “web crawlers” or sometimes “spiders” (a minimal code sketch of the loop follows the list below):

The web crawler program starts with a set of known links.
It downloads those pages and looks for more links in each page.
Then it follows those new links, downloading more pages.
Then it looks for more links in those pages, and so on.
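
To make the idea concrete, here is a minimal, illustrative sketch of that loop in Python, using only the standard library. The seed URL is a placeholder; a real crawler would also need politeness rules (robots.txt, rate limiting), better deduplication, and an actual index:

```python
# A toy web crawler: start from known links, download pages, follow new links.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=50):
    queue = list(seed_urls)              # 1. start with a set of known links
    seen = set(queue)
    while queue and len(seen) <= max_pages:
        url = queue.pop(0)
        try:
            page = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except Exception:
            continue                     # skip pages that fail to download
        parser = LinkExtractor()
        parser.feed(page)                # 2. download the page, look for more links
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)   # 3. follow those new links in turn
        yield url                        # a real search engine would index the page here


# Hypothetical usage:
# for page_url in crawl(["https://example.com"]):
#     print(page_url)
```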

Sometimes people make it easier for search engines by submitting their web site to the index themselves: visiting the search engine’s own site and uploading links to their pages. Obviously, they want people to be able to find them.

Today people use search engines constantly (mostly Google). It is not unreasonable to say that most web content is first reached through a Google search. Even when people already know which site they are looking for, many simply type enough of its name into Google to find it again.

But the reality is that what search engines show is only a fraction of the content on the web.

In addition, interactions between search engines and other web sites have evolved.

In the past, search engines primarily indexed “static” web pages: pages that did not change. You asked for a link, the web server found the file matching that link, and handed you its contents. Search engines ignored “dynamic” web pages, which were created on demand by a program reading from a database, because the endless variations of dynamic pages were likely to lead the search engine’s web crawler in circles.

And if a site required a login to reach its content, the search engine had no login and simply skipped that site.
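
As a rough illustration of the static/dynamic distinction described above, here is a tiny sketch using Python’s standard library; the path and the in-memory “database” are made up for the example:

```python
# "Static" pages are just files handed back as-is; "dynamic" pages are built
# on demand from a database when the request arrives.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE articles (slug TEXT, body TEXT)")
db.execute("INSERT INTO articles VALUES ('hello', 'Generated on demand.')")


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/about.html":
            # Static: the server just returns the contents of a file on disk.
            body = b"<h1>About us</h1>"  # stand-in for open("about.html", "rb").read()
        else:
            # Dynamic: the page is assembled on demand by querying a database.
            slug = self.path.strip("/")
            row = db.execute("SELECT body FROM articles WHERE slug=?", (slug,)).fetchone()
            body = (row[0] if row else "Not found").encode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)


# HTTPServer(("localhost", 8000), Handler).serve_forever()  # uncomment to try it
```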

However, crawlers and sites today are much more sophisticated about what gets crawled, which is why you see things like newspaper or magazine articles appearing in your Google results even though, when you go to view the page itself, it asks you to log in.

In addition to content on the web that is not easily viewed publicly, there are completely different Internet protocols that people use to fetch and view content, like TOR (The Onion Router).

Mostly they work like VPNs, i.e. they “run” over existing Internet networking protocols such as TCP/IP. Often, like a VPN, they use encryption to prevent an eavesdropper from establishing what a user is looking at.

It is important to remember that the Internet works a lot like sending Morse code broadcasts over radio: you start broadcasting, and everyone “near you” (in the network sense) hears your broadcast. But everyone except the intended recipient ignores any broadcast that does not start with an address naming them.

So it is easy for anyone in the same network vicinity to monitor packets going back and forth. Of course, there are millions of people on the Internet, so to some extent you can “hide in the crowd”, but the reality is that if someone is watching you specifically, or watching the person or web site you are trying to communicate with, it is much easier for them to spot you.

People use encryption, mathematically scrambling the contents of packets to make it (almost) impossible to figure out what is in them unless you know exactly how to unscramble them. But you cannot scramble the address details, or the packet will not get to where it is going. So an eavesdropper cannot understand what is inside the packets, but they can still easily read the “metadata” and work out who is talking to whom, and when. That can convey a surprising amount of information to the eavesdropper.
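
To make that concrete, here is a toy sketch (not a real network stack) of why metadata stays visible: the payload can be encrypted, but the addressing details must remain readable for routing. It assumes the third-party `cryptography` package, and the addresses are documentation-range examples:

```python
# Encrypt the payload of a "packet" but leave the addressing metadata in the clear.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret between sender and receiver
cipher = Fernet(key)

packet = {
    # Metadata: anyone on the path can read this, or the packet cannot be routed.
    "src": "203.0.113.5",
    "dst": "198.51.100.7",
    "dst_port": 443,
    # Payload: only someone holding the key can recover the plaintext.
    "payload": cipher.encrypt(b"GET /secret-page HTTP/1.1"),
}

# What an eavesdropper on the same network segment sees:
print(packet["src"], "->", packet["dst"], "port", packet["dst_port"])  # readable
print(packet["payload"][:20], "...")                                   # ciphertext

# What the intended recipient (who has the key) recovers:
print(cipher.decrypt(packet["payload"]).decode())
```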

So some people use more complex technology, like TOR, to further obscure the details of who they are talking to. TOR stands for The Onion Router. The idea is that it adds multiple layers of obscurity to a packet’s journey from origin to destination.

TOR depends on people relaying messages between their computers, all in encrypted form, so that it is extremely difficult to detect where a packet is coming from or where it is going. The program on your PC opens a connection to a computer running the TOR software, keeps it open, and sends data to it in encrypted form. That TOR node then bounces the data to another TOR node, which bounces it to another, and so on, until it eventually reaches its destination.
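
Here is a conceptual sketch of that layering idea, often called “onion” encryption. It is not the real TOR protocol (no key negotiation, no circuits); the relay names and shared key table are invented for illustration, again using the `cryptography` package:

```python
# Onion layering: the sender wraps the message in one encryption layer per relay,
# and each relay can peel off exactly one layer, learning only the next hop.
from cryptography.fernet import Fernet

# Hypothetical relays; in real TOR these are volunteer nodes with their own keys.
relay_keys = {name: Fernet.generate_key() for name in ["entry", "middle", "exit"]}


def wrap(message: bytes, path: list[str]) -> bytes:
    """Encrypt in reverse path order, so the innermost layer belongs to the last hop."""
    wrapped = message
    for name in reversed(path):
        wrapped = Fernet(relay_keys[name]).encrypt(wrapped)
    return wrapped


def relay(name: str, blob: bytes) -> bytes:
    """Each relay peels one layer; it sees neither the origin nor the plaintext."""
    return Fernet(relay_keys[name]).decrypt(blob)


path = ["entry", "middle", "exit"]
onion = wrap(b"hello, destination", path)

# The packet bounces through the relays, losing one layer at each hop.
for hop in path:
    onion = relay(hop, onion)

print(onion.decode())  # only after the last layer is removed is the message readable
```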

If someone is watching a sender or a receiver, they can see that the computer is talking to a TOR node. But enough people cooperate in running TOR, and enough traffic flows back and forth, that tracking a packet as it bounces around through the TOR network is impractical.

TOR was created to resist censorship and the ability of authoritarian governments to control their citizens’ communications. But human nature is human nature. Free speech is essential to a free society, yet people can also use that freedom for criminal activity, like the black market.
