A Brief History of the Internet
This is a piece of work I did for a unit in my ICT course at sixth form, so it's not that in-depth, but nevertheless I feel it's not bad.
Please send me an email if any of this information is incorrect and I will correct it. admin[at]hackinghq[dot]com
Investigate and Outline the History of the Internet
In this report I will be researching and investigating the history of the internet. I will pick out some of the milestones in the internet's development and study them in depth. I will then outline these milestones and show a thorough understanding of them.
ARPANET (Advanced Research Projects Agency Network) 1969
ARPANET is the first milestone of internet development that we will look at. ARPANET, which stands for Advanced Research Projects Agency Network, was created during the Cold War by the US Department of Defence.
The ARPA Network was, to our knowledge, the first successful packet-switching network in the world. Packet switching was a vast improvement on the older technology of circuit switching, and it is still the core system used to transfer data over the internet today. Packet switching is a method designed to take any form of data, regardless of what it contains, and split it into smaller packages called packets. These packets are then sent over a shared network but are dealt with as separate items. A scientist at ARPA called Lawrence Roberts designed his own version of packet switching, and this was at the core of ARPANET.
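The packet-switching idea described above can be sketched in a few lines: split a message into small, independently handled packets, then reassemble them by sequence number at the destination. The packet size and dictionary fields here are invented purely for illustration and do not reflect the real ARPANET formats.

```python
PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for this demo)

def packetize(message: bytes) -> list[dict]:
    """Split a message into numbered packets."""
    return [
        {"seq": i, "payload": message[i * PACKET_SIZE:(i + 1) * PACKET_SIZE]}
        for i in range((len(message) + PACKET_SIZE - 1) // PACKET_SIZE)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Rebuild the message even if packets arrive out of order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"Hello, ARPANET!")
packets.reverse()  # simulate out-of-order arrival on a shared network
assert reassemble(packets) == b"Hello, ARPANET!"
```

The key point the sketch shows is that each packet carries enough information (here, just a sequence number) to be handled as a separate item and still be put back together at the far end.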
One of the first ideas of a computer network was conceived in 1962 by a man called J.C.R. Licklider. Licklider was later appointed to work at ARPA and, whilst there, managed to persuade a man called Ivan Sutherland that his ideas were of vital importance; unfortunately, Licklider had left ARPA before he could see any of his ideas develop. The first ARPANET contained four IMPs (Interface Message Processors), placed at the University of California Los Angeles, the Stanford Research Institute, the University of California Santa Barbara and the University of Utah. After this the first ever message was sent by a student at UCLA. The message was "login"; the computer managed to process the "lo" but then crashed, so the first message ever sent through some form of network was "lo". After this the first computer network was created. It was a simple four-host network, but it worked. The first ever email, sent in 1971, went through this network, and its message was something along the lines of "QWERTY", though this is not certain, as the contents of the message have actually been forgotten.
ARPANET used 1822 as its core host-to-host platform. 1822, known by the number of the report that specified it, was a way to connect any host computer to an ARPANET router, mentioned earlier as an IMP. 1822 worked by a host computer composing a message and entering an address for the receiving computer, much like modern-day IP addresses; this would then be sent to an IMP, which would route it to the destination, much as modern-day routers work. The difference is that with modern technology we cannot fully rely on IP, whereas 1822 was 99.9% reliable, because if the IMP could not send the message it would return a message to the originating host saying that it could not deliver. This was slightly flawed, however, because on the odd occasion an IMP would give a false positive, returning a "lost" message when in fact the packet had been transmitted correctly. When the IMP had successfully sent a packet it would also return the message RFNM (Ready for Next Message). In 1983, however, the Transmission Control Protocol replaced 1822, and ARPANET IMPs just became part of the ever-expanding internet. Once email had become functional it accounted for 75% of ARPANET's traffic. Later, in 1973, FTP (File Transfer Protocol) was designed and could be used over ARPANET to send files from one host to another.
After this the growth of ARPANET was exponential, and by 1981 there were 213 hosts on the network. This number increased more and more as the years went by and newer technology made it easier to connect to ARPANET.
ARPANET was the spark that started it all. It got people thinking differently about computers and the power they actually held, and it made people think about the many different ways we can use computers to make our lives easier.
Electronic Mail (E-Mail) 1971
Email is the biggest and most important milestone in communication over the internet and could be called one of the most vital parts of the internet today. Every day millions of emails are sent worldwide, sharing knowledge and information between millions of people. Because of this, email is a huge marketing and business tool, which can be both good and bad.
The idea of email first came into play with the creation of time-sharing computers. These computers could run more than one program, and developers made software for users to be able to send messages between different terminals. The only problem was that the software was limited to a group of users sharing one computer.
Ray Tomlinson is alleged to have sent the first email and to be the first person to make use of the '@' sign to separate the name of the user from the user's machine. In the early 1970s Tomlinson was working with a small group of people developing the operating system TENEX. This operating system was to have built into it two pieces of software to allow local messaging: SNDMSG and READMAIL. Later, in 1971, Tomlinson greatly improved SNDMSG for use on ARPANET by adding CPYNET to it, which allowed users to send messages over a network; this was a huge breakthrough. Ray informed his colleagues of this achievement by sending them all an email with instructions on how to use the software. Jon Postel, one of the main pioneers of the internet, is said to have described Tomlinson's work as a 'nice hack'. To make an addressing system Tomlinson used the '@' symbol, with the syntax user@host; this is the addressing method still used today. The early program was simple and command-line only.
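The user@host convention above is simple enough to demonstrate directly: everything before the '@' names the user, everything after names the host machine. The example address below is made up for illustration.

```python
def parse_address(address: str) -> tuple[str, str]:
    """Split a user@host address into its two parts."""
    user, _, host = address.partition("@")
    if not user or not host:
        raise ValueError(f"not a user@host address: {address!r}")
    return user, host

# A hypothetical early-ARPANET-style address, invented for the demo.
assert parse_address("tomlinson@bbn-tenexb") == ("tomlinson", "bbn-tenexb")
```

Modern email addresses still follow exactly this shape, with the host part now being a DNS domain name.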
In 1972 the FTP program got two new additions: the commands MAIL and MLFL. These were added to the FTP software to provide standard network capabilities for the transport of emails. This then became the standard for sending email over ARPANET until the early 1980s, when SMTP was created, which included many valuable improvements over MAIL and MLFL.
Email as a tool changed the world. From ARPANET to the internet, it allowed millions of people worldwide to connect and opened a whole world of possibilities. Possibly the most important mail protocol was SMTP (Simple Mail Transfer Protocol). This protocol is still used to this day, but it did not put enough effort into finding out whether the person sending a message was who they said they were. This led to very simple forgery, which was then exploited by viruses and worms. The biggest development in email technology, however, was the creation of POP (Post Office Protocol). POP servers started appearing everywhere and quickly became the industry standard for the transfer of email. When POP was first introduced users had to pay per minute for the use of email, so most users set up huge discussion groups where they would email information. These were known as newsgroups, and all of them together created what is known as USENET.
From this the World Wide Web developed, and user-friendly interfaces for email, such as Yahoo, were created; these were free to use. At this point email became commercialised, and everyone wanted or had an email address. Hundreds of millions of people adopted the idea and got email addresses, and this quickly became one of the most important uses of the internet.
Email is one of the biggest milestones in internet history: it allowed people anywhere in the world to talk to each other almost instantaneously, regardless of time or distance.
IPSS (International Packet Switched Service) 1978
IPSS, or International Packet Switched Service, was created in 1978 and was the collaborative work of the UK Post Office, Western Union International and the United States' Tymnet. You would connect to the network via a PSS (Packet Switch Stream) modem or an X.25 PAD (Packet Assembler/Disassembler). Its growth was rapid, covering a worldwide scale by the early 1990s. To access IPSS you needed either dedicated access or a public dial-access facility, which unfortunately cost money. IPSS was available 24 hours a day, 7 days a week, unless restricted by the overseas company or the local authorities in the area limiting the connections. The connections were offered at three different speeds, the fastest obviously costing the most money.
Newsgroups and Bulletin Boards 1970â€™s
With the internet growing rapidly, new technologies and new ways to communicate came about. Two of these were newsgroups and bulletin boards.
Newsgroups are internet discussion forums in which people discuss many different areas of interest, from aviation to knitting. The messages posted in a newsgroup can be viewed by anyone who accesses it. Most newsgroups became part of the Usenet system or were set up on it; to access newsgroups you need a newsreader such as UseNeXT. Usenet newsgroups have the same functionality as online discussion forums, but they are technically different in that discussion forums are usually viewed through a web browser, whereas newsgroups are accessed through software called a newsreader.
Newsgroups allow posting to different groups as long as your post is on-topic; off-topic posts are frowned upon. The administrator of the newsgroup has to decide how long posts stay there; this is called the retention. The retention time differs from server to server: one may be two days, another three months. If a server has a retention time of two days and you post an article, that article will only exist on the server for two days. This is used to save space and avoid the discussion becoming stagnant. Usenet groups come in two different types, text and binary; the end result of both is the same, but the way the server handles the users' bandwidth is different.
Newsgroups quickly became a place for people to start flame wars and troll but also became a very important place for information and friendship. There are over 100,000 newsgroups but only about 20,000 are active at this time.
Newsgroups are arranged into hierarchies to make them easy to navigate. On Usenet there were seven main hierarchies, known as 'The Big 7'. They were as follows:
comp.* - Discussion of computer-related topics
news.* - Discussion of Usenet itself
sci.* - Discussion of scientific subjects
rec.* - Discussion of recreational activities (e.g. games and hobbies)
soc.* - Socialising and discussion of social issues
talk.* - Discussion of contentious issues such as religion and politics
misc.* - Miscellaneous discussion; anything which doesn't fit in the other hierarchies
Before 1986 these hierarchies were all part of one main hierarchy called 'net.*'. 'The Big 7' came about during what is known as the 'Great Renaming' of 1986-1987. There were huge discussions about which newsgroups would be allowed, but the 'Usenet Cabal', who effectively ran 'The Big 7' at the time, did not allow anything concerning recipes, drugs or sex.
A company called Deja News started to archive Usenet in the mid-1990s; they made a searchable web interface so people could search posts from newsgroups. Google bought the archive from Deja News and started to buy other archives in an attempt to archive all newsgroups and postings. Google provided users with a search function and also with a way to post to newsgroups within Usenet.
Newsgroups are the foundation for the idea of forums, and the internet is full of forums, so as we can see newsgroups had a major part to play in the development of the internet.
TCP/IP (Transmission Control Protocol/Internet Protocol) & National Science Foundation (NSF) 1983
TCP/IP stands for Transmission Control Protocol/Internet Protocol and is the networking standard at the heart of the Internet Protocol Suite. TCP/IP is actually a whole family of protocols; TCP and IP are only two of them.
TCP/IP was first used in 1983. It had been in development for many years before this, in a project run by the Defense Advanced Research Projects Agency (DARPA), but it was in 1983 that ARPANET fully migrated to using TCP/IP.
Like many suites, the internet protocol suite can be said to work in layers, the top layer being closest to the user and the bottom layers actually preparing the data to be transmitted. TCP/IP has four layers, listed here from highest to lowest:
1. Application Layer
2. Transport Layer
3. Internet Layer
4. Link Layer
We will now look at each layer individually and in depth, starting from the bottom. The link layer takes care of all the hardware components of the network: it pulls the packets off the wire, strips them of any link-layer information and passes them on to the network layer, which is the next level up.
The network layer is the layer that contains the tools to send the information to its destination. The network layer is not concerned with reliability; that is the task of the transport layer. The network layer contains the protocols IP (Internet Protocol) and ICMP (Internet Control Message Protocol). We use ICMP for certain utilities such as traceroute and ping. It is IP's job to work out how to get a packet to its correct destination, and when it receives one it becomes its duty to work out where it belongs. IP does not care whether or not the packets actually reach their destination, nor is it concerned with whether the packets arrive in the same order as they were sent; if IP gets a corrupt packet it silently discards it without any errors being returned to the user. It is possible to send information between computers because each computer has a unique number attached to its NIC (Network Interface Controller).
When you send a packet it will pass through many different machines to get to its destination. Machines determine where a packet is going next by using routing tables. Our routing tables contain three main pieces of information: 'addresses of routers', 'addresses they can handle' and 'the interface to which they are connected'. A packet will be sent to a machine, and that machine will look to see if it has a direct route to the destination machine. Let's say we are sending a packet to another computer but our computer doesn't have a direct route: it will send it to a computer in its list, and then that computer will look in its list and see if it has a direct connection to the destination machine. This is essentially how our packets arrive at their destination, but we must remember it is not IP's job to make sure our sending is successful. We can use a tool called traceroute (Unix) or tracert (Windows) to follow one packet on its journey through all of the machines it passes through.
Example Input (Windows):
Command: C:\Documents and Settings\scottor.NETHERHALL.001>tracert 220.127.116.11
Example Input (UNIX):
Command: liquidfusi0n@ubuntubox:~$ traceroute 18.104.22.168
Example Output (Windows):
Tracing route to 22.214.171.124 over a maximum of 30 hops
1 1 ms <1 ms <1 ms 10.100.16.1
2 <1 ms <1 ms 9 ms 10.178.159.1
3 1 ms 1 ms 1 ms 10.106.254.117
4 3 ms 3 ms 3 ms 10.106.254.34
5 4 ms 4 ms 3 ms 126.96.36.199
You can see from the example output that to get to that address we had to go through more than one machine; each step to another machine is called a hop. Using the above tools we can limit how many hops we take, amongst many other things.
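The routing-table lookup described above can be modelled very simply: each machine knows which destinations it can reach directly and which neighbour to forward everything else to. The addresses and table entries below are invented purely for illustration (loosely echoing the hop addresses in the trace).

```python
# A toy routing table: destination network -> next hop.
# "direct" means the machine can deliver the packet itself.
ROUTING_TABLE = {
    "10.100.16.0": "direct",
    "10.178.159.0": "10.100.16.1",
    "default": "10.100.16.1",   # everything else goes to the router
}

def next_hop(dest_network: str) -> str:
    """Return where to send a packet bound for dest_network."""
    return ROUTING_TABLE.get(dest_network, ROUTING_TABLE["default"])

assert next_hop("10.100.16.0") == "direct"     # deliver locally
assert next_hop("192.0.2.0") == "10.100.16.1"  # no direct route: forward
```

Real routing tables match longest prefixes rather than exact strings, but the principle is the same: either deliver directly or hand the packet to the next machine and let it repeat the lookup.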
Now we will look at the second layer, the transport layer. The transport layer consists of two components: the first is TCP (Transmission Control Protocol), the second UDP (User Datagram Protocol). TCP is a reliable way to transport our packets, whereas UDP isn't.
TCP works on the port system. We will look at how the server handles TCP first, then look at what UDP is and why we might use it. All TCP and UDP packets contain an identification number, which is the port number the packet is to be sent to. It is important to remember that port numbers are not hardware-based. On a server there will be a port open, and it will be what we call "listening": this means it is listening for any incoming packets. We can only have one process listening on one port unless the processes are using different protocols. When the Transmission Control Protocol receives data it checks the port number and sends the data to that port; the listening process will then accept that request.
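The "listening port" idea can be shown with a minimal sketch: a TCP socket binds to a port and listens, a client connects to that host:port, and the data is delivered to whichever process is listening there. This runs entirely on the loopback interface and asks the OS to pick a free port, so the numbers involved are not assumptions about any real service.

```python
import socket
import threading

# Server side: bind, listen, accept.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()   # block until a client connects
    conn.sendall(b"hello from the listener")
    conn.close()

threading.Thread(target=serve).start()

# Client side: connect to the listening host:port and read the reply.
client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
server.close()
assert reply == b"hello from the listener"
```

Note how the client only needs the address and port number; the operating system routes the connection to the process listening on that port, exactly as described above.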
UDP can basically be looked at as IP with port numbers. UDP is roughly as reliable as IP is, and this can be the main reason people choose not to use it. The reason people do use UDP is that it does not have the limits of TCP and allows access to IP-style datagrams, which is helpful for people who are perhaps trying to create their own protocols. One example of a process that used UDP is NFS (Network File System, port 2049), which no longer does because people felt it was a bad design choice; all newer versions of NFS use TCP. A second example of an application that uses UDP is DHCP (Dynamic Host Configuration Protocol, port 68), which uses it because the requests and replies are short and fast.
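The contrast with TCP is visible in code: a UDP exchange is just a datagram fired at a host:port, with no connection, no handshake and no acknowledgement. This sketch runs on the loopback interface, where delivery is dependable enough for a demonstration even though UDP itself makes no such promise.

```python
import socket

# Receiver: bind a datagram socket; the OS picks a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect, no handshake - just address the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"short and fast", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
assert data == b"short and fast"
```

Compare this with the TCP sketch earlier: there is no listen/accept step at all, which is exactly why UDP suits short request/reply exchanges like DHCP.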
There are some applications that will use both methods of transport; one of these is DNS (Domain Name System). DNS uses both UDP and TCP, each for different kinds of scenarios: for short and easy tasks DNS will use UDP, but for larger tasks, or tasks that need more reliability, it will switch over and use TCP. This is a good system as it maximises efficiency; it is pointless using TCP for really small tasks when it is more efficient to use UDP, and vice versa.
DNS (Domain Name System) 1984
When the internet was first starting out, users' systems were identified by a 32-bit number known as an IP address, and if computers wanted to connect to each other this address was needed. To make life easier for the end user, human-readable names were attached to these numbers, so a user trying to access 127.0.0.1 could simply access "localhost". Before DNS, both the 32-bit address and the more user-friendly name were stored in a master hosts file. It is DNS that allows us to use our browser and access a page via an address such as 'www.google.com' rather than having to type the IP address of the server Google is hosted on.
The DNS labelling system works from right to left. Let's look at 'www.securityoverride.com': reading from right to left, '.com' is the top level, 'securityoverride' is a sub-domain of that level, and 'www' is a level below securityoverride. Domain names are generally concatenated together with a period. We can go up to 127 different levels, and each 'label' is allowed up to 63 characters. Technically a domain can contain any character that can be represented in an octet, but for various reasons we now have a subset of the ASCII table allowing us to use the characters A to Z (capital and lowercase), 0-9 and the hyphen.
Without the development of DNS, people would still be accessing websites by their servers' IP addresses. Test how many 32-bit IP addresses you can remember compared to how many words you can remember. DNS was a natural progression within the internet.
10,000 Hosts 1987
In 1987 there were 10,000 hosts on the internet. This growth was exponential and was not expected to happen that quickly; the 10,000 comprised both clients and servers.
First Commercial Dial Up 1990
In 1990 the first ISP was founded, and along with it the first ever commercial dial-up service. It was a company by the name of The World that commercialised dial-up internet. At the time there were many other ISP companies setting up, such as PSINet, Netcom and UUNET, but it was The World that would succeed best in the commercial market.
We have The World to thank for the internet being made more easily accessible, even to computer novices.
World Wide Web CERN 1991
CERN, the European Organization for Nuclear Research, is a community of scientists from about 60 different countries and hosts about 7,500 scientists, including some of the world's greatest, working on ground-breaking discoveries. In 1989 a scientist from CERN called Tim Berners-Lee invented the World Wide Web. The first concept of the World Wide Web was created so that scientists at all the different institutes and universities could share information automatically.
After this the first few web servers ever to exist were set up, but the problem was that only a few people had access to the NeXT platform on which the first browser ran. CERN counteracted this by releasing a much simpler browser that could be used on any system. The first web server in America went online in 1991; the issue was that users at this time only had access to two different kinds of browser: the originally developed browser, which needed the NeXT platform to run, and the cross-platform browser, which lacked power-user features. Later on, more browsers were developed by independent programmers.
You can still visit the first ever website, the site hosted on the first web server to go online in the USA. The development of the World Wide Web is possibly what has made the internet so accessible to everyone, and without it perhaps the internet might not have reached the mass scale that it has today.
First Widely Used Browser (Mosaic) 1993
Mosaic was developed at NCSA (the National Center for Supercomputing Applications) and must be credited as the browser that led to what is known as the internet boom. Mosaic was one of the first GUI browsers, and its features are still replicated to this day in modern browsers such as Google Chrome, Internet Explorer 8 and Firefox 3.6. Some members of the Mosaic team went on to create another browser called Netscape Navigator; however, the two shared no code.
Mosaic was not the first web browser for Windows (another little-known program called Cello was), but even without being first, Mosaic outshone the rest. Mosaic differed from the rest because it had a full-time team of programmers working on it, and the software itself was so simple to use and install that even an amateur could manage it. Mosaic essentially made the internet accessible to the everyday person, and because of this we hit the internet boom. Mosaic also had one feature that other browsers at the time did not: the ability to display text and images inline with each other. In any other browser, viewing an image meant opening a new window for it, but with Mosaic you could view images alongside the text, and this was very appealing to a lot of people.
Mosaic is probably responsible for the way we use our browsers today and for how we access the internet as a whole, and is also responsible for initiating the 1990s internet boom.
Word â€˜Internetâ€™ in Daily Use 1996
By 1996 the word 'internet' had become very commonplace. By this point most people knew what the internet was, and it was seen as something of a buzzword at the time. It also earned a place alongside the World Wide Web, and the two were referred to as the same thing.
10 Million Hosts 1997
In 1997 the internet had 10 million hosts, comprising servers and clients. That is roughly a thousandfold growth since 1987, when the internet consisted of 10,000 hosts.
Search Engines 1990
The first search engine to come into existence is believed to be Archie, named after the word 'archive'. This search engine existed before most websites did. The first few hundred websites came into existence in 1993 and were mainly hosted at colleges and universities, but Archie, created in 1990, was around before they were. Archie was created by a man called Alan Emtage.
In 1990 Alan Emtage referred to Archie as 'pretty brain-damaged', but about three years later he showed more confidence in its abilities. Archie didn't have the same power as today's search engines, but this is to be expected given the time it was made. Archie had the ability to search for exact files: if you knew you were searching for a file called 'wow.txt', Archie would be able to find it. Archie did not have the capability to list the contents of a text file, however; that feature was first adopted by another system, known as Gopher.
With the growing popularity of the World Wide Web, the way search engines worked changed quite a bit. One of the first methods of indexing and archiving the World Wide Web was created by a man named Martijn Koster; it was named ALIWEB (Archie-Like Indexing in the Web). ALIWEB never really took off as much as other competing search engines, but Martijn Koster's work with robots was to play a vital part in future search engines such as Google.
Without the power of search engines, how would we navigate our way around the World Wide Web? How would we find pages relevant to our search string? Search engines are not only a great development but an essential one, and this is what sets them apart. Email was not a necessary development, just a good one; search engines, however, are essential to the World Wide Web.
Dotcom Bubble Burst 2000
Between 1995 and 2000 there was a metaphorical dotcom bubble. In this time, companies who simply added an 'e-' or a '.com' to their name were seeing huge increases in stock. In 2000, however, the Federal Reserve raised interest rates and the bubble burst. Companies started to realise that their stocks were going back down and the economy was heading into a slump.
Modern Day Technologies 2010
In this section we will look at three or four new technologies that have been developed in the last 10 years (2000-2010).
Webcasting is a way for a person to stream images from their webcam to the internet for people to watch. One great example of this is Chris Pirillo, who webcasts his life 24/7 for all to see. This is a fairly modern development and has been utilised in many different ways, from entertainment to illegally streaming movies, and it is even heavily used in the pornography industry with the invention of cam sites.
VoIP (Voice over Internet Protocol) is a way to transmit our voice over the internet. Anyone can go out and buy themselves a VoIP phone, and it will connect to the internet to transmit the sound. VoIP calls are cheaper than traditional landline or mobile phone calls, and VoIP is quickly growing in popularity. Another form of VoIP is the software Ventrilo and its competitor TeamSpeak. These two pieces of software allow a user to set up a server to which their friends can connect, all joining what is known as a channel; from there they can talk to each other over the internet using a headset with a built-in microphone.
HTML5 is a new HTML (Hypertext Mark-up Language) standard replacing HTML4, and it contains many features built in that HTML4 didn't have. We currently use Adobe's Flash for most of our animations and streaming videos. Flash has to be downloaded and is not built into your web browser by default, although you can very easily obtain it. HTML5 has features that are very similar to Flash and will allow us to stream videos within the HTML itself, which means users will no longer have to download Flash, because HTML5 will have made all the features of Flash a standard.
This has been a fairly brief look into the history of the internet; there is tons more information to be accessed and a lot more developments. We have just looked at some of the biggest ones to help give you a basic understanding of the internet and how it came about.
Dave Crocker. 'Email History'. Accessed 15/3/2010.
Ian Peter. 'The History of Email'. Accessed 15/3/2010.
http://www.nethistory.info/History of the Internet/email.html
Wikipedia. 'ARPANET'. Accessed 7/3/2010.
Wikipedia. 'International Packet Switched Service'. Accessed 15/3/2010.
Anonymous. 'PTI International Packet Switched Service'. 31/7/2001. Accessed 15/3/2010.
Microsoft. 'What Are Newsgroups'. Accessed 18/3/2010.
David Kristula. 'What Are Discussion Boards and Newsgroups'. Accessed 18/3/2010.
Wikipedia. 'Usenet Newsgroups'. Accessed 18/3/2010.
Wikipedia. 'Internet Protocol Suite'. Accessed 12/4/2010.
Jason Yanowitz. 'Under the Hood of the Internet: An Overview of the TCP/IP Protocol Suite'. Accessed 15/4/2010.
Anonymous. 'Where the Web Was Born'. Accessed 16/5/2010.
Wikipedia. 'Mosaic (web browser)'. Accessed 16/5/2010.
Thank God for Wikipedia :D
Haha, that wasn't brief at all. Reminds me of the one in SO. Good article, nice research. :D
Nice research, sounds a lot more interesting than the articles we would do.
It is the same one Null Set :) And thanks Xinapse, always nice to hear good reports on your pain in the arse projects :)