Web archiving is the preservation of the World Wide Web.
It is the process of collecting portions of the web and storing them in an archive so that the information remains available to future researchers, historians, and the public.
Because of the massive amount of information on the web, web archivists typically employ web crawlers to capture web pages automatically.
Web crawlers are internet bots: software programs developed to systematically perform tasks on the internet, typically indexing the web and saving web pages.
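The core of a crawler is simple: fetch a page, pull out its links, and queue any unseen ones for fetching. Here is a minimal sketch of the link-extraction step in Python using only the standard library (the example URLs are just illustrations, not real crawl targets):

```python
# Minimal sketch of the link-extraction step of a web crawler.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

# A full crawler would fetch each page (e.g. with urllib.request),
# run extract_links on it, add unseen URLs to a frontier queue,
# and honour robots.txt and politeness delays between requests.
```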
The largest web archiving platform currently available is the Internet Archive, founded by Brewster Kahle and Bruce Gilliat, which is trying to create an archive of the entire World Wide Web.
The Internet Archive provides access to its web archive through a service called the "Wayback Machine". In the 12 years since its launch in 2001, it has managed to save over 368 billion web pages from the web.
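The Wayback Machine also exposes a public availability API at `https://archive.org/wayback/available`, which reports the archived snapshot closest to a requested date. A small sketch of how one might build a request for it and read its JSON response (the example site and timestamp are illustrative assumptions):

```python
# Sketch of using the Wayback Machine availability API.
import json
from urllib.parse import urlencode

API = "https://archive.org/wayback/available"

def availability_request(url, timestamp=None):
    """Build the query URL; timestamp is an optional YYYYMMDD string."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Return the URL of the closest archived snapshot, or None."""
    data = json.loads(response_text)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# In practice you would fetch availability_request(...) with
# urllib.request.urlopen and pass the body to closest_snapshot.
```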