changeset 33:f899b1a922ce
getting started
author   | Henry S. Thompson <ht@inf.ed.ac.uk>
date     | Mon, 22 Apr 2024 15:17:02 +0100
parents  | 539ce5728bae
children | 052f4ff4eae6
files    | LURID3.xml
diffstat | 1 files changed, 35 insertions(+), 0 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/LURID3.xml	Mon Apr 22 15:17:02 2024 +0100
@@ -0,0 +1,35 @@
+<?xml version='1.0'?>
+<?xml-stylesheet type="text/xsl" href="../../../lib/xml/doc.xsl" ?>
+<!DOCTYPE doc SYSTEM "../../../lib/xml/doc.dtd" >
+<doc xmlns:x="http://www.w3.org/1999/xhtml">
+ <head>
+  <title>LURID3: Longitudinal studies of the World Wide Web<x:br/>
+   <span style="font-size:80%">UKRI reference APP39557</span></title>
+  <author>Henry S. Thompson</author>
+  <date>22 Apr 2024</date>
+ </head>
+ <body>
+  <div>
+   <title>Vision</title>
+   <div>
+    <title>Motivation</title>
+    <p>Empirical evidence of how use of the Web has changed in the past provides crucial input to decisions about its future. Creative uses of the mechanisms the Web provides expand its potential, but also sometimes put it at risk, so it’s worrying that there’s surprisingly little empirical evidence available to guide standardization and planning more generally. Which aspects of the Web’s functionality are widely used? Hardly ever used? How is this changing over time?</p>
+    <p>The kind of evidence needed to answer such questions is hard to come by.
+The proposed research builds on our previous work in this area [Thompson and
+Tong 2018], [Thompson 2024], taking advantage of the computational resource Cirrus provides to validate and expand our work on the Common Crawl web archive.</p>
+    <p>Common Crawl (CC) is a very-large-scale web archive, containing
+petabytes of data from more than 65 monthly/bi-monthly archives, totalling over
+100 billion web pages. Collection began in 2008 with annual archives,
+expanding steadily, with monthly collection from 2017 until 2023 and bi-monthly collection since then. Recent archives each contain over 3×10^9 pages, about 50 terabytes (compressed). Together with Edinburgh colleagues we have created local copies of 8 months of CC in a petabyte store attached to Cirrus.
+For our purposes it is important to note that the overlap between any two archives, as measured by Jaccard similarity of page checksums, is less than 0.02 [9].</p>
+    <p>The proposed work will build on results from our just-completed project
+(<name>LURID2: Assessing the validity of Common Crawl</name>, EPSRC Access to
+HPC Award from 2022–12 to 2023–04) on the Common Crawl web
+archive (CC), in the course of which we accomplished almost all of our four main objectives.</p>
+   </div>
+  </div>
+  <div>
+   <title>Approach</title>
+  </div>
+ </body>
+</doc>
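The inter-archive overlap figure quoted in the added text is a Jaccard similarity over sets of page checksums. A minimal sketch of that computation, assuming checksums are available as string digests (the checksum values below are illustrative, not real Common Crawl digests):

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two checksum collections."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # conventionally define J(∅, ∅) = 0 here
    return len(a & b) / len(a | b)

# Hypothetical page-checksum sets for two archives, sharing one page:
archive_a = {"c1", "c2", "c3", "c4"}
archive_b = {"c3", "c5", "c6", "c7"}

print(round(jaccard(archive_a, archive_b), 3))  # 1 shared / 7 total → 0.143
```

A similarity below 0.02 on this measure means fewer than 2% of the pages in the union of two archives appear in both, i.e. successive crawls are almost entirely disjoint.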