How to crawl your own website to warm the cache

You can use wget for that. Set the http_proxy environment variable to point at your proxy, then run wget with options similar to the ones below (Linux commands):

export http_proxy=http://127.0.0.1:3128/

wget --no-cache --delete-after -m http://www.mywebsite.org
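To check that the proxy is actually storing the pages, you can request one of the URLs through the proxy and inspect the response headers. This is only a sketch: it assumes a Squid-style proxy on 127.0.0.1:3128 that adds an X-Cache header, and www.mywebsite.org stands in for your own site.

curl -s -o /dev/null -D - -x http://127.0.0.1:3128 http://www.mywebsite.org/ | grep -i '^x-cache'

Run it twice; the second response should report a HIT if the object was cached.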

If you only need to warm the cache with static files, you can do one of the following:

  1. Use find and feed each filename to wget (a more robust variant that copes with spaces in filenames is sketched after this list):
    for path in $(find /full/path/to/files/ -type f -printf "%f\n"); do wget --no-cache --delete-after -m "https://static.domain.tld/rewritten-path/$path"; done
  2. Use curl to send a HEAD request for each file:
    for path in $(find /full/path/to/files/ -type f -printf "%f\n"); do curl -I "https://static.domain.tld/rewritten-path/$path"; done
  3. Make a list of the files first, then turn each entry into a URL and fetch it with curl:
    find /full/path/to/files/ -type f -printf "%f\n" > output.txt; xargs -I{} curl -I "https://static.domain.tld/{}" < output.txt
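The for path in $(find ...) loops above break on filenames that contain spaces. A minimal sketch of a more robust variant, assuming the same placeholder host static.domain.tld and rewritten-path prefix, passes null-delimited names straight from find to xargs:

    find /full/path/to/files/ -type f -printf "%f\0" | xargs -0 -I{} curl -I "https://static.domain.tld/rewritten-path/{}"

Note that curl -I sends a HEAD request; depending on the proxy, that may not store the response body, in which case fetch the full object with curl -s -o /dev/null instead.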
