I was wondering if there is a tool that will recursively crawl a website and save each page it lands on as a separate .html file. I've looked at wget, but I wasn't sure whether it fulfils that specific functionality.
Any solution will work, as long as it runs on Linux or Windows.
Thanks.
Answer
I guess this is your solution: https://www.httrack.com/page/1/en/index.html
with a tutorial: http://www.wikihow.com/Copy-a-Website
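If you'd rather stay on the command line, HTTrack has a CLI, and wget can do this as well. A minimal sketch, using https://example.com as a placeholder URL and ./mirror as an output directory of your choosing:

# HTTrack: mirror the site into ./mirror
httrack "https://example.com" -O ./mirror

# wget: recurse, grab page requisites (images/CSS), rewrite links
# for offline viewing, and save pages with an .html extension
wget --recursive --page-requisites --convert-links --adjust-extension --no-parent https://example.com

With wget, --adjust-extension is what makes each saved page end in .html, and --no-parent keeps the crawl from wandering above the starting directory. You may also want a depth limit (--level) and a politeness delay (--wait) on larger sites.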