
Tag: wget

wget: Unsupported scheme on non-http URL

I have the following line in my shell script: When I tried to run the script, it gave me the following error: Does it mean wget supports HTTP and FTP ONLY? Answer man wget shows: It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. Try curl; it supports file URLs. Also note you probably want
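As the answer notes, curl handles `file://` URLs while wget rejects them with "Unsupported scheme". A minimal sketch, assuming curl is installed (the path is a hypothetical local file):

```shell
# wget refuses file:// URLs; curl reads them directly.
echo "hello" > /tmp/example.txt          # hypothetical local file
curl -s file:///tmp/example.txt          # prints: hello
```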

Get file from ubuntu server

I have an Ubuntu server running with DB backups. How do I get the backups? I am using a *.pem file to log in. I have been trying mail with mailx. I execute the command but without success; I get no error messages. I am aware that this is without attachments. How do I get this working? With wget, do I need
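Since a `.pem` key is already used to log in, fetching the backup with `scp` over the same key is the usual route rather than mailing it. A sketch in which the key path, user, host, and file paths are all placeholders:

```shell
# All names below are hypothetical — substitute your own key, host, and paths.
scp -i /path/to/key.pem ubuntu@server.example.com:/var/backups/db.sql.gz ./db.sql.gz
```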

wget -O for non-existing save path?

I can’t wget when the save path does not already exist; wget doesn’t work for non-existent save paths. For example, if /path/to/image/ does not already exist, it always returns: How can I make it work to automatically create the path and save? Answer Try curl
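Two common workarounds, sketched here with placeholder paths and the URL left as a variable: create the directory before `wget -O`, or let curl create it with `--create-dirs`:

```shell
# Option 1: create the missing directories first, then wget -O as before.
mkdir -p /tmp/path/to/image
# wget -O /tmp/path/to/image/pic.jpg "$url"

# Option 2: curl creates the missing directories itself.
# curl --create-dirs -o /tmp/path/to/image/pic.jpg "$url"
```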

Is wget or similar programs always available on POSIX systems?

Is there an HTTP client like wget/lynx/GET that is distributed by default on POSIX or *nix operating systems and could be used for maximum portability? I know most systems have wget or lynx installed, but I seem to remember installing some Ubuntu server systems using default settings, and they had neither wget nor lynx installed in the base package. I
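POSIX itself mandates no HTTP client, so portable scripts usually probe for one at run time. A sketch of that pattern (the function name is my own):

```shell
# Try common downloaders in order; fetch the given URL to stdout.
fetch() {
    if command -v curl >/dev/null 2>&1; then
        curl -fsSL "$1"
    elif command -v wget >/dev/null 2>&1; then
        wget -qO- "$1"
    else
        echo "error: neither curl nor wget found" >&2
        return 1
    fi
}
```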

wget: downloaded file name

I’m writing a script for Bash and I need to get the name of the file downloaded with wget and put the name into $string. For example, if I download the file below, I want to put its name, mxKL17DdgUhcr.jpg, into $string. Answer Use the basename command to extract the filename from the URL. For example:
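The `basename` approach from the answer, using the filename from the question (the host is a placeholder):

```shell
url="http://example.com/mxKL17DdgUhcr.jpg"   # hypothetical host
string=$(basename "$url")
echo "$string"                               # prints: mxKL17DdgUhcr.jpg
```

Note that `basename` keeps any query string, so a URL ending in `?key=value` would need that suffix stripped first.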

How can I show the wget progress bar only? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question
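Although the question was closed, the usual approach (assuming wget 1.16 or later, which added `--show-progress`) looks like this; the URL is a placeholder:

```shell
# -q suppresses wget's normal output; --show-progress keeps only the bar.
wget -q --show-progress https://example.com/large-file.iso
```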

How to download images from “wikimedia search result” using wget?

I need to mirror every image that appears on this page: http://commons.wikimedia.org/w/index.php?title=Special:Search&ns0=1&ns6=1&ns12=1&ns14=1&ns100=1&ns106=1&redirs=0&search=buitenzorg&limit=900&offset=0 The mirror should give us the full-size images, not the thumbnails. What is the best way to do this with wget? UPDATE: I have updated the solution below. Answer It is quite difficult to write the whole script in the Stack Overflow editor; you can find the script at
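The answer's full script lives elsewhere, but a sketch of the kind of wget invocation such a script builds on (the flags are standard wget options; the shortened search URL is from the question):

```shell
# -r -l1: recurse one level from the page; -H: follow links to image hosts;
# -nd: save everything into one flat directory; -A: keep only image files;
# -e robots=off: ignore robots.txt (use politely and sparingly).
wget -r -l1 -H -nd -A jpg,jpeg,png -e robots=off \
  "http://commons.wikimedia.org/w/index.php?title=Special:Search&search=buitenzorg&limit=900"
```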

A command to download a file other than Wget [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question
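The most common alternative is curl: `-O` saves under the remote filename, `-o` under a name you choose. A sketch using a local `file://` URL so it runs without a network (paths are hypothetical):

```shell
echo "payload" > /tmp/source.txt      # stand-in for a remote file
mkdir -p /tmp/dl && cd /tmp/dl
curl -sO file:///tmp/source.txt       # -O keeps the name: saves ./source.txt
curl -s -o copy.txt file:///tmp/source.txt   # -o picks the name: ./copy.txt
```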
