$ curl -r -l 2 https://www.TARGET.com/
Warning: Invalid character is found in given range. A specified range MUST
Warning: have only digits in 'start'-'stop'. The server's response to this
Warning: request is uncertain.
curl: (7) Couldn't connect to server
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Header</h2>
<hr><p>HTTP Error 400. The request has an invalid header name.</p>
</BODY></HTML>
Read the man page. And what does "click on all links" mean, exactly? That doesn't seem to make any sense at all in the context of curl or wget. What are you actually trying to do?
Head_on_a_Stick wrote:Read the man page. And what does "click on all links" mean, exactly? That doesn't seem to make any sense at all in the context of curl or wget. What are you actually trying to do?
Thanks.
Consider a web page that has some links in it. I'd like to use wget or cURL to send a request to that page and to every link on it, as if clicking on all of the links.
I saw https://askubuntu.com/questions/639069/ ... e-webpages, but:
$ curl -r -l 2 https://www.URL.com/
Warning: Invalid character is found in given range. A specified range MUST
Warning: have only digits in 'start'-'stop'. The server's response to this
Warning: request is uncertain.
curl: (7) Couldn't connect to server
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Header</h2>
<hr><p>HTTP Error 400. The request has an invalid header name.</p>
</BODY></HTML>
reinob wrote:That's what happens when you blindly type something somebody wrote in some forum.
Read the manual.
-r in wget is not the same as -r in curl.
In cURL:
-r, --range <range> Retrieve only the bytes within RANGE
--raw Do HTTP "raw"; no transfer decoding
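(For illustration, with example.com as a placeholder: curl's -r really is a byte range, so the following fetches only the first 500 bytes of a single page; there is nothing recursive about it.)

# curl's -r/--range selects bytes of one document, not link depth
$ curl --range 0-499 https://example.com/ -o first-500-bytes.html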
I know a wget command like the one below can do it, but it downloads the whole website:
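(Presumably something like the usual recursive invocation; www.TARGET.com is the stand-in host from the first post:)

# recursive download, following links up to two levels deep
$ wget --recursive --level=2 https://www.TARGET.com/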
I just want wget to send a request to each link, as if clicking on all of them.
You're going to have to define "click".
If you mean that every link on a webpage should be requested (GET /.../ HTTP/1.0, etc.) then recursive wget is what you want.
If your problem is that wget actually stores the downloaded pages, that's a separate issue and easy to deal with: just wipe the folder when you're done, or use "-O /dev/null".
If your "click" can also be a HEAD request (instead of a GET request), then you can use "wget --recursive --spider", which will "click" (HEAD) every link without downloading anything.