How to download a file using just bash and nothing else (no curl, wget, perl, etc.)


























I have a minimal headless *nix which does not have any command line utilities for downloading files (e.g. no curl, wget, etc). I only have bash.



How can I download a file?



Ideally, I would like a solution that would work across a wide range of *nix.


































  • how about gawk
    – Neil McGuigan
    Mar 16 '17 at 21:50












  • I can't remember now if gawk was available, though I'd love to see a gawk based solution if you have one :)
    – Chris Snow
    Mar 16 '17 at 21:52










    here's an example: gnu.org/software/gawk/manual/gawkinet/gawkinet.html#Web-page
    – Neil McGuigan
    Mar 16 '17 at 22:30
















edited Aug 1 '13 at 15:52

























asked Jul 22 '13 at 7:43









Chris Snow













7 Answers
































If you have bash 2.04 or above with the /dev/tcp pseudo-device enabled, you can download a file from bash itself.



Paste the following code directly into a bash shell (you don't need to save the code into a file for executing):



function __wget() {
    : ${DEBUG:=0}
    local URL=$1
    local tag="Connection: close"
    local mark=0

    if [ -z "${URL}" ]; then
        printf "Usage: %s \"URL\" [e.g.: %s http://www.google.com/]" \
               "${FUNCNAME[0]}" "${FUNCNAME[0]}"
        return 1;
    fi
    read proto server path <<<$(echo ${URL//// })
    DOC=/${path// //}
    HOST=${server//:*}
    PORT=${server//*:}
    [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
    [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST"
    [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT"
    [[ $DEBUG -eq 1 ]] && echo "DOC =$DOC"

    exec 3<>/dev/tcp/${HOST}/$PORT
    echo -en "GET ${DOC} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3
    while read line; do
        [[ $mark -eq 1 ]] && echo $line
        if [[ "${line}" =~ "${tag}" ]]; then
            mark=1
        fi
    done <&3
    exec 3>&-
}


Then you can execute it from the shell as follows:



__wget http://example.iana.org/


Source: Moreaki's answer to "upgrading and installing packages through the cygwin command line?"
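The host/port/document extraction in the function above uses only bash parameter expansion, no external tools. As a small offline sketch (with a hypothetical URL), the same expansions give:

```shell
# Same parsing steps as in __wget above, run against a hypothetical URL.
URL="http://www.example.com:8080/path/to/file"
read proto server path <<<$(echo ${URL//// })   # split the URL on "/"
DOC=/${path// //}         # rejoin the path components with "/"
HOST=${server//:*}        # strip the ":port" suffix, if any
PORT=${server//*:}        # strip the "host:" prefix, if any
[[ x"${HOST}" == x"${PORT}" ]] && PORT=80   # no ":" at all -> default to 80
echo "HOST=$HOST PORT=$PORT DOC=$DOC"
```

Note that when the URL has no explicit port, both expansions leave the string untouched, which is why the `HOST == PORT` comparison is used to detect that case.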



Update:
as mentioned in the comments, the approach outlined above is simplistic:


  • read like this will trash backslashes and leading whitespace.

  • Bash can't deal with NUL bytes very nicely, so binary files are out.

  • An unquoted $line will glob.
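The first and third caveats are easy to reproduce offline. This hypothetical sketch shows a bare read eating a backslash, and the resulting unquoted variable being word-split and globbed:

```shell
cd "$(mktemp -d)"                # empty scratch directory
touch a.txt b.txt                # files for the glob to match
line='*.txt back\slash'
read mangled <<< "$line"         # no -r: the backslash is processed away
echo $mangled                    # unquoted: word splitting + pathname globbing
echo "$mangled"                  # quoted: printed as-is
```

The unquoted echo prints `a.txt b.txt backslash` (the `*.txt` word was expanded against the directory), while the quoted one prints `*.txt backslash`.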





























    So you answered your own question at the same time as you asked it. That's an interesting time machine you have ;)
    – Meer Borg
    Jul 22 '13 at 7:59










    @MeerBorg - when you ask a question, look for the tick box 'answer your own question' - blog.stackoverflow.com/2011/07/…
    – Chris Snow
    Jul 22 '13 at 8:08










  • @eestartup - I don't think you can vote for your own answer. Can I explain the code? Not yet! But it does work on cygwin.
    – Chris Snow
    Jul 22 '13 at 9:24










    Just a note: This won't work with some configurations of Bash. I believe Debian configures this feature out of their distribution of Bash.
    – user26112
    Jul 22 '13 at 15:57










    Urgh, while this is a nice trick, it can too easily cause corrupt downloads. while read like that trashes backslashes and leading whitespace and Bash can't deal with NUL bytes very nicely so binary files are out. And unquoted $line will glob ... None of this I see mentioned in the answer.
    – ilkkachu
    May 16 '17 at 11:53

































Use lynx.



It is pretty commonly available on Unix/Linux systems.



lynx -dump http://www.google.com


-dump: dump the first file to stdout and exit



man lynx


Or netcat:



/usr/bin/printf 'GET / \n' | nc www.google.com 80


Or telnet:



(echo 'GET /'; echo ""; sleep 1; ) | telnet www.google.com 80




























    The OP has "*nix which does not have any command line utilities for downloading files", so no lynx for sure.
    – Celada
    Jul 25 '14 at 14:06










    Note lynx -source is closer to wget
    – Steven Penny
    Dec 20 '14 at 8:34










  • Hey, so this is a really late comment but how do you save the output of the telnet command to a file? Redirecting with ">" outputs both the file's contents and telnet output such as "Trying 93.184.216.34... Connected to www.example.com.". I'm in a situation where I can only use telnet, I'm trying to make a chroot jail with the least frameworks possible.
    – pixelomer
    Sep 11 '18 at 9:39



































Adapted from Chris Snow's answer.
This one can also handle binary file transfers.



function __curl() {
    read proto server path <<<$(echo ${1//// })
    DOC=/${path// //}
    HOST=${server//:*}
    PORT=${server//*:}
    [[ x"${HOST}" == x"${PORT}" ]] && PORT=80

    exec 3<>/dev/tcp/${HOST}/$PORT
    echo -en "GET ${DOC} HTTP/1.0\r\nHost: ${HOST}\r\n\r\n" >&3
    (while read line; do
        [[ "$line" == $'\r' ]] && break
    done && cat) <&3
    exec 3>&-
}



  • I use break && cat to get out of the read loop.

  • I use HTTP/1.0, so there's no need to wait for/send a Connection: close.


You can test binary files like this



ivs@acsfrlt-j8shv32:/mnt/r $ __curl http://www.google.com/favicon.ico > mine.ico
ivs@acsfrlt-j8shv32:/mnt/r $ curl http://www.google.com/favicon.ico > theirs.ico
ivs@acsfrlt-j8shv32:/mnt/r $ md5sum mine.ico theirs.ico
f3418a443e7d841097c714d69ec4bcb8 mine.ico
f3418a443e7d841097c714d69ec4bcb8 theirs.ico
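The header-skipping loop can also be exercised offline by feeding it a canned HTTP response instead of a socket (hypothetical file resp.txt); only the body should come out:

```shell
# A canned HTTP/1.0 response: status line, one header, blank line, body.
printf 'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello body\n' > resp.txt

# Same "skip headers until the bare \r, then cat" trick as in __curl above.
body=$( (while IFS= read -r line; do
    [[ "$line" == $'\r' ]] && break
done && cat) < resp.txt )
echo "$body"    # prints: hello body
```

Each header line is read up to the `\n`, so the trailing `\r` stays in `$line`; the blank line separating header from body is therefore exactly `$'\r'`, which triggers the break, and cat copies the rest untouched.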




























  • This won't handle binary transfer files—it will fail on null bytes.
    – Wildcard
    Feb 2 '18 at 2:40










  • @Wildcard, i do not understand , i've edited with a binary file transfer example (containing null bytes), can you point me what i'm missing ?
    – 131
    Feb 2 '18 at 7:58












    @Wildcard, heheh, yeah that looks like it should work, since it reads the actual file data with cat. I'm not sure if that's cheating (since it's not purely the shell), or a nice solution (since cat is a standard tool, after all). But @131, you might want to add a note about why it works better than the other solutions here.
    – ilkkachu
    Feb 2 '18 at 8:54










  • @Wildcard, I added the pure bash solution too as an answer below. And yes, cheating or not, this is a valid solution and worth an upvote :)
    – ilkkachu
    Feb 2 '18 at 10:41

































Taking the "just Bash and nothing else" strictly, here's one adaptation of earlier answers (@Chris's, @131's) that does not call any external utilities (not even standard ones) but also works with binary files:



#!/bin/bash
download() {
    read proto server path <<< "${1//"\/"/ }"
    DOC=/${path// //}
    HOST=${server//:*}
    PORT=${server//*:}
    [[ x"${HOST}" == x"${PORT}" ]] && PORT=80

    exec 3<>/dev/tcp/${HOST}/$PORT

    # send request
    echo -en "GET ${DOC} HTTP/1.0\r\nHost: ${HOST}\r\n\r\n" >&3

    # read the header, it ends in an empty line (just CRLF)
    while IFS= read -r line ; do
        [[ "$line" == $'\r' ]] && break
    done <&3

    # read the data
    nul='\0'
    while IFS= read -d '' -r x || { nul=""; [ -n "$x" ]; }; do
        printf "%s$nul" "$x"
    done <&3
    exec 3>&-
}


Use with download http://path/to/file > file.



We deal with NUL bytes with read -d ''. It reads until a NUL byte, and returns true if it found one, false if it didn't. Bash can't handle NUL bytes in strings, so when read returns with true, we add the NUL byte manually when printing, and when it returns false, we know there are no NUL bytes any more, and this should be the last piece of data.
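This NUL handling can be checked offline by round-tripping a byte stream that contains NULs (hypothetical files in.bin/out.bin), with no network involved:

```shell
# Copy stdin to stdout using only the read -d '' loop described above.
copy_stream() {
    local nul='\0' x
    while IFS= read -d '' -r x || { nul=""; [ -n "$x" ]; }; do
        printf "%s$nul" "$x"
    done
}

printf 'abc\0def\0\0tail' > in.bin    # 13 bytes, NULs in the middle
copy_stream < in.bin > out.bin
cmp in.bin out.bin && echo "identical"
```

Each successful read means a NUL delimiter was consumed, so one is re-emitted after the chunk; the final read fails at EOF with the unterminated tail still in `$x`, which is printed without a trailing NUL.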



Tested with Bash 4.4 on files with NULs in the middle, and ending in zero, one or two NULs, and also with the wget and curl binaries from Debian. The 373 kB wget binary took about 5.7 seconds to download. A speed of about 65 kB/s or a bit more than 512 kb/s.



In comparison, @131's cat-solution finishes in less than 0.1 s, or almost a hundred times faster. Not very surprising, really.



This is obviously silly, since without using external utilities, there's not much we can do with the downloaded file, not even make it executable.





























  • Isn't echo a standalone -non shell- binary ? (:p)
    – 131
    Feb 2 '18 at 10:56










    @131, no! Bash has echo and printf as builtins (it needs a builtin printf to implement printf -v)
    – ilkkachu
    Feb 2 '18 at 11:00

































Use uploading instead, via SSH from your local machine



A "minimal headless *nix" box means you probably SSH into it. So you can also use SSH to upload to it. Which is functionally equivalent to downloading (of software packages etc.) except when you want a download command to include in a script on your headless server of course.



As shown in this answer, you would execute the following on your local machine to place a file on your remote headless server:



wget -O - http://example.com/file.zip | ssh user@host 'cat >/path/to/file.zip'


Faster uploading via SSH from a third machine



The disadvantage of the above solution compared to downloading is lower transfer speed, since the connection with your local machine usually has much less bandwidth than the connection between your headless server and other servers.



To solve that, you can of course execute the above command on another server with decent bandwidth. To make that more comfortable (avoiding a manual login on the third machine), here is a command to execute on your local machine.



To be secure, copy & paste that command including the leading space character ' '. See the explanations below for the reason.



 ssh user@intermediate-host "sshpass -f <(printf '%s\n' yourpassword) \
   ssh -T -e none \
     -o StrictHostKeyChecking=no \
     < <(wget -O - http://example.com/input-file.zip) \
     user@target-host \
     'cat >/path/to/output-file.zip' \
 "


Explanations:




  • The command will ssh to your third machine intermediate-host, start downloading a file to there via wget, and start uploading it to target-host via SSH. Downloading and uploading use the bandwidth of your intermediate-host and happen at the same time (due to Bash pipe equivalents), so progress will be fast.


  • When using this, you have to replace the two server logins (user@*-host), the target host password (yourpassword), the download URL (http://example.com/…) and the output path on your target host (/path/to/output-file.zip) with appropriate own values.


  • For the -T -e none SSH options when using it to transfer files, see these detailed explanations.


  • This command is meant for cases where you can't use SSH's public key authentication mechanism – it still happens with some shared hosting providers, notably Host Europe. To still automate the process, we rely on sshpass to be able to supply the password in the command. It requires sshpass to be installed on your intermediate host (sudo apt-get install sshpass under Ubuntu).


  • We try to use sshpass in a secure way, but it will still not be as secure as the SSH pubkey mechanism (says man sshpass). In particular, we supply the SSH password not as a command line argument but via a file, which is replaced by bash process substitution to make sure it never exists on disk. The printf is a bash built-in, making sure this part of the code does not pop up as a separate command in ps output as that would expose the password [source]. I think that this use of sshpass is just as secure as the sshpass -d<file-descriptor> variant recommended in man sshpass, because bash maps it internally to such a /dev/fd/* file descriptor anyway. And that without using a temp file [source]. But no guarantees, maybe I overlooked something.


  • Again to make the sshpass usage safe, we need to prevent the command from being recorded to the bash history on your local machine. For that, the whole command is prepended with one space character, which has this effect.


  • The -o StrictHostKeyChecking=no part prevents the command from failing in case it never connected to the target host. (Normally, SSH would then wait for user input to confirm the connection attempt. We make it proceed anyway.)


  • sshpass expects a ssh or scp command as its last argument. So we have to rewrite the typical wget -O - … | ssh … command into a form without a bash pipe, as explained here.




















































If you have the libwww-perl package,



    You can simply use:



    /usr/bin/GET
















































      Based on @Chris Snow recipe. I made some improvements:




  • http scheme check (it only supports http)

  • http response validation (check the response status line, and split header and body at the blank '\r\n' line, not at 'Connection: close', which is sometimes not present)

  • fail on non-200 codes (important when downloading files from the internet)


      Here is code:



function __wget() {
    : ${DEBUG:=0}
    local URL=$1
    local tag="Connection: close"

    if [ -z "${URL}" ]; then
        printf "Usage: %s \"URL\" [e.g.: %s http://www.google.com/]" \
               "${FUNCNAME[0]}" "${FUNCNAME[0]}"
        return 1;
    fi
    read proto server path <<<$(echo ${URL//// })
    local SCHEME=${proto//:*}
    # note: PATH shadows the usual $PATH, but only builtins are used below
    local PATH=/${path// //}
    local HOST=${server//:*}
    local PORT=${server//*:}
    if [[ "$SCHEME" != "http" ]]; then
        printf "sorry, %s only supports http\n" "${FUNCNAME[0]}"
        return 1
    fi
    [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
    [[ $DEBUG -eq 1 ]] && echo "SCHEME=$SCHEME" >&2
    [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST" >&2
    [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT" >&2
    [[ $DEBUG -eq 1 ]] && echo "PATH=$PATH" >&2

    if ! exec 3<>/dev/tcp/${HOST}/$PORT; then
        return 1
    fi
    if ! echo -en "GET ${PATH} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3; then
        return 1
    fi
    # 0: at begin, before reading http response
    # 1: reading header
    # 2: reading body
    local state=0
    local num=0
    local code=0
    while read line; do
        num=$(($num + 1))
        # check http code
        if [ $state -eq 0 ]; then
            if [ $num -eq 1 ]; then
                if [[ $line =~ ^HTTP/1\.[01][[:space:]]([0-9]{3}).*$ ]]; then
                    code="${BASH_REMATCH[1]}"
                    if [[ "$code" != "200" ]]; then
                        printf "failed to wget '%s', code is not 200 (%s)\n" "$URL" "$code"
                        exec 3>&-
                        return 1
                    fi
                    state=1
                else
                    printf "invalid http response from '%s'\n" "$URL"
                    exec 3>&-
                    return 1
                fi
            fi
        elif [ $state -eq 1 ]; then
            if [[ "$line" == $'\r' ]]; then
                # found the blank "\r\n" line ending the header
                state=2
            fi
        elif [ $state -eq 2 ]; then
            # redirect body to stdout
            # TODO: any way to pipe data directly to stdout?
            echo "$line"
        fi
    done <&3
    exec 3>&-
}
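The status-line check at the heart of the validation can be tried on its own. This small sketch (hypothetical helper check_status) reuses the same regex:

```shell
# Extract the status code from an HTTP/1.x status line, as in the
# state-0 branch above; prints the code and returns 0 on a match.
check_status() {
    [[ $1 =~ ^HTTP/1\.[01][[:space:]]([0-9]{3}) ]] && echo "${BASH_REMATCH[1]}"
}

check_status 'HTTP/1.1 404 Not Found'   # prints 404
check_status 'HTTP/1.0 200 OK'          # prints 200
check_status 'ICY 200 OK' || echo 'not an HTTP/1.x status line'
```

BASH_REMATCH[1] holds the first parenthesized group, i.e. the three-digit code, which is why the function compares it against "200" before switching to header-reading state.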




























      • Nice enhancements +1
        – Chris Snow
        May 16 '17 at 18:43










  • It worked, but I found a concern: when I use this script, it keeps waiting several seconds after all the data has been read. That doesn't happen with @Chris Snow's answer. Can anyone explain this?
    – zw963
    May 19 '17 at 14:45










  • And, in this answer, echo -en "GET ${PATH} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3, ${tag} is not specified.
    – zw963
    May 19 '17 at 15:17










  • I edited this answer so the tag variable is set correctly; it works well now.
    – zw963
    May 19 '17 at 15:28










      • @zw963 Thanks for fixing the bug!
        – Yecheng Fu
        Sep 22 '17 at 9:35











      7 Answers
      7






      active

      oldest

      votes








      7 Answers
      7






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      57














      If you have bash 2.04 or above with the /dev/tcp pseudo-device enabled, you can download a file from bash itself.



      Paste the following code directly into a bash shell (you don't need to save the code into a file for executing):



      function __wget() {
      : ${DEBUG:=0}
      local URL=$1
      local tag="Connection: close"
      local mark=0

      if [ -z "${URL}" ]; then
      printf "Usage: %s "URL" [e.g.: %s http://www.google.com/]"
      "${FUNCNAME[0]}" "${FUNCNAME[0]}"
      return 1;
      fi
      read proto server path <<<$(echo ${URL//// })
      DOC=/${path// //}
      HOST=${server//:*}
      PORT=${server//*:}
      [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
      [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST"
      [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT"
      [[ $DEBUG -eq 1 ]] && echo "DOC =$DOC"

      exec 3<>/dev/tcp/${HOST}/$PORT
      echo -en "GET ${DOC} HTTP/1.1rnHost: ${HOST}rn${tag}rnrn" >&3
      while read line; do
      [[ $mark -eq 1 ]] && echo $line
      if [[ "${line}" =~ "${tag}" ]]; then
      mark=1
      fi
      done <&3
      exec 3>&-
      }


      Then you can execute it as from the shell as follows:



      __wget http://example.iana.org/


      Source: Moreaki's answer upgrading and installing packages through the cygwin command line?



      Update:
      as mentioned in the comment, the approach outlined above is simplistic:




      • the read will trashes backslashes and leading whitespace.

      • Bash can't deal with NUL bytes very nicely so binary files are out.

      • unquoted $line will glob.






      share|improve this answer



















      • 8




        So you answered your own question at the same time as you asked it. That's an interesting time machine you have ;)
        – Meer Borg
        Jul 22 '13 at 7:59






      • 10




        @MeerBorg - when you ask a question, look for the tick box 'answer your own question' - blog.stackoverflow.com/2011/07/…
        – Chris Snow
        Jul 22 '13 at 8:08










      • @eestartup - I don't think you can vote for your own answer. Can I explain the code? Not yet! But it does work on cygwin.
        – Chris Snow
        Jul 22 '13 at 9:24






      • 3




        Just a note: This won't work with some configurations of Bash. I believe Debian configures this feature out of their distribution of Bash.
        – user26112
        Jul 22 '13 at 15:57






      • 1




        Urgh, while this is a nice trick, it can too easily cause corrupt downloads. while read like that trashes backslashes and leading whitespace and Bash can't deal with NUL bytes very nicely so binary files are out. And unquoted $line will glob ... None of this I see mentioned in the answer.
        – ilkkachu
        May 16 '17 at 11:53
















      57














      If you have bash 2.04 or above with the /dev/tcp pseudo-device enabled, you can download a file from bash itself.



      Paste the following code directly into a bash shell (you don't need to save the code into a file for executing):



      function __wget() {
      : ${DEBUG:=0}
      local URL=$1
      local tag="Connection: close"
      local mark=0

      if [ -z "${URL}" ]; then
      printf "Usage: %s "URL" [e.g.: %s http://www.google.com/]"
      "${FUNCNAME[0]}" "${FUNCNAME[0]}"
      return 1;
      fi
      read proto server path <<<$(echo ${URL//// })
      DOC=/${path// //}
      HOST=${server//:*}
      PORT=${server//*:}
      [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
      [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST"
      [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT"
      [[ $DEBUG -eq 1 ]] && echo "DOC =$DOC"

      exec 3<>/dev/tcp/${HOST}/$PORT
      echo -en "GET ${DOC} HTTP/1.1rnHost: ${HOST}rn${tag}rnrn" >&3
      while read line; do
      [[ $mark -eq 1 ]] && echo $line
      if [[ "${line}" =~ "${tag}" ]]; then
      mark=1
      fi
      done <&3
      exec 3>&-
      }


      Then you can execute it as from the shell as follows:



      __wget http://example.iana.org/


      Source: Moreaki's answer upgrading and installing packages through the cygwin command line?



      Update:
      as mentioned in the comment, the approach outlined above is simplistic:




      • the read will trashes backslashes and leading whitespace.

      • Bash can't deal with NUL bytes very nicely so binary files are out.

      • unquoted $line will glob.






      share|improve this answer



















      • 8




        So you answered your own question at the same time as you asked it. That's an interesting time machine you have ;)
        – Meer Borg
        Jul 22 '13 at 7:59






      • 10




        @MeerBorg - when you ask a question, look for the tick box 'answer your own question' - blog.stackoverflow.com/2011/07/…
        – Chris Snow
        Jul 22 '13 at 8:08










      • @eestartup - I don't think you can vote for your own answer. Can I explain the code? Not yet! But it does work on cygwin.
        – Chris Snow
        Jul 22 '13 at 9:24






      • 3




        Just a note: This won't work with some configurations of Bash. I believe Debian configures this feature out of their distribution of Bash.
        – user26112
        Jul 22 '13 at 15:57






      • 1




        Urgh, while this is a nice trick, it can too easily cause corrupt downloads. while read like that trashes backslashes and leading whitespace and Bash can't deal with NUL bytes very nicely so binary files are out. And unquoted $line will glob ... None of this I see mentioned in the answer.
        – ilkkachu
        May 16 '17 at 11:53














      57












      57








      57






      If you have bash 2.04 or above with the /dev/tcp pseudo-device enabled, you can download a file from bash itself.



      Paste the following code directly into a bash shell (you don't need to save the code into a file for executing):



      function __wget() {
      : ${DEBUG:=0}
      local URL=$1
      local tag="Connection: close"
      local mark=0

      if [ -z "${URL}" ]; then
      printf "Usage: %s "URL" [e.g.: %s http://www.google.com/]"
      "${FUNCNAME[0]}" "${FUNCNAME[0]}"
      return 1;
      fi
      read proto server path <<<$(echo ${URL//// })
      DOC=/${path// //}
      HOST=${server//:*}
      PORT=${server//*:}
      [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
      [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST"
      [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT"
      [[ $DEBUG -eq 1 ]] && echo "DOC =$DOC"

      exec 3<>/dev/tcp/${HOST}/$PORT
      echo -en "GET ${DOC} HTTP/1.1rnHost: ${HOST}rn${tag}rnrn" >&3
      while read line; do
      [[ $mark -eq 1 ]] && echo $line
      if [[ "${line}" =~ "${tag}" ]]; then
      mark=1
      fi
      done <&3
      exec 3>&-
      }


      Then you can execute it as from the shell as follows:



      __wget http://example.iana.org/


      Source: Moreaki's answer upgrading and installing packages through the cygwin command line?



      Update:
      as mentioned in the comment, the approach outlined above is simplistic:




      • the read will trashes backslashes and leading whitespace.

      • Bash can't deal with NUL bytes very nicely so binary files are out.

      • unquoted $line will glob.






      share|improve this answer














      If you have bash 2.04 or above with the /dev/tcp pseudo-device enabled, you can download a file from bash itself.



      Paste the following code directly into a bash shell (you don't need to save the code into a file for executing):



      function __wget() {
      : ${DEBUG:=0}
      local URL=$1
      local tag="Connection: close"
      local mark=0

      if [ -z "${URL}" ]; then
      printf "Usage: %s "URL" [e.g.: %s http://www.google.com/]"
      "${FUNCNAME[0]}" "${FUNCNAME[0]}"
      return 1;
      fi
      read proto server path <<<$(echo ${URL//// })
      DOC=/${path// //}
      HOST=${server//:*}
      PORT=${server//*:}
      [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
      [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST"
      [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT"
      [[ $DEBUG -eq 1 ]] && echo "DOC =$DOC"

      exec 3<>/dev/tcp/${HOST}/$PORT
      echo -en "GET ${DOC} HTTP/1.1rnHost: ${HOST}rn${tag}rnrn" >&3
      while read line; do
      [[ $mark -eq 1 ]] && echo $line
      if [[ "${line}" =~ "${tag}" ]]; then
      mark=1
      fi
      done <&3
      exec 3>&-
      }


      Then you can execute it as from the shell as follows:



      __wget http://example.iana.org/


      Source: Moreaki's answer upgrading and installing packages through the cygwin command line?



      Update:
      as mentioned in the comment, the approach outlined above is simplistic:




      • the read will trashes backslashes and leading whitespace.

      • Bash can't deal with NUL bytes very nicely so binary files are out.

      • unquoted $line will glob.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited May 16 '17 at 13:05

























      answered Jul 22 '13 at 7:43









      Chris Snow

      1,77831528




      1,77831528








      • 8




        So you answered your own question at the same time as you asked it. That's an interesting time machine you have ;)
        – Meer Borg
        Jul 22 '13 at 7:59






      • 10




        @MeerBorg - when you ask a question, look for the tick box 'answer your own question' - blog.stackoverflow.com/2011/07/…
        – Chris Snow
        Jul 22 '13 at 8:08










      • @eestartup - I don't think you can vote for your own answer. Can I explain the code? Not yet! But it does work on cygwin.
        – Chris Snow
        Jul 22 '13 at 9:24






      • 3




        Just a note: This won't work with some configurations of Bash. I believe Debian configures this feature out of their distribution of Bash.
        – user26112
        Jul 22 '13 at 15:57






      • 1




        Urgh, while this is a nice trick, it can too easily cause corrupt downloads. while read like that trashes backslashes and leading whitespace and Bash can't deal with NUL bytes very nicely so binary files are out. And unquoted $line will glob ... None of this I see mentioned in the answer.
        – ilkkachu
        May 16 '17 at 11:53














      Use lynx.



      It is pretty common on most Unix/Linux systems.



      lynx -dump http://www.google.com


      -dump: dump the first file to stdout and exit



      man lynx


      Or netcat:



      /usr/bin/printf 'GET / \n' | nc www.google.com 80


      Or telnet:



      (echo 'GET /'; echo ""; sleep 1; ) | telnet www.google.com 80
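If you need just the body saved to a file, the headers have to be stripped: they end at the first blank (CRLF-only) line. A minimal sketch with GNU sed, run here against a canned response instead of a live socket (the same filter works on real nc or telnet output):

```shell
# A canned HTTP response stands in for nc/telnet output in this demo.
response=$'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello world'

# Delete everything from line 1 through the first empty (or lone-CR) line,
# leaving only the body. With a real connection you would pipe instead:
#   printf 'GET / HTTP/1.0\r\n\r\n' | nc host 80 | sed '1,/^\r\{0,1\}$/d' > file
body=$(printf '%s' "$response" | sed '1,/^\r\{0,1\}$/d')
printf '%s\n' "$body"
```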





      • The OP has "*nix which does not have any command line utilities for downloading files", so no lynx for sure.
        – Celada
        Jul 25 '14 at 14:06

      • Note lynx -source is closer to wget
        – Steven Penny
        Dec 20 '14 at 8:34

      • Hey, so this is a really late comment but how do you save the output of the telnet command to a file? Redirecting with ">" outputs both the file's contents and telnet output such as "Trying 93.184.216.34... Connected to www.example.com.". I'm in a situation where I can only use telnet, I'm trying to make a chroot jail with the least frameworks possible.
        – pixelomer
        Sep 11 '18 at 9:39


















      edited Jul 22 '13 at 16:56

























      answered Jul 22 '13 at 15:49









      woodstack

      Adapted from Chris Snow's answer. This version can also handle binary files:



      function __curl() {
        read proto server path <<<$(echo ${1//// })
        DOC=/${path// //}
        HOST=${server//:*}
        PORT=${server//*:}
        [[ x"${HOST}" == x"${PORT}" ]] && PORT=80

        exec 3<>/dev/tcp/${HOST}/$PORT
        echo -en "GET ${DOC} HTTP/1.0\r\nHost: ${HOST}\r\n\r\n" >&3
        (while read line; do
          [[ "$line" == $'\r' ]] && break
        done && cat) <&3
        exec 3>&-
      }



      • I use break && cat to get out of the read loop

      • I use HTTP 1.0 so there's no need to wait for/send a Connection: close


      You can test binary files like this



      ivs@acsfrlt-j8shv32:/mnt/r $ __curl http://www.google.com/favicon.ico > mine.ico
      ivs@acsfrlt-j8shv32:/mnt/r $ curl http://www.google.com/favicon.ico > theirs.ico
      ivs@acsfrlt-j8shv32:/mnt/r $ md5sum mine.ico theirs.ico
      f3418a443e7d841097c714d69ec4bcb8 mine.ico
      f3418a443e7d841097c714d69ec4bcb8 theirs.ico





      • This won't handle binary transfer files—it will fail on null bytes.
        – Wildcard
        Feb 2 '18 at 2:40

      • @Wildcard, i do not understand, i've edited with a binary file transfer example (containing null bytes), can you point me what i'm missing?
        – 131
        Feb 2 '18 at 7:58

      • @Wildcard, heheh, yeah that looks like it should work, since it reads the actual file data with cat. I'm not sure if that's cheating (since it's not purely the shell), or a nice solution (since cat is a standard tool, after all). But @131, you might want to add a note about why it works better than the other solutions here.
        – ilkkachu
        Feb 2 '18 at 8:54

      • @Wildcard, I added the pure bash solution too as an answer below. And yes, cheating or not, this is a valid solution and worth an upvote :)
        – ilkkachu
        Feb 2 '18 at 10:41
















      edited Feb 2 '18 at 8:00

























      answered Feb 1 '18 at 23:08









      131

      Taking the "just Bash and nothing else" strictly, here's one adaptation of earlier answers (@Chris's, @131's) that does not call any external utilities (not even standard ones) but also works with binary files:



      #!/bin/bash
      download() {
        read proto server path <<< "${1//"/"/ }"
        DOC=/${path// //}
        HOST=${server//:*}
        PORT=${server//*:}
        [[ x"${HOST}" == x"${PORT}" ]] && PORT=80

        exec 3<>/dev/tcp/${HOST}/$PORT

        # send request
        echo -en "GET ${DOC} HTTP/1.0\r\nHost: ${HOST}\r\n\r\n" >&3

        # read the header, it ends in an empty line (just CRLF)
        while IFS= read -r line ; do
            [[ "$line" == $'\r' ]] && break
        done <&3

        # read the data
        nul='\0'
        while IFS= read -d '' -r x || { nul=""; [ -n "$x" ]; }; do
            printf "%s$nul" "$x"
        done <&3
        exec 3>&-
      }


      Use with download http://path/to/file > file.



      We deal with NUL bytes with read -d ''. It reads until a NUL byte, and returns true if it found one, false if it didn't. Bash can't handle NUL bytes in strings, so when read returns with true, we add the NUL byte manually when printing, and when it returns false, we know there are no NUL bytes any more, and this should be the last piece of data.
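The NUL round-trip can be checked locally without any network, by feeding the same read/printf loop a file that contains NUL bytes (the /tmp paths here are just for the demo):

```shell
# Create a small binary file with embedded and trailing NUL bytes.
printf 'foo\0bar\0baz' > /tmp/nul-demo.in

# Same loop as in download(): read up to each NUL, re-emit it on output;
# after the last delimiter, fall back to printing the final chunk as-is.
nul='\0'
while IFS= read -d '' -r x || { nul=""; [ -n "$x" ]; }; do
    printf "%s$nul" "$x"
done < /tmp/nul-demo.in > /tmp/nul-demo.out

# The copy must be byte-identical to the original.
cmp -s /tmp/nul-demo.in /tmp/nul-demo.out && echo "round-trip OK"
```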



      Tested with Bash 4.4 on files with NULs in the middle, and ending in zero, one or two NULs, and also with the wget and curl binaries from Debian. The 373 kB wget binary took about 5.7 seconds to download. A speed of about 65 kB/s or a bit more than 512 kb/s.



      In comparison, @131's cat-solution finishes in less than 0.1 s, or almost a hundred times faster. Not very surprising, really.



      This is obviously silly, since without using external utilities, there's not much we can do with the downloaded file, not even make it executable.






      • Isn't echo a standalone -non shell- binary ? (:p)
        – 131
        Feb 2 '18 at 10:56

      • @131, no! Bash has echo and printf as builtins (it needs a builtin printf to implement printf -v)
        – ilkkachu
        Feb 2 '18 at 11:00
















      edited Feb 2 '18 at 10:38

























      answered Feb 2 '18 at 10:32









      ilkkachu

      Use uploading instead, via SSH from your local machine



      A "minimal headless *nix" box means you probably SSH into it. So you can also use SSH to upload to it. Which is functionally equivalent to downloading (of software packages etc.) except when you want a download command to include in a script on your headless server of course.



      As shown in this answer, you would execute the following on your local machine to place a file on your remote headless server:



      wget -O - http://example.com/file.zip | ssh user@host 'cat >/path/to/file.zip'


      Faster uploading via SSH from a third machine



      The disadvantage of the above solution compared to downloading is lower transfer speed, since the connection with your local machine usually has much less bandwidth than the connection between your headless server and other servers.



      To solve that, you can of course execute the above command on another server with decent bandwidth. To make that more comfortable (avoiding a manual login on the third machine), here is a command to execute on your local machine.



      To be secure, copy & paste that command including the leading space character ' '. See the explanations below for the reason.



       ssh user@intermediate-host "sshpass -f <(printf '%s\n' yourpassword) \
         ssh -T -e none \
           -o StrictHostKeyChecking=no \
           < <(wget -O - http://example.com/input-file.zip) \
           user@target-host \
           'cat >/path/to/output-file.zip' \
       "


      Explanations:




      • The command will ssh to your third machine intermediate-host, start downloading a file to there via wget, and start uploading it to target-host via SSH. Downloading and uploading use the bandwidth of your intermediate-host and happen at the same time (due to Bash pipe equivalents), so progress will be fast.


      • When using this, you have to replace the two server logins (user@*-host), the target host password (yourpassword), the download URL (http://example.com/…) and the output path on your target host (/path/to/output-file.zip) with appropriate own values.


      • For the -T -e none SSH options when using it to transfer files, see these detailed explanations.


      • This command is meant for cases where you can't use SSH's public key authentication mechanism – it still happens with some shared hosting providers, notably Host Europe. To still automate the process, we rely on sshpass to be able to supply the password in the command. It requires sshpass to be installed on your intermediate host (sudo apt-get install sshpass under Ubuntu).


      • We try to use sshpass in a secure way, but it will still not be as secure as the SSH pubkey mechanism (says man sshpass). In particular, we supply the SSH password not as a command line argument but via a file, which is replaced by bash process substitution to make sure it never exists on disk. The printf is a bash built-in, making sure this part of the code does not pop up as a separate command in ps output as that would expose the password [source]. I think that this use of sshpass is just as secure as the sshpass -d<file-descriptor> variant recommended in man sshpass, because bash maps it internally to such a /dev/fd/* file descriptor anyway. And that without using a temp file [source]. But no guarantees, maybe I overlooked something.


      • Again to make the sshpass usage safe, we need to prevent the command from being recorded to the bash history on your local machine. For that, the whole command is prepended with one space character, which has this effect.


      • The -o StrictHostKeyChecking=no part prevents the command from failing in case it never connected to the target host. (Normally, SSH would then wait for user input to confirm the connection attempt. We make it proceed anyway.)


      • sshpass expects a ssh or scp command as its last argument. So we have to rewrite the typical wget -O - … | ssh … command into a form without a bash pipe, as explained here.
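The process-substitution claim in the security bullet above is easy to verify in any interactive bash session. A minimal sketch (the `secret` value is just a placeholder, not part of the original command) showing that `<(…)` expands to a file-descriptor path rather than a regular file on disk:

```shell
#!/usr/bin/env bash
# <(...) expands to a file-descriptor path (e.g. /dev/fd/63 on Linux),
# so the password handed to `sshpass -f` never exists as a file on disk.
fd_path=$(echo <(printf '%s\n' secret))
echo "$fd_path"

# Reading from the substitution works like reading from a file:
read -r line < <(printf '%s\n' secret)
echo "$line"    # prints: secret
```

On systems without `/dev/fd`, bash falls back to a named pipe, which is still never a regular file containing the password.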







        4














          edited 2 hours ago

























          answered Jul 24 '16 at 21:29









          tanius

              3














              If you have the libwww-perl package installed,



              you can simply use:



              /usr/bin/GET





                  answered Aug 5 '13 at 21:43









                  stackexchanger

                      3














                      Based on @Chris Snow's recipe, I made some improvements:




                      • http scheme check (it only supports http)

                      • http response validation (checks the response status line, and splits header from body at the blank '\r\n' line rather than at 'Connection: close', which is not always present)

                      • fail on non-200 status codes (important when downloading files from the internet)


                      Here is the code:



                      function __wget() {
                          : ${DEBUG:=0}
                          local URL=$1
                          local tag="Connection: close"

                          if [ -z "${URL}" ]; then
                              printf "Usage: %s \"URL\" [e.g.: %s http://www.google.com/]" \
                                  "${FUNCNAME[0]}" "${FUNCNAME[0]}"
                              return 1;
                          fi
                          read proto server path <<<$(echo ${URL//\// })
                          local SCHEME=${proto//:*}
                          local PATH=/${path// /\/}
                          local HOST=${server//:*}
                          local PORT=${server//*:}
                          if [[ "$SCHEME" != "http" ]]; then
                              printf "sorry, %s only supports http\n" "${FUNCNAME[0]}"
                              return 1
                          fi
                          [[ x"${HOST}" == x"${PORT}" ]] && PORT=80
                          [[ $DEBUG -eq 1 ]] && echo "SCHEME=$SCHEME" >&2
                          [[ $DEBUG -eq 1 ]] && echo "HOST=$HOST" >&2
                          [[ $DEBUG -eq 1 ]] && echo "PORT=$PORT" >&2
                          [[ $DEBUG -eq 1 ]] && echo "PATH=$PATH" >&2

                          exec 3<>/dev/tcp/${HOST}/$PORT
                          if [ $? -ne 0 ]; then
                              return $?
                          fi
                          echo -en "GET ${PATH} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3
                          if [ $? -ne 0 ]; then
                              return $?
                          fi
                          # 0: at begin, before reading http response
                          # 1: reading header
                          # 2: reading body
                          local state=0
                          local num=0
                          local code=0
                          while read line; do
                              num=$(($num + 1))
                              # check http code
                              if [ $state -eq 0 ]; then
                                  if [ $num -eq 1 ]; then
                                      if [[ $line =~ ^HTTP/1\.[01][[:space:]]([0-9]{3}).*$ ]]; then
                                          code="${BASH_REMATCH[1]}"
                                          if [[ "$code" != "200" ]]; then
                                              printf "failed to wget '%s', code is not 200 (%s)\n" "$URL" "$code"
                                              exec 3>&-
                                              return 1
                                          fi
                                          state=1
                                      else
                                          printf "invalid http response from '%s'" "$URL"
                                          exec 3>&-
                                          return 1
                                      fi
                                  fi
                              elif [ $state -eq 1 ]; then
                                  if [[ "$line" == $'\r' ]]; then
                                      # found the blank "\r\n" line ending the header
                                      state=2
                                  fi
                              elif [ $state -eq 2 ]; then
                                  # redirect body to stdout
                                  # TODO: any way to pipe data directly to stdout?
                                  echo "$line"
                              fi
                          done <&3
                          exec 3>&-
                      }
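The parameter expansions at the top of the function are easy to misread once backslash escapes are involved. Here is a standalone sketch of just the URL-splitting step; variable names follow the function, except PATH_PART, renamed here so the top-level script does not clobber the shell's own PATH (inside the function, `local PATH` keeps that safe):

```shell
#!/usr/bin/env bash
# How __wget splits a URL into scheme, host, port, and path using
# only bash parameter expansion.
URL="http://example.com:8080/some/path"

# Replace every "/" with a space, then let `read` split the pieces:
# proto="http:"  server="example.com:8080"  path="some path"
read proto server path <<<"${URL//\// }"

SCHEME=${proto//:*}       # strip ":" and everything after -> "http"
PATH_PART=/${path// /\/}  # put the slashes back           -> "/some/path"
HOST=${server//:*}        # drop ":port" if present        -> "example.com"
PORT=${server//*:}        # drop "host:" if present        -> "8080"

# If the URL had no ":port", HOST and PORT are identical; default to 80.
[[ "$HOST" == "$PORT" ]] && PORT=80

echo "$SCHEME $HOST $PORT $PATH_PART"
```

Running it prints `http example.com 8080 /some/path`; with no port in the URL, PORT falls back to 80.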






                      • Nice enhancements +1
                        – Chris Snow
                        May 16 '17 at 18:43










                      • It worked, but I found a concern: when I use this script, it keeps waiting for several seconds after all the data has been read; this does not happen with @Chris Snow's answer. Can anyone explain this?
                        – zw963
                        May 19 '17 at 14:45










                      • And, in this answer, echo -en "GET ${PATH} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3, ${tag} is not specified.
                        – zw963
                        May 19 '17 at 15:17










                      • I edited this answer so that the tag variable is set correctly; it works well now.
                        – zw963
                        May 19 '17 at 15:28










                      • @zw963 Thanks for fixing the bug!
                        – Yecheng Fu
                        Sep 22 '17 at 9:35
















                      edited May 19 '17 at 15:56









                      zw963

                      answered May 16 '17 at 11:05









                      Yecheng Fu

                      312




                      312












• Nice enhancements +1
  – Chris Snow
  May 16 '17 at 18:43

• It worked, but I found a concern: when I use this script, it keeps waiting several seconds after all the data has been read. That doesn't happen with @Chris Snow's answer; can anyone explain?
  – zw963
  May 19 '17 at 14:45

• Also, in this answer, echo -en "GET ${PATH} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3, ${tag} is never set.
  – zw963
  May 19 '17 at 15:17

• I edited this answer so that the tag variable is set correctly; it works well now.
  – zw963
  May 19 '17 at 15:28

• @zw963 Thanks for fixing the bug!
  – Yecheng Fu
  Sep 22 '17 at 9:35
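The delay zw963 observed is consistent with the unset ${tag}: when the request omits a Connection: close header, an HTTP/1.1 server may keep the socket open until its keep-alive timeout, so the read loop only ends when that timeout fires. A sketch of assembling the request with the header in place (HOST and REQPATH are illustrative values, not from the answer):

```shell
#!/usr/bin/env bash
# Illustrative request assembly; HOST and REQPATH are made-up values.
HOST="example.com"
REQPATH="/index.html"
tag="Connection: close"   # without this, HTTP/1.1 defaults to keep-alive

# Same shape as the answer's request, with ${tag} actually set.
request="GET ${REQPATH} HTTP/1.1"$'\r\n'"Host: ${HOST}"$'\r\n'"${tag}"$'\r\n\r\n'

# In the answer this would be written to the socket: printf '%s' "$request" >&3
printf '%s' "$request"
```

With Connection: close in place, the server closes the connection after the response body, `read` hits EOF immediately, and the loop exits without waiting.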

















