dd is producing a 32 MB random file instead of 1 GB

I wanted to produce a 1 GB random file, so I used the following command.



dd if=/dev/urandom of=output bs=1G count=1


But every time I launch this command, I get a 32 MB file instead:



<11:58:40>$ dd if=/dev/urandom of=output bs=1G count=1
0+1 records in
0+1 records out
33554431 bytes (34 MB, 32 MiB) copied, 0,288321 s, 116 MB/s


What is wrong?

Tags: script, dd, random-number-generator

asked by Trismegistos, edited by Peter Mortensen


2 Answers


Answer by grawity (score 61):

          bs, the buffer size, means the size of a single read() call done by dd.



          (For example, both bs=1M count=1 and bs=1k count=1k will result in a 1 MiB file, but the first version will do it in a single step, while the second will do it in 1024 small chunks.)
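
          You can check that equivalence directly, as a quick sketch (assuming GNU coreutils; /dev/zero is used so the urandom limit doesn't get in the way, and the file names a and b are arbitrary):

          dd if=/dev/zero of=a bs=1M count=1
          dd if=/dev/zero of=b bs=1k count=1k
          stat -c %s a b    # both report 1048576 bytes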



          Regular files can be read at nearly any buffer size (as long as that buffer fits in RAM), but devices and "virtual" files often work very close to the individual calls and have some arbitrary restriction on how much data they'll produce per read() call.



          For /dev/urandom, this limit is defined in urandom_read() in drivers/char/random.c:



          #define ENTROPY_SHIFT 3

          static ssize_t
          urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
          {
                  nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
                  ...
          }


          This means that every time the function is called, it will clamp the requested size to 33554431 bytes.
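
          Spelling out the arithmetic (bash arithmetic shown, assuming a 32-bit int, so INT_MAX = 2**31 - 1):

          echo $(( (2**31 - 1) >> (3 + 3) ))    # INT_MAX >> (ENTROPY_SHIFT + 3) = 33554431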



          By default, unlike most other tools, dd will not retry after receiving less data than requested – you get the 32 MiB and that's it. (To make it retry automatically, as in Kamil's answer, you'll need to specify iflag=fullblock.)
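
          Concretely, a minimal fix for the original command would look like this (assuming GNU dd, which provides the iflag=fullblock extension):

          dd if=/dev/urandom of=output bs=1G count=1 iflag=fullblock

          This still allocates a single 1 GiB read buffer, though.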





          Note also that "the size of a single read()" means that the whole buffer must fit in memory at once, so massive block sizes also correspond to massive memory usage by dd.



          And it's all pointless because you usually won't gain any performance when going above ~16–32 MiB blocks – syscalls aren't the slow part here, the random number generator is.



          So for simplicity, just use head -c 1G /dev/urandom > output.
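
          Note that the G suffix to head -c is also a GNU extension; on systems whose head supports -c but not size suffixes, you can spell out the byte count instead (a sketch):

          head -c 1073741824 /dev/urandom > output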






          • 1




            "... you usually won't gain any performance when going above ~16–32 MiB blocks" - In my experience, you tend not to gain much, or even lose performance above 64-128 kilobyte. At that point, you're well in the diminishing returns wrt syscall cost, and cache contention starts to play a role.
            – marcelm
            17 hours ago






          • 2




            @marcelm I've helped architect high-performance systems where IO performance would improve as block size increased to 1-2 MB blocks, and in some cases up to 8 MB or so. Per LUN. And as filesystems were constructed using multiple parallel LUNs, getting the best performance meant using multiple threads for IO, each doing 1 MB+ blocks. Sustained IO rates were over 1 GB/sec. And those were all spinning disks, so I can see high-performance arrays of SSDs swallowing or generating data faster and faster as the block size grows to 16 or even 32 MB blocks. Easily. Maybe even larger.
            – Andrew Henle
            2 hours ago






          • 1




            I'll explicitly note that iflag=fullblock is a GNU extension to the POSIX dd utility. As the question doesn't specify Linux, I think the use of Linux-specific extensions should probably be explicitly noted lest some future reader trying to solve a similar issue on a non-Linux system be confused.
            – Andrew Henle
            1 hour ago



















Answer by Kamil Maciorowski (score 16):

          dd may read less than ibs (note: bs specifies both ibs and obs), unless iflag=fullblock is specified. 0+1 records in indicates that 0 full blocks and 1 partial block were read. However, any full or partial block increases the counter.
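
          You can reproduce that accounting with any source that delivers fewer bytes than one block, for example a short pipe (a toy sketch; dd reads stdin by default):

          printf 'hello' | dd of=/dev/null bs=1M count=1
          # the 5-byte read is a partial block, so dd reports 0+1 records in, 0+1 records out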



          I didn't know the exact mechanism that makes dd read a block that is less than 1G in this particular case; I guessed the block is read into memory before it's written, so memory management might interfere (but this was only a guess). Edit: the concurrent answer above explains the actual mechanism, a per-read() limit in the urandom driver.



          Anyway, I don't recommend such a large bs. I would use bs=1M count=1024. The most important thing is: without iflag=fullblock, any read attempt may read less than ibs (unless ibs=1, I think, though that is quite inefficient).



          So if you need to read some exact amount of data, use iflag=fullblock. Note that iflag is not required by POSIX, so your dd may not support it. According to this answer, ibs=1 is probably the only POSIX way to read an exact number of bytes. Of course, if you change ibs, then you will need to recalculate count. In your case, lowering ibs to 32M or less will probably fix the issue, even without iflag=fullblock.
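
          As a sketch of the lower-ibs route: the urandom clamp is 33554431 bytes, one byte short of 32 MiB, so bs=32M may still read short; 16 MiB leaves a safe margin (the M suffix is itself a GNU extension):

          dd if=/dev/urandom of=output bs=16M count=64    # 64 x 16 MiB = 1 GiB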



          On my Kubuntu, I would fix your command like this:



          dd if=/dev/urandom of=output bs=1M count=1024 iflag=fullblock
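
          Afterwards you can confirm the size (assuming GNU coreutils):

          stat -c %s output    # expect 1073741824, i.e. 1024 x 1 MiB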




