On a Linux system, I need to create a large (about 10 GB), uncompressible file.
This file is supposed to reside in a Docker image, needed to test performance in transferring and storing large docker images on a local registry. Therefore, I need the image to be “intrinsically” large (that is: uncompressible), in order to bypass optimization mechanisms.
fallocate (described at Quickly create a large file on a Linux system) works great for creating large files very quickly, but the result is a large zero-entropy file, which is highly compressible. When pushing the large image to the registry, it takes only a few MB.
So, how can a large, uncompressible file be created?
You may try using /dev/urandom to fill your file, for example:
@debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=10M count=1000; echo $SECONDS
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB, 9,8 GiB) copied, 171,516 s, 61,1 MB/s
171
Using a bigger bs, slightly less time is needed:
@debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=30M count=320; echo $SECONDS
320+0 records in
320+0 records out
10066329600 bytes (10 GB, 9,4 GiB) copied, 164,498 s, 61,2 MB/s
165
171 seconds vs. 165 seconds.
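If you want to confirm that the resulting data really is uncompressible before building the image, you can compress a small random sample and compare sizes. This is just a sketch; the filename sample is arbitrary, and any compressor (gzip here) will do:

```shell
# Generate a small random sample (10 MB) rather than the full 10 GB.
dd if=/dev/urandom of=sample bs=1M count=10 2>/dev/null

orig=$(stat -c %s sample)        # original size in bytes
gzip -c sample > sample.gz       # compress to a separate file
comp=$(stat -c %s sample.gz)     # compressed size in bytes

# For random data the compressed size should be at least as large
# as the original (gzip adds a small header/overhead).
echo "original: $orig bytes, compressed: $comp bytes"

rm -f sample sample.gz
```

If the compressed size is roughly equal to (or slightly larger than) the original, the registry's compression step will not be able to shrink the layer, which is exactly what you want for the transfer test.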