Answers:
Using --filter (note that split treats the final argument as the output name prefix):
split --bytes=1024M --filter='gzip > $FILE.gz' /path/to/input /path/to/output
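A scaled-down round trip of this approach, using a small chunk size and throwaway paths purely for illustration:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# make a ~100 KB input file
head -c 100000 /dev/urandom > input
# split into 32 KB chunks, compressing each chunk on the fly;
# split sets $FILE to each output name (part_aa, part_ab, ...)
split --bytes=32K --filter='gzip > $FILE.gz' input part_
# reassemble: decompress the chunks in glob (i.e. creation) order
cat part_*.gz | gunzip > restored
cmp input restored && echo "round trip OK"
```

This works because gzip accepts concatenated gzip members on stdin and decompresses them all into one stream.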
A one-liner using conditionals is about as close as you can get:
cd /path/to/output && split --bytes=1024M /path/to/input/filename && gzip x*
gzip only runs if split succeeds, thanks to the conditional &&. The && between cd and split likewise ensures that the cd succeeded. Note that split and gzip write to the current directory and have no option for specifying an output directory. You can create the directory first if needed:
mkdir -p /path/to/output && cd /path/to/output && split --bytes=1024M /path/to/input/filename && gzip x*
Putting it back together:
gunzip /path/to/files/x* && cat /path/to/files/x* > /path/to/dest/filename
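A scaled-down round trip of the split-then-gzip approach above, with temporary directories standing in for /path/to/input and /path/to/output:

```shell
set -e
src=$(mktemp -d)   # stands in for /path/to/input
out=$(mktemp -d)   # stands in for /path/to/output
head -c 50000 /dev/urandom > "$src/filename"
# split, then compress the chunks (x* is split's default prefix)
mkdir -p "$out" && cd "$out" && split --bytes=16K "$src/filename" && gzip x*
# reassemble: decompress in place, then concatenate in order
gunzip "$out"/x* && cat "$out"/x* > "$src/restored"
cmp "$src/filename" "$src/restored" && echo "identical"
```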
A bash function to split compressed reads quickly using pigz:
function splitreads(){
    # add this function to your .bashrc or alike
    # split large compressed read files into chunks of fixed size
    # suffix is a three-digit counter starting with 000
    # takes compressed input and compresses output with pigz
    # keeps the read-in-pair suffix in outputs
    # requires pigz installed, or modify to use gzip
    usage="# splitreads <reads.fastq.gz> <reads per chunk; default 10000000>\n";
    if [ $# -lt 1 ]; then
        echo;
        echo ${usage};
        return;
    fi;
    # threads for pigz (adapt to your needs)
    thr=8
    input=$1
    # extract prefix and read number in pair
    # this code is adapted to paired reads
    base=$(basename ${input%.f*.gz})
    pref=$(basename ${input%_?.f*.gz})
    readn="${base#"${base%%_*}"}"
    # 10M reads (4 lines each)
    binsize=$((${2:-10000000}*4))
    # split in bins of ${binsize}
    echo "# splitting ${input} in chunks of $((${binsize}/4)) reads"
    cmd="zcat ${input} \
        | split \
        -a 3 \
        --numeric-suffixes \
        -l ${binsize} \
        --additional-suffix ${readn} \
        --filter='pigz -p ${thr} > \$FILE.fq.gz' \
        - ${pref}_"
    echo "# ${cmd}"
    eval ${cmd}
}
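A runnable sketch of what the function does under the hood, with gzip standing in for pigz so it works without extra installs; the sample file name, read counts, and chunk size are made up for the demo:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# fake paired-end FASTQ: 10 reads of 4 lines each
for i in $(seq 1 10); do
  printf '@read%s\nACGT\n+\nIIII\n' "$i"
done | gzip > sample_1.fastq.gz
# split into chunks of 4 reads (16 lines), numeric 3-digit suffixes,
# keeping the _1 read-in-pair suffix, compressing each chunk
zcat sample_1.fastq.gz \
  | split -a 3 --numeric-suffixes -l 16 \
      --additional-suffix _1 \
      --filter='gzip > $FILE.fq.gz' \
      - sample_
ls sample_0*_1.fq.gz
# → sample_000_1.fq.gz  sample_001_1.fq.gz  sample_002_1.fq.gz
```

40 input lines in bins of 16 yield two full chunks and one 8-line remainder, each independently decompressible.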
If each chunk must contain only whole lines, use --line-bytes=1024M instead of --bytes=1024M.
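A scaled-down illustration of the difference: with --line-bytes, split never cuts a line in half (the demo uses a 100-byte limit instead of 1024M):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# 100 lines of 21 bytes each (20 digits + newline)
for i in $(seq 1 100); do printf '%020d\n' "$i"; done > lines.txt
# pack at most 100 bytes per chunk WITHOUT splitting lines,
# so each chunk holds 4 whole lines (4 x 21 = 84 bytes)
split --line-bytes=100 lines.txt part_
# assert that every chunk ends with a newline (set -e aborts on failure)
for f in part_*; do tail -c 1 "$f" | od -An -c | grep -q '\\n'; done
echo "all chunks end on line boundaries"
```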