Answers:
How about:

prog1 & prog2 && fg

This will:

- Start prog1 in the background.
- Start prog2, and keep it in the foreground, so you can close it with ctrl-c.
- When you close prog2, you'll return to prog1's foreground, so you can also close it with ctrl-c.

What happens to prog1 when prog2 terminates? Think of node srv.js & cucumberjs.

prog1 & prog2 ; fg

This was for running multiple ssh tunnels at once. Hope it helps someone.

If prog2 doesn't start right away, you'll be back to prog1 in the foreground. If that's acceptable, then this is fine.

Or:

prog1 & prog2 && kill $!
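As a runnable sketch of the kill $! variant (sleep and echo are just placeholder stand-ins for prog1 and prog2):

```shell
# `sleep 30` stands in for prog1, `echo` for prog2.
# As soon as prog2 succeeds, $! (prog1's PID) is killed.
sleep 30 &
echo "prog2 finished" && kill $!
wait $! 2>/dev/null || true   # reap prog1; its status reflects the kill
echo "prog1 stopped"
```

Note that $! must be read before starting any other background job, since it always refers to the most recently started one.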
You can use wait:

some_command &
P1=$!
other_command &
P2=$!
wait $P1 $P2

It assigns the background programs' PIDs to variables ($! is the PID of the last-started process), and then the wait command waits for them. It's also nice because if you kill the script, it kills the processes too!
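A small extension of the same idea: waiting on each PID individually also lets you collect each job's exit status (the sh -c calls here are placeholder jobs with known exit codes):

```shell
# Placeholder jobs with known exit codes.
sh -c 'exit 0' & P1=$!
sh -c 'exit 3' & P2=$!

# wait <pid> returns that child's exit status.
S1=0; wait "$P1" || S1=$?
S2=0; wait "$P2" || S2=$?
echo "job1 exited with $S1, job2 exited with $S2"   # job1 exited with 0, job2 exited with 3
```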
#!/usr/bin/env bash
ARRAY='cat bat rat'
for ARR in $ARRAY
do
  ./run_script1 $ARR &
done
P1=$!
wait $P1
echo "INFO: Execution of all background processes in the for loop has completed.."

Note that $P1 only holds the PID of the last process started, so wait $P1 does not actually wait for all of them; a bare wait with no arguments waits for every child. You could use ${} to interpolate it into a list of strings or the like.
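Alternatively (bash), record every PID in an array so wait covers all of the jobs rather than only the last one (the sleep/echo body is a placeholder for ./run_script1):

```shell
#!/usr/bin/env bash
pids=()
for arr in cat bat rat; do
  { sleep 0.1; echo "processed $arr"; } &   # placeholder for ./run_script1 "$arr"
  pids+=("$!")
done
wait "${pids[@]}"
echo "INFO: all background processes completed"
```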
With GNU Parallel http://www.gnu.org/software/parallel/ it is as easy as:

(echo prog1; echo prog2) | parallel

Or, if you prefer:

parallel ::: prog1 prog2

Learn more:

parallel has different versions with different syntax. For example, on Debian derivatives the moreutils package contains a different command called parallel that behaves quite differently.

Is parallel better than using &?
If you want to be able to easily run and kill multiple processes with ctrl-c, this is my favorite method: spawn multiple background processes in a (…) subshell, and trap SIGINT to execute kill 0, which kills everything spawned in the subshell's group:

(trap 'kill 0' SIGINT; prog1 & prog2 & prog3)

You can have complex process execution structures, and everything will close with a single ctrl-c (just make sure the last process is run in the foreground, i.e., don't include an & after prog1.3):

(trap 'kill 0' SIGINT; prog1.1 && prog1.2 & (prog2.1 | prog2.2 || prog2.3) & prog1.3)
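A runnable sketch with sleeps standing in for prog1, prog2, prog3 (all names are placeholders); pressing ctrl-c at any point kills every worker at once:

```shell
(
  trap 'kill 0' SIGINT
  sleep 0.3 && echo "worker 1 done" &
  sleep 0.2 && echo "worker 2 done" &
  sleep 0.1 && echo "worker 3 done"   # last command stays in the foreground
  wait                                # keep the subshell alive until all workers finish
)
```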
#!/bin/bash
prog1 & 2> .errorprog1.log; prog2 & 2> .errorprog2.log

This was meant to redirect errors to separate logs, but the & is in the wrong place; the working form is:

prog1 2> .errorprog1.log & prog2 2> .errorprog2.log &

With the first version, e.g.

ls notthere1 & 2> .errorprog1.log; ls notthere2 & 2> .errorprog2.log

the errors go to the console, and both error files are empty. As @Dennis Williamson says, & is a separator, like ;, so (a) it needs to go at the end of the command (after any redirection), and (b) you don't need the ; at all :-)
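A quick way to verify the corrected placement yourself, using ls on nonexistent paths (the file names are arbitrary):

```shell
# With the redirection before the &, each job's stderr lands in its own log.
ls notthere1 2> .errorprog1.log &
ls notthere2 2> .errorprog2.log &
wait
grep -q notthere1 .errorprog1.log && echo "errors captured separately"
rm -f .errorprog1.log .errorprog2.log
```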
xargs -P <n> allows you to run <n> commands in parallel.

While -P is a nonstandard option, both the GNU (Linux) and macOS/BSD implementations support it.

For example:
time xargs -P 3 -I {} sh -c 'eval "$1"' - {} <<'EOF'
sleep 1; echo 1
sleep 2; echo 2
sleep 3; echo 3
echo 4
EOF
The output looks something like this:
1 # output from 1st command
4 # output from *last* command, which started as soon as the count dropped below 3
2 # output from 2nd command
3 # output from 3rd command
real 0m3.012s
user 0m0.011s
sys 0m0.008s
The timing shows that the commands ran in parallel (the last command was launched only after the first of the original three terminated, but it executed very quickly).

The xargs call itself won't return until all the commands have finished, but you can execute it in the background by terminating it with the control operator &, and then use the wait builtin to wait for the entire xargs command to finish.
{
xargs -P 3 -I {} sh -c 'eval "$1"' - {} <<'EOF'
sleep 1; echo 1
sleep 2; echo 2
sleep 3; echo 3
echo 4
EOF
} &
# Script execution continues here while `xargs` is running
# in the background.
echo "Waiting for commands to finish..."
# Wait for `xargs` to finish, via special variable $!, which contains
# the PID of the most recently started background process.
wait $!
Note:

BSD/macOS xargs requires you to specify the count of commands to run in parallel explicitly, whereas GNU xargs lets you specify -P 0 to run as many as possible in parallel.

The output from the processes running in parallel arrives as it is being generated, so it will be unpredictably interleaved.
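If GNU parallel isn't available, one simple way to get per-job grouping anyway is to buffer each job's output in its own temporary file and print the files in order once everything has finished; a minimal sketch with placeholder jobs:

```shell
# Each job writes to its own file, so output is grouped per job, never interleaved.
tmpdir=$(mktemp -d)
for i in 1 2 3; do
  { sleep "0.$((4 - i))"; echo "output of job $i"; } > "$tmpdir/job$i" &
done
wait
cat "$tmpdir"/job1 "$tmpdir"/job2 "$tmpdir"/job3
rm -rf "$tmpdir"
```

Even though job 3 finishes first, its output is printed last, in job order.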
GNU parallel, as mentioned in Ole's answer (not standard on most platforms), conveniently serializes (groups) the output on a per-process basis and offers many more advanced features.

There is a very useful program called nohup.
nohup - run a command immune to hangups, with output to a non-tty
nohup by itself doesn't run anything in the background, and using nohup is neither a requirement nor a prerequisite for running tasks in the background. They are often used together, but nohup alone doesn't solve the problem.
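A typical combination looks like this (out.log is an arbitrary name; sh -c stands in for a real long-running command): nohup provides the hangup immunity, while the trailing & is what actually backgrounds the job:

```shell
# nohup makes the job survive a terminal hangup; & puts it in the background.
nohup sh -c 'echo "still running"' > out.log 2>&1 &
wait            # for the demo only; normally you'd just log out
grep "still running" out.log
rm -f out.log
```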
Here is a function I use in order to run at most n processes in parallel (n = 4 in the example):
max_children=4
function parallel {
    local time1=$(date +"%H:%M:%S")
    local time2=""
    # for the sake of the example, I'm using $2 as a description, you may be interested in other description
    echo "starting $2 ($time1)..."
    "$@" && time2=$(date +"%H:%M:%S") && echo "finishing $2 ($time1 -- $time2)..." &
    local my_pid=$$
    local children=$(ps -eo ppid | grep -w $my_pid | wc -w)
    children=$((children-1))
    if [[ $children -ge $max_children ]]; then
        wait -n
    fi
}
parallel sleep 5
parallel sleep 6
parallel sleep 7
parallel sleep 8
parallel sleep 9
wait
If you set max_children to the number of cores, this function will try to avoid idle cores.

wait -n requires bash 4.3+, and it changes the logic to waiting for any of the specified/implied processes to terminate.
Recently I had a similar situation where I needed to run several programs at the same time, redirect their outputs to separate log files, and wait for them to finish, and I ended up with something like this:
#!/bin/bash
# Add the full path processes to run to the array
PROCESSES_TO_RUN=("/home/joao/Code/test/prog_1/prog1" \
"/home/joao/Code/test/prog_2/prog2")
# You can keep adding processes to the array...
for i in "${PROCESSES_TO_RUN[@]}"; do
${i%/*}/./${i##*/} > "${i}.log" 2>&1 &
# ${i%/*} -> Get folder name until the /
# ${i##*/} -> Get the filename after the /
done
# Wait for the processes to finish
wait
Source: http://joaoperibeiro.com/execute-multiple-programs-and-redirect-their-outputs-linux/
Process Spawning Manager

Sure, technically these are processes, and this program should really be called a process spawning manager, but this is only due to the way BASH works when it forks with the ampersand: it uses the fork() or perhaps clone() system call, which clones into a separate memory space, rather than something like pthread_create(), which would share memory. If BASH supported the latter, each "sequence of execution" would operate just the same and could be called a traditional thread, while gaining a more efficient memory footprint. Functionally, however, it works the same, though it's a bit harder, since GLOBAL variables are not available in each worker clone, hence the use of an inter-process communication file and the rudimentary flock semaphore to manage critical sections.

Forking from BASH is of course the basic answer here, but I feel as if people know that, yet are really looking to manage what is spawned rather than just fork it and forget it. This demonstrates a way to manage up to 200 instances of forked processes, all accessing a single resource. Clearly this is overkill, but I enjoyed writing it, so I kept going. Increase the size of your terminal accordingly. I hope this helps.
ME=$(basename $0)
IPC="/tmp/$ME.ipc" #interprocess communication file (global thread accounting stats)
DBG=/tmp/$ME.log
echo 0 > $IPC #initialize counter
F1=thread
SPAWNED=0
COMPLETE=0
SPAWN=1000 #number of jobs to process
SPEEDFACTOR=1 #dynamically compensates for execution time
THREADLIMIT=50 #maximum concurrent threads
TPS=1 #threads per second delay
THREADCOUNT=0 #number of running threads
SCALE="scale=5" #controls bc's precision
START=$(date +%s) #whence we began
MAXTHREADDUR=6 #maximum thread life span - demo mode
LOWER=$[$THREADLIMIT*100*90/10000] #90% worker utilization threshold
UPPER=$[$THREADLIMIT*100*95/10000] #95% worker utilization threshold
DELTA=10 #initial percent speed change
threadspeed() #dynamically adjust spawn rate based on worker utilization
{
#vaguely assumes thread execution average will be consistent
THREADCOUNT=$(threadcount)
if [ $THREADCOUNT -ge $LOWER ] && [ $THREADCOUNT -le $UPPER ] ;then
echo SPEED HOLD >> $DBG
return
elif [ $THREADCOUNT -lt $LOWER ] ;then
#if maxthread is free speed up
SPEEDFACTOR=$(echo "$SCALE;$SPEEDFACTOR*(1-($DELTA/100))"|bc)
echo SPEED UP $DELTA%>> $DBG
elif [ $THREADCOUNT -gt $UPPER ];then
#if maxthread is active then slow down
SPEEDFACTOR=$(echo "$SCALE;$SPEEDFACTOR*(1+($DELTA/100))"|bc)
DELTA=1 #begin fine grain control
echo SLOW DOWN $DELTA%>> $DBG
fi
echo SPEEDFACTOR $SPEEDFACTOR >> $DBG
#average thread duration (total elapsed time / number of threads completed)
#if threads completed is zero (less than 100), default to maxdelay/2 maxthreads
COMPLETE=$(cat $IPC)
if [ -z $COMPLETE ];then
echo BAD IPC READ ============================================== >> $DBG
return
fi
#echo Threads COMPLETE $COMPLETE >> $DBG
if [ $COMPLETE -lt 100 ];then
AVGTHREAD=$(echo "$SCALE;$MAXTHREADDUR/2"|bc)
else
ELAPSED=$[$(date +%s)-$START]
#echo Elapsed Time $ELAPSED >> $DBG
AVGTHREAD=$(echo "$SCALE;$ELAPSED/$COMPLETE*$THREADLIMIT"|bc)
fi
echo AVGTHREAD Duration is $AVGTHREAD >> $DBG
#calculate timing to achieve spawning each workers fast enough
# to utilize threadlimit - average time it takes to complete one thread / max number of threads
TPS=$(echo "$SCALE;($AVGTHREAD/$THREADLIMIT)*$SPEEDFACTOR"|bc)
#TPS=$(echo "$SCALE;$AVGTHREAD/$THREADLIMIT"|bc) # maintains pretty good
#echo TPS $TPS >> $DBG
}
function plot()
{
echo -en \\033[${2}\;${1}H
if [ -n "$3" ];then
if [[ $4 = "good" ]];then
echo -en "\\033[1;32m"
elif [[ $4 = "warn" ]];then
echo -en "\\033[1;33m"
elif [[ $4 = "fail" ]];then
echo -en "\\033[1;31m"
elif [[ $4 = "crit" ]];then
echo -en "\\033[1;31;4m"
fi
fi
echo -n "$3"
echo -en "\\033[0;39m"
}
trackthread() #displays thread status
{
WORKERID=$1
THREADID=$2
ACTION=$3 #setactive | setfree | update
AGE=$4
TS=$(date +%s)
COL=$[(($WORKERID-1)/50)*40]
ROW=$[(($WORKERID-1)%50)+1]
case $ACTION in
"setactive" )
touch /tmp/$ME.$F1$WORKERID #redundant - see main loop
#echo created file $ME.$F1$WORKERID >> $DBG
plot $COL $ROW "Worker$WORKERID: ACTIVE-TID:$THREADID INIT " good
;;
"update" )
plot $COL $ROW "Worker$WORKERID: ACTIVE-TID:$THREADID AGE:$AGE" warn
;;
"setfree" )
plot $COL $ROW "Worker$WORKERID: FREE " fail
rm /tmp/$ME.$F1$WORKERID
;;
* )
;;
esac
}
getfreeworkerid()
{
for i in $(seq 1 $[$THREADLIMIT+1])
do
if [ ! -e /tmp/$ME.$F1$i ];then
#echo "getfreeworkerid returned $i" >> $DBG
break
fi
done
if [ $i -eq $[$THREADLIMIT+1] ];then
#echo "no free threads" >> $DBG
echo 0
#exit
else
echo $i
fi
}
updateIPC()
{
COMPLETE=$(cat $IPC) #read IPC
COMPLETE=$[$COMPLETE+1] #increment IPC
echo $COMPLETE > $IPC #write back to IPC
}
worker()
{
WORKERID=$1
THREADID=$2
#echo "new worker WORKERID:$WORKERID THREADID:$THREADID" >> $DBG
#accessing common terminal requires critical blocking section
(flock -x -w 10 201
trackthread $WORKERID $THREADID setactive
)201>/tmp/$ME.lock
let "RND = $RANDOM % $MAXTHREADDUR +1"
for s in $(seq 1 $RND) #simulate random lifespan
do
sleep 1;
(flock -x -w 10 201
trackthread $WORKERID $THREADID update $s
)201>/tmp/$ME.lock
done
(flock -x -w 10 201
trackthread $WORKERID $THREADID setfree
)201>/tmp/$ME.lock
(flock -x -w 10 201
updateIPC
)201>/tmp/$ME.lock
}
threadcount()
{
TC=$(ls /tmp/$ME.$F1* 2> /dev/null | wc -l)
#echo threadcount is $TC >> $DBG
THREADCOUNT=$TC
echo $TC
}
status()
{
#summary status line
COMPLETE=$(cat $IPC)
plot 1 $[$THREADLIMIT+2] "WORKERS $(threadcount)/$THREADLIMIT SPAWNED $SPAWNED/$SPAWN COMPLETE $COMPLETE/$SPAWN SF=$SPEEDFACTOR TIMING=$TPS"
echo -en '\033[K' #clear to end of line
}
function main()
{
while [ $SPAWNED -lt $SPAWN ]
do
while [ $(threadcount) -lt $THREADLIMIT ] && [ $SPAWNED -lt $SPAWN ]
do
WID=$(getfreeworkerid)
worker $WID $SPAWNED &
touch /tmp/$ME.$F1$WID #if this loops faster than file creation in the worker thread it steps on itself, thread tracking is best in main loop
SPAWNED=$[$SPAWNED+1]
(flock -x -w 10 201
status
)201>/tmp/$ME.lock
sleep $TPS
if ((! $[$SPAWNED%100]));then
#rethink thread timing every 100 threads
threadspeed
fi
done
sleep $TPS
done
while [ "$(threadcount)" -gt 0 ]
do
(flock -x -w 10 201
status
)201>/tmp/$ME.lock
sleep 1;
done
status
}
clear
threadspeed
main
wait
status
echo
Your script should look like this:
prog1 &
prog2 &
.
.
progn &
wait
progn+1 &
progn+2 &
.
.
Assuming your system can handle n jobs at a time, use wait to run only n jobs at once.
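A minimal sketch of that batching pattern with placeholder jobs (n = 2 here):

```shell
# Run placeholder jobs at most n at a time, in batches separated by wait.
n=2
i=0
for job in a b c d e; do
  { sleep 0.1; echo "finished $job"; } &
  i=$((i + 1))
  if [ "$i" -ge "$n" ]; then
    wait   # block until the current batch completes
    i=0
  fi
done
wait       # catch the final partial batch
```

Note that each batch waits for its slowest job; a wait -n loop keeps the pipeline fuller.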
With bashj (https://sourceforge.net/projects/bashj/) you should be able to run not only multiple processes (the way others have suggested) but also multiple threads within one JVM controlled from your script. This of course requires a Java JDK. Threads consume fewer resources than processes.

Here is a working example:
#!/usr/bin/bashj
#!java
public static int cnt=0;
private static void loop() {u.p("java says cnt= "+(cnt++));u.sleep(1.0);}
public static void startThread()
{(new Thread(() -> {while (true) {loop();}})).start();}
#!bashj
j.startThread()
while [ j.cnt -lt 4 ]
do
echo "bash views cnt=" j.cnt
sleep 0.5
done
wait
Yes! In bash you can wait for the script's child processes.