How does variance in task completion time affect makespan?



Say we have a large collection of tasks $\tau_1, \tau_2, \ldots, \tau_n$ and a collection of identical (in terms of performance) processors $\rho_1, \rho_2, \ldots, \rho_m$ which operate completely in parallel. For the scenarios of interest, we may assume $m \le n$. Each $\tau_i$, once assigned to a processor $\rho_j$, takes a certain amount of time/cycles to complete, and once assigned, it cannot be reassigned until completed (processors always eventually complete assigned tasks). Assume that each $\tau_i$ takes an amount of time/cycles $X_i$, not known in advance, taken from some discrete random distribution. For this question, we can even assume a simple distribution: $P(X_i = 1) = P(X_i = 5) = 1/2$, with all $X_i$ pairwise independent. Therefore $\mu_i = 3$ and $\sigma^2 = 4$.

Suppose that, statically, at time/cycle 0, all tasks are assigned as evenly as possible to all processors, uniformly at random; so each processor $\rho_j$ is assigned $n/m$ tasks (we can just as well assume $m \mid n$ for the purposes of the question). We call the makespan the time/cycle at which the last processor to finish its assigned work, finishes the work it was assigned. First question:

As a function of $m$, $n$, and the $X_i$'s, what is the makespan $M$? Specifically, what is $E[M]$? $\mathrm{Var}[M]$?

Second question:

Suppose $P(X_i = 2) = P(X_i = 4) = 1/2$, and all $X_i$ are pairwise independent, so $\mu_i = 3$ and $\sigma^2 = 1$. As a function of $m$, $n$, and these new $X_i$'s, what is the makespan? More interestingly, how does it compare to the answer from the first part?

Some simple thought experiments demonstrate that the answer to the latter is that the makespan is longer. But how can this be quantified? I will be happy to post an example if this is either (a) controversial or (b) unclear. Depending on the success of this one, I will post a follow-up question about a dynamic assignment scheme under these same assumptions. Thanks in advance!
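
To make the comparison concrete, here is a minimal Monte Carlo sketch of the setup (my addition, not part of the original question; `simulate_makespan` is an illustrative name and the trial count is arbitrary) that estimates $E[M]$ under the static uniform assignment for both distributions:

```python
import random

def simulate_makespan(n, m, durations, trials=10_000):
    """Estimate E[M]: n tasks split evenly over m processors, each task's
    duration drawn uniformly at random from `durations`."""
    assert n % m == 0, "assume m | n, as in the question"
    k = n // m
    total = 0
    for _ in range(trials):
        # Static assignment: since tasks are iid, giving each processor
        # any fixed k of them is equivalent to a uniform random split.
        loads = [sum(random.choice(durations) for _ in range(k))
                 for _ in range(m)]
        total += max(loads)  # makespan = finish time of the last processor
    return total / trials

n, m = 64, 8
print("E[M] with X in {1,5}:", simulate_makespan(n, m, (1, 5)))  # sigma^2 = 4
print("E[M] with X in {2,4}:", simulate_makespan(n, m, (2, 4)))  # sigma^2 = 1
```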

Analysis of an easy case: $m = 1$

If $m = 1$, then all $n$ tasks are scheduled to the same processor. The makespan $M$ is just the time to complete the $n$ tasks in a completely sequential fashion. Therefore,
$$E[M] = E[X_1 + X_2 + \cdots + X_n] = E[X_1] + E[X_2] + \cdots + E[X_n] = \mu + \mu + \cdots + \mu = n\mu$$
and
$$\mathrm{Var}[M] = \mathrm{Var}[X_1 + X_2 + \cdots + X_n] = \mathrm{Var}[X_1] + \mathrm{Var}[X_2] + \cdots + \mathrm{Var}[X_n] = \sigma^2 + \sigma^2 + \cdots + \sigma^2 = n\sigma^2.$$
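
A short numeric check of these two identities (my addition; the sample sizes are arbitrary) for the $P(X_i=1)=P(X_i=5)=1/2$ distribution:

```python
import random
import statistics

n, trials = 100, 20_000
# For m = 1, M is just the sum of the n task durations.
samples = [sum(random.choice((1, 5)) for _ in range(n)) for _ in range(trials)]
print("E[M]   ~", statistics.mean(samples), "(theory: n*mu =", n * 3, ")")
print("Var[M] ~", statistics.variance(samples), "(theory: n*sigma^2 =", n * 4, ")")
```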

It seems like it might be possible to use this result to answer the question for $m > 1$; we simply need to find an expression (or close approximation) for $\max(Y_1, Y_2, \ldots, Y_m)$, where $Y_i = X_{(i-1)\frac{n}{m}+1} + X_{(i-1)\frac{n}{m}+2} + \cdots + X_{(i-1)\frac{n}{m}+\frac{n}{m}}$ is a random variable with $\mu_Y = \frac{n}{m}\mu_X$ and $\sigma_Y^2 = \frac{n}{m}\sigma_X^2$. Is this heading in the right direction?


Nice question. If only there wasn't a deadline today....
Dave Clarke

Answers:



Since $n = k \times m$, we can look at this in terms of $k$ and $n$ instead of $n$ and $m$. Let's say $T_i$ is the time it takes the $i$-th processor to finish its work.

As $n$ grows, the probability that $T_i = 5k$ for some $i$ (that processor having been assigned only duration-5 tasks) approaches $1$, so with the makespan defined as $\max(T_i)$, $E[M]$ approaches $5k$.

For the second scenario this is $4k$, so increasing the number of processors makes the 4–2 split better.

What about $k$, the number of tasks per processor? Increasing $k$ has the opposite effect: it makes it less likely to have a processor with an unlucky set of tasks. I'm going home now, but I'll come back to this later. My "hunch" is that as $k$ grows, the difference in $E[M]$ between the 4–2 split and the 5–1 split disappears, and $E[M]$ becomes the same for both. So I would assume that 4–2 is always better, except maybe for some special cases (very small specific values of $k$ and $n$), if even that; see the sketch after the summary below.

So to summarize:

  • Lower variance is better, all else being equal.
  • As the number of processors grows, lower variance becomes more important.
  • As the number of tasks per processor grows, lower variance becomes less important.
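
A quick simulation (my addition; `est_makespan` and the parameter choices are illustrative) supports the first two points: with $k$ fixed, $E[M]$ climbs toward the $5k$ and $4k$ ceilings as $m$ (and hence $n$) grows, and the 4–2 split stays ahead:

```python
import random

def est_makespan(k, m, durations, trials=3_000):
    """Estimate E[M] with k tasks on each of m processors."""
    return sum(
        max(sum(random.choice(durations) for _ in range(k)) for _ in range(m))
        for _ in range(trials)
    ) / trials

k = 4
for m in (1, 10, 100):  # n = k*m grows while k stays fixed
    e15 = est_makespan(k, m, (1, 5))
    e24 = est_makespan(k, m, (2, 4))
    print(f"m={m:3d}: 5-1 split E[M]={e15:5.2f} (ceiling {5*k}), "
          f"4-2 split E[M]={e24:5.2f} (ceiling {4*k})")
```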

+1 Excellent intuition, and this helps to clarify my thinking as well. So increasing processor counts tends to increase makespan under a weak scaling assumption; and increasing task counts tends to decrease makespan under a strong scaling assumption (of course it takes longer; I mean the work/makespan ratio improves). These are interesting observations, and they seem true;
Patrick87

the first is justified by the fact that $1 - (1 - P(X=5)^k)^n$ tends to $1$ for fixed $k$ and increasing $n$; the latter by the fact that $\mathrm{Var}[X+X] = \mathrm{Var}[X] + \mathrm{Var}[X] = 2\sigma^2 \ne 4\sigma^2 = 4\,\mathrm{Var}[X] = \mathrm{Var}[2X]$... so the variance doesn't increase linearly as a function of $k$. Is that compatible with your thinking (that's how I'm interpreting what you have so far)?
Patrick87

I don't know where the "hunch" came from; it is not consistent with the rest of the heuristic reasoning.
András Salamon


I find that heuristic arguments are often quite misleading when considering task scheduling (and closely related problems like bin packing). Things can happen that are counter-intuitive. For such a simple case, it is worthwhile actually doing the probability theory.

Let $n = km$ with $k$ a positive integer. Suppose $T_{ij}$ is the time taken to complete the $j$-th task given to processor $i$. This is a random variable with mean $\mu$ and variance $\sigma^2$. The expected makespan in the first case is
$$E[M] = E\left[\max\left\{\left.\sum_{j=1}^{k} T_{ij} \;\right|\; i = 1, 2, \ldots, m\right\}\right].$$
The sums are all iid with mean $k\mu$ and variance $k\sigma^2$, assuming that the $T_{ij}$ are all iid (this is stronger than pairwise independence).

Now to obtain the expectation of a maximum, one either needs more information about the distribution, or one has to settle for distribution-free bounds, such as:

  • Peter J. Downey, Distribution-free bounds on the expectation of the maximum with scheduling applications, Operations Research Letters 9, 189–201, 1990. doi:10.1016/0167-6377(90)90018-Z

which can be applied if the processor-wise sums are iid. This would not necessarily be the case if the underlying times were just pairwise independent. In particular, by Theorem 1 the expected makespan is bounded above by
$$E[M] \le k\mu + \sigma\sqrt{k}\,\frac{m-1}{\sqrt{2m-1}}.$$
Downey also gives a particular distribution achieving this bound, although the distribution changes as $m$ does, and is not exactly natural.

Note that the bound says that the expected makespan can increase as any of the parameters increases: the variance $\sigma^2$, the number of processors $m$, or the number of tasks per processor $k$.
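
For illustration (my addition; `downey_bound` and `est_makespan` are hypothetical helper names), the bound is easy to compare against a simulated makespan, taking the maximum over the $m$ processor-wise sums as above:

```python
import math
import random

def downey_bound(k, m, mu, sigma):
    """Downey's Theorem 1 bound on E[max of m iid sums], where each sum
    has mean k*mu and standard deviation sigma*sqrt(k)."""
    return k * mu + sigma * math.sqrt(k) * (m - 1) / math.sqrt(2 * m - 1)

def est_makespan(k, m, durations, trials=5_000):
    """Monte Carlo estimate of E[M] for k tasks on each of m processors."""
    return sum(
        max(sum(random.choice(durations) for _ in range(k)) for _ in range(m))
        for _ in range(trials)
    ) / trials

k, m = 8, 16
print("sigma^2=4: bound", downey_bound(k, m, 3, 2),
      "simulated", est_makespan(k, m, (1, 5)))
print("sigma^2=1: bound", downey_bound(k, m, 3, 1),
      "simulated", est_makespan(k, m, (2, 4)))
```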

For your second question, the low-variance scenario resulting in a larger makespan seems to be an unlikely outcome of a thought experiment. Let $X = \max_{i=1}^{m} X_i$ denote the makespan for the first distribution, and $Y = \max_{i=1}^{m} Y_i$ for the second (with all other parameters the same). Here $X_i$ and $Y_i$ denote the sums of $k$ task durations corresponding to processor $i$ under the two distributions. For all $x \ge k\mu$, independence yields
$$\Pr[X \le x] = \prod_{i=1}^{m} \Pr[X_i \le x] \le \prod_{i=1}^{m} \Pr[Y_i \le x] = \Pr[Y \le x].$$
Since most of the mass of the probability distribution of the maximum will be above its mean, $E[X]$ will therefore tend to be larger than $E[Y]$. This is not a completely rigorous answer, but in short, the second case seems preferable.
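
Since each processor's load here is $ak + (b-a)B$ with $B \sim \mathrm{Binomial}(k, 1/2)$ for task durations $a < b$, both the dominance claim and the two expectations can be checked exactly (my addition; `load_cdf` and `exact_E_makespan` are illustrative names):

```python
from math import comb

def load_cdf(k, x, a, b):
    """Pr[sum of k iid tasks <= x], each task a or b (a < b) w.p. 1/2;
    the sum equals a*k + (b-a)*B with B ~ Binomial(k, 1/2)."""
    jmax = (x - a * k) // (b - a)  # largest feasible value of B
    if jmax < 0:
        return 0.0
    return sum(comb(k, j) for j in range(min(jmax, k) + 1)) / 2 ** k

def exact_E_makespan(k, m, a, b):
    """E[max of m iid loads], via E[M] = sum over t >= 0 of Pr[M > t]."""
    return sum(1 - load_cdf(k, t, a, b) ** m for t in range(b * k))

k, m = 4, 8
# Dominance Pr[X_i <= x] <= Pr[Y_i <= x] holds for all x >= k*mu = 3k:
assert all(load_cdf(k, x, 1, 5) <= load_cdf(k, x, 2, 4)
           for x in range(3 * k, 5 * k + 1))
print("E[X] (5-1 split):", exact_E_makespan(k, m, 1, 5))
print("E[Y] (4-2 split):", exact_E_makespan(k, m, 2, 4))
```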