Answers:
I haven't seen an answer from a trusted source, but I'll try to answer this myself, with a simple example (and my current knowledge).
In general, note that training an MLP using back-propagation is usually implemented with matrices.
The time complexity of the matrix multiplication $M_{ij} * M_{jk}$ is simply $\mathcal{O}(i*j*k)$.
Notice that we are assuming the simplest multiplication algorithm here: there exist some other algorithms with somewhat better time complexity.
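To make that count concrete, here is a minimal sketch of the schoolbook matrix multiplication (my own illustration, not part of the original answer); the three nested loops are exactly why multiplying an $i \times j$ matrix by a $j \times k$ matrix costs $\mathcal{O}(i*j*k)$.

```python
# Schoolbook multiplication of an (i x j) matrix A by a (j x k) matrix B.
# The three nested loops make the O(i*j*k) cost explicit.
def matmul(A, B):
    i, j = len(A), len(A[0])
    j2, k = len(B), len(B[0])
    assert j == j2, "inner dimensions must match"
    C = [[0.0] * k for _ in range(i)]
    for a in range(i):          # i iterations
        for b in range(k):      # k iterations
            for c in range(j):  # j iterations -> i*j*k multiply-adds in total
                C[a][b] += A[a][c] * B[c][b]
    return C
```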
The feedforward propagation algorithm proceeds as follows.
First, to go from layer $i$ to layer $j$, you compute
$$S_j = W_{ji} * Z_i$$
Then you apply the activation function
$$Z_j = f(S_j)$$
If we have $N$ layers (including the input and output layers), this runs $N-1$ times.
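As a rough illustration (my own sketch, not a reference implementation), the forward pass is just $N-1$ repetitions of a matrix product followed by an element-wise activation; `weights[n]` plays the role of $W_{ji}$, and the sigmoid is an assumed choice of $f$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, Z):
    """weights: list of N-1 matrices; weights[n] has shape (size of layer n+1, size of layer n).
    Z: activations of the input layer, shape (input size, t) for t training examples."""
    for W in weights:          # runs N-1 times
        S = W @ Z              # S_j = W_ji * Z_i   (matrix product)
        Z = sigmoid(S)         # Z_j = f(S_j)       (element-wise)
    return Z
```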
As an example, let's compute the time complexity of the forward pass algorithm for an MLP with $4$ layers, where $i$ denotes the number of nodes in the input layer, $j$ the number of nodes in the second layer, $k$ the number of nodes in the third layer, and $l$ the number of nodes in the output layer.
Since there are $4$ layers, you need $3$ matrices to represent the weights between them. Let's denote them by $W_{ji}$, $W_{kj}$ and $W_{lk}$, where $W_{ji}$ is a matrix with $j$ rows and $i$ columns ($W_{ji}$ thus contains the weights going from layer $i$ to layer $j$).
Assume you have $t$ training examples. To propagate from layer $i$ to layer $j$, we first have
$$S_{jt} = W_{ji} * Z_{it}$$
and this operation (i.e. the matrix multiplication) has time complexity $\mathcal{O}(j*i*t)$. Then we apply the activation function
$$Z_{jt} = f(S_{jt})$$
and this has time complexity $\mathcal{O}(j*t)$, because it is an element-wise operation.
So, in total, we have
$$\mathcal{O}(j*i*t + j*t) = \mathcal{O}(j*t*(i + 1)) = \mathcal{O}(j*i*t)$$
Using the same logic, for going $j \to k$ we have $\mathcal{O}(k*j*t)$, and for $k \to l$ we have $\mathcal{O}(l*k*t)$.
In total, the time complexity of the feedforward propagation is
$$\mathcal{O}(j*i*t + k*j*t + l*k*t) = \mathcal{O}(t*(ij + jk + kl))$$
I'm not sure whether this can be simplified further. Maybe it is just $\mathcal{O}(t*i*j*k*l)$, but I'm not sure.
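As a sanity check (my own back-of-the-envelope sketch, with arbitrary example sizes), you can count the multiply-accumulate operations of the three matrix products and compare them with $t*(ij + jk + kl)$:

```python
# Count multiply-accumulates in the forward pass of a 4-layer MLP.
# Layer sizes and t are arbitrary, chosen only for illustration.
i, j, k, l = 784, 128, 64, 10   # layer sizes
t = 1000                        # number of training examples

mults = j*i*t + k*j*t + l*k*t   # one term per weight matrix
predicted = t * (i*j + j*k + k*l)
assert mults == predicted       # matches O(t*(ij + jk + kl)) exactly
print(mults)                    # 109184000
```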
The back-propagation algorithm proceeds as follows. Starting from the output layer $l \to k$, we compute the error signal $E_{lt}$, a matrix containing the error signals for the nodes at layer $l$:
$$E_{lt} = f'(S_{lt}) \odot (Z_{lt} - O_{lt})$$
where $\odot$ denotes element-wise multiplication. Note that $E_{lt}$ has $l$ rows and $t$ columns: it simply means that each column is the error signal for one training example.
We then compute the "delta weights" $D_{lk}$ (between layer $l$ and layer $k$):
$$D_{lk} = E_{lt} * Z_{tk}$$
where $Z_{tk}$ is the transpose of $Z_{kt}$.
We then adjust the weights:
$$W_{lk} = W_{lk} - D_{lk}$$
For $l \to k$, we thus have the time complexity $\mathcal{O}(lt + lt + ltk + lk) = \mathcal{O}(l*t*k)$.
Now, going back from $k \to j$, we first have
$$E_{kt} = f'(S_{kt}) \odot (W_{kl} * E_{lt})$$
Then
$$D_{kj} = E_{kt} * Z_{tj}$$
And then
$$W_{kj} = W_{kj} - D_{kj}$$
where $W_{kl}$ is the transpose of $W_{lk}$. For $k \to j$, we have the time complexity $\mathcal{O}(kt + klt + ktj + kj) = \mathcal{O}(k*t*(l + j))$.
And finally, for $j \to i$, we have $\mathcal{O}(j*t*(k + i))$. In total, we have
$$\mathcal{O}(ltk + tk(l + j) + tj(k + i)) = \mathcal{O}(t*(lk + kj + ji))$$
which is the same as the feedforward pass algorithm. Since they are the same, the total time complexity for one epoch will be
$$\mathcal{O}(t*(ij + jk + kl))$$
This time complexity is then multiplied by the number of iterations (epochs), so we have
$$\mathcal{O}(n*t*(ij + jk + kl))$$
where $n$ is the number of iterations.
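Putting the two passes together, here is a minimal batch-gradient-descent sketch in the same notation (the sigmoid activation and the learning rate `lr` are my own assumptions; the answer above leaves $f$ and the step size unspecified). Every line is either one of the matrix products counted above or an element-wise operation, so one epoch costs $\mathcal{O}(t*(ij + jk + kl))$ and the whole loop costs $\mathcal{O}(n*t*(ij + jk + kl))$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(weights, X, O, epochs, lr=0.1):
    """weights: [W_ji (j x i), W_kj (k x j), W_lk (l x k)]
    X: inputs, shape (i, t); O: targets, shape (l, t)."""
    for _ in range(epochs):                     # n iterations
        # --- forward pass, keeping every activation ---
        Z = [X]
        for W in weights:
            Z.append(sigmoid(W @ Z[-1]))        # O(rows * cols * t) per layer
        # --- backward pass ---
        E = (Z[-1] - O) * Z[-1] * (1 - Z[-1])   # E_lt = f'(S_lt) ⊙ (Z_lt - O_lt): O(l*t)
        for n in range(len(weights) - 1, -1, -1):
            D = E @ Z[n].T                      # D = E * Z^T, e.g. O(l*t*k)
            if n > 0:                           # propagate the error to the previous layer
                E = (weights[n].T @ E) * Z[n] * (1 - Z[n])   # e.g. O(k*l*t) + O(k*t)
            weights[n] -= lr * D                # element-wise weight update
    return weights
```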
Note that these matrix operations can be greatly parallelized by GPUs.
We tried to find the time complexity for training a neural network that has 4 layers with respectively $i$, $j$, $k$ and $l$ nodes, with $t$ training examples and $n$ epochs. The result was $\mathcal{O}(n*t*(ij + jk + kl))$.
We assumed the simplest form of matrix multiplication, which has cubic time complexity. We used the batch gradient descent algorithm. The results for stochastic and mini-batch gradient descent should be the same. (Let me know if you think otherwise: note that batch gradient descent is the general form; with little modification, it becomes stochastic or mini-batch gradient descent.)
Also, if you use momentum optimization, you will have the same time complexity, because the extra matrix operations required are all element-wise operations, hence they do not affect the time complexity of the algorithm.
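For example, a classical momentum update (a generic sketch of my own, not tied to any particular library) only adds element-wise work on matrices that already exist, so the asymptotic cost per epoch is unchanged:

```python
def momentum_step(W, D, V, lr=0.1, beta=0.9):
    """One classical-momentum update for a single weight matrix.
    W (weights), D (delta weights) and V (velocity) all have the same shape, e.g. (l, k)."""
    V = beta * V + D     # element-wise: O(l*k)
    W = W - lr * V       # element-wise: O(l*k), dominated by the O(l*t*k)
    return W, V          # matrix product that produced D in the first place
```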
I'm not sure what the results would be using other optimizers such as RMSprop.
The following article http://briandolhansky.com/blog/2014/10/30/artificial-neural-networks-matrix-form-part-5 describes an implementation using matrices. Although this implementation uses a "row major" layout, the time complexity is not affected by it.
If you're not familiar with back-propagation, check this article:
http://briandolhansky.com/blog/2013/9/27/artificial-neural-networks-backpropagation-part-4
For the evaluation of a single pattern, you need to process all weights and all neurons. Given that every neuron has at least one weight, we can ignore the neurons, and have $\mathcal{O}(w)$, where $w$ is the number of weights, i.e. the sum of $n_i \cdot n_{i+1}$ over consecutive layers, assuming full connectivity between your layers.
The back-propagation has the same complexity as the forward evaluation (just look at the formula).
So, the complexity for learning $m$ examples, where each gets repeated $e$ times, is $\mathcal{O}(w \cdot m \cdot e)$.
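To tie this back to the 4-layer example in the first answer (my own illustration, with arbitrary sizes), the weight count and the resulting total operation count would be:

```python
# Number of weights in a fully connected 4-layer network, and the total
# work for m examples presented e times each. Sizes are arbitrary examples.
layers = [784, 128, 64, 10]                           # i, j, k, l
w = sum(a * b for a, b in zip(layers, layers[1:]))    # w = i*j + j*k + k*l
m, e = 1000, 50                                       # examples, epochs
print(w, w * m * e)                                   # 109184  5459200000
```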
The bad news is that there's no formula telling you what number of epochs you need.
That is, $e$ times for each of the $m$ examples. I didn't bother to compute the number of weights; I guess that's the difference. Here $w = ij + jk + kl$, basically the sum of $n_i \cdot n_{i+1}$ between consecutive layers, as you noted.
A potential disadvantage of gradient-based methods is that they head for the nearest minimum, which is usually not the global minimum.
This means that the only difference between these search methods is the speed with which solutions are obtained, and not the nature of those solutions.
An important consideration is time complexity, which is the rate at which the time required to find a solution increases with the number of parameters (weights). In short, the time complexities of a range of different gradient-based methods (including second-order methods) seem to be similar.
Six different error functions exhibit a median run-time order of approximately $\mathcal{O}(N^4)$ on the N-2-N encoder in this paper:
Lister, R and Stone J "An Empirical Study of the Time Complexity of Various Error Functions with Conjugate Gradient Back Propagation" , IEEE International Conference on Artificial Neural Networks (ICNN95), Perth, Australia, Nov 27-Dec 1, 1995.
Summarised from my book: Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning.