Loss suddenly jumps up when I decay the learning rate with the Adam optimizer in PyTorch



I am training an auto-encoder network with the Adam optimizer (with amsgrad=True) and MSE loss for a single-channel audio source separation task. Whenever I decay the learning rate by a factor, the network loss jumps abruptly and then decreases until the next decay in learning rate.

I am using PyTorch for network implementation and training.

Following are my experimental setups:

 Setup-1: NO learning rate decay, and 
          Using the same Adam optimizer for all epochs

 Setup-2: NO learning rate decay, and 
          Creating a new Adam optimizer with same initial values every epoch

 Setup-3: 0.25 decay in learning rate every 25 epochs, and
          Creating a new Adam optimizer every epoch

 Setup-4: 0.25 decay in learning rate every 25 epochs, and
          NOT creating a new Adam optimizer every time rather
          using PyTorch's "multiStepLR" and "ExponentialLR" decay scheduler 
          every 25 epochs

For Setups #2, #3 and #4, I am getting very surprising results and I am unable to find any explanation for them. Below are my results:

Setup-1 Results:

Here I'm NOT decaying the learning rate and 
I'm using the same Adam optimizer. So my results are as expected.
My loss decreases with more epochs.
Below is the loss plot for this setup.

Plot 1:

[Setup-1 loss plot]

optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

for epoch in range(num_epochs):
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

Setup-2 Results:  

Here I'm NOT decaying the learning rate but every epoch I'm creating a new
Adam optimizer with the same initial parameters.
Here also results show similar behavior as Setup-1.

Because a new Adam optimizer is created at every epoch, the gradient statistics calculated
for each parameter should be lost, but it seems that this does not affect the
network's learning. Can anyone please help with this?
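For reference, what Adam actually keeps per parameter is not the gradients themselves but running moment estimates, which can be inspected via optimizer.state. Below is a minimal sketch using a hypothetical toy model (not the network from this question) to show what a freshly created optimizer discards:

import torch

# Hypothetical toy model, only to illustrate what re-creating Adam discards.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, amsgrad=True)

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

# After at least one step, each parameter has state entries such as
# 'step', 'exp_avg', 'exp_avg_sq' (and 'max_exp_avg_sq' when amsgrad=True).
for param, state in optimizer.state.items():
    print(list(state.keys()))

# Re-creating the optimizer starts again with an empty state:
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, amsgrad=True)
print(len(optimizer.state))   # 0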

Plot 2:

[Setup-2 loss plot]

for epoch in range(num_epochs):
    optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

Setup-3 Results: 

As can be seen from the plot below,
my loss jumps every time I decay the learning rate. This is weird behavior.

If it were happening because I create a new Adam optimizer every epoch,
then it should have happened in Setup #2 as well.
And if it were happening because a new Adam optimizer is created with a new
learning rate (alpha) every 25 epochs, then the results of Setup #4 below also
rule out such a correlation.

Plot 3:

[Setup-3 loss plot]

decay_rate = 0.25
for epoch in range(num_epochs):
    # a NEW Adam optimizer is created every epoch with the current learning rate
    optimizer = torch.optim.Adam(lr=m_lr, amsgrad=True, ...........)

    if epoch % 25 == 0 and epoch != 0:
        m_lr *= decay_rate   # decay the learning rate (used from the next epoch's optimizer)

    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

Setup-4 Results:

In this setup, I'm using PyTorch's learning-rate-decay scheduler (MultiStepLR),
which decays the learning rate by 0.25 every 25 epochs.
Here also, the loss jumps every time the learning rate is decayed.

As suggested by @Dennis in the comments below, I tried both ReLU and leakyReLU (with a 1e-02 negative slope) non-linearities. However, the results behave similarly: the loss first decreases, then increases, and then saturates at a higher value than it would reach without learning rate decay.
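For reference, the activation swap amounts to something like the following minimal sketch (layer sizes are placeholders, not the actual auto-encoder):

import torch.nn as nn

# Placeholder encoder block: nn.ReLU() replaced by nn.LeakyReLU with the
# suggested 1e-02 negative slope.
encoder_block = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2),   # placeholder layer sizes
    nn.LeakyReLU(negative_slope=1e-2),            # instead of nn.ReLU()
)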

Plot 4 below shows the results.

Plot 4:

[Setup-4 loss plot]

scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[25,50,75], gamma=0.25)

scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=0.95)

scheduler = ......... # defined above
optimizer = torch.optim.Adam(lr=m_lr,amsgrad=True, ...........)

for epoch in range(num_epochs):

    scheduler.step()

    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = ..........                    
        train_label_tensor = ..........
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    loss_history[m_lr].append(running_loss/num_train)

EDIT:

  • As suggested in the comments and in the answer below, I have made changes to my code and trained the model. I have added the corresponding code and plots above.
  • I tried various lr_scheduler options in PyTorch (MultiStepLR, ExponentialLR), and the plots for these are listed under Setup-4, as suggested by @Dennis in the comments below.
  • Tried LeakyReLU as suggested by @Dennis in the comments.

Any help would be appreciated. Thanks.



Answers:



I see no reason why decaying the learning rate should cause the kinds of jumps in loss that you are observing. It should "slow down" how quickly you "move", which, in the case of a loss that otherwise keeps shrinking, should in the worst case only lead to a plateau in your loss (rather than those jumps).

The first thing I observe in your code is that you re-create the optimizer from scratch every epoch. I have not worked with PyTorch enough to tell for sure, but doesn't this destroy the optimizer's internal state/memory every time? I think you should create the optimizer just once, before the loop over epochs. If this is indeed a bug in your code, it should actually also still be a bug in the case where you don't use learning rate decay... but maybe you simply got lucky there and did not experience the same negative effects of the bug.

For learning rate decay, I would recommend using the official API for it rather than a manual solution. In your particular case, you will want to instantiate a StepLR scheduler with:

  • optimizer = the ADAM optimizer, which you should probably instantiate only once
  • step_size = 25
  • gamma = 0.25

You can then simply call scheduler.step() at the start of every epoch (or maybe at the end? The example in the API link calls it at the start of every epoch).
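Putting that together, a minimal runnable sketch of this recommendation could look as follows; the tiny model, criterion and random data are placeholders standing in for the question's auto-encoder and audio batches, not the actual setup:

import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 64))            # placeholder auto-encoder
criterion = torch.nn.MSELoss()
m_lr, num_epochs, num_train = 1e-3, 100, 10                     # placeholder values

# Instantiate the optimizer ONCE, before the loop over epochs, and let StepLR
# decay its learning rate by gamma=0.25 every step_size=25 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=m_lr, amsgrad=True)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25, gamma=0.25)

loss_history = {m_lr: []}
for epoch in range(num_epochs):
    running_loss = 0.0
    for i in range(num_train):
        train_input_tensor = torch.randn(8, 64)                 # placeholder batch
        train_label_tensor = torch.randn(8, 64)
        optimizer.zero_grad()
        pred_label_tensor = model(train_input_tensor)
        loss = criterion(pred_label_tensor, train_label_tensor)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    scheduler.step()    # recent PyTorch versions expect this after the epoch's optimizer steps
    loss_history[m_lr].append(running_loss / num_train)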


If, after the changes above, you still experience the problem, it would also be useful to run each of your experiments multiple times and plot the average results (or plot lines for all runs). Your experiments should in theory be identical during the first 25 epochs, but we still see huge differences between the two figures even during those first 25 epochs in which no learning rate decay occurs (e.g., one figure starts at a loss of ~28K, the other at ~40K). That may simply be due to different random initializations, so it would be good to average that non-determinism out of your plots.
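A minimal sketch of that averaging suggestion, assuming a hypothetical run_experiment(seed) helper standing in for one complete training run:

import numpy as np
import torch

def run_experiment(seed, num_epochs=100):
    """Hypothetical helper: one full training run with a fixed seed, returning
    its per-epoch losses. The body would be the training loop from the question;
    a trivial placeholder curve is used here so the sketch runs on its own."""
    torch.manual_seed(seed)
    return [1.0 / (epoch + 1) for epoch in range(num_epochs)]    # placeholder losses

# Average several seeds so that differences caused by random initialization
# are smoothed out of the plotted curves (or plot every seed's curve instead).
curves = np.array([run_experiment(seed) for seed in range(5)])
mean_curve = curves.mean(axis=0)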

