How can I use batch normalization in TensorFlow?


77

I want to use batch normalization in TensorFlow. I found the related C++ source code in core/ops/nn_ops.cc. However, I did not find it documented on tensorflow.org.

BN has different semantics in MLPs and CNNs, so I am not sure exactly what this BN does.

I did not find any method called MovingMoments either.


1

I don't think there is such a tf.Op anymore (BatchNormWithGlobalNormalization).
Pinocchio


Answers:


57

Update July 2016: The easiest way to use batch normalization in TensorFlow is through the higher-level interfaces provided in contrib/layers, tflearn, or slim.
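For example, with the contrib/layers interface a minimal sketch looks roughly like this (the layer sizes, loss, and optimizer are stand-ins of my own, not from the answer; contrib.layers.batch_norm places its moving-average update ops in the UPDATE_OPS collection by default, which is why the train op is wrapped in control_dependencies):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool, name='is_training')

h = tf.contrib.layers.fully_connected(x, 100, activation_fn=None)
h = tf.contrib.layers.batch_norm(h, center=True, scale=True,
                                 is_training=is_training)
h = tf.nn.relu(h)
logits = tf.contrib.layers.fully_connected(h, 10, activation_fn=None)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_))

# The moving-average update ops land in UPDATE_OPS, so force them to run
# together with the training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)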

Previous answer if you want to DIY: the docstring for this has improved since the release; see the docs comment in the master branch instead of the one you found. In particular, it clarifies that it expects the output of tf.nn.moments.

You can see a very simple example of its use in the batch_norm test code. For a more real-world use example, I have included below the helper class I wrote for my own use, along with usage notes (no warranty provided!):

"""A helper class for managing batch normalization state.                   

This class is designed to simplify adding batch normalization               
(http://arxiv.org/pdf/1502.03167v3.pdf) to your model by                    
managing the state variables associated with it.                            

Important use note:  The function get_assigner() returns                    
an op that must be executed to save the updated state.                      
A suggested way to do this is to make execution of the                      
model optimizer force it, e.g., by:                                         

  update_assignments = tf.group(bn1.get_assigner(),                         
                                bn2.get_assigner())                         
  with tf.control_dependencies([optimizer]):                                
    optimizer = tf.group(update_assignments)                                

"""

import tensorflow as tf


class ConvolutionalBatchNormalizer(object):
  """Helper class that groups the normalization logic and variables.        

  Use:                                                                      
      ewma = tf.train.ExponentialMovingAverage(decay=0.99)                  
      bn = ConvolutionalBatchNormalizer(depth, 0.001, ewma, True)           
      update_assignments = bn.get_assigner()                                
      x = bn.normalize(y, train=training?)                                  
      (the output x will be batch-normalized).                              
  """

  def __init__(self, depth, epsilon, ewma_trainer, scale_after_norm):
    self.mean = tf.Variable(tf.constant(0.0, shape=[depth]),
                            trainable=False)
    self.variance = tf.Variable(tf.constant(1.0, shape=[depth]),
                                trainable=False)
    self.beta = tf.Variable(tf.constant(0.0, shape=[depth]))
    self.gamma = tf.Variable(tf.constant(1.0, shape=[depth]))
    self.ewma_trainer = ewma_trainer
    self.epsilon = epsilon
    self.scale_after_norm = scale_after_norm

  def get_assigner(self):
    """Returns an EWMA apply op that must be invoked after optimization."""
    return self.ewma_trainer.apply([self.mean, self.variance])

  def normalize(self, x, train=True):
    """Returns a batch-normalized version of x."""
    if train:
      mean, variance = tf.nn.moments(x, [0, 1, 2])
      assign_mean = self.mean.assign(mean)
      assign_variance = self.variance.assign(variance)
      with tf.control_dependencies([assign_mean, assign_variance]):
        return tf.nn.batch_norm_with_global_normalization(
            x, mean, variance, self.beta, self.gamma,
            self.epsilon, self.scale_after_norm)
    else:
      mean = self.ewma_trainer.average(self.mean)
      variance = self.ewma_trainer.average(self.variance)
      local_beta = tf.identity(self.beta)
      local_gamma = tf.identity(self.gamma)
      return tf.nn.batch_norm_with_global_normalization(
          x, mean, variance, local_beta, local_gamma,
          self.epsilon, self.scale_after_norm)

Note that I called it a ConvolutionalBatchNormalizer because it pins the use of tf.nn.moments to sum across axes 0, 1, and 2, whereas for non-convolutional use you might only want axis 0.
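For the non-convolutional case, a hypothetical variant of normalize() could look like the sketch below. Two details are my own adaptation rather than part of the class above: the moments are taken over axis 0 only, and tf.nn.batch_normalization is used in place of batch_norm_with_global_normalization because the latter expects 4-D input (this also means gamma is always applied, so scale_after_norm is ignored):

  def normalize_dense(self, x, train=True):
    """Hypothetical variant for 2-D activations of shape [batch, depth]."""
    if train:
      # Batch statistics over the batch axis only.
      mean, variance = tf.nn.moments(x, [0])
      assign_mean = self.mean.assign(mean)
      assign_variance = self.variance.assign(variance)
      with tf.control_dependencies([assign_mean, assign_variance]):
        return tf.nn.batch_normalization(
            x, mean, variance, self.beta, self.gamma, self.epsilon)
    else:
      # Use the moving averages maintained by get_assigner() during training.
      mean = self.ewma_trainer.average(self.mean)
      variance = self.ewma_trainer.average(self.variance)
      return tf.nn.batch_normalization(
          x, mean, variance, self.beta, self.gamma, self.epsilon)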

Feedback is appreciated if you use it.


I'm having difficulty applying this to a convnet subgraph that is reused in an LSTM network. By default it creates a different normalizer for every time step the subgraph is applied at. Any ideas on how to make it normalize over all applications of the subgraph?
Joren Van Severen 2015

1
Have you tried creating the bn outside the subgraph and passing it into the subgraph constructor? bn = Conv...er(args); ... createSubgraph(bn, args); and then call bn.normalize inside the subgraph.
dga 2015

1
I don't understand why, in this example, you compute the moving average during the test phase?
2015

It's the other way around: during training (if train:) it computes the mean and standard deviation of the input batch (tf.nn.moments(x, [0, 1, 2])). During evaluation/testing it pulls out the saved moving average (self.ewma_trainer.average(self.mean)). The confusing part is that calling the ewma's average method returns the stored average but does not update it. The update happens via the self.mean.assign(mean) line, which stores the current batch mean into self.mean, after which the ewma_trainer.apply op updates the moving average based on self.mean.
dga
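The average-versus-apply distinction described in this comment can be seen in a tiny standalone sketch (the scalar variable and the decay value here are arbitrary choices of mine, not part of the answer above):

import tensorflow as tf

v = tf.Variable(0.0)
ema = tf.train.ExponentialMovingAverage(decay=0.9)
update_ema = ema.apply([v])   # creates the shadow variable and returns the update op
shadow = ema.average(v)       # reads the shadow variable; reading never updates it

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(v.assign(1.0))
    print(sess.run(shadow))   # 0.0: average() alone does not move the shadow
    sess.run(update_ema)      # shadow <- 0.9 * shadow + 0.1 * v
    print(sess.run(shadow))   # ~0.1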

1
@dga: Yes, it does run (it gave an error before), but I'm seeing strange behavior. I build the graph twice in github.com/tensorflow/tensorflow/blob/master/tensorflow/models/… and use the second graph for testing on larger train and valid batches. With batch normalization I get increasing/random loss and accuracy for the second graph, while the first graph, which is used for the train op, shows nicely decreasing loss.
Joren Van Severen 2015

54

As of TensorFlow 1.0 (February 2017), TensorFlow itself also includes a high-level API: tf.layers.batch_normalization.

It's super simple to use:

# Set this to True for training and False for testing
training = tf.placeholder(tf.bool)

x = tf.layers.dense(input_x, units=100)
x = tf.layers.batch_normalization(x, training=training)
x = tf.nn.relu(x)

...except that it adds the extra ops to the graph (the ones that update its mean and variance variables) in such a way that they will not be dependencies of your training op. You can either just run the ops separately:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
sess.run([train_op, extra_update_ops], ...)

Or manually add the update ops as dependencies of your training op, and then just run your training op as you normally would:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_op = optimizer.minimize(loss)
...
sess.run([train_op], ...)

1
@MiniQuark Could you elaborate on the dependencies? I don't quite understand that part.
mamafoku

5
@mamafoku The Batch Norm algorithm needs to compute the mean and standard deviation of the whole training set. These are computed during training but are not used then; they are only used at inference time. The computation is done using exponential moving averages. It is independent of the rest of training, so you must either run the exponential-moving-average computation step (i.e. extra_update_ops) "manually" at every training iteration along with your regular training op, or make your training op depend on extra_update_ops (using a control_dependencies() block). Hope this helps.
MiniQuark

So, given that update_ops is there to update the moving mean and moving variance, if we are only testing a pretrained network there is no point in including it at all, right?
Andrés Felipe

What value should axis take for a convolutional network?
Jonas Adler '18

1
@gantzer89 Yes. If you load a pretrained network, the checkpoint will include the values of the mean and variance computed during training. The mean and variance should not be updated during testing.
Matthew Rahtz

32

The following works fine for me, and it does not require invoking the EMA apply op externally.

import numpy as np
import tensorflow as tf
from tensorflow.python import control_flow_ops

def batch_norm(x, n_out, phase_train, scope='bn'):
    """
    Batch normalization on convolutional maps.
    Args:
        x:           Tensor, 4D BHWD input maps
        n_out:       integer, depth of input maps
        phase_train: boolean tf.Variable, true indicates training phase
        scope:       string, variable scope
    Return:
        normed:      batch-normalized maps
    """
    with tf.variable_scope(scope):
        beta = tf.Variable(tf.constant(0.0, shape=[n_out]),
                                     name='beta', trainable=True)
        gamma = tf.Variable(tf.constant(1.0, shape=[n_out]),
                                      name='gamma', trainable=True)
        batch_mean, batch_var = tf.nn.moments(x, [0,1,2], name='moments')
        ema = tf.train.ExponentialMovingAverage(decay=0.5)

        def mean_var_with_update():
            ema_apply_op = ema.apply([batch_mean, batch_var])
            with tf.control_dependencies([ema_apply_op]):
                return tf.identity(batch_mean), tf.identity(batch_var)

        mean, var = tf.cond(phase_train,
                            mean_var_with_update,
                            lambda: (ema.average(batch_mean), ema.average(batch_var)))
        normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, 1e-3)
    return normed

Example:

import math

n_in, n_out = 3, 16
ksize = 3
stride = 1
phase_train = tf.placeholder(tf.bool, name='phase_train')
input_image = tf.placeholder(tf.float32, name='input_image')
kernel = tf.Variable(tf.truncated_normal([ksize, ksize, n_in, n_out],
                                   stddev=math.sqrt(2.0/(ksize*ksize*n_out))),
                                   name='kernel')
conv = tf.nn.conv2d(input_image, kernel, [1,stride,stride,1], padding='SAME')
conv_bn = batch_norm(conv, n_out, phase_train)
relu = tf.nn.relu(conv_bn)

with tf.Session() as session:
    session.run(tf.initialize_all_variables())
    for i in range(20):
        test_image = np.random.rand(4,32,32,3)
        sess_outputs = session.run([relu],
          {input_image.name: test_image, phase_train.name: True})

Thanks for another answer :). What is control_flow_ops.cond? Is it tf.control_flow_ops.cond? I can't find it in tensorflow. Have you considered the performance difference? Since the control dependency is applied per layer, perhaps the computation has to wait at every layer rather than once per iteration; is that too much waiting? I'm actually using your version, i.e. the per-layer one, since it's simpler, but I will try the global version later.
Shawn Lee

I have updated the answer. It is tensorflow.python.control_flow_ops, which is not documented yet. I guess applying the EMA won't take much time, since it's an element-wise operation on vectors whose length is usually a few hundred. But I haven't verified this.
bgshi

I have confirmed what @jrock says in his answer; your code is a bit buggy. Please take note.
myme5261314

@myme5261314 @jrock Yes, it looks like ema_apply_op was also being called during testing. I have edited the answer and changed phase_train from a tf.Variable to a python boolean. However, you now have to create separate graphs for training and testing. Thanks for the feedback, and sorry for my late reply.
bgshi

3
Is your code really necessary, considering that there is an official BN layer? Code: github.com/tensorflow/tensorflow/blob/…
Pinocchio

14

There is also an "official" batch normalization layer coded up by the developers. They don't have very good docs on how to use it, but here is how to use it (according to me):

from tensorflow.contrib.layers.python.layers import batch_norm as batch_norm

def batch_norm_layer(x, train_phase, scope_bn):
    bn_train = batch_norm(x, decay=0.999, center=True, scale=True,
                          updates_collections=None,
                          is_training=True,
                          reuse=None,  # is this right?
                          trainable=True,
                          scope=scope_bn)
    bn_inference = batch_norm(x, decay=0.999, center=True, scale=True,
                              updates_collections=None,
                              is_training=False,
                              reuse=True,  # is this right?
                              trainable=True,
                              scope=scope_bn)
    z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
    return z

To actually use it you need to create a placeholder for train_phase that indicates whether you are in the training or inference phase (as in train_phase = tf.placeholder(tf.bool, name='phase_train')). Its value can be filled during inference or training with a tf.Session, e.g.:

test_error = sess.run(fetches=cross_entropy, feed_dict={x: batch_xtest, y_:batch_ytest, train_phase: False})

or during training:

sess.run(fetches=train_step, feed_dict={x: batch_xs, y_:batch_ys, train_phase: True})

I'm pretty sure this is correct according to the discussion on github.


There also seems to be another useful link:

http://r2rt.com/implementing-batch-normalization-in-tensorflow.html


Note that updates_collections=None is important. I don't understand why, but it is. The best explanation I know of is: "But what it is important is that either you pass updates_collections=None so the moving_mean and moving_variance are updated in-place, otherwise you will need gather the update_ops and make sure they are run." I don't quite understand why that is an explanation, but empirically I have observed that MNIST performs well with None and very poorly without it.
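For reference, the alternative mentioned in that quote (leaving updates_collections at its default instead of None) would mean gathering the update ops yourself and forcing them to run with the training step. A rough sketch, assuming cross_entropy is the loss from your own graph and the optimizer is arbitrary:

# With the default updates_collections, the moving_mean/moving_variance update
# ops are placed in tf.GraphKeys.UPDATE_OPS and must be run during training.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)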

11

You can simply use the built-in batch_norm layer:

batch_norm = tf.cond(is_train,
    lambda: tf.contrib.layers.batch_norm(prev, activation_fn=tf.nn.relu, is_training=True, reuse=None),
    lambda: tf.contrib.layers.batch_norm(prev, activation_fn=tf.nn.relu, is_training=False, reuse=True))

where prev is the output of your previous layer (it can be either a fully-connected or a convolutional layer) and is_train is a boolean placeholder. Just use batch_norm as the input to the next layer.


1
Do you have an example without the is_train placeholder? I can't make it work; passing a python boolean doesn't work with tf.cond, and defining two batch norms in the branches gives me "reuse=True cannot be used without name_or_scope" (even if I give them a variable scope)...
sygi

@sygi, you can use the tf.cast(True/False, tf.bool) operation.
I.

@sygi, yes, I know; for example you can write var1 = True or False and then tf.cast(var1, tf.bool). That should work fine.
I.

Why do you set reuse=True only in the test phase?
JenkinsY

11

Since this was recently edited, I would like to clarify that this is no longer an issue.

That answer does not seem correct: when phase_train is set to false, it still updates the ema mean and variance. This can be verified with the following code snippet.

x = tf.placeholder(tf.float32, [None, 20, 20, 10], name='input')
phase_train = tf.placeholder(tf.bool, name='phase_train')

# generate random noise to pass into batch norm
x_gen = tf.random_normal([50,20,20,10])
pt_false = tf.Variable(tf.constant(True))

#generate a constant variable to pass into batch norm
y = x_gen.eval()

[bn, bn_vars] = batch_norm(x, 10, phase_train)

tf.initialize_all_variables().run()
train_step = lambda: bn.eval({x:x_gen.eval(), phase_train:True})
test_step = lambda: bn.eval({x:y, phase_train:False})
train_step_c = lambda: bn.eval({x:y, phase_train:True})

# Verify that this is different as expected, two different x's have different norms
print(train_step()[0][0][0])
print(train_step()[0][0][0])

# Verify that this is same as expected, same x's (y) have same norm
print(train_step_c()[0][0][0])
print(train_step_c()[0][0][0])

# THIS IS DIFFERENT but should be they same, should only be reading from the ema.
print(test_step()[0][0][0])
print(test_step()[0][0][0])

I have updated the answer. There was a bug in the original version that caused ema_apply_op to be called even when phase_train=False.
bgshi

2
Thanks for the update; I still can't comment on your thread (reputation), but it looks like it should work now. Thanks also to @myme5261314.
jrock

3

Below is code that uses TensorFlow's built-in batch_norm layer to load data, build a network with one hidden ReLU layer and L2 regularization, and introduce batch normalization for both the hidden and output layers. It runs fine and trains fine. FYI, this example is mostly built on the data and code from the Udacity DeepLearning course. P.S. Yes, parts of this were discussed one way or another in earlier answers, but I decided to gather everything into one code snippet so that you have an example of the whole training procedure for a network with batch normalization, including its evaluation.

# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle

pickle_file = '/home/maxkhk/Documents/Udacity/DeepLearningCourse/SourceCode/tensorflow/examples/udacity/notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)

image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)


def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])


#for NeuralNetwork model code is below
#We will use SGD for training to save our time. Code is from Assignment 2
#beta is the new parameter - controls level of regularization.
#Feel free to play with it - the best one I found is 0.001
#notice, we introduce L2 for both biases and weights of all layers

batch_size = 128
beta = 0.001

#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
      # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  #introduce batchnorm
  tf_train_dataset_bn = tf.contrib.layers.batch_norm(tf_train_dataset)


  #now let's build our new hidden layer
  #that's how many hidden neurons we want
  num_hidden_neurons = 1024
  #its weights
  hidden_weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))
  hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))

  #now the layer itself. It multiplies data by weights, adds biases
  #and takes ReLU over result
  hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset_bn, hidden_weights) + hidden_biases)

  #adding the batch normalization layerhi()
  hidden_layer_bn = tf.contrib.layers.batch_norm(hidden_layer)

  #time to go for output linear layer
  #out weights connect hidden neurons to output labels
  #biases are added to output labels  
  out_weights = tf.Variable(
    tf.truncated_normal([num_hidden_neurons, num_labels]))  

  out_biases = tf.Variable(tf.zeros([num_labels]))  

  #compute output  
  out_layer = tf.matmul(hidden_layer_bn,out_weights) + out_biases
  #our real output is a softmax of prior result
  #and we also compute its cross-entropy to get our loss
  #Notice - we introduce our L2 here
  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    out_layer, tf_train_labels) +
    beta*tf.nn.l2_loss(hidden_weights) +
    beta*tf.nn.l2_loss(hidden_biases) +
    beta*tf.nn.l2_loss(out_weights) +
    beta*tf.nn.l2_loss(out_biases)))

  #now we just minimize this loss to actually train the network
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  #nice, now let's calculate the predictions on each dataset for evaluating the
  #performance so far
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(out_layer)
  valid_relu = tf.nn.relu(  tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)
  valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) 

  test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases)
  test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)



#now is the actual training on the ANN we built
#we will run it for some number of steps and evaluate the progress after 
#every 500 steps

#number of steps we will train our ANN
num_steps = 3001

#actual training
with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
      print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

How do you get the dataset in order to try running your example? i.e. '/home/maxkhk/Documents/Udacity/DeepLearningCourse/SourceCode/tensorflow/examples/udacity/notMNIST.pickle'
Pinocchio

@Pinocchio It's from Udacity's Deep Learning course; it is created in the first assignment there. You can check my code for it here: github.com/MaxKHK/Udacity_DeepLearningAssignments/blob/master/...
Maksim Khaitovich

It seems you are not updating the moving averages of the batch_norm layers during training.
Temak '16

0

Below is a simple example of using this batchnorm class:

from bn_class import *

with tf.name_scope('Batch_norm_conv1') as scope:
    ewma = tf.train.ExponentialMovingAverage(decay=0.99)                  
    bn_conv1 = ConvolutionalBatchNormalizer(num_filt_1, 0.001, ewma, True)           
    update_assignments = bn_conv1.get_assigner() 
    a_conv1 = bn_conv1.normalize(a_conv1, train=bn_train) 
    h_conv1 = tf.nn.relu(a_conv1)