In the MNIST beginner tutorial, there is the statement

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

tf.cast basically changes the type of tensor the object is, but what is the difference between tf.reduce_mean and np.mean?
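To make that line concrete, here is a minimal sketch of what I understand it to be doing, assuming TensorFlow 2.x with eager execution; correct_prediction here is a made-up boolean tensor rather than the one from the tutorial, and I write tf.float32 in place of the "float" string:

import tensorflow as tf

# Hypothetical stand-in for the tutorial's correct_prediction tensor
correct_prediction = tf.constant([True, False, True, True])

# tf.cast turns the booleans into floats: [1., 0., 1., 1.]
as_floats = tf.cast(correct_prediction, tf.float32)

# tf.reduce_mean then averages them, giving the accuracy
accuracy = tf.reduce_mean(as_floats)
print(accuracy.numpy())  # 0.75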
Here is the documentation for tf.reduce_mean:

reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

input_tensor: the tensor to reduce. Should have numeric type.

reduction_indices: the dimensions to reduce. If None (the default), reduces all dimensions.

# 'x' is [[1., 1.]
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
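The documented example can be reproduced directly; this sketch assumes TensorFlow 2.x with eager execution, where the axis argument plays the role of reduction_indices:

import tensorflow as tf

x = tf.constant([[1., 1.],
                 [2., 2.]])

print(tf.reduce_mean(x).numpy())     # 1.5, all dimensions reduced
print(tf.reduce_mean(x, 0).numpy())  # [1.5 1.5]
print(tf.reduce_mean(x, 1).numpy())  # [1. 2.]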
For a one-dimensional vector, np.mean appears to give the same result as tf.reduce_mean; a quick check is sketched below.
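This is the one-dimensional comparison I have in mind (a sketch; v is just an example vector, not anything from the tutorial):

import numpy as np
import tensorflow as tf

v = np.array([1., 2., 3., 4.])

# Both average all elements of the 1-D vector
print(np.mean(v))                 # 2.5
print(tf.reduce_mean(v).numpy())  # 2.5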
But I do not understand tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] makes sense, since the mean of [1, 2] and [1, 2] is [1.5, 1.5], but what is going on with tf.reduce_mean(x, 1)?