Questions tagged «loss-function»

NaN loss when training a regression network

I have a one-hot encoded (all 1s and 0s) data matrix with 260,000 rows and 35 columns. I am using Keras to train a simple neural network to predict a continuous variable. The code that builds the network is as follows:

```
model = Sequential()
model.add(Dense(1024, input_shape=(n_train,)))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(Dense(1))

sgd = SGD(lr=0.01, nesterov=True)
#rms = RMSprop()
#model.compile(loss='categorical_crossentropy', optimizer=rms, metrics=['accuracy'])
model.compile(loss='mean_absolute_error', optimizer=sgd)
model.fit(X_train, Y_train,
          batch_size=32, nb_epoch=3, verbose=1,
          validation_data=(X_test, Y_test),
          callbacks=[EarlyStopping(monitor='val_loss', patience=4)])
```

However, during training the loss decreases nicely, but midway through the second epoch it becomes nan:

```
Train on 260000 samples, validate on 64905 samples
Epoch 1/3
260000/260000 [==============================] - …
```
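A common first step when debugging a loss that turns into nan partway through training (an illustrative aside, not part of the question) is to verify that the inputs and targets themselves contain no NaN or infinite values, since a single bad row can poison every batch it appears in. A minimal NumPy sketch, using small random arrays as stand-ins for the question's `X_train`/`Y_train`:

```python
import numpy as np

# Hypothetical stand-ins for X_train / Y_train from the question.
rng = np.random.default_rng(0)
X = rng.random((100, 35))
Y = rng.random(100)
Y[3] = np.nan  # simulate one corrupted target value

# Check whether any feature or target value is NaN or infinite.
x_ok = np.isfinite(X).all()
y_ok = np.isfinite(Y).all()
print(f"features finite: {x_ok}, targets finite: {y_ok}")

# Locate the offending rows so they can be inspected or dropped.
bad_rows = np.where(~np.isfinite(Y))[0]
print(f"bad target rows: {bad_rows}")
```

If the data is clean, the usual next suspects are a learning rate that is too high or exploding gradients, which are typically addressed by lowering `lr` or enabling gradient clipping in the optimizer.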