# K-fold validation
"""
Q:为什么我们需要使用K折验证?
A:因为数据量太少。
如果选择只使用数据集一次,那么训练结果会和数据的分布情况有很大相关性。
数据集分布不同输出结果会有很大差异,即误差较大,这不符合泛化的理念。
使用K折验证可以减小这种误差。
"""
import numpy as np

k = 4
num_val_samples = len(train_data) // k  # size of each validation partition
num_epochs = 500
all_scores = []
all_mae_histories = []
for i in range(k):
    print('processing fold #', i)
    # Validation partition: the i-th slice of the training set
    val_data = train_data[i * num_val_samples:(i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples:(i + 1) * num_val_samples]
    # Training partition: everything outside the validation slice
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i + 1) * num_val_samples:]],
        axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples],
         train_targets[(i + 1) * num_val_samples:]],
        axis=0)
    model = build_model()  # a fresh, untrained model for every fold
    history = model.fit(partial_train_data, partial_train_targets,
                        validation_data=(val_data, val_targets),
                        epochs=num_epochs, batch_size=1, verbose=0)
    # val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
    # all_scores.append(val_mae)
    # print(history.history.keys())  # check the metric key names if unsure
    mae_history = history.history['val_mae']
    all_mae_histories.append(mae_history)
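
# A common next step (a sketch, not part of the snippet above): average the
# per-epoch validation MAE over the k folds to get one smoother learning
# curve, then look for the epoch where it bottoms out.
average_mae_history = [
    np.mean([fold[epoch] for fold in all_mae_histories])
    for epoch in range(num_epochs)
]
print('lowest averaged val MAE:', min(average_mae_history),
      'at epoch', int(np.argmin(average_mae_history)) + 1)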