
2.7 MNIST Handwritten Digit Recognition: Training Debugging and Optimization in Depth (Baidu architects' hands-on deep learning from scratch, original notes series)


Contents


Overview

Computing the model's classification accuracy

Inspecting the training process to identify potential problems

Adding validation or test sets to better evaluate the model

Adding regularization to avoid overfitting

The overfitting phenomenon

Causes of overfitting

Understanding and preventing overfitting

Regularization terms

Visualization and analysis

Plotting the training loss curve with Matplotlib

Visualization with VisualDL




Overview

 

There are five key aspects to optimizing the training process:

1. Compute the classification accuracy to observe how well the model is training.

The cross-entropy loss can only serve as the optimization objective; it cannot directly and accurately measure how well the model is trained. Accuracy measures training effectiveness directly, but because it is discrete it is unsuitable as a loss function for optimizing a neural network.

2. Inspect the training process to identify potential problems.

If the model's loss or evaluation metrics behave abnormally, we usually print each layer's inputs and outputs to locate the problem, analyzing each layer's contents to find the cause of the error.

3. Add validation or test sets to evaluate the model more reliably.

Ideally, a trained model has high accuracy on both the training set and the validation set. If the accuracy on the training set is lower than on the validation set, the network has not been trained enough; if the accuracy on the training set is noticeably higher than on the validation set, overfitting may have occurred. Overfitting can be addressed by adding a regularization term to the optimization objective.

4. Add regularization to avoid overfitting.

PaddlePaddle supports adding a regularization term over the parameters as a whole, which is the usual practice. It also supports adding a regularization term to a single layer or part of the network, for finer-grained control over parameter training.

5. Visualize and analyze.

Besides printing values or plotting with the matplotlib library, users can turn to VisualDL, PaddlePaddle's more specialized visualization tool, which provides convenient methods for visual analysis.

 

 


Computing the model's classification accuracy

Accuracy is an intuitive metric for a classification model, but because it is discrete it is not suitable as a loss function to optimize. In general, a model with lower cross-entropy loss also has higher classification accuracy. Classification accuracy lets us fairly compare the merits of two loss functions, e.g., the comparison of mean squared error and cross-entropy in the "Handwritten Digit Recognition: Loss Functions" chapter.
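
To make the metric concrete, here is a minimal framework-independent sketch; the arrays are made-up illustration values, not outputs of this article's model:

import numpy as np

# Accuracy = fraction of samples whose highest-probability class
# matches the ground-truth label
predict = np.array([[0.1, 0.7, 0.2],   # argmax -> class 1
                    [0.8, 0.1, 0.1],   # argmax -> class 0
                    [0.3, 0.3, 0.4]])  # argmax -> class 2
label = np.array([1, 0, 1])
acc = np.mean(np.argmax(predict, axis=1) == label)
print(acc)  # 0.666...; the metric moves in steps of 1/N, hence "discrete"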

PaddlePaddle provides an API for computing classification accuracy: fluid.layers.accuracy. Its input argument input takes the predicted classification result predict, and its label argument takes the ground-truth labels.

In the code below, we compute the classification accuracy inside the model's forward function and print each batch's classification accuracy during training.

 

# Load the required libraries
import os
import random
import paddle
import paddle.fluid as fluid
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, Linear
import numpy as np
from PIL import Image
import gzip
import json

# Define the dataset reader
def load_data(mode='train'):
    # Read the data file
    datafile = './work/mnist.json.gz'
    print('loading mnist dataset from {} ......'.format(datafile))
    data = json.load(gzip.open(datafile))
    # Unpack the training, validation and test sets
    train_set, val_set, eval_set = data
    # Dataset parameters: image height IMG_ROWS, image width IMG_COLS
    IMG_ROWS = 28
    IMG_COLS = 28
    # Choose the training, validation or test set according to mode
    if mode == 'train':
        imgs = train_set[0]
        labels = train_set[1]
    elif mode == 'valid':
        imgs = val_set[0]
        labels = val_set[1]
    elif mode == 'eval':
        imgs = eval_set[0]
        labels = eval_set[1]
    # Total number of images
    imgs_length = len(imgs)
    # Check that the number of images matches the number of labels
    assert len(imgs) == len(labels), \
        "length of train_imgs({}) should be the same as train_labels({})".format(
            len(imgs), len(labels))
    index_list = list(range(imgs_length))
    # Batch size used when reading the data
    BATCHSIZE = 100

    # Define the data generator
    def data_generator():
        # In training mode, shuffle the training data
        if mode == 'train':
            random.shuffle(index_list)
        imgs_list = []
        labels_list = []
        # Read samples by index
        for i in index_list:
            # Read an image and its label, converting shape and dtype
            img = np.reshape(imgs[i], [1, IMG_ROWS, IMG_COLS]).astype('float32')
            label = np.reshape(labels[i], [1]).astype('int64')
            imgs_list.append(img)
            labels_list.append(label)
            # Once the cache reaches the batch size, yield one batch
            if len(imgs_list) == BATCHSIZE:
                yield np.array(imgs_list), np.array(labels_list)
                # Clear the cache lists
                imgs_list = []
                labels_list = []
        # If fewer than BATCHSIZE samples remain, they form one final
        # mini-batch of size len(imgs_list)
        if len(imgs_list) > 0:
            yield np.array(imgs_list), np.array(labels_list)
    return data_generator

# Define the model structure
class MNIST(fluid.dygraph.Layer):
    def __init__(self):
        super(MNIST, self).__init__()
        # Convolution layer with relu activation
        self.conv1 = Conv2D(num_channels=1, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')
        # Max-pooling layer, kernel size 2, stride 2
        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        # Convolution layer with relu activation
        self.conv2 = Conv2D(num_channels=20, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')
        # Max-pooling layer, kernel size 2, stride 2
        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        # Fully connected layer with 10 output nodes
        self.fc = Linear(input_dim=980, output_dim=10, act='softmax')

    # Define the network's forward pass
    def forward(self, inputs, label):
        x = self.conv1(inputs)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = fluid.layers.reshape(x, [x.shape[0], 980])
        x = self.fc(x)
        # Also compute the classification accuracy
        if label is not None:
            acc = fluid.layers.accuracy(input=x, label=label)
            return x, acc
        else:
            return x

# Call the data-loading function
train_loader = load_data('train')

# On a GPU machine, set use_gpu to True
use_gpu = False
place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()

with fluid.dygraph.guard(place):
    model = MNIST()
    model.train()
    # Four optimizer choices; try each one to compare results
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01, parameter_list=model.parameters())
    #optimizer = fluid.optimizer.MomentumOptimizer(learning_rate=0.01, momentum=0.9, parameter_list=model.parameters())
    #optimizer = fluid.optimizer.AdagradOptimizer(learning_rate=0.01, parameter_list=model.parameters())
    #optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.01, parameter_list=model.parameters())

    EPOCH_NUM = 5
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # Prepare the data
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)
            # Forward pass: get both the model output and the accuracy
            predict, acc = model(image, label)
            # Compute the loss, averaged over the batch
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)
            # Print the loss every 200 batches
            if batch_id % 200 == 0:
                print("epoch: {}, batch: {}, loss is: {}, acc is {}".format(
                    epoch_id, batch_id, avg_loss.numpy(), acc.numpy()))
            # Backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()
    # Save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')

 

 

 


Inspecting the training process to identify potential problems

PaddlePaddle's dynamic graph mode makes it easy to inspect and debug how training executes. In the network's forward function we can print the sizes of each layer's inputs and outputs as well as each layer's parameters. Examining this information not only gives a better understanding of how training runs, but can also expose potential problems or suggest directions for further optimization.

In the program below, the check_shape flag controls whether to print the shapes, to verify that the network structure is correct; the check_content flag controls whether to print the actual values, to verify that the data distribution is reasonable. If, for example, part of an intermediate layer's output stays at zero throughout training, that part of the network is not being used effectively and its design likely has a problem.

 

# Define the model structure
class MNIST(fluid.dygraph.Layer):
    def __init__(self):
        super(MNIST, self).__init__()
        # Convolution layer with relu activation
        self.conv1 = Conv2D(num_channels=1, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')
        # Max-pooling layer, kernel size 2, stride 2
        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        # Convolution layer with relu activation
        self.conv2 = Conv2D(num_channels=20, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')
        # Max-pooling layer, kernel size 2, stride 2
        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        # Fully connected layer with 10 output nodes
        self.fc = Linear(input_dim=980, output_dim=10, act='softmax')

    # Optionally print the size and content of each layer's input and output;
    # the check_shape and check_content flags decide what gets printed
    def forward(self, inputs, label=None, check_shape=False, check_content=False):
        # Give each layer's output its own name to simplify debugging
        outputs1 = self.conv1(inputs)
        outputs2 = self.pool1(outputs1)
        outputs3 = self.conv2(outputs2)
        outputs4 = self.pool2(outputs3)
        _outputs4 = fluid.layers.reshape(outputs4, [outputs4.shape[0], -1])
        outputs5 = self.fc(_outputs4)

        # Print each layer's hyperparameters and output shapes to verify
        # that the network structure is configured correctly
        if check_shape:
            # Hyperparameters of each layer: convolution kernel size, stride
            # and padding, and pooling kernel size
            print("\n########## print network layer's superparams ##############")
            print("conv1-- kernel_size:{}, padding:{}, stride:{}".format(self.conv1.weight.shape, self.conv1._padding, self.conv1._stride))
            print("conv2-- kernel_size:{}, padding:{}, stride:{}".format(self.conv2.weight.shape, self.conv2._padding, self.conv2._stride))
            print("pool1-- pool_type:{}, pool_size:{}, pool_stride:{}".format(self.pool1._pool_type, self.pool1._pool_size, self.pool1._pool_stride))
            print("pool2-- pool_type:{}, pool_size:{}, pool_stride:{}".format(self.pool2._pool_type, self.pool2._pool_size, self.pool2._pool_stride))
            print("fc-- weight_size:{}, bias_size:{}, activation:{}".format(self.fc.weight.shape, self.fc.bias.shape, self.fc._act))
            # Output shape of each layer
            print("\n########## print shape of features of every layer ###############")
            print("inputs_shape: {}".format(inputs.shape))
            print("outputs1_shape: {}".format(outputs1.shape))
            print("outputs2_shape: {}".format(outputs2.shape))
            print("outputs3_shape: {}".format(outputs3.shape))
            print("outputs4_shape: {}".format(outputs4.shape))
            print("outputs5_shape: {}".format(outputs5.shape))

        # Print parameters and output values during training, for debugging
        if check_content:
            # Print the convolution kernels; there are many weights, so only part is printed
            print("\n########## print convolution layer's kernel ###############")
            print("conv1 params -- kernel weights:", self.conv1.weight[0][0])
            print("conv2 params -- kernel weights:", self.conv2.weight[0][0])
            # Pick random channel indices and print those channels' outputs
            idx1 = np.random.randint(0, outputs1.shape[1])
            idx2 = np.random.randint(0, outputs3.shape[1])
            # Print the conv/pool results, only for the first image of the batch
            print("\nThe {}th channel of conv1 layer: ".format(idx1), outputs1[0][idx1])
            print("The {}th channel of conv2 layer: ".format(idx2), outputs3[0][idx2])
            print("The output of last layer:", outputs5[0], '\n')

        # If label is not None, compute and return the classification accuracy
        if label is not None:
            acc = fluid.layers.accuracy(input=outputs5, label=label)
            return outputs5, acc
        else:
            return outputs5

# On a GPU machine, set use_gpu to True
use_gpu = False
place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()

with fluid.dygraph.guard(place):
    model = MNIST()
    model.train()
    # Four optimizer choices; try each one to compare results
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01, parameter_list=model.parameters())
    #optimizer = fluid.optimizer.MomentumOptimizer(learning_rate=0.01, momentum=0.9, parameter_list=model.parameters())
    #optimizer = fluid.optimizer.AdagradOptimizer(learning_rate=0.01, parameter_list=model.parameters())
    #optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.01, parameter_list=model.parameters())

    EPOCH_NUM = 1
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # Prepare the data
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)
            # Forward pass: get both the model output and the accuracy
            if batch_id == 0 and epoch_id == 0:
                # Print the model parameters and each layer's output shapes
                predict, acc = model(image, label, check_shape=True, check_content=False)
            elif batch_id == 401:
                # Print the model parameters and each layer's output values
                predict, acc = model(image, label, check_shape=False, check_content=True)
            else:
                predict, acc = model(image, label)
            # Compute the loss, averaged over the batch
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)
            # Print the loss every 200 batches
            if batch_id % 200 == 0:
                print("epoch: {}, batch: {}, loss is: {}, acc is {}".format(
                    epoch_id, batch_id, avg_loss.numpy(), acc.numpy()))
            # Backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()
    # Save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')
    print("Model has been saved.")

########## print network layer's superparams ##############
conv1-- kernel_size:[20, 1, 5, 5], padding:[2, 2], stride:[1, 1]
conv2-- kernel_size:[20, 20, 5, 5], padding:[2, 2], stride:[1, 1]
pool1-- pool_type:max, pool_size:[2, 2], pool_stride:[2, 2]
pool2-- pool_type:max, pool_size:[2, 2], pool_stride:[2, 2]
fc-- weight_size:[980, 10], bias_size:[10], activation:softmax

########## print shape of features of every layer ###############
inputs_shape: [100, 1, 28, 28]
outputs1_shape: [100, 20, 28, 28]
outputs2_shape: [100, 20, 14, 14]
outputs3_shape: [100, 20, 14, 14]
outputs4_shape: [100, 20, 7, 7]
outputs5_shape: [100, 10]
epoch: 0, batch: 0, loss is: [3.2442489], acc is [0.13]
epoch: 0, batch: 200, loss is: [0.36618954], acc is [0.88]
epoch: 0, batch: 400, loss is: [0.3081761], acc is [0.92]

########## print convolution layer's kernel ###############
conv1 params -- kernel weights: name tmp_9640, dtype: VarType.FP32, shape: [5, 5], data: [0.260075 -0.00745626 -0.0697677 ... -0.106764 0.797937]
conv2 params -- kernel weights: name tmp_9642, dtype: VarType.FP32, shape: [5, 5], data: [0.0932817 0.000386959 ... 0.0852218 -0.00742791]
The 15th channel of conv1 layer: name tmp_9644, dtype: VarType.FP32, shape: [28, 28], data: [0.00844627 0.00844627 ... (28x28 feature-map values elided)]
The 10th channel of conv2 layer: name tmp_9646, dtype: VarType.FP32, shape: [14, 14], data: [0.0214605 0.0225076 ... (14x14 feature-map values elided)]
The output of last layer: name tmp_9647, dtype: VarType.FP32, shape: [10], data: [0.000120948 0.97931 0.00568777 0.000927439 0.000133486 0.000412126 0.00337982 0.000712543 0.0090171 0.0002989]

Model has been saved.

 

 

 


Adding validation or test sets to better evaluate the model

During training, we observe the model's loss on the training samples steadily decreasing. But does that mean the model will remain effective in its future application? To verify the model's effectiveness, the samples are usually split into three parts: a training set, a validation set, and a test set.


  • Training set: used to fit the model's parameters, i.e., the main work performed during training.
  • Validation set: used to choose the model's hyperparameters, such as adjustments to the network structure or the weight of the regularization term.
  • Test set: used to simulate the model's real-world performance after deployment. Because the test set takes no part in model optimization or parameter training, its samples are completely unseen by the model. When the validation data is not used to tune the network structure or hyperparameters, results on the validation data and the test data are similar: both reflect the model's true performance. The short sketch after this list shows how the three splits map onto the load_data reader defined earlier.
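
A minimal sketch, assuming the load_data reader defined earlier in this article (its mode strings are 'train', 'valid', and 'eval'):

# One reader per split; each returns a generator of (images, labels) batches
train_loader = load_data('train')   # fit the parameters
valid_loader = load_data('valid')   # choose hyperparameters
eval_loader = load_data('eval')     # final, untouched measurement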

The program below loads the model parameters saved in the previous training step, reads the test set, and measures the model's performance on it.

 

with fluid.dygraph.guard():
    print('start evaluation .......')
    # Load the model parameters
    model = MNIST()
    model_state_dict, _ = fluid.load_dygraph('mnist')
    model.load_dict(model_state_dict)

    model.eval()
    eval_loader = load_data('eval')

    acc_set = []
    avg_loss_set = []
    for batch_id, data in enumerate(eval_loader()):
        x_data, y_data = data
        img = fluid.dygraph.to_variable(x_data)
        label = fluid.dygraph.to_variable(y_data)
        prediction, acc = model(img, label)
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
        avg_loss = fluid.layers.mean(loss)
        acc_set.append(float(acc.numpy()))
        avg_loss_set.append(float(avg_loss.numpy()))

    # Average loss and accuracy over all batches
    acc_val_mean = np.array(acc_set).mean()
    avg_loss_val_mean = np.array(avg_loss_set).mean()
    print('loss={}, acc={}'.format(avg_loss_val_mean, acc_val_mean))

start evaluation .......
loading mnist dataset from ./work/mnist.json.gz ......
loss=0.25758751126006246, acc=0.9276000016927719

Judging from these results, the model still reaches roughly 93% accuracy on the test set, confirming that it has genuine predictive power.

 

 

 


Adding regularization to avoid overfitting


The overfitting phenomenon

For complex tasks with limited samples that nevertheless call for a powerful model, overfitting readily appears: the loss on the training set is small, while the loss on the validation or test set is large, as shown in Figure 2.


Figure 2: Overfitting: the training error keeps falling, while the test error first falls and then rises


 

Conversely, if the model's loss is large on both the training set and the test set, we call it underfitting. Overfitting means the model is overly sensitive: it has learned idiosyncrasies of the training data that are not genuine, generalizable patterns (patterns that carry over to the test set). Underfitting means the model is not yet powerful enough: it has not even fit the known training samples well, let alone the test samples. Underfitting is easy to observe and to fix: whenever the training loss is unsatisfactory, just keep moving to a stronger model. In practice, therefore, overfitting is the problem we really need to handle.

 

 


Causes of overfitting

Overfitting arises when the model is overly sensitive while the training data is too scarce or too noisy.

As shown in Figure 3, the ideal regression model is a gently sloping parabola. The underfitted model fits only a straight line and clearly misses the true pattern, while the overfitted model fits a curve with many inflection points: it is plainly over-sensitive and also fails to express the true pattern.


Figure 3: A regression model in overfitted, ideal, and underfitted states


 

As shown in Figure 4, the ideal classification model has a semicircular decision boundary. The underfitted model uses a straight line as the boundary and clearly misses the true one, while the overfitted model produces a highly contorted boundary: although it classifies every training sample correctly, the concessions it makes to a few outlying samples are most likely not the true pattern.


Figure 4: A classification model in underfitted, ideal, and overfitted states


 


Understanding and preventing overfitting

To better understand how overfitting arises, consider the analogy of a detective identifying a culprit, as shown in Figure 5.


Figure 5: The detective-and-culprit analogy for model hypotheses


 

In this analogy, suppose the detective can also get it wrong. Analysis suggests two possible causes:


  1. Case 1: the evidence about the culprit is wrong; hunting a culprit with false evidence is a wild-goose chase.

  2. Case 2: the search is too broad while the evidence is too scarce, so too many candidates (suspects) fit the criteria and the culprit cannot be pinned down.

The detective then has two remedies: narrow the search (for example, assume the crime must have been committed by an acquaintance), or gather more evidence.

Carried over to deep learning, suppose the model can also get it wrong. Analysis suggests two possible causes:


  1. Case 1: the training data contains noise, and the model learns the noise rather than the true pattern.

  2. Case 2: a powerful model (with a large representation space) is paired with too little training data, so too many candidate hypotheses perform well on the training data, and the model locks onto a "spuriously correct" one.

Case 1 is addressed by cleaning and correcting the data. Case 2 is addressed either by limiting the model's representational capacity or by collecting more training data.

Yet "clean the errors out of the training data" or "collect more training data" is often a correct but useless platitude: at any time we would like more and higher-quality data. In a real project, the faster, cheaper, and more controllable way to curb overfitting is to limit the model's representational capacity.

 

 


Regularization terms

To keep the model from overfitting when the sample size cannot be increased, the only option is to reduce the model's complexity. This can be done by limiting the number of parameters or their possible values (keeping parameter values small).

Concretely, a penalty on the scale of the parameters is added to the model's optimization objective (the loss). The more parameters there are, or the larger their values, the larger this penalty becomes. By adjusting the penalty's weight coefficient, the model can strike a balance between "minimizing the training loss" and "preserving generalization" (generalization meaning the model remains effective on unseen samples). The regularization term does increase the model's loss on the training set.
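
The objective with an L2 penalty can be sketched in a few lines; this is an illustration of the idea rather than the framework's implementation, and the coefficient 0.1 is an arbitrary example value:

import numpy as np

def l2_regularized_loss(data_loss, params, coeff=0.1):
    # Total objective = data loss + coeff * sum of squared parameter values.
    # More parameters, or larger ones, mean a larger penalty, pushing the
    # optimizer toward smaller weights and a simpler model.
    penalty = sum(np.sum(w ** 2) for w in params)
    return data_loss + coeff * penalty

# Toy usage: two small "weight" arrays and a pretend data loss of 0.5
weights = [np.array([0.5, -1.0]), np.array([[0.2], [0.3]])]
print(l2_regularized_loss(0.5, weights))  # 0.5 + 0.1 * (1.25 + 0.13) = 0.638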

PaddlePaddle supports adding a uniform regularization term over all parameters, as well as regularization on specific parameters. The former is shown in the code below: simply set the regularization argument on the optimizer. The regularization_coeff argument adjusts the weight of the regularization term; the larger the weight, the heavier the penalty on model complexity.

 

with fluid.dygraph.guard():
    model = MNIST()
    model.train()
    # Any of the optimizers can take a regularization term to avoid overfitting;
    # the regularization_coeff argument adjusts the term's weight
    #optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01, regularization=fluid.regularizer.L2Decay(regularization_coeff=0.1), parameter_list=model.parameters())
    optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.01, regularization=fluid.regularizer.L2Decay(regularization_coeff=0.1), parameter_list=model.parameters())

    EPOCH_NUM = 10
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # Prepare the data
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)
            # Forward pass: get both the model output and the accuracy
            predict, acc = model(image, label)
            # Compute the loss, averaged over the batch
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)
            # Print the loss every 100 batches
            if batch_id % 100 == 0:
                print("epoch: {}, batch: {}, loss is: {}, acc is {}".format(
                    epoch_id, batch_id, avg_loss.numpy(), acc.numpy()))
            # Backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()
    # Save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')

epoch: 0, batch: 0, loss is: [2.610871], acc is [0.08]
epoch: 0, batch: 100, loss is: [0.36060372], acc is [0.91]
epoch: 0, batch: 200, loss is: [0.26544896], acc is [0.92]
epoch: 0, batch: 300, loss is: [0.32515743], acc is [0.93]
epoch: 0, batch: 400, loss is: [0.35714394], acc is [0.92]
epoch: 1, batch: 0, loss is: [0.40216166], acc is [0.86]
epoch: 1, batch: 100, loss is: [0.28893617], acc is [0.93]
epoch: 1, batch: 200, loss is: [0.42620686], acc is [0.91]
epoch: 1, batch: 300, loss is: [0.23569341], acc is [0.96]
epoch: 1, batch: 400, loss is: [0.45707798], acc is [0.91]
epoch: 2, batch: 0, loss is: [0.372382], acc is [0.92]
epoch: 2, batch: 100, loss is: [0.28487045], acc is [0.94]
epoch: 2, batch: 200, loss is: [0.43068737], acc is [0.84]
epoch: 2, batch: 300, loss is: [0.39103115], acc is [0.86]
epoch: 2, batch: 400, loss is: [0.5428891], acc is [0.87]
epoch: 3, batch: 0, loss is: [0.43450108], acc is [0.88]
epoch: 3, batch: 100, loss is: [0.3285971], acc is [0.93]
epoch: 3, batch: 200, loss is: [0.2657451], acc is [0.96]
epoch: 3, batch: 300, loss is: [0.26086193], acc is [0.94]
epoch: 3, batch: 400, loss is: [0.3242475], acc is [0.94]
epoch: 4, batch: 0, loss is: [0.3508662], acc is [0.9]
epoch: 4, batch: 100, loss is: [0.33543622], acc is [0.91]
epoch: 4, batch: 200, loss is: [0.27296743], acc is [0.93]
epoch: 4, batch: 300, loss is: [0.33019447], acc is [0.92]
epoch: 4, batch: 400, loss is: [0.3422754], acc is [0.93]
epoch: 5, batch: 0, loss is: [0.35054174], acc is [0.89]
epoch: 5, batch: 100, loss is: [0.3551485], acc is [0.92]
epoch: 5, batch: 200, loss is: [0.26521632], acc is [0.96]
epoch: 5, batch: 300, loss is: [0.25008282], acc is [0.94]
epoch: 5, batch: 400, loss is: [0.32645434], acc is [0.9]
epoch: 6, batch: 0, loss is: [0.38871726], acc is [0.9]
epoch: 6, batch: 100, loss is: [0.41576093], acc is [0.91]
epoch: 6, batch: 200, loss is: [0.27694413], acc is [0.95]
epoch: 6, batch: 300, loss is: [0.43215176], acc is [0.89]
epoch: 6, batch: 400, loss is: [0.26467267], acc is [0.93]
epoch: 7, batch: 0, loss is: [0.37565476], acc is [0.91]
epoch: 7, batch: 100, loss is: [0.31220886], acc is [0.94]
epoch: 7, batch: 200, loss is: [0.335222], acc is [0.88]
epoch: 7, batch: 300, loss is: [0.37132093], acc is [0.93]
epoch: 7, batch: 400, loss is: [0.35346523], acc is [0.92]
epoch: 8, batch: 0, loss is: [0.29914328], acc is [0.91]
epoch: 8, batch: 100, loss is: [0.34777313], acc is [0.9]
epoch: 8, batch: 200, loss is: [0.305633], acc is [0.93]
epoch: 8, batch: 300, loss is: [0.3560859], acc is [0.88]
epoch: 8, batch: 400, loss is: [0.38310045], acc is [0.89]
epoch: 9, batch: 0, loss is: [0.28080454], acc is [0.94]
epoch: 9, batch: 100, loss is: [0.3200447], acc is [0.94]
epoch: 9, batch: 200, loss is: [0.39912143], acc is [0.9]
epoch: 9, batch: 300, loss is: [0.28947696], acc is [0.93]
epoch: 9, batch: 400, loss is: [0.4311652], acc is [0.91]
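
As noted in the overview, the framework can also regularize only part of the network. A hedged sketch of one way to do this with the fluid dygraph API used throughout this article: attach a regularizer to a single layer's parameters via fluid.ParamAttr (the coefficient 0.1 is an arbitrary example value):

import paddle.fluid as fluid
from paddle.fluid.dygraph.nn import Linear

# Apply L2 regularization to just this layer's weight, leaving the rest of
# the network unregularized; a per-parameter regularizer set this way takes
# precedence over the optimizer-level regularization argument for that parameter.
fc = Linear(input_dim=980, output_dim=10, act='softmax',
            param_attr=fluid.ParamAttr(
                regularizer=fluid.regularizer.L2Decay(regularization_coeff=0.1)))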

 

 


Visualization and analysis

While training a model, we often need to watch its evaluation metrics and analyze the optimization process to make sure training is effective. Two tools are available: the Matplotlib library and VisualDL.


  • Matplotlib: the most widely used 2D plotting library in Python, with a plotting interface modeled entirely on MATLAB's function style; producing plots with the lightweight plt interface (matplotlib.pyplot) is very simple.
  • VisualDL: for a more specialized plotting tool, try VisualDL, PaddlePaddle's visualization tool. VisualDL effectively displays the computation graph, trends of various metrics, and data information while PaddlePaddle runs.

 


Plotting the training loss curve with Matplotlib

Use the batch number during training as the X coordinate and that batch's training loss as the Y coordinate.


  1. Before training starts, declare two list variables to store the corresponding batch numbers (iters=[]) and training losses (losses=[]).

iters=[]
losses=[]
for epoch_id in range(EPOCH_NUM):"""start to training"""

  2. As training proceeds, fill the iters and losses lists.

iters=[]
losses=[]
for epoch_id in range(EPOCH_NUM):
    for batch_id, data in enumerate(train_loader()):
        predict, acc = model(image, label)
        loss = fluid.layers.cross_entropy(predict, label)
        avg_loss = fluid.layers.mean(loss)
        # Accumulate the iteration count and the corresponding loss
        iters.append(batch_id + epoch_id * len(list(train_loader())))
        losses.append(avg_loss.numpy())

  3. After training ends, configure the plot's horizontal and vertical axes for the two data lists.

plt.xlabel("iter", fOntsize=14),plt.ylabel("loss", fOntsize=14)

  4. Finally, call plt.plot() to complete the figure.

plt.plot(iters, losses, color='red', label='train loss')

The complete code is as follows:

 

# Import the matplotlib library
import matplotlib.pyplot as plt

with fluid.dygraph.guard(place):
    model = MNIST()
    model.train()
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01, parameter_list=model.parameters())

    EPOCH_NUM = 10
    iter = 0
    iters = []
    losses = []
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # Prepare the data
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)
            # Forward pass: get both the model output and the accuracy
            predict, acc = model(image, label)
            # Compute the loss, averaged over the batch
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)
            # Every 100 batches, print the loss and record a data point
            if batch_id % 100 == 0:
                print("epoch: {}, batch: {}, loss is: {}, acc is {}".format(
                    epoch_id, batch_id, avg_loss.numpy(), acc.numpy()))
                iters.append(iter)
                losses.append(avg_loss.numpy())
                iter = iter + 100
            # Backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()
    # Save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')


epoch: 0, batch: 0, loss is: [2.7310417], acc is [0.08]
epoch: 0, batch: 100, loss is: [0.69016415], acc is [0.85]
epoch: 0, batch: 200, loss is: [0.73119736], acc is [0.78]
epoch: 0, batch: 300, loss is: [0.4755017], acc is [0.86]
epoch: 0, batch: 400, loss is: [0.25069112], acc is [0.94]
epoch: 1, batch: 0, loss is: [0.17892611], acc is [0.95]
epoch: 1, batch: 100, loss is: [0.18305962], acc is [0.95]
epoch: 1, batch: 200, loss is: [0.1657131], acc is [0.96]
epoch: 1, batch: 300, loss is: [0.16994277], acc is [0.94]
epoch: 1, batch: 400, loss is: [0.31380218], acc is [0.89]
epoch: 2, batch: 0, loss is: [0.3218058], acc is [0.9]
epoch: 2, batch: 100, loss is: [0.14207897], acc is [0.95]
epoch: 2, batch: 200, loss is: [0.10880348], acc is [0.97]
epoch: 2, batch: 300, loss is: [0.18627769], acc is [0.96]
epoch: 2, batch: 400, loss is: [0.26449117], acc is [0.94]
epoch: 3, batch: 0, loss is: [0.1475856], acc is [0.96]
epoch: 3, batch: 100, loss is: [0.17161469], acc is [0.95]
epoch: 3, batch: 200, loss is: [0.1761289], acc is [0.97]
epoch: 3, batch: 300, loss is: [0.19234805], acc is [0.94]
epoch: 3, batch: 400, loss is: [0.1607459], acc is [0.94]
epoch: 4, batch: 0, loss is: [0.12517354], acc is [0.96]
epoch: 4, batch: 100, loss is: [0.05750824], acc is [0.99]
epoch: 4, batch: 200, loss is: [0.14779979], acc is [0.98]
epoch: 4, batch: 300, loss is: [0.09626144], acc is [0.96]
epoch: 4, batch: 400, loss is: [0.06560835], acc is [0.98]
epoch: 5, batch: 0, loss is: [0.05752574], acc is [0.98]
epoch: 5, batch: 100, loss is: [0.05327866], acc is [0.98]
epoch: 5, batch: 200, loss is: [0.1519103], acc is [0.96]
epoch: 5, batch: 300, loss is: [0.07533882], acc is [0.98]
epoch: 5, batch: 400, loss is: [0.08351453], acc is [0.97]
epoch: 6, batch: 0, loss is: [0.09088901], acc is [0.98]
epoch: 6, batch: 100, loss is: [0.07256764], acc is [0.97]
epoch: 6, batch: 200, loss is: [0.1224548], acc is [0.94]
epoch: 6, batch: 300, loss is: [0.05678594], acc is [0.99]
epoch: 6, batch: 400, loss is: [0.06976603], acc is [0.96]
epoch: 7, batch: 0, loss is: [0.05674415], acc is [0.99]
epoch: 7, batch: 100, loss is: [0.07299229], acc is [0.97]
epoch: 7, batch: 200, loss is: [0.05643737], acc is [0.98]
epoch: 7, batch: 300, loss is: [0.11586691], acc is [0.96]
epoch: 7, batch: 400, loss is: [0.06251612], acc is [0.97]
epoch: 8, batch: 0, loss is: [0.06576212], acc is [0.98]
epoch: 8, batch: 100, loss is: [0.09684012], acc is [0.96]
epoch: 8, batch: 200, loss is: [0.06532772], acc is [0.97]
epoch: 8, batch: 300, loss is: [0.14739688], acc is [0.96]
epoch: 8, batch: 400, loss is: [0.05176679], acc is [0.98]
epoch: 9, batch: 0, loss is: [0.07476182], acc is [0.97]
epoch: 9, batch: 100, loss is: [0.05133891], acc is [0.98]
epoch: 9, batch: 200, loss is: [0.14491034], acc is [0.95]
epoch: 9, batch: 300, loss is: [0.16446055], acc is [0.97]
epoch: 9, batch: 400, loss is: [0.07250261], acc is [0.99]

 

# Plot the loss curve over the course of training
plt.figure()
plt.title("train loss", fontsize=24)
plt.xlabel("iter", fontsize=14)
plt.ylabel("loss", fontsize=14)
plt.plot(iters, losses, color='red', label='train loss')
plt.grid()
plt.show()

 

 


Visualization with VisualDL

PaddlePaddle's visualization tool presents training-parameter trends, model structure, data samples, high-dimensional data distributions, and more in rich charts. It helps users understand the training process and model structure clearly and intuitively, enabling efficient model tuning. The code is as follows.



Note:

This example cannot be demonstrated on AI Studio; please try it on a locally installed PaddlePaddle.




  • Step 1: import the VisualDL library and define where the plotting data will be stored (used in step 3); in this example the path is "log".

from visualdl import LogWriter
log_writer = LogWriter("./log")

  • Step 2: insert logging statements into the training process. After every 100 batches of training, store the current accuracy and loss as new data points (pairs mapping iter to the value) in the location set in step 1. The variable iter records the number of batches already trained and serves as the X coordinate.

log_writer.add_scalar(tag = 'acc', step = iter, value = avg_acc.numpy())
log_writer.add_scalar(tag = 'loss', step = iter, value = avg_loss.numpy())
iter = iter + 100

 

# Install VisualDL
!pip install --upgrade --pre visualdl

Looking in indexes: https://mirror.baidu.com/pypi/simple/
Collecting visualdl
  Downloading https://mirror.baidu.com/pypi/packages/f0/3c/0f59d4fc4df4651b5ceff6685074d1e83b87f8870e426ea2bbb86ad40661/visualdl-2.0.0-py3-none-any.whl (3.0MB)
     |████████████████████████████████| 3.0MB 11.6MB/s eta 0:00:01
Requirement already satisfied, skipping upgrade: flask>=1.1.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (1.1.1)
Requirement already satisfied, skipping upgrade: Pillow>=7.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (7.1.2)
Requirement already satisfied, skipping upgrade: pre-commit in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (1.21.0)
Requirement already satisfied, skipping upgrade: flake8>=3.7.9 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (3.8.2)
Requirement already satisfied, skipping upgrade: opencv-python in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (4.1.1.26)
Requirement already satisfied, skipping upgrade: requests in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (2.22.0)
Requirement already satisfied, skipping upgrade: six>=1.14.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (1.15.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.11.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (3.12.2)
Requirement already satisfied, skipping upgrade: Flask-Babel>=1.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (1.0.0)
Requirement already satisfied, skipping upgrade: numpy in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl) (1.16.4)
Requirement already satisfied, skipping upgrade: Babel>=2.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel>=1.0.0->visualdl) (2.8.0)
Requirement already satisfied, skipping upgrade: pytz in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel>=1.0.0->visualdl) (2019.3)
Requirement already satisfied, skipping upgrade: MarkupSafe>=0.23 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Jinja2>=2.10.1->flask>=1.1.1->visualdl) (1.1.1)
Requirement already satisfied, skipping upgrade: zipp>=0.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from importlib-metadata; python_version <"3.8"->pre-commit->visualdl) (0.6.0)
Requirement already satisfied, skipping upgrade: more-itertools in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from zipp>=0.5->importlib-metadata; python_version <"3.8"->pre-commit->visualdl) (7.2.0)
Installing collected packages: visualdl
  Found existing installation: visualdl 2.0.0b8
    Uninstalling visualdl-2.0.0b8:
      Successfully uninstalled visualdl-2.0.0b8
Successfully installed visualdl-2.0.0

 

# Import the VisualDL library and set where the plotting data is saved
from visualdl import LogWriter
log_writer = LogWriter(logdir="./log")

with fluid.dygraph.guard():
    model = MNIST()
    model.train()
    optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01, parameter_list=model.parameters())

    EPOCH_NUM = 10
    iter = 0
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # Prepare the data
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)
            # Forward pass: get both the model output and the accuracy
            predict, avg_acc = model(image, label)
            # Compute the loss, averaged over the batch
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)
            # Every 100 batches, print the loss and log data points for VisualDL
            if batch_id % 100 == 0:
                print("epoch: {}, batch: {}, loss is: {}, acc is {}".format(
                    epoch_id, batch_id, avg_loss.numpy(), avg_acc.numpy()))
                log_writer.add_scalar(tag='acc', step=iter, value=avg_acc.numpy())
                log_writer.add_scalar(tag='loss', step=iter, value=avg_loss.numpy())
                iter = iter + 100
            # Backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()
    # Save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')

epoch: 0, batch: 0, loss is: [2.6038013], acc is [0.11]
epoch: 0, batch: 100, loss is: [0.64737916], acc is [0.84]
epoch: 0, batch: 200, loss is: [0.47225732], acc is [0.86]
epoch: 0, batch: 300, loss is: [0.30074444], acc is [0.93]
epoch: 0, batch: 400, loss is: [0.3165561], acc is [0.92]
epoch: 1, batch: 0, loss is: [0.32988814], acc is [0.92]
epoch: 1, batch: 100, loss is: [0.25466824], acc is [0.94]
epoch: 1, batch: 200, loss is: [0.2939149], acc is [0.93]
epoch: 1, batch: 300, loss is: [0.11531588], acc is [0.96]
epoch: 1, batch: 400, loss is: [0.18765363], acc is [0.95]
epoch: 2, batch: 0, loss is: [0.25530568], acc is [0.92]
epoch: 2, batch: 100, loss is: [0.13840751], acc is [0.95]
epoch: 2, batch: 200, loss is: [0.18936181], acc is [0.93]
epoch: 2, batch: 300, loss is: [0.23487468], acc is [0.91]
epoch: 2, batch: 400, loss is: [0.14501975], acc is [0.97]
epoch: 3, batch: 0, loss is: [0.10513314], acc is [0.98]
epoch: 3, batch: 100, loss is: [0.14437269], acc is [0.96]
epoch: 3, batch: 200, loss is: [0.10430384], acc is [0.97]
epoch: 3, batch: 300, loss is: [0.09829547], acc is [0.97]
epoch: 3, batch: 400, loss is: [0.12238665], acc is [0.96]
epoch: 4, batch: 0, loss is: [0.09102558], acc is [0.96]
epoch: 4, batch: 100, loss is: [0.09773639], acc is [0.96]
epoch: 4, batch: 200, loss is: [0.13471647], acc is [0.94]
epoch: 4, batch: 300, loss is: [0.16359945], acc is [0.94]
epoch: 4, batch: 400, loss is: [0.1262849], acc is [0.96]
epoch: 5, batch: 0, loss is: [0.1152655], acc is [0.96]
epoch: 5, batch: 100, loss is: [0.03675374], acc is [1.]
epoch: 5, batch: 200, loss is: [0.04133964], acc is [1.]
epoch: 5, batch: 300, loss is: [0.09179698], acc is [0.98]
epoch: 5, batch: 400, loss is: [0.1258232], acc is [0.95]
epoch: 6, batch: 0, loss is: [0.14807416], acc is [0.97]
epoch: 6, batch: 100, loss is: [0.04807493], acc is [0.99]
epoch: 6, batch: 200, loss is: [0.11792229], acc is [0.97]
epoch: 6, batch: 300, loss is: [0.14033443], acc is [0.94]
epoch: 6, batch: 400, loss is: [0.13261904], acc is [0.97]
epoch: 7, batch: 0, loss is: [0.07268089], acc is [0.98]
epoch: 7, batch: 100, loss is: [0.08069782], acc is [0.97]
epoch: 7, batch: 200, loss is: [0.09695492], acc is [0.96]
epoch: 7, batch: 300, loss is: [0.04560409], acc is [0.99]
epoch: 7, batch: 400, loss is: [0.09219052], acc is [0.97]
epoch: 8, batch: 0, loss is: [0.08077841], acc is [0.97]
epoch: 8, batch: 100, loss is: [0.05164277], acc is [0.99]
epoch: 8, batch: 200, loss is: [0.0810402], acc is [0.97]
epoch: 8, batch: 300, loss is: [0.14958261], acc is [0.97]
epoch: 8, batch: 400, loss is: [0.09248257], acc is [0.97]
epoch: 9, batch: 0, loss is: [0.05507369], acc is [0.98]
epoch: 9, batch: 100, loss is: [0.11248624], acc is [0.96]
epoch: 9, batch: 200, loss is: [0.02587705], acc is [0.99]
epoch: 9, batch: 300, loss is: [0.20676087], acc is [0.96]
epoch: 9, batch: 400, loss is: [0.08361522], acc is [0.97]

  • Step 3: launch VisualDL from the command line.

Launch VisualDL with the command "visualdl --logdir [path to the folder containing the data files]". After VisualDL starts, the command line prints a URL at which the plots can be viewed in a browser.

$ visualdl --logdir ./log --port 8080

  • Step 4: open a browser and view the plots, as shown in Figure 6.

The URL is printed after the launch command in step 3 (for example, http://127.0.0.1:8080/). Entering it in the browser's address bar and refreshing produces the page shown below. Besides the plot of the data points on the right, there is a control panel on the left for adjusting many details of the plot.


Figure 6: Example plot in VisualDL

