鸢尾花分类问题

先看看系统环境

In [1]:
import platform
platform.python_version()
Out[1]:
'3.8.16'
In [2]:
import tensorflow as tf
tf.__version__
Out[2]:
'2.7.4'

获取训练数据和测试数据

In [3]:
from tensorflow.keras import utils
train_path = utils.get_file('iris_training.csv', 'http://download.tensorflow.org/data/iris_training.csv')
test_path = utils.get_file('iris_test.csv', 'http://download.tensorflow.org/data/iris_training.csv')
(train_path, test_path)
Out[3]:
('/home/henry/.keras/datasets/iris_training.csv',
 '/home/henry/.keras/datasets/iris_test.csv')
In [4]:
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
import pandas as pd
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
train.head()
Out[4]:
SepalLength SepalWidth PetalLength PetalWidth Species
0 6.4 2.8 5.6 2.2 2
1 5.0 2.3 3.3 1.0 1
2 4.9 2.5 4.5 1.7 2
3 4.9 3.1 1.5 0.1 0
4 5.7 3.8 1.7 0.3 0
In [5]:
x_train, y_train = train, train.pop(CSV_COLUMN_NAMES[-1])
x_test, y_test = test, test.pop(CSV_COLUMN_NAMES[-1])
x_train.shape, y_train.shape
Out[5]:
((120, 4), (120,))
In [6]:
from tensorflow.keras import models, layers, activations
model = models.Sequential([
    layers.Dense(10, activation=activations.relu, input_shape=(4,)),
    layers.Dense(10, activation=activations.relu),
    layers.Dense(3, activation=activations.softmax)
])
model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param#
=================================================================
 dense (Dense)               (None, 10)                50

 dense_1 (Dense)             (None, 10)                110

 dense_2 (Dense)             (None, 3)                 33

=================================================================
Total params: 193
Trainable params: 193
Non-trainable params: 0
_________________________________________________________________
2023-10-07 22:02:54.408669: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
In [7]:
from tensorflow.keras import optimizers, losses, metrics
model.compile(
    optimizer=optimizers.Adam(),
    loss=losses.sparse_categorical_crossentropy,
    metrics=[metrics.sparse_categorical_accuracy])

history=model.fit(x_train, y_train, epochs=50, validation_data=(x_test, y_test))
Epoch 1/50
4/4 [==============================] - 1s 94ms/step - loss: 2.8094 - sparse_categorical_accuracy: 0.3500 - val_loss: 2.6716 - val_sparse_categorical_accuracy: 0.3500
Epoch 2/50
4/4 [==============================] - 0s 13ms/step - loss: 2.6019 - sparse_categorical_accuracy: 0.3500 - val_loss: 2.4785 - val_sparse_categorical_accuracy: 0.3500
Epoch 3/50
4/4 [==============================] - 0s 47ms/step - loss: 2.4182 - sparse_categorical_accuracy: 0.3500 - val_loss: 2.2932 - val_sparse_categorical_accuracy: 0.3500
Epoch 4/50
4/4 [==============================] - 0s 39ms/step - loss: 2.2268 - sparse_categorical_accuracy: 0.3500 - val_loss: 2.1191 - val_sparse_categorical_accuracy: 0.3500
Epoch 5/50
4/4 [==============================] - 0s 41ms/step - loss: 2.0620 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.9553 - val_sparse_categorical_accuracy: 0.3500
Epoch 6/50
4/4 [==============================] - 0s 40ms/step - loss: 1.9040 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.8068 - val_sparse_categorical_accuracy: 0.3500
Epoch 7/50
4/4 [==============================] - 0s 21ms/step - loss: 1.7584 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.6786 - val_sparse_categorical_accuracy: 0.3500
Epoch 8/50
4/4 [==============================] - 0s 20ms/step - loss: 1.6411 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.5766 - val_sparse_categorical_accuracy: 0.3500
Epoch 9/50
4/4 [==============================] - 0s 16ms/step - loss: 1.5473 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.4954 - val_sparse_categorical_accuracy: 0.3500
Epoch 10/50
4/4 [==============================] - 0s 21ms/step - loss: 1.4691 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.4277 - val_sparse_categorical_accuracy: 0.3500
Epoch 11/50
4/4 [==============================] - 0s 14ms/step - loss: 1.4085 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.3724 - val_sparse_categorical_accuracy: 0.3500
Epoch 12/50
4/4 [==============================] - 0s 13ms/step - loss: 1.3521 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.3285 - val_sparse_categorical_accuracy: 0.3500
Epoch 13/50
4/4 [==============================] - 0s 23ms/step - loss: 1.3163 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.2880 - val_sparse_categorical_accuracy: 0.3500
Epoch 14/50
4/4 [==============================] - 0s 28ms/step - loss: 1.2750 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.2526 - val_sparse_categorical_accuracy: 0.3500
Epoch 15/50
4/4 [==============================] - 0s 29ms/step - loss: 1.2417 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.2200 - val_sparse_categorical_accuracy: 0.3500
Epoch 16/50
4/4 [==============================] - 0s 30ms/step - loss: 1.2093 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.1909 - val_sparse_categorical_accuracy: 0.3500
Epoch 17/50
4/4 [==============================] - 0s 29ms/step - loss: 1.1799 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.1648 - val_sparse_categorical_accuracy: 0.3500
Epoch 18/50
4/4 [==============================] - 0s 28ms/step - loss: 1.1559 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.1407 - val_sparse_categorical_accuracy: 0.3500
Epoch 19/50
4/4 [==============================] - 0s 29ms/step - loss: 1.1325 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.1191 - val_sparse_categorical_accuracy: 0.3500
Epoch 20/50
4/4 [==============================] - 0s 17ms/step - loss: 1.1121 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.0998 - val_sparse_categorical_accuracy: 0.3500
Epoch 21/50
4/4 [==============================] - 0s 16ms/step - loss: 1.0942 - sparse_categorical_accuracy: 0.3500 - val_loss: 1.0827 - val_sparse_categorical_accuracy: 0.3750
Epoch 22/50
4/4 [==============================] - 0s 21ms/step - loss: 1.0783 - sparse_categorical_accuracy: 0.5000 - val_loss: 1.0673 - val_sparse_categorical_accuracy: 0.6917
Epoch 23/50
4/4 [==============================] - 0s 15ms/step - loss: 1.0622 - sparse_categorical_accuracy: 0.6917 - val_loss: 1.0540 - val_sparse_categorical_accuracy: 0.7000
Epoch 24/50
4/4 [==============================] - 0s 19ms/step - loss: 1.0497 - sparse_categorical_accuracy: 0.7000 - val_loss: 1.0420 - val_sparse_categorical_accuracy: 0.7000
Epoch 25/50
4/4 [==============================] - 0s 17ms/step - loss: 1.0381 - sparse_categorical_accuracy: 0.7000 - val_loss: 1.0312 - val_sparse_categorical_accuracy: 0.7000
Epoch 26/50
4/4 [==============================] - 0s 42ms/step - loss: 1.0276 - sparse_categorical_accuracy: 0.6917 - val_loss: 1.0216 - val_sparse_categorical_accuracy: 0.6500
Epoch 27/50
4/4 [==============================] - 0s 32ms/step - loss: 1.0182 - sparse_categorical_accuracy: 0.6333 - val_loss: 1.0130 - val_sparse_categorical_accuracy: 0.6000
Epoch 28/50
4/4 [==============================] - 0s 37ms/step - loss: 1.0113 - sparse_categorical_accuracy: 0.5500 - val_loss: 1.0051 - val_sparse_categorical_accuracy: 0.5250
Epoch 29/50
4/4 [==============================] - 0s 41ms/step - loss: 1.0027 - sparse_categorical_accuracy: 0.5167 - val_loss: 0.9979 - val_sparse_categorical_accuracy: 0.5000
Epoch 30/50
4/4 [==============================] - 0s 42ms/step - loss: 0.9957 - sparse_categorical_accuracy: 0.5000 - val_loss: 0.9912 - val_sparse_categorical_accuracy: 0.4917
Epoch 31/50
4/4 [==============================] - 0s 19ms/step - loss: 0.9892 - sparse_categorical_accuracy: 0.4917 - val_loss: 0.9846 - val_sparse_categorical_accuracy: 0.4917
Epoch 32/50
4/4 [==============================] - 0s 18ms/step - loss: 0.9825 - sparse_categorical_accuracy: 0.5000 - val_loss: 0.9783 - val_sparse_categorical_accuracy: 0.5083
Epoch 33/50
4/4 [==============================] - 0s 14ms/step - loss: 0.9767 - sparse_categorical_accuracy: 0.5167 - val_loss: 0.9722 - val_sparse_categorical_accuracy: 0.5333
Epoch 34/50
4/4 [==============================] - 0s 19ms/step - loss: 0.9703 - sparse_categorical_accuracy: 0.5417 - val_loss: 0.9662 - val_sparse_categorical_accuracy: 0.5583
Epoch 35/50
4/4 [==============================] - 0s 16ms/step - loss: 0.9644 - sparse_categorical_accuracy: 0.5667 - val_loss: 0.9602 - val_sparse_categorical_accuracy: 0.5833
Epoch 36/50
4/4 [==============================] - 0s 18ms/step - loss: 0.9581 - sparse_categorical_accuracy: 0.6083 - val_loss: 0.9542 - val_sparse_categorical_accuracy: 0.6167
Epoch 37/50
4/4 [==============================] - 0s 36ms/step - loss: 0.9522 - sparse_categorical_accuracy: 0.6167 - val_loss: 0.9482 - val_sparse_categorical_accuracy: 0.6333
Epoch 38/50
4/4 [==============================] - 0s 29ms/step - loss: 0.9463 - sparse_categorical_accuracy: 0.6500 - val_loss: 0.9422 - val_sparse_categorical_accuracy: 0.6750
Epoch 39/50
4/4 [==============================] - 0s 36ms/step - loss: 0.9403 - sparse_categorical_accuracy: 0.6750 - val_loss: 0.9361 - val_sparse_categorical_accuracy: 0.6833
Epoch 40/50
4/4 [==============================] - 0s 36ms/step - loss: 0.9340 - sparse_categorical_accuracy: 0.6917 - val_loss: 0.9300 - val_sparse_categorical_accuracy: 0.6917
Epoch 41/50
4/4 [==============================] - 0s 30ms/step - loss: 0.9278 - sparse_categorical_accuracy: 0.6917 - val_loss: 0.9236 - val_sparse_categorical_accuracy: 0.6917
Epoch 42/50
4/4 [==============================] - 0s 19ms/step - loss: 0.9214 - sparse_categorical_accuracy: 0.6917 - val_loss: 0.9172 - val_sparse_categorical_accuracy: 0.7000
Epoch 43/50
4/4 [==============================] - 0s 21ms/step - loss: 0.9152 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.9106 - val_sparse_categorical_accuracy: 0.7000
Epoch 44/50
4/4 [==============================] - 0s 22ms/step - loss: 0.9085 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.9041 - val_sparse_categorical_accuracy: 0.7000
Epoch 45/50
4/4 [==============================] - 0s 17ms/step - loss: 0.9020 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.8977 - val_sparse_categorical_accuracy: 0.7000
Epoch 46/50
4/4 [==============================] - 0s 17ms/step - loss: 0.8953 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.8912 - val_sparse_categorical_accuracy: 0.7000
Epoch 47/50
4/4 [==============================] - 0s 20ms/step - loss: 0.8890 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.8845 - val_sparse_categorical_accuracy: 0.7000
Epoch 48/50
4/4 [==============================] - 0s 30ms/step - loss: 0.8823 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.8777 - val_sparse_categorical_accuracy: 0.7000
Epoch 49/50
4/4 [==============================] - 0s 29ms/step - loss: 0.8757 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.8708 - val_sparse_categorical_accuracy: 0.7000
Epoch 50/50
4/4 [==============================] - 0s 28ms/step - loss: 0.8688 - sparse_categorical_accuracy: 0.7000 - val_loss: 0.8639 - val_sparse_categorical_accuracy: 0.7000
In [8]:
%matplotlib inline
%config InlineBackend.figure_format = 'png'
from plt_utils import plot_metric
In [9]:
plot_metric(history, 'loss')
No description has been provided for this image
In [10]:
plot_metric(history, 'sparse_categorical_accuracy')
No description has been provided for this image