TensorFlow 2 Quickstart for Experts

In [1]:
import tensorflow as tf
tf.__version__
Out[1]:
'2.7.4'
In [2]:
from tensorflow.keras import datasets
mnist = datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
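A small added sanity check (a sketch, not a cell from the original notebook): the raw MNIST pixels are uint8 values in [0, 255], so dividing by 255.0 promotes them to float64 values in [0, 1], which is exactly what the dtype printed in the next cell reflects.

print(x_train.min(), x_train.max())  # expected: 0.0 1.0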
In [3]:
print(f'x_train shape: {x_train.shape}, type: {type(x_train)}, dtype: {x_train.dtype}')
x_train shape: (60000, 28, 28), type: <class 'numpy.ndarray'>, dtype: float64

To expand the dimensions, you can use np.expand_dims as in the next cell, or the indexing form x_train[..., tf.newaxis]; a small sketch of the latter follows.
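A minimal sketch of the tf.newaxis variant (an added illustration, not a cell from the original notebook). Since tf.newaxis is just an alias for None, the indexing works directly on the NumPy array and yields the same (60000, 28, 28, 1) shape as the np.expand_dims call below; x_train_ch is a hypothetical name used only here.

x_train_ch = x_train[..., tf.newaxis]  # append a trailing channel axis
print(x_train_ch.shape)                # (60000, 28, 28, 1)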

In [4]:
import numpy as np
x_train = np.expand_dims(x_train, axis=-1).astype('float32')
x_test = np.expand_dims(x_test, axis=-1).astype('float32')
In [5]:
print(f'x_train shape: {x_train.shape}, type: {type(x_train)}, dtype: {x_train.dtype}')
x_train shape: (60000, 28, 28, 1), type: <class 'numpy.ndarray'>, dtype: float32
In [6]:
train_ds = tf.data.Dataset.from_tensor_slices((x_train,y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test,y_test)).batch(32)
2023-10-16 18:36:56.261151: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-16 18:36:56.294774: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 188160000 exceeds 10% of free system memory.
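To sanity-check the input pipeline (an added sketch, not part of the original run), you can pull a single batch from train_ds; with batch(32), each element should be a (32, 28, 28, 1) image tensor paired with a (32,) label vector.

for images, labels in train_ds.take(1):
    print(images.shape, labels.shape)  # expected: (32, 28, 28, 1) (32,)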
In [7]:
from tensorflow.keras import Model
from tensorflow.keras import layers, activations  # needed here, since the class is instantiated below

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # feature extractor: a single 3x3 convolution with 32 filters
        self.conv1 = layers.Conv2D(32, 3, activation=activations.relu)
        self.flatten = layers.Flatten()
        # classifier head: 128-unit hidden layer, then 10 logits (one per digit)
        self.d1 = layers.Dense(128, activation=activations.relu)
        self.d2 = layers.Dense(10)

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
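As a quick shape check (an added sketch, not from the original notebook), calling the subclassed model on a dummy NHWC batch should produce one 10-way logit vector per example: the 3x3 convolution maps 28x28x1 to 26x26x32, flattening gives 21632 features, and the two dense layers end in 10 logits.

dummy = tf.random.normal((32, 28, 28, 1))  # hypothetical fake batch
logits = model(dummy)
print(logits.shape)                        # expected: (32, 10)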
In [8]:
from tensorflow.keras import layers, activations
In [9]:
?layers.Conv2D
Init signature: layers.Conv2D(*args, **kwargs)
Docstring:
2D convolution layer (e.g. spatial convolution over images).

This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.

When using this layer as the first layer in a model, provide the keyword argument input_shape (tuple of integers or None, does not include the sample axis), e.g. input_shape=(128, 128, 3) for 128x128 RGB pictures in data_format="channels_last". You can use None when a dimension has variable size.

Examples:

>>> # The inputs are 28x28 RGB images with channels_last and the batch
>>> # size is 4.
>>> input_shape = (4, 28, 28, 3)
>>> x = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Conv2D(
...     2, 3, activation='relu', input_shape=input_shape[1:])(x)
>>> print(y.shape)
(4, 26, 26, 2)

>>> # With dilation_rate as 2.
>>> input_shape = (4, 28, 28, 3)
>>> x = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Conv2D(
...     2, 3, activation='relu', dilation_rate=2, input_shape=input_shape[1:])(x)
>>> print(y.shape)
(4, 24, 24, 2)

>>> # With padding as "same".
>>> input_shape = (4, 28, 28, 3)
>>> x = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Conv2D(
...     2, 3, activation='relu', padding="same", input_shape=input_shape[1:])(x)
>>> print(y.shape)
(4, 28, 28, 2)

>>> # With extended batch shape [4, 7]:
>>> input_shape = (4, 7, 28, 28, 3)
>>> x = tf.random.normal(input_shape)
>>> y = tf.keras.layers.Conv2D(
...     2, 3, activation='relu', input_shape=input_shape[2:])(x)
>>> print(y.shape)
(4, 7, 26, 26, 2)

Args:
  filters: Integer, the dimensionality of the output space (i.e. the number of
    output filters in the convolution).
  kernel_size: An integer or tuple/list of 2 integers, specifying the height
    and width of the 2D convolution window. Can be a single integer to specify
    the same value for all spatial dimensions.
  strides: An integer or tuple/list of 2 integers, specifying the strides of
    the convolution along the height and width. Can be a single integer to
    specify the same value for all spatial dimensions. Specifying any stride
    value != 1 is incompatible with specifying any dilation_rate value != 1.
  padding: one of "valid" or "same" (case-insensitive). "valid" means no
    padding. "same" results in padding with zeros evenly to the left/right or
    up/down of the input. When padding="same" and strides=1, the output has
    the same size as the input.
  data_format: A string, one of channels_last (default) or channels_first.
    The ordering of the dimensions in the inputs. channels_last corresponds to
    inputs with shape (batch_size, height, width, channels) while
    channels_first corresponds to inputs with shape (batch_size, channels,
    height, width). It defaults to the image_data_format value found in your
    Keras config file at ~/.keras/keras.json. If you never set it, then it
    will be channels_last.
  dilation_rate: an integer or tuple/list of 2 integers, specifying the
    dilation rate to use for dilated convolution. Can be a single integer to
    specify the same value for all spatial dimensions. Currently, specifying
    any dilation_rate value != 1 is incompatible with specifying any stride
    value != 1.
  groups: A positive integer specifying the number of groups in which the
    input is split along the channel axis. Each group is convolved separately
    with filters / groups filters. The output is the concatenation of all the
    groups results along the channel axis. Input channels and filters must
    both be divisible by groups.
  activation: Activation function to use. If you don't specify anything, no
    activation is applied (see keras.activations).
  use_bias: Boolean, whether the layer uses a bias vector.
  kernel_initializer: Initializer for the kernel weights matrix (see
    keras.initializers). Defaults to 'glorot_uniform'.
  bias_initializer: Initializer for the bias vector (see keras.initializers).
    Defaults to 'zeros'.
  kernel_regularizer: Regularizer function applied to the kernel weights
    matrix (see keras.regularizers).
  bias_regularizer: Regularizer function applied to the bias vector (see
    keras.regularizers).
  activity_regularizer: Regularizer function applied to the output of the
    layer (its "activation") (see keras.regularizers).
  kernel_constraint: Constraint function applied to the kernel matrix (see
    keras.constraints).
  bias_constraint: Constraint function applied to the bias vector (see
    keras.constraints).

Input shape:
  4+D tensor with shape: batch_shape + (channels, rows, cols) if
  data_format='channels_first' or 4+D tensor with shape:
  batch_shape + (rows, cols, channels) if data_format='channels_last'.

Output shape:
  4+D tensor with shape: batch_shape + (filters, new_rows, new_cols) if
  data_format='channels_first' or 4+D tensor with shape:
  batch_shape + (new_rows, new_cols, filters) if data_format='channels_last'.
  rows and cols values might have changed due to padding.

Returns:
  A tensor of rank 4+ representing activation(conv2d(inputs, kernel) + bias).

Raises:
  ValueError: if padding is "causal".
  ValueError: when both strides > 1 and dilation_rate > 1.

File:           ~/venv/venv-py38-tf2/lib/python3.8/site-packages/keras/layers/convolutional.py
Type:           type
Subclasses:     Conv2DTranspose, Conv2D
