Layers¶
Convolutional Layer¶
import tensorflow as tf

# tf.keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', activation=None)
input_shape = (4, 5, 5, 3)  # batch, I_h, I_w, channel
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv2D(2, 3, activation='relu', input_shape=input_shape[1:])(x)  # input_shape excludes the batch dim
print(y.shape)
# (4, 3, 3, 2) # batch, O_h, O_w, filter
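As a sketch of why the output is 3×3: with `'valid'` padding the spatial size follows O = floor((I - K) / S) + 1. The helper below (`conv_out_size` is not a TensorFlow API, just an illustration of the formula):

```python
import math

def conv_out_size(i, k, s=1, padding="valid"):
    # 'valid': no padding -> O = floor((I - K) / S) + 1
    # 'same': output is ceil(I / S), padded as needed
    if padding == "valid":
        return math.floor((i - k) / s) + 1
    elif padding == "same":
        return math.ceil(i / s)
    raise ValueError(padding)

# 5x5 input, 3x3 kernel, stride 1, 'valid' padding
print(conv_out_size(5, 3))  # -> 3
```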
Pooling Layer¶
# tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=None, padding='valid')
input_shape = (4, 5, 5, 3) # batch, I_h, I_w, channel
x = tf.random.normal(input_shape)
y = tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid')(x)
print(y.shape)
# (4, 4, 4, 3) # batch, O_h, O_w, channel
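Pooling with `'valid'` padding follows the same shape rule as the convolution above, with the pool size in place of the kernel size. A quick check (plain Python, not the Keras API):

```python
def pool_out_size(i, pool, stride):
    # 'valid' pooling: O = floor((I - pool) / stride) + 1
    return (i - pool) // stride + 1

# 5x5 input, 2x2 pool, stride 1 -> 4x4 (matches the snippet above)
print(pool_out_size(5, 2, 1))  # -> 4
```

Note that MaxPool2D defaults `strides` to `pool_size`, so without `strides=(1, 1)` the 5×5 input would shrink to 2×2 instead.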
Fully Connected Layer¶
# tf.keras.layers.Dense(units, activation=None)
input_shape = (4, 3, 3, 2) # batch, I_h, I_w, channel
x = tf.random.normal(input_shape)
y = tf.keras.layers.Flatten()(x)
print(y.shape)
# (4, 18)
out = tf.keras.layers.Dense(1)(y)
print(out.shape)
# (4, 1) # units
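Flatten collapses every axis except the batch axis (3 × 3 × 2 = 18), and `Dense(1)` then learns one weight per flattened feature plus a bias. A sanity check of those sizes (plain arithmetic, not the Keras API):

```python
batch, h, w, c = 4, 3, 3, 2
flat = h * w * c               # Flatten keeps the batch axis: (4, 18)
units = 1
params = flat * units + units  # weights + one bias per unit
print(flat, params)  # -> 18 19
```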
Dropout Layer¶
# tf.keras.layers.Dropout(rate, noise_shape=None)
x = tf.constant([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
layer = tf.keras.layers.Dropout(0.2)
outputs = layer(x, training=True)
outputs
# <tf.Tensor: shape=(3, 3), dtype=float32, numpy=
# array([[ 1.25, 2.5 , 3.75],
# [ 5. , 6.25, 7.5 ],
# [ 8.75, 10. , 0. ]], dtype=float32)>
Note:
- Randomly sets inputs to 0 (helps prevent overfitting)
- The remaining inputs are multiplied by 1 / (1 - rate), so the expected value of each element is unchanged
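The scaling above can be sketched as "inverted dropout" (a minimal pure-Python version, not the Keras implementation):

```python
import random

def dropout(xs, rate, seed=0):
    # Inverted dropout: zero each input with probability `rate`,
    # and scale the survivors by 1 / (1 - rate) so the expected
    # value of each element is unchanged.
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in xs]

out = dropout([1., 2., 3., 4.], rate=0.2)
# Each survivor is exactly x * 1.25; dropped entries are 0.
print(out)
```

This matches the Keras output above, where the kept values are the inputs times 1 / (1 - 0.2) = 1.25.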