So far, it might seem that we got away with being sloppy in setting up our networks. Specifically, we did the following unintuitive things, which might not seem like they should work:

- We defined the network architectures without specifying the input dimensionality.
- We added layers without specifying the output dimension of the previous layer.
- We even "initialized" these parameters before providing enough information to determine how many parameters our models should contain.
You might be surprised that our code runs at all. After all, the deep learning framework cannot judge what the input dimensionality of a network would be. The trick here is that the framework defers initialization, waiting until the first time we pass data through the model to infer the size of each layer on the fly.
Later on, when working with convolutional neural networks, this technique will become even more convenient, since the input dimensionality (i.e., the resolution of an image) will affect the dimensionality of each subsequent layer. Hence, the ability to set parameters without the need to know, at the time of writing the code, the value of the dimension can greatly simplify the task of specifying and subsequently modifying our models. Next, we go deeper into the mechanics of initialization.
import tensorflow as tf
To begin, let's instantiate an MLP.
# MXNet (Gluon) version
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))
# TensorFlow version
net = tf.keras.models.Sequential([
    tf.keras.layers.Dense(256, activation=tf.nn.relu),
    tf.keras.layers.Dense(10),
])
At this point, the network cannot possibly know the dimensions of the input layer's weights because the input dimension remains unknown.
Consequently the framework has not yet initialized any parameters. We confirm by attempting to access the parameters below.
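In MXNet, the two outputs that follow can be reproduced by printing the collect_params method object and the parameter dictionary it returns:

print(net.collect_params)
print(net.collect_params())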
<bound method Block.collect_params of Sequential(
  (0): Dense(-1 -> 256, Activation(relu))
  (1): Dense(-1 -> 10, linear)
)>
sequential0_ (
  Parameter dense0_weight (shape=(256, -1), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
  Parameter dense1_weight (shape=(10, -1), dtype=float32)
  Parameter dense1_bias (shape=(10,), dtype=float32)
)
Note that while the parameter objects exist, the input dimension to each layer is listed as -1. MXNet uses the special value -1 to indicate that the parameter dimension remains unknown. At this point, attempts to access net[0].weight.data() would trigger a runtime error stating that the network must be initialized before the parameters can be accessed. Now let's see what happens when we attempt to initialize parameters via the initialize method.
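In MXNet this amounts to the following calls (using the default initializer), after which we inspect the parameters again:

net.initialize()
net.collect_params()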
sequential0_ (
  Parameter dense0_weight (shape=(256, -1), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
  Parameter dense1_weight (shape=(10, -1), dtype=float32)
  Parameter dense1_bias (shape=(10,), dtype=float32)
)
As we can see, nothing has changed. When input dimensions are unknown, calls to initialize do not truly initialize the parameters. Instead, this call registers with MXNet our wish to initialize the parameters (and, optionally, the distribution according to which we wish to do so).
As mentioned in Section 6.2.1, parameters and the network definition are decoupled in JAX and Flax, and the user handles both manually. Flax models are stateless, hence there is no parameters attribute.
The same holds in TensorFlow: the framework has not yet initialized any parameters, which we confirm by attempting to access them below.
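One way to produce the output below is to query each layer's weights (a Keras Sequential model exposes its layers via net.layers):

[net.layers[i].get_weights() for i in range(len(net.layers))]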
[[], []]
Note that while the layer objects exist, the weights are empty. Using net.get_weights() would throw an error since the weights have not been initialized yet.
Next let's pass data through the network to make the framework finally initialize parameters.
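In MXNet, a single forward pass with a dummy batch of two 20-dimensional examples suffices to trigger initialization; the parameter shapes reported below follow from it. (In TensorFlow, calling net on a tensor such as tf.random.uniform((2, 20)) plays the same role.)

from mxnet import np, npx
npx.set_np()

X = np.random.uniform(size=(2, 20))
net(X)  # forward pass triggers lazy initialization
net.collect_params()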
sequential0_ (
  Parameter dense0_weight (shape=(256, 20), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
  Parameter dense1_weight (shape=(10, 256), dtype=float32)
  Parameter dense1_bias (shape=(10,), dtype=float32)
)
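The corresponding Flax/JAX call below assumes a setup along the following lines, with the network again defined without input dimensions (a sketch; d2l.get_key() is the RNG-key helper from the book's d2l JAX package):

import jax
from jax import numpy as jnp
from flax import linen as nn
from d2l import jax as d2l

net = nn.Sequential([nn.Dense(256), nn.relu, nn.Dense(10)])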
params = net.init(d2l.get_key(), jnp.zeros((2, 20)))
jax.tree_util.tree_map(lambda x: x.shape, params).tree_flatten()
(({'params': {'layers_0': {'bias': (256,), 'kernel': (20, 256)},
              'layers_2': {'bias': (10,), 'kernel': (256, 10)}}},),
 ())
Once we know the input dimensionality, 20, the framework can identify the shape of the first layer's weight matrix by plugging in the value 20. Having recognized the first layer's shape, the framework proceeds to the second layer, and so on through the computational graph until all shapes are known. Note that in this case only the first layer requires lazy initialization, but the framework initializes sequentially. Once all parameter shapes are known, the framework can finally initialize the parameters.
The following method passes in a dummy input through the network for a dry run to infer all parameter shapes and subsequently initializes the parameters. It will be used later when default random initializations are not desired.
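A sketch of such a method in MXNet, assuming the d2l.Module base class and the d2l.add_to_class helper from earlier sections (these names follow the book's conventions but are not shown in this section):

@d2l.add_to_class(d2l.Module)  #@save
def apply_init(self, inputs, init=None):
    self.forward(*inputs)  # dry run to infer all parameter shapes
    if init is not None:
        # re-initialize with the desired initializer once shapes are known
        self.net.initialize(init, force_reinit=True)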
Parameter initialization in Flax is always done manually and handled by the user. The following method takes a dummy input and a key dictionary as arguments. This key dictionary has the rngs for initializing the model parameters and a dropout rng for generating the dropout mask for models with dropout layers. More about dropout will be covered later in Section 5.6. Ultimately the method initializes the model, returning the parameters. We have been using it under the hood in the previous sections as well.
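A minimal sketch of the Flax counterpart, under the same d2l.Module assumption; Flax's Module.init accepts a dictionary of named rng streams such as {'params': ..., 'dropout': ...}:

@d2l.add_to_class(d2l.Module)  #@save
def apply_init(self, dummy_input, key):
    # key is a dict of rngs, e.g. {'params': params_rng, 'dropout': dropout_rng}
    params = self.init(key, *dummy_input)
    return params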
6.4.1. Summary
Lazy initialization can be convenient, allowing the framework to infer parameter shapes automatically, making it easy to modify architectures and eliminating one common source of errors. We can pass data through the model to make the framework finally initialize parameters.
6.4.2. Exercises
- What happens if you specify the input dimensions to the first layer but not to subsequent layers? Do you get immediate initialization?
- What happens if you specify mismatching dimensions?
- What would you need to do if you have input of varying dimensionality? Hint: look at parameter tying.