I'm trying to understand LSTMs in Deeplearning4j. I'm reading through the example source code, but I can't make sense of this part:
//Allocate space:
//Note the order here:
// dimension 0 = number of examples in minibatch
// dimension 1 = size of each vector (i.e., number of characters)
// dimension 2 = length of each time series/example
INDArray input = Nd4j.zeros(currMinibatchSize,validCharacters.length,exampleLength);
INDArray labels = Nd4j.zeros(currMinibatchSize,validCharacters.length,exampleLength);

Why do we store 3-D arrays here, and what do they mean?
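A minimal sketch of how such a 3-D structure gets filled with one-hot encoded characters. Note the assumptions: this uses plain Java arrays instead of Nd4j `INDArray`s, and a tiny hypothetical character set standing in for `validCharacters`; the shape `[minibatch][vectorSize][timeSteps]` mirrors the `Nd4j.zeros(...)` call above.

```java
public class OneHotMinibatch {
    // Hypothetical tiny character set standing in for validCharacters
    static final char[] VALID_CHARS = {'a', 'b', 'c'};

    static int indexOf(char c) {
        for (int i = 0; i < VALID_CHARS.length; i++) {
            if (VALID_CHARS[i] == c) return i;
        }
        throw new IllegalArgumentException("Unknown character: " + c);
    }

    // Builds [minibatch][vectorSize][timeSteps], mirroring the 3-D shape above
    static double[][][] encode(String[] examples) {
        int exampleLength = examples[0].length();
        double[][][] input = new double[examples.length][VALID_CHARS.length][exampleLength];
        for (int ex = 0; ex < examples.length; ex++) {
            for (int t = 0; t < exampleLength; t++) {
                // set exactly one 1.0 per (example, time step): a one-hot vector
                input[ex][indexOf(examples[ex].charAt(t))][t] = 1.0;
            }
        }
        return input;
    }

    public static void main(String[] args) {
        double[][][] input = encode(new String[]{"abc", "cab"});
        // example 0, time step 0 is 'a' -> position 0 of its vector is hot
        System.out.println(input[0][0][0]); // 1.0
        System.out.println(input[0][1][0]); // 0.0
    }
}
```

In the real example the same thing is done with `INDArray.putScalar(...)` calls; the nesting of the loops (examples, then time steps) is what makes the minibatch, vector, and time dimensions line up.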
Posted on 2016-05-16 22:34:12
Good question. This isn't really about LSTM functionality, though; it's about the task itself. The task here is to predict the next character. Next-character prediction has two aspects: classification and approximation. If we were only doing approximation, we could get by with one-dimensional arrays. But since we are doing classification as well as approximation, we cannot simply feed a normalized ASCII representation of each character into the neural network. We need to convert each character into an array.
For example, a (lowercase) would be represented this way:
1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
b (lowercase) would be represented as:

0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0

c would be represented as:
0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
Z (capital z!) would be represented as:
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
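The per-character encoding shown above can be sketched like this. The ordering of the character set is an assumption for illustration only (lowercase letters first); the real example builds `validCharacters` differently.

```java
public class CharOneHot {
    // Hypothetical ordering: 'a'..'z' only -- an illustration, not
    // the actual validCharacters array from the Deeplearning4j example.
    static final char[] VALID_CHARS = new char[26];
    static {
        for (int i = 0; i < 26; i++) VALID_CHARS[i] = (char) ('a' + i);
    }

    // Returns a vector that is all zeros except for a 1.0 at the
    // character's position in VALID_CHARS.
    static double[] oneHot(char c) {
        double[] v = new double[VALID_CHARS.length];
        for (int i = 0; i < VALID_CHARS.length; i++) {
            if (VALID_CHARS[i] == c) {
                v[i] = 1.0;
                break;
            }
        }
        return v;
    }

    public static void main(String[] args) {
        double[] a = oneHot('a');
        System.out.println(a[0]); // 1.0: 'a' maps to the first position
        System.out.println(a[1]); // 0.0: every other position stays zero
    }
}
```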
So each character gives us a one-dimensional array (a vector). How are all the dimensions of the input assembled? The code comments explain it:
// dimension 0 = number of examples in minibatch
// dimension 1 = size of each vector (i.e., number of characters)
// dimension 2 = length of each time series/example

I sincerely applaud your effort to understand how LSTMs work, but the code you pointed to illustrates something that applies to all kinds of NNs: it explains how text data is represented for a neural network, not how an LSTM works. For that, you need to look at a different part of the source code.
https://stackoverflow.com/questions/37245079