PyTorch Intermediate (5): Language Model (RNN-LM)

Reference code

yunjey's pytorch-tutorial series

Language model study materials

I'm not particularly keen on language models, so I'll just follow yunjey's code and go through it quickly.

Blog posts

CS224d Notes 4: Language Models and Recurrent Neural Networks (RNN)

A brief look at the exploding gradient problem in neural networks

PyTorch implementation

# Packages
import torch
import torch.nn as nn
import numpy as np
from torch.nn.utils import clip_grad_norm_
from data_utils import Dictionary, Corpus

# data_utils is available at https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/language_model/data_utils.py
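For reference, here is roughly what data_utils provides (a simplified sketch written for this post, not the exact file from the link above): a Dictionary mapping words to ids and back, and a Corpus whose get_data() tokenizes the text file, appends an '<eos>' token to every line, and reshapes the flat id stream into batch_size rows.

# Simplified sketch of data_utils (see the link above for the original)
class Dictionary:
    def __init__(self):
        self.word2idx = {}
        self.idx2word = {}

    def add_word(self, word):
        # Assign the next free integer id to unseen words
        if word not in self.word2idx:
            idx = len(self.word2idx)
            self.word2idx[word] = idx
            self.idx2word[idx] = word
        return self.word2idx[word]

    def __len__(self):
        return len(self.word2idx)


class Corpus:
    def __init__(self):
        self.dictionary = Dictionary()

    def get_data(self, path, batch_size=20):
        # Map every token (plus '<eos>' at each line end) to an integer id
        tokens = []
        with open(path, 'r') as f:
            for line in f:
                for word in line.split() + ['<eos>']:
                    tokens.append(self.dictionary.add_word(word))
        # Cut the flat id stream into batch_size parallel rows
        ids = torch.LongTensor(tokens)
        num_batches = ids.size(0) // batch_size
        return ids[:num_batches * batch_size].view(batch_size, -1)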
# Device configuration
torch.cuda.set_device(1) # choose which GPU PyTorch runs on (assumes at least two GPUs are visible)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Hyper-parameters
embed_size = 128
hidden_size = 1024
num_layers = 1
num_epochs = 5
num_samples = 1000 # number of words to be sampled
batch_size = 20
seq_length = 30
learning_rate = 0.002

The Penn Treebank dataset

# Load the Penn Treebank training data as a (batch_size, -1) tensor of word ids
corpus = Corpus()
ids = corpus.get_data('data/train.txt', batch_size)
vocab_size = len(corpus.dictionary)
num_batches = ids.size(1) // seq_length
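As a quick sanity check (a small sketch using only the variables defined above): ids holds the whole training stream as batch_size parallel rows, and the training loop below walks along the time dimension in chunks of seq_length. With the PTB train.txt used here this works out to the 1549 steps per epoch seen in the training log.

# ids is (batch_size, num_tokens // batch_size); each epoch takes
# num_batches = ids.size(1) // seq_length steps of length seq_length
print(ids.shape)     # something like torch.Size([20, 46479])
print(num_batches)   # 1549 with the sizes above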

The RNN-based language model

class RNNLM(nn.Module):
    def __init__(self, vocab_size, embed_size, hidden_size, num_layers):
        super(RNNLM, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h):
        # Embed word ids to vectors
        x = self.embed(x)

        # Forward propagate LSTM
        out, (h, c) = self.lstm(x, h)

        # Reshape output to (batch_size*sequence_length, hidden_size)
        out = out.reshape(out.size(0)*out.size(1), out.size(2))

        # Decode hidden states of all time steps
        out = self.linear(out)
        return out, (h, c)
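A quick shape check of the forward pass (a sketch with made-up small sizes, just to show how the tensors flow):

# embed:  (batch, seq_len) -> (batch, seq_len, embed_size)
# lstm:   (batch, seq_len, embed_size) -> (batch, seq_len, hidden_size)
# linear: after the reshape, (batch*seq_len, hidden_size) -> (batch*seq_len, vocab_size)
demo = RNNLM(vocab_size=10, embed_size=8, hidden_size=16, num_layers=1)
x = torch.randint(0, 10, (2, 5))                      # a batch of 2 sequences of 5 word ids
h0 = (torch.zeros(1, 2, 16), torch.zeros(1, 2, 16))   # (num_layers, batch, hidden_size)
out, (h, c) = demo(x, h0)
print(out.shape)   # torch.Size([10, 10]) = (batch*seq_len, vocab_size)
print(h.shape)     # torch.Size([1, 2, 16])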
# Instantiate the model
model = RNNLM(vocab_size, embed_size, hidden_size, num_layers).to(device)
# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
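The reshape in forward() exists because nn.CrossEntropyLoss expects logits of shape (N, num_classes) and integer targets of shape (N,). A minimal sketch with dummy tensors (the names here are made up for illustration):

# 600 = batch_size * seq_length rows, one prediction per token
logits = torch.zeros(batch_size * seq_length, vocab_size)                 # uniform predictions
target_ids = torch.randint(0, vocab_size, (batch_size * seq_length,))
print(criterion(logits, target_ids).item())   # ln(vocab_size) ≈ 9.21, close to the first logged loss below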
# Truncated backpropagation through time: detach the hidden states from the
# previous mini-batch's computation graph so gradients stop at the batch boundary
def detach(states):
    return [state.detach() for state in states]
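Why detach()? The hidden state produced by one mini-batch is reused as the initial state of the next one; without detaching it, the autograd graph would keep growing across batches and backward() would try to backpropagate through graphs that have already been freed. A toy sketch (sizes made up) of this truncated-BPTT pattern:

toy_lstm = nn.LSTM(4, 8, 1, batch_first=True)
h = (torch.zeros(1, 2, 8), torch.zeros(1, 2, 8))
for _ in range(3):
    x = torch.randn(2, 5, 4)
    h = [s.detach() for s in h]     # comment this out and the second backward() fails
    out, h = toy_lstm(x, tuple(h))
    out.sum().backward()            # gradients only flow through the current chunk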

Training the model

for epoch in range(num_epochs):
    # Initialize hidden and cell states
    states = (torch.zeros(num_layers, batch_size, hidden_size).to(device),
              torch.zeros(num_layers, batch_size, hidden_size).to(device))

    for i in range(0, ids.size(1) - seq_length, seq_length):
        # Get mini-batch inputs and targets (targets are the inputs shifted by one word)
        inputs = ids[:, i:i+seq_length].to(device)
        targets = ids[:, (i+1):(i+1)+seq_length].to(device)

        # Forward pass
        states = detach(states)
        outputs, states = model(inputs, states)
        loss = criterion(outputs, targets.reshape(-1))

        # Backward and optimize
        model.zero_grad()
        loss.backward()
        clip_grad_norm_(model.parameters(), 0.5)
        optimizer.step()

        step = (i+1) // seq_length
        if step % 100 == 0:
            print ('Epoch [{}/{}], Step[{}/{}], Loss: {:.4f}, Perplexity: {:5.2f}'
                   .format(epoch+1, num_epochs, step, num_batches, loss.item(), np.exp(loss.item())))


Epoch [1/5], Step[0/1549], Loss: 9.2070, Perplexity: 9966.60
Epoch [1/5], Step[100/1549], Loss: 5.9989, Perplexity: 402.99
Epoch [1/5], Step[200/1549], Loss: 5.9188, Perplexity: 371.96
Epoch [1/5], Step[300/1549], Loss: 5.7725, Perplexity: 321.35
Epoch [1/5], Step[400/1549], Loss: 5.6823, Perplexity: 293.63
Epoch [1/5], Step[500/1549], Loss: 5.1482, Perplexity: 172.12
Epoch [1/5], Step[600/1549], Loss: 5.1709, Perplexity: 176.07
Epoch [1/5], Step[700/1549], Loss: 5.3420, Perplexity: 208.93
Epoch [1/5], Step[800/1549], Loss: 5.1762, Perplexity: 177.00
Epoch [1/5], Step[900/1549], Loss: 5.0525, Perplexity: 156.42
Epoch [1/5], Step[1000/1549], Loss: 5.0810, Perplexity: 160.94
Epoch [1/5], Step[1100/1549], Loss: 5.3304, Perplexity: 206.52
Epoch [1/5], Step[1200/1549], Loss: 5.1753, Perplexity: 176.85
Epoch [1/5], Step[1300/1549], Loss: 5.1375, Perplexity: 170.29
Epoch [1/5], Step[1400/1549], Loss: 4.8377, Perplexity: 126.18
Epoch [1/5], Step[1500/1549], Loss: 5.1570, Perplexity: 173.65
Epoch [2/5], Step[0/1549], Loss: 5.4297, Perplexity: 228.07
Epoch [2/5], Step[100/1549], Loss: 4.5487, Perplexity: 94.51
Epoch [2/5], Step[200/1549], Loss: 4.6870, Perplexity: 108.53
Epoch [2/5], Step[300/1549], Loss: 4.6928, Perplexity: 109.15
Epoch [2/5], Step[400/1549], Loss: 4.4732, Perplexity: 87.64
Epoch [2/5], Step[500/1549], Loss: 4.1760, Perplexity: 65.11
Epoch [2/5], Step[600/1549], Loss: 4.4682, Perplexity: 87.20
Epoch [2/5], Step[700/1549], Loss: 4.4156, Perplexity: 82.73
Epoch [2/5], Step[800/1549], Loss: 4.3548, Perplexity: 77.85
Epoch [2/5], Step[900/1549], Loss: 4.1930, Perplexity: 66.22
Epoch [2/5], Step[1000/1549], Loss: 4.3182, Perplexity: 75.05
Epoch [2/5], Step[1100/1549], Loss: 4.4739, Perplexity: 87.70
Epoch [2/5], Step[1200/1549], Loss: 4.4112, Perplexity: 82.37
Epoch [2/5], Step[1300/1549], Loss: 4.2890, Perplexity: 72.90
Epoch [2/5], Step[1400/1549], Loss: 4.0021, Perplexity: 54.71
Epoch [2/5], Step[1500/1549], Loss: 4.3473, Perplexity: 77.27
Epoch [3/5], Step[0/1549], Loss: 6.7676, Perplexity: 869.23
Epoch [3/5], Step[100/1549], Loss: 3.8664, Perplexity: 47.77
Epoch [3/5], Step[200/1549], Loss: 4.0332, Perplexity: 56.44
Epoch [3/5], Step[300/1549], Loss: 3.9715, Perplexity: 53.06
Epoch [3/5], Step[400/1549], Loss: 3.8199, Perplexity: 45.60
Epoch [3/5], Step[500/1549], Loss: 3.4156, Perplexity: 30.44
Epoch [3/5], Step[600/1549], Loss: 3.9293, Perplexity: 50.87
Epoch [3/5], Step[700/1549], Loss: 3.8010, Perplexity: 44.75
Epoch [3/5], Step[800/1549], Loss: 3.6944, Perplexity: 40.22
Epoch [3/5], Step[900/1549], Loss: 3.4924, Perplexity: 32.87
Epoch [3/5], Step[1000/1549], Loss: 3.6192, Perplexity: 37.31
Epoch [3/5], Step[1100/1549], Loss: 3.7801, Perplexity: 43.82
Epoch [3/5], Step[1200/1549], Loss: 3.7905, Perplexity: 44.28
Epoch [3/5], Step[1300/1549], Loss: 3.4920, Perplexity: 32.85
Epoch [3/5], Step[1400/1549], Loss: 3.2644, Perplexity: 26.17
Epoch [3/5], Step[1500/1549], Loss: 3.6106, Perplexity: 36.99
Epoch [4/5], Step[0/1549], Loss: 4.5910, Perplexity: 98.59
Epoch [4/5], Step[100/1549], Loss: 3.3086, Perplexity: 27.35
Epoch [4/5], Step[200/1549], Loss: 3.4173, Perplexity: 30.49
Epoch [4/5], Step[300/1549], Loss: 3.3424, Perplexity: 28.29
Epoch [4/5], Step[400/1549], Loss: 3.3040, Perplexity: 27.22
Epoch [4/5], Step[500/1549], Loss: 2.9707, Perplexity: 19.51
Epoch [4/5], Step[600/1549], Loss: 3.4324, Perplexity: 30.95
Epoch [4/5], Step[700/1549], Loss: 3.2762, Perplexity: 26.48
Epoch [4/5], Step[800/1549], Loss: 3.1982, Perplexity: 24.49
Epoch [4/5], Step[900/1549], Loss: 2.9825, Perplexity: 19.74
Epoch [4/5], Step[1000/1549], Loss: 3.1104, Perplexity: 22.43
Epoch [4/5], Step[1100/1549], Loss: 3.2339, Perplexity: 25.38
Epoch [4/5], Step[1200/1549], Loss: 3.2937, Perplexity: 26.94
Epoch [4/5], Step[1300/1549], Loss: 3.0448, Perplexity: 21.00
Epoch [4/5], Step[1400/1549], Loss: 2.8098, Perplexity: 16.61
Epoch [4/5], Step[1500/1549], Loss: 3.1238, Perplexity: 22.73
Epoch [5/5], Step[0/1549], Loss: 3.6842, Perplexity: 39.81
Epoch [5/5], Step[100/1549], Loss: 2.8963, Perplexity: 18.11
Epoch [5/5], Step[200/1549], Loss: 3.1310, Perplexity: 22.90
Epoch [5/5], Step[300/1549], Loss: 3.0674, Perplexity: 21.49
Epoch [5/5], Step[400/1549], Loss: 2.9441, Perplexity: 18.99
Epoch [5/5], Step[500/1549], Loss: 2.6322, Perplexity: 13.90
Epoch [5/5], Step[600/1549], Loss: 3.0877, Perplexity: 21.93
Epoch [5/5], Step[700/1549], Loss: 2.8889, Perplexity: 17.97
Epoch [5/5], Step[800/1549], Loss: 2.9450, Perplexity: 19.01
Epoch [5/5], Step[900/1549], Loss: 2.6752, Perplexity: 14.52
Epoch [5/5], Step[1000/1549], Loss: 2.8156, Perplexity: 16.70
Epoch [5/5], Step[1100/1549], Loss: 2.8724, Perplexity: 17.68
Epoch [5/5], Step[1200/1549], Loss: 2.9378, Perplexity: 18.87
Epoch [5/5], Step[1300/1549], Loss: 2.6900, Perplexity: 14.73
Epoch [5/5], Step[1400/1549], Loss: 2.4771, Perplexity: 11.91
Epoch [5/5], Step[1500/1549], Loss: 2.8465, Perplexity: 17.23
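A note on the log above: the Perplexity column is not a separate metric, it is just exp() of the per-token cross-entropy loss, exactly as computed in the print statement of the training loop. For example, taking the last logged value:

print(np.exp(2.8465))   # ≈ 17.23, the perplexity reported next to Loss: 2.8465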

Testing and saving the model

with torch.no_grad():
    with open('sample.txt', 'w') as f:
        # Initialize hidden and cell states
        state = (torch.zeros(num_layers, 1, hidden_size).to(device),
                 torch.zeros(num_layers, 1, hidden_size).to(device))

        # Pick the first word id uniformly at random
        prob = torch.ones(vocab_size)
        input = torch.multinomial(prob, num_samples=1).unsqueeze(1).to(device)

        for i in range(num_samples):
            # Forward propagate RNN
            output, state = model(input, state)

            # Sample a word id from the (unnormalized) output distribution
            prob = output.exp()
            word_id = torch.multinomial(prob, num_samples=1).item()

            # Fill input with sampled word id for the next time step
            input.fill_(word_id)

            # File write
            word = corpus.dictionary.idx2word[word_id]
            word = '\n' if word == '<eos>' else word + ' '
            f.write(word)

            if (i+1) % 100 == 0:
                print('Sampled [{}/{}] words and save to {}'.format(i+1, num_samples, 'sample.txt'))
Sampled [100/1000] words and save to sample.txt
Sampled [200/1000] words and save to sample.txt
Sampled [300/1000] words and save to sample.txt
Sampled [400/1000] words and save to sample.txt
Sampled [500/1000] words and save to sample.txt
Sampled [600/1000] words and save to sample.txt
Sampled [700/1000] words and save to sample.txt
Sampled [800/1000] words and save to sample.txt
Sampled [900/1000] words and save to sample.txt
Sampled [1000/1000] words and save to sample.txt
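A side note on the sampling step above: torch.multinomial normalizes its weight vector, so drawing from output.exp() is equivalent to drawing from softmax(output). The line below (a sketch, not part of the original tutorial) is an equivalent way to write that step:

word_id = torch.multinomial(torch.softmax(output, dim=1), num_samples=1).item()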
# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
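To reuse the trained weights later, the checkpoint can be loaded back into a freshly constructed model (a short sketch using the same 'model.ckpt' file saved above):

model = RNNLM(vocab_size, embed_size, hidden_size, num_layers).to(device)
model.load_state_dict(torch.load('model.ckpt', map_location=device))
model.eval()   # switch to evaluation mode before sampling or evaluating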

I'm still fairly confused by the whole pipeline, and I don't really understand the results either.