PyTorch Basics (3): Logistic Regression

Reference code: yunjey's pytorch-tutorial series
```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
```
```python
# Hyperparameters
input_size = 784       # each 28x28 image flattened into a 784-dim vector
num_classes = 10       # digits 0-9
num_epochs = 5
batch_size = 100
learning_rate = 0.001
```
MNIST dataset loading (images and labels)

```python
train_dataset = torchvision.datasets.MNIST(root='../../../data/minist',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)
```
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Processing...
Done!
```python
test_dataset = torchvision.datasets.MNIST(root='../../../data/minist',
                                          train=False,
                                          transform=transforms.ToTensor())
```
```python
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)
```
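Each batch produced by these loaders is a tensor of shape `(batch_size, 1, 28, 28)`, which the linear model cannot consume directly. As a small sketch (using a random tensor in place of a real batch, to avoid downloading the dataset), `reshape(-1, 28*28)` flattens every image into the 784-dim vector the model expects:

```python
import torch

# Stand-in for one DataLoader batch of MNIST images: (batch, channel, H, W)
images = torch.randn(100, 1, 28, 28)

# Flatten each image; -1 lets PyTorch infer the batch dimension
flat = images.reshape(-1, 28 * 28)
print(flat.shape)  # torch.Size([100, 784])
```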
Logistic regression model: definition and training

```python
# Logistic (softmax) regression is just a single linear layer;
# nn.CrossEntropyLoss applies softmax internally, so the model
# outputs raw logits.
model = nn.Linear(input_size, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Flatten each 1x28x28 image into a 784-dim vector
        images = images.reshape(-1, 28 * 28)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))
```
Epoch [1/5], Step [100/600], Loss: 2.2091
Epoch [1/5], Step [200/600], Loss: 2.0910
Epoch [1/5], Step [300/600], Loss: 2.0584
Epoch [1/5], Step [400/600], Loss: 1.9679
Epoch [1/5], Step [500/600], Loss: 1.8440
Epoch [1/5], Step [600/600], Loss: 1.7939
Epoch [2/5], Step [100/600], Loss: 1.7501
Epoch [2/5], Step [200/600], Loss: 1.6417
Epoch [2/5], Step [300/600], Loss: 1.6071
Epoch [2/5], Step [400/600], Loss: 1.5562
Epoch [2/5], Step [500/600], Loss: 1.5750
Epoch [2/5], Step [600/600], Loss: 1.4774
Epoch [3/5], Step [100/600], Loss: 1.4367
Epoch [3/5], Step [200/600], Loss: 1.3702
Epoch [3/5], Step [300/600], Loss: 1.3308
Epoch [3/5], Step [400/600], Loss: 1.3523
Epoch [3/5], Step [500/600], Loss: 1.3248
Epoch [3/5], Step [600/600], Loss: 1.3202
Epoch [4/5], Step [100/600], Loss: 1.2332
Epoch [4/5], Step [200/600], Loss: 1.1691
Epoch [4/5], Step [300/600], Loss: 1.2277
Epoch [4/5], Step [400/600], Loss: 1.1631
Epoch [4/5], Step [500/600], Loss: 1.1385
Epoch [4/5], Step [600/600], Loss: 1.0769
Epoch [5/5], Step [100/600], Loss: 1.0163
Epoch [5/5], Step [200/600], Loss: 1.1347
Epoch [5/5], Step [300/600], Loss: 1.0465
Epoch [5/5], Step [400/600], Loss: 1.0809
Epoch [5/5], Step [500/600], Loss: 0.9965
Epoch [5/5], Step [600/600], Loss: 1.0620
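Because `nn.CrossEntropyLoss` combines `LogSoftmax` and `NLLLoss` in one step, the training loop above never applies softmax explicitly. A minimal sketch with random tensors (not the MNIST model) shows the two formulations give the same loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)          # raw model outputs for 4 samples
labels = torch.tensor([3, 1, 0, 7])  # target classes

# nn.CrossEntropyLoss applies log-softmax internally, so it matches
# log_softmax followed by negative log-likelihood loss.
loss_ce = nn.CrossEntropyLoss()(logits, labels)
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(loss_ce, loss_manual))  # True
```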
Model testing

```python
# In the test phase, gradients are not needed (saves memory and compute)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28 * 28)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum()

    print('Accuracy of the model on the 10000 test images: {} %'
          .format(100 * correct / total))
```
Accuracy of the model on the 10000 test images: 82 %
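The accuracy computation relies on `torch.max` over dim 1 returning a pair `(values, indices)`, where the indices are the predicted classes. A toy example (with made-up logits) illustrates the mechanics:

```python
import torch

outputs = torch.tensor([[0.1, 2.0, 0.3],
                        [1.5, 0.2, 0.1],
                        [0.0, 0.1, 3.0]])
labels = torch.tensor([1, 0, 1])

# torch.max over dim=1: values are the max logits, indices are argmaxes
_, predicted = torch.max(outputs, 1)

# Element-wise comparison gives a boolean tensor; summing counts hits
correct = (predicted == labels).sum().item()
print(predicted.tolist(), correct)  # [1, 0, 2] 2
```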
```python
# Save the trained model's parameters
torch.save(model.state_dict(), 'model.ckpt')
```
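To reuse the checkpoint later, build a model with the same architecture and load the saved `state_dict` into it. A self-contained sketch (saving and restoring a fresh `nn.Linear` rather than the trained model) looks like this:

```python
import torch
import torch.nn as nn

# Stand-in for the trained model
model = nn.Linear(784, 10)
torch.save(model.state_dict(), 'model.ckpt')

# A new model with identical shapes receives the saved parameters
restored = nn.Linear(784, 10)
restored.load_state_dict(torch.load('model.ckpt'))
print(torch.equal(model.weight, restored.weight))  # True
```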