In this post we will cover some PyTorch basics to help you get started. We will walk through a few basic operations and get our hands dirty with PyTorch.
We will start by importing necessary modules.
```python
import torch
import torch.nn as nn
```
Let us start by initializing a tensor.
```python
x = torch.tensor([[5.1, 4.3, 2.5],
                  [5.1, 4.3, 2.5],
                  [5.1, 4.3, 2.5]], dtype=torch.float)
```
In PyTorch everything is a tensor, so the example above gives us a 3×3 tensor.
If you want to check the size of a tensor, call its `size()` method (or read its `shape` attribute) –
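For example, with the 3×3 tensor we just created:

```python
import torch

x = torch.tensor([[5.1, 4.3, 2.5],
                  [5.1, 4.3, 2.5],
                  [5.1, 4.3, 2.5]], dtype=torch.float)

print(x.size())   # torch.Size([3, 3])
print(x.shape)    # same information, exposed as an attribute
```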
Good going so far! I hope you are coding along with this article. If so, try initializing a 2×3 tensor and printing both the tensor and its size.
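If you want to check your answer, one possible solution looks like this (the values themselves are arbitrary):

```python
import torch

# A 2x3 tensor: 2 rows, 3 columns
y = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

print(y)
print(y.size())   # torch.Size([2, 3])
```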
The second thing worth mentioning is how to convert a NumPy array into a PyTorch tensor. This is important because you will often read data from CSV files and the like, and many libraries hand you data as NumPy arrays.

So let us see how to do this –
```python
import numpy as np

a = np.array([1, 2, 3])   # numpy array
b = torch.from_numpy(a)
```
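One thing worth knowing here: `torch.from_numpy` does not copy the data; the resulting tensor shares memory with the source array, so modifying one modifies the other:

```python
import numpy as np
import torch

a = np.array([1, 2, 3])
b = torch.from_numpy(a)

a[0] = 10          # modify the numpy array in place...
print(b)           # ...and the tensor sees the change: tensor([10,  2,  3])
```

If you need an independent copy instead, use `torch.tensor(a)`.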
Okay, now let us talk about the PyTorch `Variable`. A `Variable` wraps a tensor and supports nearly all of the APIs defined on a tensor. A `Variable` also provides a `backward()` method to perform backpropagation.
Here is how you can define a Variable and do backpropagation.
```python
from torch.autograd import Variable

a = Variable(torch.Tensor([[1, 2], [3, 4]]), requires_grad=True)
b = torch.sum(a**2)
b.backward()    # compute gradients of b w.r.t. a
print(a.grad)   # .grad is an attribute, not a method
```
It is as simple as that! Specify `requires_grad=True` when creating the `Variable`, then call `backward()` on the result; the gradients accumulate in `a.grad`.
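Since `b = sum(a**2)`, the gradient of `b` with respect to `a` is `2*a`, which is exactly what `a.grad` holds. Note that in PyTorch 0.4 and later, plain tensors track gradients themselves, so `Variable` is no longer needed:

```python
import torch

# Plain tensors with requires_grad=True replace Variable in modern PyTorch.
a = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
b = torch.sum(a**2)
b.backward()

# d(sum(a**2))/da = 2*a
print(a.grad)   # tensor([[2., 4.], [6., 8.]])
```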
PyTorch does this through its autograd module. We will dive into autograd later; in case you want to read more about PyTorch autograd, you can check that here. We will soon write another blog post explaining autograd in depth.
That is all for this post. If you want to learn more about PyTorch, including how to write a simple neural network and a convolutional network, check out the following articles –
Happy Coding! 🙂