This content originally appeared on DEV Community and was authored by Super Kai (Kazuya Ito)
*My post explains Step function, ReLU, Leaky ReLU and PReLU.
LeakyReLU() takes a 0D or more-dimensional tensor of zero or more float elements and returns a tensor of the same shape whose values are computed by the Leaky ReLU function, as shown below:
*Memos:
- The 1st argument for initialization is `negative_slope` (Optional-Default: `0.01`-Type: `float`).
- The 2nd argument for initialization is `inplace` (Optional-Default: `False`-Type: `bool`). *Keep it `False` because it's problematic with `True`.
- The 1st argument is `input` (Required-Type: `tensor` of `float`).
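*For reference, Leaky ReLU is defined as LeakyReLU(x) = x if x >= 0, otherwise negative_slope * x, so it can be sketched with torch.where() (a minimal sketch of the math with a hypothetical helper name, not PyTorch's actual implementation):
import torch

def leaky_relu_sketch(x, negative_slope=0.01): # hypothetical helper
    # keep non-negative values as-is, scale negative values by negative_slope
    return torch.where(x >= 0, x, negative_slope * x)

leaky_relu_sketch(torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.]))
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])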
import torch
from torch import nn
my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
lrelu = nn.LeakyReLU()
lrelu(input=my_tensor)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])
lrelu
# LeakyReLU(negative_slope=0.01)
lrelu.negative_slope
# 0.01
lrelu = nn.LeakyReLU(negative_slope=0.01, inplace=True)
lrelu(input=my_tensor)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])
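*With inplace=True, the result is also written back into my_tensor itself:
my_tensor
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])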
my_tensor = torch.tensor([[8., -3., 0., 1.],
[5., 0., -1., 4.]])
lrelu = nn.LeakyReLU()
lrelu(input=my_tensor)
# tensor([[8.0000, -0.0300, 0.0000, 1.0000],
# [5.0000, 0.0000, -0.0100, 4.0000]])
my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
[[5., 0.], [-1., 4.]]])
lrelu = nn.LeakyReLU()
lrelu(input=my_tensor)
# tensor([[[8.0000, -0.0300], [0.0000, 1.0000]],
# [[5.0000, 0.0000], [-0.0100, 4.0000]]])
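*Leaky ReLU is also available as a plain function, torch.nn.functional.leaky_relu() (this functional-API note is an addition of mine, not one of the original memos):
import torch
import torch.nn.functional as F

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
F.leaky_relu(input=my_tensor, negative_slope=0.01)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])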
PReLU() takes a 1D or more-dimensional tensor of zero or more float elements and returns a tensor of the same shape whose values are computed by the PReLU function, as shown below:
*Memos:
- The 1st argument for initialization is `num_parameters` (Optional-Default: `1`-Type: `int`). *It must be `1 <= x`.
- The 2nd argument for initialization is `init` (Optional-Default: `0.25`-Type: `float`).
- The 3rd argument for initialization is `device` (Optional-Type: `str`, `int` or `device()`). *Memos:
  - If `device` is not given, get_default_device() is used. *My post explains get_default_device() and set_default_device().
  - My post explains the `device` argument.
- The 4th argument for initialization is `dtype` (Optional-Type: `dtype`). *Memos:
  - If `dtype` is not given, get_default_dtype() is used. *My post explains get_default_dtype() and set_default_dtype().
  - My post explains the `dtype` argument.
- The 1st argument is `input` (Required-Type: `tensor` of `float`).
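*PReLU is PReLU(x) = x if x >= 0, otherwise weight * x, where weight is learnable; a minimal sketch of the math with a fixed weight (hypothetical helper name, not PyTorch's actual implementation):
import torch

def prelu_sketch(x, weight): # hypothetical helper
    # weight holds the slope(s) applied to negative values
    return torch.where(x >= 0, x, weight * x)

prelu_sketch(torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.]), weight=torch.tensor(0.25))
# tensor([8.0000, -0.7500, 0.0000, 1.0000, 5.0000, -0.5000, -0.2500, 4.0000])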
import torch
from torch import nn
my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
# 5.0000, -0.5000, -0.2500, 4.0000],
# grad_fn=<PreluKernelBackward0>)
prelu
# PReLU(num_parameters=1)
prelu.num_parameters
# 1
prelu.init
# 0.25
prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)
prelu = nn.PReLU(num_parameters=1, init=0.25)
prelu(input=my_tensor)
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
# 5.0000, -0.5000, -0.2500, 4.0000],
# grad_fn=<PreluKernelBackward0>)
prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)
my_tensor = torch.tensor([[8., -3., 0., 1.],
[5., 0., -1., 4.]])
prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([[8.0000, -0.7500, 0.0000, 1.0000],
# [5.0000, 0.0000, -0.2500, 4.0000]],
# grad_fn=<PreluKernelBackward0>)
prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)
prelu = nn.PReLU(num_parameters=4, init=0.25)
prelu(input=my_tensor)
# tensor([[8.0000, -0.7500, 0.0000, 1.0000],
# [5.0000, 0.0000, -0.2500, 4.0000]],
# grad_fn=<PreluKernelBackward0>)
prelu.weight
# Parameter containing:
# tensor([0.2500, 0.2500, 0.2500, 0.2500], requires_grad=True)
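*num_parameters must be either 1 or the size of the input's channel dimension (the 2nd dimension); the (2, 4) tensor above has a channel dimension of size 4, so num_parameters=4 gives each channel its own learnable slope.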
my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
[[5., 0.], [-1., 4.]]])
prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([[[8.0000, -0.7500], [0.0000, 1.0000]],
#         [[5.0000, 0.0000], [-0.2500, 4.0000]]],
# grad_fn=<PreluKernelBackward0>)
prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)
prelu = nn.PReLU(num_parameters=2, init=0.25)
prelu(input=my_tensor)
# tensor([[[8.0000, -0.7500], [0.0000, 1.0000]],
# [[5.0000, 0.0000], [-0.2500, 4.0000]]],
# grad_fn=<PreluKernelBackward0>)
prelu.weight
# Parameter containing:
# tensor([0.2500, 0.2500], requires_grad=True)
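*Unlike LeakyReLU()'s fixed negative_slope, PReLU()'s weight is updated by backpropagation; a minimal sketch of one training step (the optimizer and learning rate are my own example, not from the original post):
import torch
from torch import nn

prelu = nn.PReLU()
optimizer = torch.optim.SGD(prelu.parameters(), lr=0.1) # example optimizer

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])
prelu(input=my_tensor).sum().backward() # accumulates the gradient into prelu.weight
optimizer.step()                        # updates the learnable negative slope
prelu.weight
# Parameter containing:
# tensor([0.8500], requires_grad=True)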