Alpha blocks
9/1/2023

These are the basic building blocks for graphs:

Containers and parameters
- Parameter: A kind of Tensor that is to be considered a module parameter.
- Module: Base class for all neural network modules.

Global hooks for modules
- Registers a forward pre-hook common to all modules.
- Registers a global forward hook for all the modules.
- Registers a backward hook common to all the modules.

Convolution layers
- Conv1d / Conv2d / Conv3d: Applies a 1D, 2D, or 3D convolution over an input signal composed of several input planes.
- ConvTranspose1d / ConvTranspose2d / ConvTranspose3d: Applies a 1D, 2D, or 3D transposed convolution operator over an input image composed of several input planes.
- LazyConv1d / LazyConv2d / LazyConv3d: A torch.nn.Conv1d/Conv2d/Conv3d module with lazy initialization of its in_channels argument, inferred from input.size(1).
- LazyConvTranspose1d / LazyConvTranspose2d / LazyConvTranspose3d: The corresponding transposed-convolution modules with lazy initialization of in_channels, inferred from input.size(1).
- Unfold: Extracts sliding local blocks from a batched input tensor.
- Fold: Combines an array of sliding local blocks into a large containing tensor.

Pooling layers
- MaxPool1d / MaxPool2d / MaxPool3d: Applies a 1D, 2D, or 3D max pooling over an input signal composed of several input planes.
- AvgPool1d / AvgPool2d / AvgPool3d: Applies a 1D, 2D, or 3D average pooling over an input signal composed of several input planes.
- FractionalMaxPool2d / FractionalMaxPool3d: Applies a 2D or 3D fractional max pooling over an input signal composed of several input planes.
- LPPool1d / LPPool2d: Applies a 1D or 2D power-average pooling over an input signal composed of several input planes.
- AdaptiveMaxPool1d / AdaptiveMaxPool2d / AdaptiveMaxPool3d: Applies a 1D, 2D, or 3D adaptive max pooling over an input signal composed of several input planes.
- AdaptiveAvgPool1d / AdaptiveAvgPool2d / AdaptiveAvgPool3d: Applies a 1D, 2D, or 3D adaptive average pooling over an input signal composed of several input planes.

Padding layers
- Reflection padding: Pads the input tensor using the reflection of the input boundary.
- Replication padding: Pads the input tensor using replication of the input boundary.
- Zero padding: Pads the input tensor boundaries with zero.
- Constant padding: Pads the input tensor boundaries with a constant value.

Other layer groups
- Non-linear activations (weighted sum, nonlinearity)
- DataParallel layers (multi-GPU, distributed)

Related notes
- Extending torch.func with autograd.Function
- CPU threading and TorchScript inference
- CUDA Automatic Mixed Precision examples
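The padding, convolution, and pooling modules listed above compose naturally inside a container module. A minimal sketch (layer sizes chosen arbitrarily for illustration):

```python
import torch
import torch.nn as nn

# Reflection-pad, convolve, then pool a batch of 2D inputs.
net = nn.Sequential(
    nn.ReflectionPad2d(1),            # pads H and W by 1 on each side: 32 -> 34
    nn.Conv2d(3, 8, kernel_size=3),   # 3 input planes -> 8 output planes: 34 -> 32
    nn.ReLU(),
    nn.MaxPool2d(2),                  # halves the spatial dimensions: 32 -> 16
    nn.AdaptiveAvgPool2d(1),          # adaptive pooling to a fixed 1x1 output
)

x = torch.randn(4, 3, 32, 32)         # (batch, channels, height, width)
out = net(x)
print(out.shape)                      # torch.Size([4, 8, 1, 1])
```

Note how AdaptiveAvgPool2d fixes the output size regardless of the input's spatial dimensions, which is why it is commonly placed just before a classifier head.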
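The lazy variants in the listing defer choosing in_channels until the first forward pass, reading it from input.size(1). A small sketch:

```python
import torch
import torch.nn as nn

# LazyConv2d infers in_channels from input.size(1) on first use.
conv = nn.LazyConv2d(out_channels=16, kernel_size=3)

x = torch.randn(2, 5, 28, 28)   # 5 input planes, never declared to the module
y = conv(x)                     # the first call materializes the weight

print(conv.in_channels)         # 5, inferred from the input
print(y.shape)                  # torch.Size([2, 16, 26, 26])
```

Until that first call the module's parameters are uninitialized, so optimizers should be constructed after a dummy forward pass has materialized the weights.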
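The global hooks mentioned above apply to every module rather than one instance. A sketch using the global forward hook to log output shapes across a network:

```python
import torch
import torch.nn as nn
from torch.nn.modules.module import register_module_forward_hook

shapes = []

def log_output(module, inputs, output):
    # Called after every module's forward pass; record the output shape.
    shapes.append((type(module).__name__, tuple(output.shape)))

# Global registration: the hook fires for all modules, not a single instance.
handle = register_module_forward_hook(log_output)

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
net(torch.randn(1, 4))

handle.remove()  # unregister the hook when done
print(shapes)
```

The per-instance equivalents (module.register_forward_hook and friends) have the same signature but attach to one module only.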