PyTorch 1.12: TorchArrow, Functional API for Modules and nvFuser, are now available

by Team PyTorch

We are excited to announce the release of PyTorch 1.12 (release note)! This release is composed of over 3124 commits from 433 contributors. Along with 1.12, we are releasing beta versions of AWS S3 Integration, PyTorch Vision Models on Channels Last on CPU, Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16, and the FSDP API. We want to sincerely thank our dedicated community for your contributions.

Summary:

  • Functional APIs to functionally apply module computation with a given set of parameters
  • Complex32 and Complex Convolutions in PyTorch
  • DataPipes from TorchData fully backward compatible with DataLoader
  • functorch with improved coverage for APIs
  • nvFuser a deep learning compiler for PyTorch
  • Changes to float32 matrix multiplication precision on Ampere and later CUDA hardware
  • TorchArrow, a new beta library for machine learning preprocessing over batch data

Frontend APIs

Introducing TorchArrow

We’ve got a new Beta release ready for you to try and use: TorchArrow. This is a library for machine learning preprocessing over batch data. It features a performant, Pandas-style, easy-to-use API to speed up your preprocessing workflows and development.

Currently, it provides a Python DataFrame interface with the following features:

  • High-performance CPU backend, vectorized and extensible User-Defined Functions (UDFs) with Velox
  • Seamless handoff with PyTorch or other model authoring, such as Tensor collation and easily plugging into PyTorch DataLoader and DataPipes
  • Zero copy for external readers via Arrow in-memory columnar format
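To give a flavor of the API, here is a minimal sketch of building and transforming a TorchArrow DataFrame. The column names are illustrative, and it assumes the torcharrow beta package is installed:

```python
import torcharrow as ta

# Build a DataFrame from Python lists (column names are illustrative).
df = ta.dataframe({"a": [1, 2, 3], "b": [10.0, 20.0, 30.0]})

# Columnar, vectorized arithmetic in a Pandas-like style.
c = df["a"] + df["b"]
print(df)
print(c)
```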

For more details, please find our 10-min tutorial, installation instructions, API documentation, and a prototype for data preprocessing in TorchRec.

(Beta) Functional API for Modules

PyTorch 1.12 introduces a new beta feature to functionally apply Module computation with a given set of parameters. Sometimes, the traditional PyTorch Module usage pattern that maintains a static set of parameters internally is too restrictive. This is often the case when implementing algorithms for meta-learning, where multiple sets of parameters may need to be maintained across optimizer steps.

The new torch.nn.utils.stateless.functional_call() API allows for:

  • Module computation with full flexibility over the set of parameters used
  • No need to reimplement your module in a functional way
  • Any parameter or buffer present in the module can be swapped with an externally-defined value for use in the call. Naming for referencing parameters / buffers follows the fully-qualified form in the module’s state_dict()

Example:


```python
import torch
from torch import nn
from torch.nn.utils.stateless import functional_call

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3, 3)
        self.bn = nn.BatchNorm1d(3)
        self.fc2 = nn.Linear(3, 3)

    def forward(self, x):
        return self.fc2(self.bn(self.fc1(x)))

m = MyModule()

# Define parameter / buffer values to use during module computation.
my_weight = torch.randn(3, 3, requires_grad=True)
my_bias = torch.tensor([1., 2., 3.], requires_grad=True)
params_and_buffers = {
    'fc1.weight': my_weight,
    'fc1.bias': my_bias,
    # Custom buffer values can be used too.
    'bn.running_mean': torch.randn(3),
}

# Apply module computation to the input with the specified parameters / buffers.
inp = torch.randn(5, 3)
output = functional_call(m, params_and_buffers, inp)
```

(Beta) Complex32 and Complex Convolutions in PyTorch

PyTorch today natively supports complex numbers, complex autograd, complex modules, and numerous complex operations, including linear algebra and Fast Fourier Transform (FFT) operators. Many libraries, including torchaudio and ESPnet, already make use of complex numbers in PyTorch, and PyTorch 1.12 further extends complex functionality with complex convolutions and the experimental complex32 (“complex half”) data type that enables half precision FFT operations. Due to bugs in the CUDA 11.3 package, we recommend using the CUDA 11.6 wheels if you are using complex numbers.
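As a minimal sketch of both features (it assumes a CUDA device, since half precision FFTs run on GPU, and complex32 is experimental):

```python
import torch

# Experimental complex32 ("complex half") FFT on CUDA.
x = torch.randn(64, dtype=torch.chalf, device="cuda")
spectrum = torch.fft.fft(x)

# Complex convolution: complex inputs and weights with conv1d.
inp = torch.randn(1, 2, 16, dtype=torch.cfloat)
weight = torch.randn(4, 2, 3, dtype=torch.cfloat)
out = torch.nn.functional.conv1d(inp, weight)
```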

(Beta) Forward-mode Automatic Differentiation

Forward-mode AD allows the computation of directional derivatives (or equivalently, Jacobian-vector products) eagerly in the forward pass. PyTorch 1.12 significantly improves the operator coverage for forward-mode AD. See our tutorial for more information.
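As a minimal sketch of computing a Jacobian-vector product eagerly with the torch.autograd.forward_ad API:

```python
import torch
import torch.autograd.forward_ad as fwAD

primal = torch.randn(3)
tangent = torch.randn(3)  # the direction of the directional derivative

with fwAD.dual_level():
    dual_input = fwAD.make_dual(primal, tangent)
    dual_output = torch.sin(dual_input)
    # Unpack the primal output and the Jacobian-vector product.
    y, jvp = fwAD.unpack_dual(dual_output)
```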

TorchData

BC DataLoader + DataPipe

`DataPipe` from TorchData is now fully backward compatible with the existing `DataLoader` regarding shuffle determinism and dynamic sharding in both multiprocessing and distributed environments. For more details, please check out the tutorial.
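For example, a minimal sketch of feeding a DataPipe to the existing DataLoader (it assumes torchdata is installed):

```python
from torch.utils.data import DataLoader
from torchdata.datapipes.iter import IterableWrapper

# Build a DataPipe graph; sharding_filter() marks where sharding happens.
dp = IterableWrapper(range(10)).shuffle().sharding_filter()

# DataLoader now controls shuffle determinism and dynamic sharding
# for DataPipes across workers.
dl = DataLoader(dp, batch_size=2, shuffle=True, num_workers=2)
for batch in dl:
    print(batch)
```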

(Beta) AWS S3 Integration

DataPipes based on AWSSDK have been integrated into TorchData. They provide the following features backed by the native AWSSDK:

  • Retrieve a list of URLs from each S3 bucket based on a prefix
    • Support timeout to prevent hanging indefinitely
    • Support specifying the S3 bucket region
  • Load data from S3 URLs
    • Support buffered and multi-part download
    • Support specifying the S3 bucket region

AWS native DataPipes are still in the beta phase, and we will keep tuning them to improve their performance.
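A minimal sketch of the new S3 DataPipes (the bucket name and region below are placeholders, and it assumes a torchdata build with the native AWSSDK extension):

```python
from torchdata.datapipes.iter import IterableWrapper, S3FileLister, S3FileLoader

# Placeholder bucket/prefix and region.
prefixes = IterableWrapper(["s3://my-bucket/training-data/"])
urls = S3FileLister(prefixes, region="us-west-2")
files = S3FileLoader(urls, region="us-west-2")

for url, stream in files:
    payload = stream.read()  # buffered, multi-part download under the hood
```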

(Prototype) DataLoader2

DataLoader2 is now available in prototype mode. We are introducing new ways to interact between DataPipes, the DataLoading API, and backends (aka ReadingServices). The feature is stable in terms of API, but not yet functionally complete. We welcome early adopters, feedback, and potential contributors.


For more details, please check out the link.

functorch

Inspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples of these include:

  • running ensembles of models on a single machine
  • efficiently computing Jacobians and Hessians
  • computing per-sample-gradients (or other per-sample quantities)
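For example, per-sample gradients (the last item above) can be computed by composing grad and vmap; a minimal sketch, assuming functorch 0.2.0 is installed:

```python
import torch
from functorch import make_functional, vmap, grad

model = torch.nn.Linear(3, 1)
fmodel, params = make_functional(model)

def loss_fn(params, x, y):
    return ((fmodel(params, x) - y) ** 2).mean()

x = torch.randn(8, 3)  # a batch of 8 samples
y = torch.randn(8, 1)

# grad differentiates w.r.t. params; vmap maps over the batch dimension
# of x and y (but not params), yielding one gradient per sample.
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0, 0))(params, x, y)
```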

We’re excited to announce functorch 0.2.0 with a number of improvements and new experimental features.

Significantly improved coverage

We significantly improved coverage for functorch.jvp (our forward-mode autodiff API) and other APIs that rely on it (functorch.{jacfwd, hessian}).

(Prototype) functorch.experimental.functionalize

Given a function f, functionalize(f) returns a new function without mutations (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use make_fx(functionalize(f)) to construct a mutation-free trace of a PyTorch function. To learn more, please see the documentation.
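A minimal sketch of what this looks like in practice:

```python
import torch
from functorch import make_fx
from functorch.experimental import functionalize

def f(x):
    y = x.clone()
    y.add_(1)  # in-place mutation
    return y

# The resulting trace contains no in-place operations.
traced = make_fx(functionalize(f))(torch.randn(3))
print(traced.code)
```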

For more details, please see our installation instructions, documentation, tutorials, and release notes.

Performance Improvements

Introducing nvFuser, a deep learning compiler for PyTorch

In PyTorch 1.12, TorchScript is updating its default fuser (for Volta and later CUDA accelerators) to nvFuser, which supports a wider range of operations and is faster than NNC, the previous fuser for CUDA devices. A soon-to-be-published blog post will elaborate on nvFuser and show how it speeds up training on a variety of networks.

See the nvFuser documentation for more details on usage and debugging.
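Since nvFuser is now the default TorchScript fuser on Volta and later GPUs, scripting a function is enough to use it; a minimal sketch, assuming a CUDA device:

```python
import torch

@torch.jit.script
def bias_gelu(x, bias):
    return torch.nn.functional.gelu(x + bias)

x = torch.randn(1024, 1024, device="cuda")
bias = torch.randn(1024, device="cuda")

# The profiling executor fuses the kernel after a few warm-up runs.
for _ in range(3):
    out = bias_gelu(x, bias)
```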


Changes to float32 matrix multiplication precision on Ampere and later CUDA hardware

PyTorch supports a variety of “mixed precision” techniques, like the torch.amp (Automatic Mixed Precision) module and performing float32 matrix multiplications using the TensorFloat32 datatype on Ampere and later CUDA hardware for faster internal computations. In PyTorch 1.12 we’re changing the default behavior of float32 matrix multiplications to always use full IEEE fp32 precision, which is more precise but slower than using the TensorFloat32 datatype for internal computation. For devices with a particularly high ratio of TensorFloat32 to float32 throughput, such as A100, this change in defaults can result in a large slowdown.

If you’ve been using TensorFloat32 matrix multiplications, you can continue to do so by setting torch.backends.cuda.matmul.allow_tf32 = True, which has been supported since PyTorch 1.7. Starting in PyTorch 1.12, the new matmul precision API can be used, too: torch.set_float32_matmul_precision("highest" | "high" | "medium").

To reiterate, PyTorch’s new default is “highest” precision for all device types. We think this provides better consistency across device types for matrix multiplications. Documentation for the new precision API can be found here. Setting the “high” or “medium” precision types will enable TensorFloat32 on Ampere and later CUDA devices. If you’re updating to PyTorch 1.12 then to preserve the current behavior and faster performance of matrix multiplications on Ampere devices, set precision to “high”.
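In code, the two knobs described above look like this:

```python
import torch

# The new PyTorch 1.12 default: full IEEE fp32 precision.
torch.set_float32_matmul_precision("highest")

# Opt back into TensorFloat32 matmuls on Ampere and later devices:
torch.set_float32_matmul_precision("high")
# ...or use the flag supported since PyTorch 1.7:
torch.backends.cuda.matmul.allow_tf32 = True
```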

Using mixed precision techniques is essential for training many modern deep learning networks efficiently, and if you’re already using torch.amp this change is unlikely to affect you. If you’re not familiar with mixed precision training then see our soon to be published “What Every User Should Know About Mixed Precision Training in PyTorch” blogpost.

(Beta) Accelerating PyTorch Vision Models with Channels Last on CPU

Memory format has a significant impact on performance when running vision models; Channels Last is generally more favorable due to better data locality. PyTorch 1.12 covers the fundamental concepts of memory formats and demonstrates the performance benefits of using Channels Last for popular PyTorch vision models on Intel® Xeon® Scalable processors. This release:

  • Enables Channels Last memory format support for the commonly used operators in the CV domain on CPU, applicable for both inference and training
  • Provides native level optimization on Channels Last kernels from ATen, applicable for both AVX2 and AVX512
  • Delivers 1.3x to 1.8x inference performance gain over Channels First for TorchVision models on Intel® Xeon® Ice Lake (or newer) CPUs
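Converting a model and its inputs to Channels Last is a one-line change each; a minimal sketch using a TorchVision model (the model choice is illustrative):

```python
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
model = model.to(memory_format=torch.channels_last)  # convert weights

x = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)
with torch.no_grad():
    out = model(x)  # Channels Last kernels are used on CPU
```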

(Beta) Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16

Reduced-precision numeric formats like bfloat16 improve PyTorch performance across multiple deep learning training workloads. PyTorch 1.12 includes the latest software enhancements for bfloat16, which apply to a broader scope of user scenarios and showcase even higher performance gains. The main improvements include:

  • 2x hardware compute throughput vs. float32 with the new bfloat16 native instruction VDPBF16PS, introduced on Intel® Xeon® Cooper Lake CPUs
  • 1/2 memory footprint of float32, faster speed for memory bandwidth intensive operators
  • 1.4x to 2.2x inference performance gain over float32 for TorchVision models on Intel® Xeon® Cooper Lake (or newer) CPUs
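A minimal sketch of bfloat16 inference on CPU via autocast (the model choice is illustrative):

```python
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)

# Run the forward pass in bfloat16 on CPU.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
```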

(Prototype) Introducing Accelerated PyTorch Training on Mac

With the PyTorch 1.12 release, developers and researchers can now take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Accelerated GPU training is enabled using Apple’s Metal Performance Shaders (MPS) as a backend. The benefits include performance speedup from accelerated GPU training and the ability to train larger networks or batch sizes locally. Learn more here.
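Using the new backend is a device change away; a minimal sketch, assuming a PyTorch 1.12 build with MPS enabled:

```python
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(3, 1).to(device)
x = torch.randn(8, 3, device=device)
out = model(x)  # runs on the Apple silicon GPU when MPS is available
```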


[Chart: Accelerated GPU training and evaluation speedups over CPU-only (times faster)]

Alongside the new MPS device support, the M1 binaries for Core and Domain libraries that have been available for the last few releases are now an official prototype feature. These binaries can be used to run PyTorch natively on Apple Silicon.

(Prototype) BetterTransformer: Fastpath execution for Transformer Encoder Inference

PyTorch now supports CPU and GPU fastpath implementations (“BetterTransformer”) for several Transformer Encoder modules, including TransformerEncoder, TransformerEncoderLayer, and MultiHeadAttention (MHA). The BetterTransformer fastpath is consistently faster – 2x for many common execution scenarios, depending on model and input characteristics. The new BetterTransformer-enabled modules are API compatible with previous releases of the PyTorch Transformer API and will accelerate existing models if they meet fastpath execution requirements; they can also load models trained with previous versions of PyTorch. PyTorch 1.12 includes:

  • BetterTransformer integration for Torchtext’s pretrained RoBERTa and XLM-R models
  • Torchtext, which now builds on the PyTorch Transformer API
  • Fastpath execution for improved performance by reducing execution overheads with fused kernels that combine multiple operators into a single kernel
  • Option to achieve additional speedups by taking advantage of data sparsity during the processing of padding tokens in natural-language processing (by setting enable_nested_tensor=True when creating a TransformerEncoder)
  • Diagnostics to help users understand why fastpath execution did not occur
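A minimal sketch of an encoder eligible for fastpath execution, including the enable_nested_tensor option mentioned above (the dimensions are illustrative):

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
# enable_nested_tensor=True exploits sparsity from padding tokens.
encoder = nn.TransformerEncoder(layer, num_layers=6, enable_nested_tensor=True)
encoder.eval()  # fastpath applies to inference

src = torch.randn(32, 10, 256)                        # (batch, seq, feature)
padding_mask = torch.zeros(32, 10, dtype=torch.bool)  # True marks padding
with torch.inference_mode():
    out = encoder(src, src_key_padding_mask=padding_mask)
```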


Distributed

(Beta) Fully Sharded Data Parallel (FSDP) API

FSDP API helps easily scale large model training by sharding a model’s parameters, gradients and optimizer states across data parallel workers while maintaining the simplicity of data parallelism. The prototype version was released in PyTorch 1.11 with a minimum set of features that helped scaling tests of models with up to 1T parameters.

In this beta release, the FSDP API adds the following features to support various production workloads. Highlights of the newly added features include:

  1. Universal sharding strategy API - Users can easily change between sharding strategies with a single line change, and thus compare and use DDP (only data sharding), FSDP (full model and data sharding), or Zero2 (only sharding of optimizer and gradients) to optimize memory and performance for their specific training needs
  2. Fine grained mixed precision policies - Users can specify a mix of half and full data types (bfloat16, fp16 or fp32) for model parameters, gradient communication, and buffers via mixed precision policies. Models are automatically saved in fp32 to allow for maximum portability
  3. Transformer auto wrapping policy - allows for optimal wrapping of Transformer-based models by registering the model’s layer class, thus accelerating training performance
  4. Faster model initialization using device_id init - initialization is performed in a streaming fashion to avoid OOM issues and optimize init performance vs CPU init
  5. Rank0 streaming for full model saving of larger models - Fully sharded models can be saved by all GPUs streaming their shards to the rank 0 GPU, and the model is built in full state on the rank 0 CPU for saving

For more details and example code, please check out the documentation and the tutorial.
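A minimal sketch tying several of these features together (it assumes the default process group has already been initialized, e.g. via torchrun, and the model here is a stand-in):

```python
import torch
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    MixedPrecision,
    ShardingStrategy,
)

model = torch.nn.Linear(1024, 1024).cuda()
fsdp_model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.SHARD_GRAD_OP,   # Zero2-style sharding
    mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
    device_id=torch.cuda.current_device(),              # streaming init on GPU
)
```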

Thanks for reading! If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.

Cheers!

Team PyTorch

