NVIDIA / apex

A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch

8,926 stars
1,514 forks
756 issues
Python · CUDA · C++

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing NVIDIA/apex in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind-ai.vercel.app/repo/NVIDIA/apex)

Repository Summary (README)

Introduction

This repository holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in PyTorch. Some of the code here will be included in upstream PyTorch eventually. The intent of Apex is to make up-to-date utilities available to users as quickly as possible.

Installation

Each apex.contrib module requires one or more install options other than --cpp_ext and --cuda_ext. Note that contrib modules do not necessarily support stable PyTorch releases; some of them may only be compatible with nightlies.
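
For example, to build a single contrib module such as xentropy from source, combine its environment variable (listed in the options table below) with the core extension flags:

APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_XENTROPY=1 pip install -v --no-build-isolation .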

Containers

NVIDIA PyTorch Containers are available on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch. The containers come with all the custom extensions available at the moment.
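
For example, pulling and running a container might look like the following (the tag is illustrative; check the NGC release notes for current versions):

docker pull nvcr.io/nvidia/pytorch:25.01-py3
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3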

See the NGC documentation for details such as:

  • how to pull a container
  • how to run a pulled container
  • release notes

From Source

To install Apex from source, we recommend using the nightly PyTorch build obtainable from https://github.com/pytorch/pytorch.

The latest stable release, obtainable from https://pytorch.org, should also work.

We recommend installing Ninja to make compilation faster.
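
Ninja is available from PyPI:

pip install ninja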

Linux

For performance and full functionality, we recommend installing Apex with CUDA and C++ extensions using environment variables:

Using Environment Variables (Recommended)

git clone https://github.com/NVIDIA/apex
cd apex
# Build with core extensions (cpp and cuda)
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --no-build-isolation .

# To build with additional extensions, specify them with environment variables
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_FAST_MULTIHEAD_ATTN=1 APEX_FUSED_CONV_BIAS_RELU=1 pip install -v --no-build-isolation .

# To build all contrib extensions at once
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_ALL_CONTRIB_EXT=1 pip install -v --no-build-isolation .

To reduce the build time, parallel building can be enabled:

NVCC_APPEND_FLAGS="--threads 4" APEX_PARALLEL_BUILD=8 APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --no-build-isolation .

When CPU cores or memory are limited, the --parallel option is generally preferred over --threads. See pull #1882 for more details.

Using Command-Line Flags (Legacy Method)

The traditional command-line flags are still supported:

# Using pip config-settings (pip >= 23.1)
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./

# For older pip versions
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./

# To build with additional extensions
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_multihead_attn" ./

Python-Only Build

Apex also supports a Python-only build via:

pip install -v --disable-pip-version-check --no-build-isolation --no-cache-dir ./

A Python-only build omits:

  • Fused kernels required to use apex.optimizers.FusedAdam.
  • Fused kernels required to use apex.normalization.FusedLayerNorm and apex.normalization.FusedRMSNorm.
  • Fused kernels that improve the performance and numerical stability of apex.parallel.SyncBatchNorm.
  • Fused kernels that improve the performance of apex.parallel.DistributedDataParallel and apex.amp. DistributedDataParallel, amp, and SyncBatchNorm will still be usable, but they may be slower; the sketch below shows a typical fallback pattern.
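
A minimal sketch of that fallback, assuming apex is installed alongside a CUDA build of PyTorch (the layer sizes are illustrative):

# Sketch: prefer apex's fused kernels when the C++/CUDA extensions were built,
# falling back to the stock PyTorch equivalents otherwise.
import torch
import torch.nn as nn

try:
    from apex.optimizers import FusedAdam            # requires --cuda_ext
    from apex.normalization import FusedLayerNorm    # requires --cuda_ext
    HAVE_FUSED = True
except ImportError:
    HAVE_FUSED = False

model = nn.Linear(1024, 1024).cuda()
norm = (FusedLayerNorm(1024) if HAVE_FUSED else nn.LayerNorm(1024)).cuda()
optim_cls = FusedAdam if HAVE_FUSED else torch.optim.AdamW
optimizer = optim_cls(model.parameters(), lr=1e-3)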

[Experimental] Windows

pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" . may work if you were able to build PyTorch from source on your system. A Python-only build via pip install -v --no-cache-dir . is more likely to work.
If you installed PyTorch in a Conda environment, make sure to install Apex in that same environment.

Custom C++/CUDA Extensions and Install Options

If a module's requirements are not met, it will not be built.

| Module Name | Environment Variable | Install Option | Misc |
| --- | --- | --- | --- |
| apex_C | APEX_CPP_EXT=1 | --cpp_ext | |
| amp_C | APEX_CUDA_EXT=1 | --cuda_ext | |
| syncbn | APEX_CUDA_EXT=1 | --cuda_ext | |
| fused_layer_norm_cuda | APEX_CUDA_EXT=1 | --cuda_ext | apex.normalization |
| mlp_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| scaled_upper_triang_masked_softmax_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| generic_scaled_masked_softmax_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| scaled_masked_softmax_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| fused_weight_gradient_mlp_cuda | APEX_CUDA_EXT=1 | --cuda_ext | Requires CUDA >= 11 |
| permutation_search_cuda | APEX_PERMUTATION_SEARCH=1 | --permutation_search | apex.contrib.sparsity |
| bnp | APEX_BNP=1 | --bnp | apex.contrib.groupbn |
| xentropy | APEX_XENTROPY=1 | --xentropy | apex.contrib.xentropy |
| focal_loss_cuda | APEX_FOCAL_LOSS=1 | --focal_loss | apex.contrib.focal_loss |
| fused_index_mul_2d | APEX_INDEX_MUL_2D=1 | --index_mul_2d | apex.contrib.index_mul_2d |
| fused_adam_cuda | APEX_DEPRECATED_FUSED_ADAM=1 | --deprecated_fused_adam | apex.contrib.optimizers |
| fused_lamb_cuda | APEX_DEPRECATED_FUSED_LAMB=1 | --deprecated_fused_lamb | apex.contrib.optimizers |
| fast_layer_norm | APEX_FAST_LAYER_NORM=1 | --fast_layer_norm | apex.contrib.layer_norm; different from fused_layer_norm |
| fmhalib | APEX_FMHA=1 | --fmha | apex.contrib.fmha |
| fast_multihead_attn | APEX_FAST_MULTIHEAD_ATTN=1 | --fast_multihead_attn | apex.contrib.multihead_attn |
| transducer_joint_cuda | APEX_TRANSDUCER=1 | --transducer | apex.contrib.transducer |
| transducer_loss_cuda | APEX_TRANSDUCER=1 | --transducer | apex.contrib.transducer |
| cudnn_gbn_lib | APEX_CUDNN_GBN=1 | --cudnn_gbn | Requires cuDNN >= 8.5; apex.contrib.cudnn_gbn |
| peer_memory_cuda | APEX_PEER_MEMORY=1 | --peer_memory | apex.contrib.peer_memory |
| nccl_p2p_cuda | APEX_NCCL_P2P=1 | --nccl_p2p | Requires NCCL >= 2.10; apex.contrib.nccl_p2p |
| fast_bottleneck | APEX_FAST_BOTTLENECK=1 | --fast_bottleneck | Requires peer_memory_cuda and nccl_p2p_cuda; apex.contrib.bottleneck |
| fused_conv_bias_relu | APEX_FUSED_CONV_BIAS_RELU=1 | --fused_conv_bias_relu | Requires cuDNN >= 8.4; apex.contrib.conv_bias_relu |
| distributed_adam_cuda | APEX_DISTRIBUTED_ADAM=1 | --distributed_adam | apex.contrib.optimizers |
| distributed_lamb_cuda | APEX_DISTRIBUTED_LAMB=1 | --distributed_lamb | apex.contrib.optimizers |
| _apex_nccl_allocator | APEX_NCCL_ALLOCATOR=1 | --nccl_allocator | Requires NCCL >= 2.19; apex.contrib.nccl_allocator |
| _apex_gpu_direct_storage | APEX_GPU_DIRECT_STORAGE=1 | --gpu_direct_storage | apex.contrib.gpu_direct_storage |

You can also build all contrib extensions at once by setting APEX_ALL_CONTRIB_EXT=1.
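
Each name in the Module Name column is a Python extension module, so a quick sanity check after installing is to try importing it directly (a minimal check; module names are taken from the table above):

python -c "import amp_C"                    # built by APEX_CUDA_EXT=1 / --cuda_ext
python -c "import fused_layer_norm_cuda"    # an ImportError means the extension was not built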