NVIDIA L40G

NVIDIA L40G is a professional graphics accelerator from NVIDIA, launched in October 2022. The GPU has a boost clock of 2475MHz and a memory clock of 2250MHz. Its specifications, as well as benchmark results, are presented in more detail below.

Basic

Brand
NVIDIA
Platform
Professional
Launch Date
October 2022
Model Name
L40G
Generation
Tesla Ada
Base Clock
1005MHz
Boost Clock
2475MHz
Shading Units
18176
The most fundamental processing unit is the Streaming Processor (SP), where specific instructions and tasks are executed. GPUs perform parallel computing, which means multiple SPs work simultaneously to process tasks.
SM Count
142
Multiple Streaming Processors (SPs), along with other resources, form a Streaming Multiprocessor (SM), which is also referred to as a GPU's major core. These additional resources include components such as warp schedulers, registers, and shared memory. The SM can be considered the heart of the GPU, similar to a CPU core, with registers and shared memory being scarce resources within the SM.
Transistors
76,300 million
RT Cores
142
Tensor Cores
568
Tensor Cores are specialized processing units designed specifically for deep learning, providing higher training and inference performance than standard FP32 computation. They enable rapid computations in areas such as computer vision, natural language processing, speech recognition, text-to-speech conversion, and personalized recommendations. The two most notable applications of Tensor Cores are DLSS (Deep Learning Super Sampling) and the AI Denoiser for noise reduction.
TMUs
568
Texture Mapping Units (TMUs) are components of the GPU capable of rotating, scaling, and distorting binary images, and then placing them as textures onto any plane of a given 3D model. This process is called texture mapping.
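These unit counts all follow from the SM count: on Ada Lovelace, each SM carries 128 shading units, 4 Tensor Cores, 4 TMUs, and 1 RT Core. A minimal Python sketch verifying the figures above:

# Per-SM unit counts on the Ada Lovelace architecture
SM_COUNT = 142

shading_units = SM_COUNT * 128  # 128 shading units (SPs) per SM
tensor_cores = SM_COUNT * 4     # 4 Tensor Cores per SM
tmus = SM_COUNT * 4             # 4 TMUs per SM
rt_cores = SM_COUNT * 1         # 1 RT Core per SM

print(shading_units, tensor_cores, tmus, rt_cores)
# 18176 568 568 142 -- matching the table above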
L1 Cache
128 KB (per SM)
L2 Cache
48MB
Bus Interface
PCIe 4.0 x16
Foundry
TSMC
Process Size
4 nm
Architecture
Ada Lovelace
TDP
300W

Memory Specifications

Memory Size
24GB
Memory Type
GDDR6
Memory Bus
384bit
The memory bus width refers to the number of bits of data that the video memory can transfer within a single clock cycle. The larger the bus width, the greater the amount of data that can be transmitted instantaneously, making it one of the crucial parameters of video memory. Memory bandwidth is calculated as: Memory Bandwidth = Effective Memory Clock × Bus Width / 8. Therefore, when memory clocks are similar, the bus width determines the memory bandwidth.
Memory Clock
2250MHz
Bandwidth
864.0 GB/s
Memory bandwidth refers to the data transfer rate between the graphics chip and the video memory. It is measured in bytes per second and uses the formula given above: memory bandwidth = effective memory clock × bus width / 8.
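A quick sketch of the bandwidth formula above, assuming GDDR6's 8 data transfers per memory-clock cycle (2250MHz listed clock → 18 Gbps effective):

# Memory bandwidth = effective data rate × bus width / 8
memory_clock_mhz = 2250                            # listed memory clock
effective_rate_gbps = memory_clock_mhz * 8 / 1000  # GDDR6: 8 bits per pin per clock
bus_width_bits = 384

bandwidth_gb_s = effective_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s} GB/s")  # 864.0 GB/s, matching the listed figure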

Theoretical Performance

Pixel Rate
475.2 GPixel/s
Pixel fill rate refers to the number of pixels a graphics processing unit (GPU) can render per second, measured in MPixels/s (million pixels per second) or GPixels/s (billion pixels per second). It is the most commonly used metric for evaluating the pixel-processing performance of a graphics card.
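Pixel fill rate is simply the ROP count (192, listed under Miscellaneous below) multiplied by the boost clock; a quick check:

rops = 192               # render output units
boost_clock_ghz = 2.475  # boost clock in GHz

pixel_rate_gpixel_s = rops * boost_clock_ghz
print(f"{pixel_rate_gpixel_s} GPixel/s")  # 475.2 GPixel/s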
Texture Rate
1406 GTexel/s
Texture fill rate refers to the number of texture map elements (texels) that a GPU can map to pixels in a single second.
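Likewise, texture fill rate is the TMU count multiplied by the boost clock:

tmus = 568               # texture mapping units
boost_clock_ghz = 2.475  # boost clock in GHz

texture_rate_gtexel_s = tmus * boost_clock_ghz
print(f"{texture_rate_gtexel_s:.0f} GTexel/s")  # ~1406 GTexel/s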
FP16 (half)
89.97 TFLOPS
An important metric for measuring GPU performance is floating-point computing capability. Half-precision floating-point numbers (16-bit) are used for applications like machine learning, where lower precision is acceptable. Single-precision floating-point numbers (32-bit) are used for common multimedia and graphics processing tasks, while double-precision floating-point numbers (64-bit) are required for scientific computing that demands a wide numeric range and high accuracy.
FP64 (double)
1406 GFLOPS
Double-precision floating-point numbers (64-bit) are required for scientific computing that demands a wide numeric range and high accuracy.
FP32 (float)
89.942 TFLOPS
Single-precision floating-point numbers (32-bit) are used for common multimedia and graphics processing tasks.
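All three figures follow from the shader count and boost clock, assuming 2 floating-point operations (one fused multiply-add) per shading unit per clock, with Ada running FP16 at the same rate as FP32 and FP64 at 1/64 the FP32 rate; small differences from the listed values come from clock rounding:

shading_units = 18176
boost_clock_ghz = 2.475

fp32_tflops = 2 * shading_units * boost_clock_ghz / 1000  # one FMA = 2 ops per clock
fp16_tflops = fp32_tflops               # Ada: FP16 at 1:1 with FP32
fp64_gflops = fp32_tflops / 64 * 1000   # Ada: FP64 at 1/64 the FP32 rate

print(f"FP32: {fp32_tflops:.2f} TFLOPS")  # ~89.97 TFLOPS
print(f"FP16: {fp16_tflops:.2f} TFLOPS")  # ~89.97 TFLOPS
print(f"FP64: {fp64_gflops:.0f} GFLOPS")  # ~1406 GFLOPS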

Miscellaneous

Vulkan Version
1.3
Vulkan is a cross-platform graphics and compute API from the Khronos Group, offering high performance and low CPU overhead. It lets developers control the GPU directly, reduces rendering overhead, and supports multi-threading and multi-core processors.
OpenCL Version
3.0
OpenGL
4.6
DirectX
12 Ultimate (12_2)
CUDA
8.9
Power Connectors
1x 16-pin
ROPs
192
The Raster Operations Pipeline (ROPs) is primarily responsible for handling lighting and reflection calculations in games, as well as managing effects like anti-aliasing (AA), high resolution, smoke, and fire. The more demanding the anti-aliasing and lighting effects in a game, the higher the performance requirements for the ROPs; otherwise, the result may be a sharp drop in frame rate.
Shader Model
6.6
Suggested PSU
700W

Compared to Other GPUs

SiliconCat Rating
11
Ranks 11th among all GPUs on our website.
FP32 (float)
Instinct MI300X (AMD, December 2023): 163.351 TFLOPS
L40G (NVIDIA, October 2022): 89.942 TFLOPS
RTX 5880 Ada Generation (NVIDIA, January 2024): 71.789 TFLOPS
Radeon PRO W7900 (AMD, April 2023): 61.302 TFLOPS
H100 PCIe (NVIDIA, March 2022): 51.205 TFLOPS