
NVIDIA GeForce RTX 4080 SUPER

The NVIDIA GeForce RTX 4080 SUPER is a desktop graphics card from NVIDIA. It has a boost clock of 2505 MHz and a memory clock of 1400 MHz. Its specifications, along with benchmark results, are presented in more detail below.

Desktop GPU rank: 24

Basic

Brand
NVIDIA
Platform
Desktop
Model Name
GeForce RTX 4080 SUPER
Generation
GeForce 40
Base Clock
2205 MHz
Boost Clock
2505 MHz
Shading Units
10240
The most fundamental processing unit is the streaming processor (SP), where specific instructions and tasks are executed. GPUs perform parallel computing, meaning many SPs work simultaneously to process tasks.
SM Count
80
Multiple streaming processors (SPs), together with other resources such as warp schedulers, registers, and shared memory, form a streaming multiprocessor (SM), also referred to as a GPU's major core. The SM can be considered the heart of the GPU, similar to a CPU core, with registers and shared memory being scarce resources within it.
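
As a quick sanity check, a minimal Python sketch using only the two figures above shows how they relate; on Ada Lovelace each SM contains 128 shading units:

    shading_units = 10240              # from the table above
    sm_count = 80
    print(shading_units // sm_count)   # 128 shading units (CUDA cores) per SM
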
L1 Cache
128 KB (per SM)
L2 Cache
64 MB
Bus Interface
PCIe 4.0 x16
TDP
340 W

Memory Specifications

Memory Size
16 GB
Memory Type
GDDR6X
Memory Bus
256 bit
The memory bus width is the number of bits of data the video memory can transfer in a single clock cycle. The wider the bus, the more data can be moved per cycle, making it one of the crucial parameters of video memory. At similar memory frequencies, the bus width determines the memory bandwidth.
Memory Clock
1400 MHz
Bandwidth
716.8 GB/s
Memory bandwidth is the data transfer rate between the graphics chip and the video memory, measured in bytes per second: memory bandwidth = effective memory data rate × memory bus width / 8. For DDR-type memories such as GDDR6X, the effective data rate is a multiple of the listed memory clock.
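
As a worked example of the formula above, a minimal Python sketch; the 16x effective-rate multiplier for GDDR6X is an assumption based on how this memory type is commonly specified, not a figure from the table:

    memory_clock_mhz = 1400                # base memory clock from the table above
    bus_width_bits = 256                   # memory bus width from the table above
    # GDDR6X is assumed to transfer 16 bits per pin per base clock cycle,
    # giving an effective data rate of 22.4 Gbps per pin.
    effective_gbps = memory_clock_mhz * 16 / 1000
    print(effective_gbps * bus_width_bits / 8)   # 716.8, matching 716.8 GB/s above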

Theoretical Performance

Pixel Rate
280.6 GPixel/s
Pixel fill rate is the number of pixels a graphics processing unit (GPU) can render per second, measured in MPixel/s (millions of pixels per second) or GPixel/s (billions of pixels per second). It is the most commonly used metric for evaluating a graphics card's pixel processing performance.
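
The figure above can be reproduced from the ROP count and boost clock. A minimal sketch, assuming 112 ROPs; the ROP count is not listed in this table and is taken from commonly published RTX 4080 SUPER specs:

    rops = 112                     # assumed ROP count; not listed in the table above
    boost_clock_ghz = 2.505        # boost clock from the table above
    print(rops * boost_clock_ghz)  # 280.56, ~280.6 GPixel/s as listed
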
Texture Rate
801.6 GTexel/s
Texture fill rate is the number of texture map elements (texels) a GPU can map to pixels per second.
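
Similarly, the texture rate follows from the texture unit (TMU) count, here assumed to be 320 as in commonly published specs for this card:

    tmus = 320                     # assumed texture unit count; not listed above
    boost_clock_ghz = 2.505
    print(tmus * boost_clock_ghz)  # 801.6 GTexel/s, matching the value above
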
FP16 (half)
51.30 TFLOPS
An important metric for measuring GPU performance is floating-point computing capability. Half-precision (16-bit) floating-point numbers are used for applications such as machine learning, where lower precision is acceptable; single-precision (32-bit) numbers serve common multimedia and graphics workloads; and double-precision (64-bit) numbers are required for scientific computing that demands a wide numeric range and high accuracy.
FP64 (double)
801.6 GFLOPS
FP32 (float)
51.285 TFLOPS
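
A minimal sketch of how these three figures relate, assuming the usual 2 FLOPs per shading unit per cycle (one fused multiply-add) and the 1:1 FP16 and 1/64 FP64 rate ratios of consumer Ada Lovelace GPUs; the ratios are assumptions, not values from the table:

    shading_units = 10240          # from the table above
    boost_clock_ghz = 2.505
    fp32_tflops = shading_units * 2 * boost_clock_ghz / 1000  # 2 FLOPs per unit per cycle (FMA)
    print(round(fp32_tflops, 2))              # ~51.3 TFLOPS; FP16 runs at the same 1:1 rate on Ada
    print(round(fp32_tflops / 64 * 1000, 1))  # ~801.6 GFLOPS for FP64, at 1/64 the FP32 rate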

Miscellaneous

Vulkan Version
1.3
Vulkan is a cross-platform graphics and compute API from the Khronos Group, offering high performance and low CPU overhead. It lets developers control the GPU directly, reduces rendering overhead, and scales across multiple threads and CPU cores.
OpenCL Version
3.0

Benchmarks

FP32 (float)
51.285 TFLOPS

3DMark Time Spy
28395

Vulkan
219989

OpenCL
254268

Compared to Other GPUs

Better than 66% of GPUs released in the past year
Better than 82% of GPUs released in the past 3 years
Better than 95% of all GPUs

SiliconCat Rating

Ranks 24th among desktop GPUs on our website
Ranks 40th among all GPUs on our website

FP32 (float)
GeForce RTX 4090D (NVIDIA, December 2023): 73.518 TFLOPS
Radeon RX 7900 XTX (AMD, November 2022): 61.402 TFLOPS
GeForce RTX 4080 SUPER (this card): 51.285 TFLOPS
Instinct MI250X (AMD, November 2021): 46.908 TFLOPS
41.137 TFLOPS

3DMark Time Spy
GeForce RTX 4090 (NVIDIA, September 2022): 36957
Radeon RX 6800M (AMD, May 2021): 11457
9099
GeForce RTX 2070 Mobile (NVIDIA, January 2019): 7229

Vulkan
GeForce RTX 4090 (NVIDIA, September 2022): 254749
GeForce GTX 1080 Ti (NVIDIA, March 2017): 83205
Radeon RX 6550M (AMD, January 2023): 54373
Radeon R9 M295X (AMD, November 2014): 29028

OpenCL
L40S (NVIDIA, October 2022): 362331
Arc A770M (Intel, January 2022): 94927
Radeon RX 5700 (AMD, July 2019): 66428
GeForce GTX 1070 (NVIDIA, June 2016): 46137