GPU Models
Available GPUs
Browse GPU models and find detailed specs, pricing, and capabilities. Deploy in minutes with per-second billing.
Best Value · Ada Lovelace Architecture · Lowest cost per hour for GPU compute
NVIDIA GeForce RTX 4090 (AD102) · $0.35/hr per GPU

- VRAM: 24 GB GDDR6X
- CUDA Cores: 16,384
- Memory Bandwidth: 1,008 GB/s
- Tensor Cores: 512

Proven GPU built on NVIDIA Ada Lovelace architecture with 24 GB GDDR6X memory for AI, deep learning, and rendering workloads.
Most Popular · Blackwell Architecture · Top pick for AI training and inference
NVIDIA GeForce RTX 5090 (GB202) · $0.48/hr per GPU

- VRAM: 32 GB GDDR7
- CUDA Cores: 21,760
- Memory Bandwidth: 1,792 GB/s
- Tensor Cores: 680

Next-generation GPU built on NVIDIA Blackwell architecture with 32 GB GDDR7 memory for AI workloads, machine learning, and high-performance computing.
Most Memory · Blackwell Architecture · 96 GB for large models and datasets
NVIDIA RTX PRO 6000 (GB202) · $0.80/hr per GPU

- VRAM: 96 GB GDDR7 ECC
- CUDA Cores: 24,064
- Memory Bandwidth: 1,792 GB/s
- Tensor Cores: 752

Professional-grade GPU with 96 GB GDDR7 ECC memory on NVIDIA Blackwell architecture for large-scale AI training, deep learning, and enterprise workloads.
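Since VRAM is often the deciding factor between these models, a rough fit check can be done from parameter count and precision. A minimal sketch, assuming about one byte per parameter at int8 (two at fp16) and an illustrative 80% headroom factor for activations and KV cache; the helper name and constants are assumptions, not platform guidance:

```python
# Rough VRAM-fit estimate for inference: bytes per parameter by precision,
# plus headroom left for activations and KV cache. Numbers are illustrative.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def fits_in_vram(params_billion: float, precision: str, vram_gb: float,
                 headroom: float = 0.8) -> bool:
    """Return True if the model weights fit within `headroom` of total VRAM."""
    weight_gb = params_billion * BYTES_PER_PARAM[precision]
    return weight_gb <= vram_gb * headroom

# A 7B model in fp16 (~14 GB of weights) fits on the 24 GB RTX 4090;
# a 70B model in fp16 (~140 GB) does not fit even on the 96 GB RTX PRO 6000,
# but quantized to int4 (~35 GB) it does.
print(fits_in_vram(7, "fp16", 24))    # True
print(fits_in_vram(70, "fp16", 96))   # False
print(fits_in_vram(70, "int4", 96))   # True
```

The headroom factor matters in practice: weights alone understate real usage once batch size and context length grow.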
Compare GPU Models
Side-by-side comparison of all available GPU models.
| Specification | RTX 4090 | RTX 5090 | RTX PRO 6000 |
|---|---|---|---|
| Architecture | Ada Lovelace | Blackwell | Blackwell |
| VRAM | 24 GB GDDR6X | 32 GB GDDR7 | 96 GB GDDR7 ECC |
| CUDA Cores | 16,384 | 21,760 | 24,064 |
| Tensor Cores | 512 (4th Generation) | 680 (5th Generation) | 752 (5th Generation) |
| RT Cores | 128 (3rd Generation) | 170 (4th Generation) | 188 (4th Generation) |
| Memory Bandwidth | 1,008 GB/s | 1,792 GB/s | 1,792 GB/s |
| Memory Bus | 384-bit | 512-bit | 512-bit |
| Boost Clock | 2,520 MHz | 2,407 MHz | 2,617 MHz |
| TDP | 450W | 575W | 600W |
| PCIe | PCIe Gen 4 x16 | PCIe Gen 5 x16 | PCIe Gen 5 x16 |
| Process | TSMC 4N | TSMC 4N | TSMC 4NP |
| On-demand | $0.35/hr | $0.48/hr | $0.80/hr |
| Interruptible | $0.08/hr | $0.10/hr | $0.10/hr |
| CPU per GPU | 16 cores (EPYC 7B13) | 22 cores (EPYC 9654) | 22 cores (EPYC 9654) |
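With per-second billing, job cost follows directly from the hourly rates in the table. A minimal sketch (the helper and dictionary names are illustrative):

```python
# Per-second billing: cost = hourly_rate / 3600 * seconds_run * num_gpus.
# Rates taken from the comparison table (on-demand vs interruptible).
RATES = {
    "RTX 4090":     {"on_demand": 0.35, "interruptible": 0.08},
    "RTX 5090":     {"on_demand": 0.48, "interruptible": 0.10},
    "RTX PRO 6000": {"on_demand": 0.80, "interruptible": 0.10},
}

def job_cost(gpu: str, seconds: float, tier: str = "on_demand",
             num_gpus: int = 1) -> float:
    """Cost in USD for a job billed per second."""
    hourly = RATES[gpu][tier]
    return round(hourly / 3600 * seconds * num_gpus, 4)

# A 90-minute fine-tuning run on one RTX 5090:
print(job_cost("RTX 5090", 90 * 60))                   # 0.72
print(job_cost("RTX 5090", 90 * 60, "interruptible"))  # 0.15
```

Interruptible pricing is a large discount for fault-tolerant jobs that checkpoint regularly.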
Ready to deploy?
Spin up a GPU instance in minutes. Per-second billing, no contracts.
