Rent RTX PRO 6000 GPUs from $0.80/hr
Professional-grade GPU with 96GB GDDR7 ECC memory on NVIDIA Blackwell architecture for large-scale AI training, deep learning, and enterprise workloads.
Includes $1 free credit to try instantly. No commitment.
Powering the next generation of AI & high-performance computing.
The RTX PRO 6000 is NVIDIA's flagship professional GPU, pairing 24,064 CUDA cores on the Blackwell GB202 die with a massive 96GB of ECC GDDR7 memory. It's built for the most demanding AI, deep learning, and HPC workloads, where memory capacity is critical.
Flagship Blackwell GB202 Die
24,064 CUDA cores, the highest core count of any shipping GB202 GPU, for uncompromised compute performance.
96GB GDDR7 ECC Memory
Massive memory capacity with error-correcting code ensures data integrity for mission-critical AI training and scientific computing workloads.
5th-Gen Tensor Cores
752 Tensor Cores deliver peak AI acceleration for training large language models, serving multi-billion-parameter inference, and driving complex ML pipelines.
1,792 GB/s Bandwidth
High-speed memory bandwidth on a 512-bit bus sustains throughput for memory-bound workloads, large batch training, and high-resolution data processing.
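Once an instance is up, these numbers are easy to sanity-check from Python. A minimal sketch, assuming a CUDA-enabled PyTorch build; the timing it prints is illustrative, not a benchmark:

```python
import torch

assert torch.cuda.is_available(), "no CUDA device visible"

# Confirm the advertised core and memory configuration.
props = torch.cuda.get_device_properties(0)
print(f"GPU:  {props.name}")
print(f"VRAM: {props.total_memory / 1024**3:.0f} GiB")
print(f"SMs:  {props.multi_processor_count}")  # 188 SMs x 128 lanes = 24,064 CUDA cores

# Exercise the 5th-gen Tensor Cores with a large bf16 matmul;
# cuBLAS routes bf16 GEMMs onto Tensor Cores automatically.
a = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
b = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
_ = a @ b                      # warm-up: kernel load and cuBLAS setup
torch.cuda.synchronize()

start, end = (torch.cuda.Event(enable_timing=True) for _ in range(2))
start.record()
c = a @ b
end.record()
torch.cuda.synchronize()
ms = start.elapsed_time(end)
print(f"bf16 8192^3 matmul: {ms:.2f} ms (~{2 * 8192**3 / ms / 1e9:.0f} TFLOPS)")
```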
Key specs at a glance.
Essential technical specifications to help you choose the right GPU for your workload.
| GPU Architecture | Blackwell (GB202) |
| CUDA Cores | 24,064 |
| Tensor Cores | 752 (5th Generation) |
| RT Cores | 188 (4th Generation) |
| Base Clock | 1,590 MHz |
| Boost Clock | 2,617 MHz |
| Memory Size | 96 GB GDDR7 ECC |
| Memory Bandwidth | 1,792 GB/s |
| Memory Bus Width | 512-bit |
| TDP | 600W |
| PCIe Interface | PCIe Gen 5 x16 |
| Manufacturing Process | TSMC 4NP |
| Transistors | 92.2 billion |
| Host CPU | AMD EPYC 9654 |
| CPU Cores per GPU | 22 |
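The ECC figure is worth verifying on a live instance before a long job. A quick check via the NVML bindings, assuming the nvidia-ml-py package (`pip install nvidia-ml-py`):

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
current, pending = pynvml.nvmlDeviceGetEccMode(handle)  # current and pending ECC state
print(f"{pynvml.nvmlDeviceGetName(handle)}: "
      f"{mem.total / 1024**3:.0f} GiB, ECC {'enabled' if current else 'disabled'}")
pynvml.nvmlShutdown()
```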
RTX PRO 6000 Pricing
Per-second billing with no minimum commitments. Configure from 1 to 8 GPUs per instance.
Great for...
Ideal workloads and use cases for the RTX PRO 6000.
Large Model Training
Train models that outgrow the 24-32GB of consumer cards without resorting to model parallelism. 96GB of VRAM leaves headroom for full-precision training of multi-billion-parameter models and complex architectures.
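As a rule of thumb, Adam-style training in mixed precision costs roughly 16 bytes per parameter before activations: bf16 weights and gradients plus fp32 master weights and two fp32 optimizer moments. A back-of-envelope sketch of what fits in 96GB (actual usage also depends on batch size, sequence length, and checkpointing):

```python
# Rough VRAM floor for Adam + mixed-precision training, ~16 bytes/param.
def training_vram_gib(params_billion: float, bytes_per_param: int = 16) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for size in (3, 7, 13):
    print(f"{size}B params -> ~{training_vram_gib(size):.0f} GiB before activations")
# 3B  -> ~45 GiB   fits comfortably on one 96GB card
# 7B  -> ~104 GiB  wants 2 GPUs, or sharded/8-bit optimizer states on one
# 13B -> ~194 GiB  needs multi-GPU sharding (e.g. FSDP or ZeRO)
```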
Enterprise AI & LLM Inference
Serve large language models that won't fit in consumer GPU memory. A 70B-parameter model runs on a single card at 8-bit precision, or in FP16 across two cards in a multi-GPU instance.
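The arithmetic behind that claim is straightforward: weight memory is parameter count times bytes per parameter, with KV cache and runtime overhead on top. A sketch with illustrative sizes:

```python
# Lower-bound weight memory for dense-model inference.
def weight_gib(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

print(f"70B @ fp16: ~{weight_gib(70, 2):.0f} GiB")  # ~130 GiB -> two 96GB cards
print(f"70B @ fp8:  ~{weight_gib(70, 1):.0f} GiB")  # ~65 GiB  -> one card, with room for KV cache
print(f"34B @ fp16: ~{weight_gib(34, 2):.0f} GiB")  # ~63 GiB  -> one card
```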
Scientific & HPC Workloads
Accelerate molecular dynamics, climate simulations, and other HPC workloads with ECC memory for data integrity and massive compute throughput.
Multi-Model Pipelines
Run complex AI pipelines with multiple models loaded simultaneously. 96GB of VRAM eliminates model swapping overhead for production workflows.
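A minimal sketch of the pattern, with tiny nn.Linear stand-ins where real models would go (names and sizes are placeholders):

```python
import torch

device = torch.device("cuda:0")

# e.g. a generator, an embedder, and a reranker resident side by side
models = {
    "generator": torch.nn.Linear(4096, 4096),  # stand-in for a real model
    "embedder":  torch.nn.Linear(1024, 1024),
    "reranker":  torch.nn.Linear(512, 512),
}
for m in models.values():
    m.to(device).eval()

used = torch.cuda.memory_allocated(device) / 1024**3
print(f"Resident: {list(models)} ({used:.2f} GiB allocated)")
# Every stage stays loaded, so requests never pay model-reload latency.
```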
Ready to deploy RTX PRO 6000?
Launch an RTX PRO 6000 instance in minutes. Per-second billing, no contracts.
