
Tesla K20X is ideal for delivering record acceleration and more efficient compute performance for big data applications in fields including seismic processing; computational biology and chemistry; weather and climate modeling; image, video, and signal processing; computational finance; computational physics; CAE and CFD; and data analytics.
Designed for double-precision applications across the broader supercomputing market, the Tesla K20X delivers 1.31 teraflops of peak double-precision performance. Kepler-based Tesla GPU accelerators are also available for workstations.
PNY provides unsurpassed service and commitment to its professional graphics customers, offering a 3-year warranty, pre- and post-sales support, dedicated Quadro Field Application Engineers, and direct tech support hotlines.
TESLA K20X - PRODUCT SPECIFICATIONS
CUDA Parallel Processing Cores | 2688 |
Memory Interface | 384-bit |
Frame Buffer Memory | 6 GB GDDR5 |
Memory Bandwidth | 250 GBytes/sec |
Display Connectors | None |
Max. Power Consumption | 225 W |
Power Connectors | (2) x 6-pin PCI Express power connectors |
Graphics Bus | PCI Express 2.0 x16 |
Form Factor | 110 mm (H) x 265 mm (L) - Dual Slot |
Thermal Solution | Passive |
Peak double precision floating point performance (board) | 1.31 teraflops |
Peak single precision floating point performance (board) | 3.95 teraflops |
Number of GPUs | 1 x GK110 |
GPU computing applications | CFD, CAE, financial computing, computational chemistry and physics, data analytics, satellite imaging, weather modeling |
Architecture features | SMX, Dynamic Parallelism, Hyper-Q |
System | Servers only |
ECC Memory Error Protection | Meets a critical requirement for computing accuracy and reliability in datacenters and supercomputing centers. External and internal memories are ECC protected in Tesla K20X. |
System Monitoring Features | Integrates the GPU subsystem with the host system’s monitoring and management capabilities such as IPMI or OEM-proprietary tools. IT staff can thus manage the GPU processors in the computing system using widely used cluster/grid management solutions. |
L1 and L2 caches | Accelerates algorithms such as physics solvers, ray tracing, and sparse matrix multiplication, where data addresses are not known beforehand.
Asynchronous Transfer with dual DMA engines | Turbocharges system performance by transferring data over the PCIe bus while the computing cores are crunching other data (see the streams sketch after this table).
Flexible programming environment with broad support of programming languages and APIs | Choose OpenACC or the CUDA toolkits for C, C++, or Fortran to express application parallelism and take advantage of the innovative Kepler architecture (a minimal CUDA sketch follows this table).
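The Asynchronous Transfer row above refers to overlapping PCIe traffic with kernel execution. Below is a minimal CUDA sketch, not taken from this datasheet, of the standard pattern that exercises the dual DMA engines: pinned host memory, one stream per chunk of work, and asynchronous copies queued alongside kernels. The scale() kernel, buffer sizes, and chunk count are illustrative placeholders.

#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative kernel: multiply each element of a chunk by a constant.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void)
{
    const int N = 1 << 20;       // elements per chunk (illustrative)
    const int CHUNKS = 4;        // illustrative chunk count
    float *h_buf, *d_buf;

    // Pinned host memory is required for copies to run truly asynchronously.
    cudaMallocHost((void **)&h_buf, (size_t)CHUNKS * N * sizeof(float));
    cudaMalloc((void **)&d_buf, (size_t)CHUNKS * N * sizeof(float));
    for (int i = 0; i < CHUNKS * N; ++i) h_buf[i] = 1.0f;

    cudaStream_t streams[CHUNKS];
    for (int c = 0; c < CHUNKS; ++c) cudaStreamCreate(&streams[c]);

    // Each chunk's host-to-device copy, kernel, and device-to-host copy are
    // queued in that chunk's stream, so transfers for one chunk can overlap
    // compute on another -- the overlap the dual DMA engines provide.
    for (int c = 0; c < CHUNKS; ++c) {
        size_t off = (size_t)c * N;
        cudaMemcpyAsync(d_buf + off, h_buf + off, N * sizeof(float),
                        cudaMemcpyHostToDevice, streams[c]);
        scale<<<(N + 255) / 256, 256, 0, streams[c]>>>(d_buf + off, N, 2.0f);
        cudaMemcpyAsync(h_buf + off, d_buf + off, N * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();
    printf("h_buf[0] = %.1f\n", h_buf[0]);   // expect 2.0

    for (int c = 0; c < CHUNKS; ++c) cudaStreamDestroy(streams[c]);
    cudaFreeHost(h_buf);
    cudaFree(d_buf);
    return 0;
}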
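The programming-environment row and the Architecture features row point to Dynamic Parallelism, the Kepler GK110 capability that lets a kernel launch child kernels without returning to the CPU. Below is a minimal sketch of what that looks like in CUDA C, again not from the datasheet (the parent()/child() kernels and sizes are illustrative). It must be built for the K20X's sm_35 target with relocatable device code, e.g. nvcc -arch=sm_35 -rdc=true example.cu -lcudadevrt.

#include <cuda_runtime.h>
#include <stdio.h>

// Illustrative child kernel: fill the output array with indices.
__global__ void child(int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = i;
}

// Parent kernel: one thread decides the child grid size on the device and
// launches it -- the Dynamic Parallelism feature listed above.
__global__ void parent(int *out, int n)
{
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        child<<<(n + 127) / 128, 128>>>(out, n);
    }
}

int main(void)
{
    const int N = 1024;              // illustrative size
    int *d_out;
    cudaMalloc((void **)&d_out, N * sizeof(int));

    parent<<<1, 32>>>(d_out, N);     // single launch from the host
    cudaDeviceSynchronize();         // waits for parent and nested child

    int h_last = 0;
    cudaMemcpy(&h_last, d_out + N - 1, sizeof(int), cudaMemcpyDeviceToHost);
    printf("last element = %d\n", h_last);   // expect 1023

    cudaFree(d_out);
    return 0;
}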