Tegra

Tegra is a system on a chip (SoC) series developed by Nvidia for mobile devices such as smartphones, personal digital assistants, and mobile Internet devices. Each Tegra integrates an ARM architecture central processing unit (CPU), graphics processing unit (GPU), northbridge, southbridge, and memory controller onto one package. Early Tegra SoCs were designed as efficient multimedia processors. The line then evolved to emphasize performance for gaming and machine-learning applications without sacrificing power efficiency, before shifting its focus to platforms for vehicular automation, sold under the "Nvidia Drive" brand for reference boards and their SoCs, and under the "Nvidia Jetson" brand for boards aimed at AI applications in robots, drones, and other high-level automation tasks.

Nvidia Tegra T20 (Tegra 2) and T30 (Tegra 3) chips
A Tegra X1 inside a Shield TV

History

The Tegra APX 2500 was announced on February 12, 2008. The Tegra 6xx product line was revealed on June 2, 2008,[1] and the APX 2600 was announced in February 2009. The APX chips were designed for smartphones, while the Tegra 600 and 650 chips were intended for smartbooks and mobile Internet devices (MID).[2]

The first product to use the Tegra was Microsoft's Zune HD media player in September 2009, followed by the Samsung M1.[3] Microsoft's Kin was the first cellular phone to use the Tegra;[4] however, the phone did not have an app store, so the Tegra's power did not provide much advantage. In September 2008, Nvidia and Opera Software announced that they would produce a version of the Opera 9.5 browser optimized for the Tegra on Windows Mobile and Windows CE.[5][6] At Mobile World Congress 2009, Nvidia introduced its port of Google's Android to the Tegra.

On January 7, 2010, Nvidia officially announced and demonstrated its next generation Tegra system-on-a-chip, the Nvidia Tegra 250, at Consumer Electronics Show 2010.[7] Nvidia primarily supports Android on Tegra 2, but booting other ARM-supporting operating systems is possible on devices where the bootloader is accessible. Tegra 2 support for the Ubuntu Linux distribution was also announced on the Nvidia developer forum.[8]

Nvidia announced the first quad-core SoC at the February 2011 Mobile World Congress event in Barcelona. Though the chip was codenamed Kal-El, it is now branded as Tegra 3. Early benchmark results showed impressive gains over Tegra 2,[9][10] and the chip was used in many of the tablets released in the second half of 2011.

In January 2012, Nvidia announced that Audi had selected the Tegra 3 processor for its in-vehicle infotainment systems and digital instrument displays.[11] The processor was to be integrated into Audi's entire line of vehicles worldwide, beginning in 2013. The process is ISO 26262-certified.[12]

In the summer of 2012, Tesla Motors began shipping the Model S, an all-electric, high-performance sedan that contains two Nvidia Tegra 3D Visual Computing Modules (VCMs): one powers the 17-inch touchscreen infotainment system and the other drives the 12.3-inch all-digital instrument cluster.[13]

In March 2015, Nvidia announced the Tegra X1, the first SoC to have a graphics performance of 1 teraflop. At the announcement event, Nvidia showed off Epic Games' Unreal Engine 4 "Elemental" demo, running on a Tegra X1.

On October 20, 2016, Nvidia announced that the Nintendo Switch hybrid video game console would be powered by Tegra hardware.[14] On March 15, 2017, TechInsights revealed that the Nintendo Switch is powered by a custom Tegra X1 (model T210) running at lower clock speeds.[15]

Models

Tegra APX

Tegra APX 2500
Tegra APX 2600
  • Enhanced NAND flash
  • Video codecs:[16]
    • 720p H.264 Baseline Profile encode or decode
    • 720p VC-1/WMV9 Advanced Profile decode
    • D-1 MPEG-4 Simple Profile encode or decode

Tegra 6xx

Tegra 600
  • Targeted for GPS segment and automotive
  • Processor: ARM11 700 MHz MPCore
  • Memory: low-power DDR (DDR-333, 166 MHz)
  • SXGA, HDMI, USB, stereo jack
  • HD camera 720p
Tegra 650
  • Targeted for GTX of handheld and notebook
  • Processor: ARM11 800 MHz MPCore
  • Low power DDR (DDR-400, 200 MHz)
  • Less than 1 watt envelope
  • HD image processing for advanced digital still camera and HD camcorder functions
  • Display supports 1080p at 24 frame/s, HDMI v1.3, WSXGA+ LCD and CRT, and NTSC/PAL TV output
  • Direct support for Wi-Fi, disk drives, keyboard, mouse, and other peripherals
  • A complete board support package (BSP) to enable fast time to market for Windows Mobile-based designs

Tegra 2

Nvidia Tegra 2 T20

The second-generation Tegra SoC has a dual-core ARM Cortex-A9 CPU, an ultra-low-power (ULP) GeForce GPU,[17] a 32-bit memory controller with either LPDDR2-600 or DDR2-667 memory, a 32 KB/32 KB L1 cache per core and a shared 1 MB L2 cache.[18] Tegra 2's Cortex-A9 implementation does not include ARM's SIMD extension, NEON. There is a version of the Tegra 2 SoC supporting 3D displays; this SoC uses a higher-clocked CPU and GPU.
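Because Tegra 2 lacks NEON, software that ships NEON-optimized code paths has to select them at run time rather than assume the extension is present. A minimal sketch of such a probe, assuming 32-bit ARM Linux with glibc 2.16 or later (the hwcap mechanism is a generic Linux interface, not anything Tegra-specific):

    /* neon_probe.c - detect NEON at run time on 32-bit ARM Linux.
     * Relevant for Tegra 2, whose Cortex-A9 cores omit the NEON unit. */
    #include <stdio.h>
    #include <sys/auxv.h>      /* getauxval() */
    #include <asm/hwcap.h>     /* HWCAP_NEON (32-bit ARM only) */

    int main(void)
    {
        unsigned long hwcaps = getauxval(AT_HWCAP);

        if (hwcaps & HWCAP_NEON)
            printf("NEON available: use the vectorized code path\n");
        else
            printf("No NEON (e.g. Tegra 2): fall back to scalar code\n");

        return 0;
    }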

The Tegra 2 video decoder is largely unchanged from the original Tegra and has limited support for HD formats.[19] The lack of support for high-profile H.264 is particularly troublesome when using online video streaming services.

Common features:

  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 1 MB
  • 40 nm semiconductor technology
Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | Memory type | Amount | Bus width | Bandwidth | Availability
AP20H (Ventana/Unknown) | Cortex-A9 | 2 | 1.0 GHz | VLIW-based VEC4 units[20] | 4:4:4:4[21] | 300 MHz | LPDDR2 300 MHz or DDR2 333 MHz | ? | 32-bit single-channel | 2.4 GB/s or 2.7 GB/s | Q1 2010
T20 (Harmony/Ventana) | Cortex-A9 | 2 | 1.0 GHz | VLIW-based VEC4 units | 4:4:4:4 | 333 MHz | LPDDR2 300 MHz or DDR2 333 MHz | ? | 32-bit single-channel | 2.4 GB/s or 2.7 GB/s | Q1 2010
AP25 | Cortex-A9 | 2 | 1.2 GHz | VLIW-based VEC4 units | 4:4:4:4 | 400 MHz | LPDDR2 300 MHz or DDR2 333 MHz | ? | 32-bit single-channel | 2.4 GB/s or 2.7 GB/s | Q1 2011
T25 | Cortex-A9 | 2 | 1.2 GHz | VLIW-based VEC4 units | 4:4:4:4 | 400 MHz | LPDDR2 300 MHz or DDR2 333 MHz | ? | 32-bit single-channel | 2.4 GB/s or 2.7 GB/s | Q1 2011

1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units

Devices

Model | Devices
AP20H | Motorola Atrix 4G, Motorola Droid X2, Motorola Photon, LG Optimus 2X / LG Optimus Dual P990 / Optimus 2x SU660 (?), Samsung Galaxy R, Samsung Captivate Glide, T-Mobile G2X P999, Acer Iconia Tab A200 and A500, LG Optimus Pad, Motorola Xoom,[22] Sony Tablet S, Dell Streak Pro,[23] Toshiba Thrive[24] tablet, T-Mobile G-Slate
AP25 | Fusion Garage Grid 10[citation needed]
T20 | Avionic Design Tamonten Processor Board,[25] Notion Ink Adam tablet, Olivetti OliPad 100, ViewSonic G Tablet, ASUS Eee Pad Transformer, Samsung Galaxy Tab 10.1, Toshiba AC100, CompuLab Trim-Slice nettop, Velocity Micro Cruz Tablet L510, Acer Iconia Tab A100
Unknown | Tesla Motors Model S (2012–2017) and Model X (2015–2017) instrument cluster (IC)[26][27]

Tegra 3

The Ouya uses a Tegra 3 T33-P-A3.

Nvidia's Tegra 3 (codenamed "Kal-El")[28] functions as an SoC with a quad-core ARM Cortex-A9 MPCore CPU, but includes a fifth "companion" core in what Nvidia refers to as a "variable SMP architecture".[29] While all cores are Cortex-A9s, the companion core is manufactured with a low-power silicon process. This core operates transparently to applications and is used to reduce power consumption when the processing load is minimal; the main quad-core portion of the CPU powers off in these situations.
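The switch to the companion core is not visible to applications, but its effect can be observed on Linux, which exposes the set of currently online CPUs through sysfs. A minimal sketch (standard Linux sysfs path, nothing Tegra-specific assumed):

    /* cpu_online.c - print the set of currently online CPUs.
     * On a Tegra 3 running Linux, this mask may shrink to a single CPU
     * when the low-power companion core takes over under light load. */
    #include <stdio.h>

    int main(void)
    {
        char buf[256];
        FILE *f = fopen("/sys/devices/system/cpu/online", "r");

        if (!f) {
            perror("open /sys/devices/system/cpu/online");
            return 1;
        }
        if (fgets(buf, sizeof buf, f))
            printf("online CPUs: %s", buf);   /* e.g. "0" or "0-3" */
        fclose(f);
        return 0;
    }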

Tegra 3 is the first Tegra release to support ARM's SIMD extension, NEON.

The GPU in Tegra 3 is an evolution of the Tegra 2 GPU, with 4 additional pixel shader units and a higher clock frequency. It can also output video at up to 2560×1600 resolution and supports 1080p MPEG-4 AVC/H.264 40 Mbit/s High Profile, VC-1 AP, and simpler forms of MPEG-4 such as DivX and Xvid.[30]

The Tegra 3 was released on November 9, 2011.[31]

Common features:

  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 1 MB
  • 40 nm LPG semiconductor technology by TSMC
Model number | CPU | Cores | CPU clock (multi- / single-core mode) | GPU microarchitecture | Core configuration¹ | GPU clock | Memory type | Amount | Bus width | Bandwidth | Availability
T30L | Cortex-A9 | 4+1 | 1.2 GHz / up to 1.3 GHz | VLIW-based VEC4 units[20] | 8:4:8:8[32] | 416 MHz | DDR3-1333 | ? | 32-bit single-channel | 5.3 GB/s[33] | Q1 2012
T30 | Cortex-A9 | 4+1 | 1.4 GHz / up to 1.5 GHz | VLIW-based VEC4 units | 8:4:8:8 | 520 MHz | LPDDR2-1066 or DDR3L-1500 | ? | 32-bit single-channel | 4.3 GB/s or 6.0 GB/s[34] | Q4 2011
AP33 | Cortex-A9 | 4+1 | 1.4 GHz / up to 1.5 GHz | VLIW-based VEC4 units | 8:4:8:8 | 520 MHz | LPDDR2-1066 or DDR3L-1500 | ? | 32-bit single-channel | 4.3 GB/s or 6.0 GB/s | Q4 2011
T33 | Cortex-A9 | 4+1 | 1.6 GHz / up to 1.7 GHz[33] | VLIW-based VEC4 units | 8:4:8:8 | 520 MHz | DDR3-1600 | ? | 32-bit single-channel | 6.4 GB/s[33] | Q2 2012

1 Pixel shaders : Vertex shaders : Texture mapping units : Render output units

Devices

Model | Devices
AP33 | LG Optimus 4X HD, HTC One X, XOLO Play T1000,[35] Coolpad 8735
T30 | Asus Eee Pad Transformer Prime (TF201),[36] IdeaTab K2 / LePad K2,[37] Acer Iconia Tab A510, Fuhu Inc. nabi 2 Tablet,[38] Microsoft Surface RT,[39] Lenovo IdeaPad Yoga 11[40][41]
T30I | Tesla Model S (2012–2017) and Model X (2015–2017) media control unit (MCU)[27][42]
T30L | Asus Transformer Pad TF300T, Microsoft Surface, Nexus 7 (2012),[43] Sony Xperia Tablet S, Acer Iconia Tab A210, Toshiba AT300 (Excite 10),[44][unreliable source?] BLU Quattro 4.5,[45] Coolpad 9070
T33 | Asus Transformer Pad Infinity (TF700T), Fujitsu ARROWS X F-02E, Ouya, HTC One X+

Tegra 4

The Tegra 4 (codenamed "Wayne") was announced on January 6, 2013. It is an SoC with a quad-core Cortex-A15 CPU plus a fifth low-power Cortex-A15 companion core, which is invisible to the OS and performs background tasks to save power. This power-saving configuration is referred to as "variable SMP architecture" and operates like the similar configuration in Tegra 3.[46]

The GeForce GPU in Tegra 4 is again an evolution of its predecessors. However, numerous feature additions and efficiency improvements were implemented. The number of processing resources was dramatically increased, and clock rate increased as well. In 3D tests, the Tegra 4 GPU is typically several times faster than that of Tegra 3.[47] Additionally, the Tegra 4 video processor has full support for hardware decoding and encoding of WebM video (up to 1080p 60 Mbit/s @ 60fps).[48]

Along with Tegra 4, Nvidia also introduced the i500, an optional software modem based on Nvidia's acquisition of Icera, which can be reprogrammed to support new network standards. It supports Category 3 (100 Mbit/s) LTE, with an update to Category 4 (150 Mbit/s) planned.

Common features:

  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 2 MB
  • 28 nm HPL semiconductor technology
Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | Memory type | Amount | Bus width | Bandwidth | Availability
T114[49][unreliable source?] | Cortex-A15 | 4+1 | up to 1.9 GHz | VLIW-based VEC4 units[50] | 72 (48:24:4)[20][50] | 672 MHz[51] | DDR3L or LPDDR3 | ? | 32-bit dual-channel | up to 14.9 GB/s (1866 MT/s data rate)[52][53] | Q2 2013[54]

1 Pixel shaders : Vertex shaders : Pixel pipelines (pairs 1x TMU and 1x ROP)

Devices

Model | Devices
T114 | Nvidia Shield Portable, Tegra Note 7, Microsoft Surface 2, HP Slate 7 Extreme,[55] HP Slate 7 Beats Special Edition,[56] HP Slate 8 Pro,[57] HP SlateBook x2,[58] HP SlateBook 14,[59] HP Slate 21,[60] ZTE N988S, nabi Big Tab, Nuvola NP-1, Project Mojo, Asus Transformer Pad TF701T, Toshiba AT10-LE-A (Excite Pro), Vizio 10" tablet, Wexler.Terra 7, Wexler.Terra 10, Acer TA272HUL AIO, Xiaomi Mi 3 (TD-LTE version),[61] Coolpad 8970L (大观 4),[62] Audi Tablet,[63] Le Pan TC1020 10.1",[64] Matrimax iPLAY 7,[65] Kobo Arc 10HD[66]

Tegra 4i

The Tegra 4i (codenamed "Grey") was announced on February 19, 2013. With hardware support for the same audio and video formats,[48] but using Cortex-A9 cores instead of Cortex-A15, the Tegra 4i is a low-power variant of the Tegra 4 and is designed for phones and tablets. Unlike its Tegra 4 counterpart, the Tegra 4i also integrates the Icera i500 LTE/HSPA+ baseband processor onto the same die.

Common features:

  • 28 nm HPM semiconductor technology
  • CPU cache: L1: 32 KB instruction + 32 KB data, L2: 1 MB
Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | Memory type | Bus width | Bandwidth | Availability
T148?[67] | Cortex-A9 "R4" | 4+1 | up to 2.0 GHz | VLIW-based VEC4 units[50] | 60 (48:12:2)[50] | 660 MHz[51] | LPDDR3 | 32-bit single-channel | 6.4–7.5 GB/s (800–933 MHz)[53] | Q1 2014

1 Pixel shaders : Vertex shaders : Pixel pipelines (pairs 1x TMU and 1x ROP)

Devices

Model | Devices
T148? | Blackphone, LG G2 mini LTE, Wiko Highway 4G,[68] Explay 4Game,[69] Wiko Wax,[70][71] QMobile Noir LT-250[72]

Tegra K1

Nvidia's Tegra K1 (codenamed "Logan") is offered with either ARM Cortex-A15 cores in a 4+1 configuration similar to Tegra 4 or with Nvidia's 64-bit dual-core Project Denver processor, in both cases paired with a Kepler graphics processing unit that supports Direct3D 12, OpenGL ES 3.1, CUDA 6.5, OpenGL 4.4/OpenGL 4.5, and Vulkan.[73][74] Nvidia claims that it outperforms both the Xbox 360 and the PS3, while consuming significantly less power.[75]

It supports Adaptive Scalable Texture Compression.[76]

In late April 2014, Nvidia shipped the "Jetson TK1" development board containing a Tegra K1 SoC and running Ubuntu Linux.[77][unreliable source?]
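On boards such as the Jetson TK1, the Kepler GPU appears as a regular CUDA device, so its capabilities can be queried with the standard CUDA runtime API. A minimal sketch, assuming a Jetson TK1 (or any other CUDA-capable Tegra) with the CUDA toolkit installed; the values mentioned in the comments are what a TK1 would typically report, not guarantees:

    /* k1_query.c - query the Tegra K1 GPU through the CUDA runtime API.
     * Build on a Jetson TK1 with the CUDA toolkit installed, e.g.:
     *   gcc k1_query.c -I/usr/local/cuda/include -L/usr/local/cuda/lib -lcudart */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA device found\n");
            return 1;
        }

        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        /* A Tegra K1 typically reports "GK20A", compute capability 3.2,
           and a single multiprocessor (192 CUDA cores) sharing system RAM. */
        printf("device     : %s\n", prop.name);
        printf("capability : %d.%d\n", prop.major, prop.minor);
        printf("SM count   : %d\n", prop.multiProcessorCount);
        printf("global mem : %zu MiB\n", (size_t)(prop.totalGlobalMem >> 20));
        return 0;
    }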

Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | GFLOPS (FP32) | Memory type | Amount | Bus width | Bandwidth | Availability
T124[80] | Cortex-A15 R3 (32-bit) | 4+1 | up to 2.3 GHz[81] | GK20A (Kepler) | 192:8:4[82] | 756–951 MHz | 290–365[83] | DDR3L or LPDDR3[82] | max. 8 GB with 40-bit address extension² | 64-bit | 17 GB/s[82] | Q2 2014
T132 | Denver (64-bit) | 2[82] | up to 2.5 GHz[81] | GK20A (Kepler) | 192:8:4 | 756–951 MHz | 290–365 | DDR3L or LPDDR3 | max. 8 GB | ? | ? | Q3 2014

1 Unified Shaders : Texture mapping units : Render output units

2 ARM Large Physical Page Extension (LPAE) supports 1 TiB (2⁴⁰ bytes). The 8 GiB limitation is part-specific.
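The GFLOPS figures follow from the shader count under the usual assumption of one fused multiply–add (two floating-point operations) per shader per clock: for the T124, 192 × 2 × 0.756–0.951 GHz ≈ 290–365 GFLOPS (FP32), matching the range in the table.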

Devices

Model | Devices
T124 | Jetson TK1 development board,[84] Nvidia Shield Tablet,[85] Acer Chromebook 13,[86] HP Chromebook 14 G3,[87] Xiaomi MiPad,[88] Snail Games OBox, UTStarcom MC8718, Google Project Tango tablet,[89] Apalis TK1 System on Module,[90] Fuze Tomahawk F1,[91] JXD Singularity S192[92]
T132 | HTC Nexus 9[93][94]

In December 2015, wccftech.com published an article stating that Tesla would use a Tegra K1-based design, derived from the Nvidia Visual Computing Module (VCM) template, to drive the infotainment systems and provide visual driving aids in its vehicle models of that time.[95] No clear confirmation of such a combined multimedia and driver-assistance system for these vehicle models has appeared elsewhere since.

Tegra X1

The X1 is the basis for the Nintendo Switch video game console.
Die shot of the Tegra X1
Tegra X1 in Nvidia Shield TV

Released in 2015, Nvidia's Tegra X1 (codenamed "Erista") features two CPU clusters, one with four ARM Cortex-A57 cores and the other with four ARM Cortex-A53 cores, as well as a Maxwell-based graphics processing unit.[96][97] It supports Adaptive Scalable Texture Compression.[76] Only one cluster can be active at a time, with the cluster switch handled by software running on the BPMP-L. Devices using the Tegra X1 have only been observed running the cluster with the more powerful ARM Cortex-A57 cores; the cluster with the four ARM Cortex-A53 cores cannot be used without first powering down the Cortex-A57 cores (both clusters must be in the CC6 off state).[98] Nvidia has removed the ARM Cortex-A53 cores from later versions of the technical documentation, implying that they have been removed from the die.[99][100] The Tegra X1 was found to be vulnerable to a fault-injection (FI) voltage-glitching attack, which allowed arbitrary code execution and homebrew software on the devices in which it was implemented.[101]
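Which cluster a given device is actually running can be checked from user space on Linux: /proc/cpuinfo reports each online core's ARM "CPU part" ID (0xd07 for Cortex-A57 and 0xd03 for Cortex-A53; these are generic ARM part numbers, not Tegra-specific). A minimal sketch:

    /* cpu_part.c - list the ARM core types the kernel is currently running on.
     * On Tegra X1 devices only the Cortex-A57 cluster (part 0xd07) is
     * normally visible; the A53 cluster (part 0xd03) stays powered off. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) { perror("/proc/cpuinfo"); return 1; }

        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "CPU part", 8) != 0)
                continue;
            if (strstr(line, "0xd07"))
                puts("Cortex-A57 core");
            else if (strstr(line, "0xd03"))
                puts("Cortex-A53 core");
            else
                printf("other core: %s", line);
        }
        fclose(f);
        return 0;
    }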

A revision (codenamed "Mariko") with greater power efficiency, known officially as the Tegra X1+, was released in 2019,[102] fixing the Fusée Gelée exploit. It is also known as T214 and T210B01.

Model number | SoC variant | Process | CPU | Cores | CPU clock¹ | GPU microarchitecture | Core configuration² | GPU clock | GFLOPS (FP32) | GFLOPS (FP16) | Memory type | Amount³ | Bus width | Bandwidth⁴ | Availability
T210 | ODNX02-A2, TM670D-A1, TM670M-A2, TM671D-A2 | TSMC 20 nm | Cortex-A57 + Cortex-A53[106] | 4× A57 + 4× A53[106] | A57: 2.2 GHz,[107] A53: 1.3 GHz | GM20B (Maxwell)[106] | 256:16:16[106] | 1000 MHz | 512 | 1024 | LPDDR3 / LPDDR4 | 8 GB[106] | 64-bit[106] | 25.6 GB/s | Q2 2015
T210 | TM660M-A2 | TSMC 20 nm | Cortex-A57 + Cortex-A53 | 4× A57 + 4× A53 | A57: 1.428 GHz, A53: ? | GM20B (Maxwell) | 128:16:16 | 921 MHz | 236 | 472 | LPDDR3? / LPDDR4 | 4 GB | 64-bit | 25.6 GB/s | March 2019
T214 / T210b01 | ODNX10-A1, TM675M-A1 | TSMC 16 nm | Cortex-A57 | 4× A57 | A57: 2.1 GHz[108] | GM21B (Maxwell)[109] | 256:16:16 | 1267 MHz[110] | 649 | 1298 | LPDDR4 / LPDDR4X | 8 GB | 64-bit | 34.1 GB/s | Q2 2019

1 CPU frequency may be clocked differently than the maximum validated by Nvidia at the OEM's discretion

2 Unified Shaders : Texture mapping units : Render output units

3 Maximum validated amount of memory, implementation is board specific

4 Maximum validated memory bandwidth, implementation is board specific

Devices

Model | SoC variant | Devices
T210 | ODNX02-A2 | Nintendo Switch (2017, HAC-001)[111][15]
T210 | TM670D-A1 | Nvidia Shield Android TV (2015)
T210 | TM670M-A2 | Nvidia Shield Android TV (2017)
T210 | TM660M-A2 | Jetson Nano 4 GB, Jetson Nano 2 GB
T210 | TM671D-A2 | Google Pixel C
T210 | Unknown | Nvidia Jetson TX1 development board,[112] Nvidia Drive CX & PX
T210b01 | ODNX10-A1 | Nintendo Switch (2019, HAC-001(-01)), Nintendo Switch: OLED Model (HEG-001), Nintendo Switch Lite (HDH-001)
T210b01 | TM675M-A1 | Nvidia Shield Android TV (2019)

Tegra X2

Nvidia's Tegra X2[113][114] (codenamed "Parker") features Nvidia's own custom general-purpose ARMv8-compatible Denver 2 cores as well as a Pascal-based graphics processing core with GPGPU support.[115] The chip is manufactured with TSMC's 16 nm FinFET+ process.[116][117][118]

  • CPU: Nvidia Denver2 ARMv8 (64-bit) dual-core + ARMv8 ARM Cortex-A57 quad-core (64-bit)
  • RAM: up to 8 GB LPDDR4[119]
  • GPU: Pascal-based, 256 CUDA cores; type: GP10B[120]
  • TSMC 16 nm, FinFET process
  • TDP: 7.5–15 W[121]
Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | GFLOPS (FP32) | GFLOPS (FP16) | Memory type | Amount | Bus width | Bandwidth | Availability
T186 | Denver2 + Cortex-A57 | 2 + 4 | Denver2: 1.4–2.0 GHz, A57: 1.2–2.0 GHz | GP10B (Pascal)[122][unreliable source?] | 256:16:16 (2)[123] | 854–1465 MHz | 437–750 | 874–1500 | LPDDR4 | 8 GB | 128-bit | 59.7 GB/s | ?

1 Unified Shaders : Texture mapping units : Render output units (SM count)

Devices

Model | Devices
T186 | Nvidia Drive PX2 (variants), ZF ProAI 1.1[124]
T186 | Nvidia Jetson TX2[121]
Unknown | Mercedes-Benz MBUX (infotainment system)[125]
Unknown | One unit, together with one discrete GPU, is part of the ECU for "Tesla Vision" functionality in all Tesla vehicles produced since October 2016[126][127]
T186 | Magic Leap One[128][129] (mixed reality glasses)
Unknown | Skydio 2 (drone)[130]

Xavier

The Xavier Tegra SoC, named after the comic book character Professor X, was announced on September 28, 2016, and released by March 2019.[131] It contains 7 billion transistors, 8 custom ARMv8 cores, a Volta GPU with 512 CUDA cores, and an open-sourced TPU (tensor processing unit) called the DLA (Deep Learning Accelerator).[132][133] It is able to encode and decode 8K Ultra HD video (7680×4320). Users can configure operating modes at 10 W, 15 W, and 30 W TDP as needed, and the die size is 350 mm².[134][135][136] Nvidia confirmed the fabrication process to be 12 nm FinFET at CES 2018.[137]

  • CPU: Nvidia custom Carmel ARMv8.2-A (64-bit), 8 cores 10-wide superscalar[138]
  • GPU: Volta-based, 512 CUDA cores with 1.4 TFLOPS;[139] type: GV11B[140][120]
  • TSMC 12 nm, FinFET process[137]
  • 20 TOPS DL and 160 SPECint @ 20 W;[134] 30 TOPS DL @ 30 W[136] (TOPS DL = Deep Learning Tera-Ops)
    • 20 TOPS DL via the GPU based tensor cores
    • 10 TOPS DL (INT8) via the DLA unit, which achieves 5 TFLOPS (FP16)[139]
  • 1.6 TOPS in the PVA unit (Programmable Vision Accelerator,[141] for StereoDisparity/OpticalFlow/ImageProcessing)
  • 1.5 GPix/s in the ISP unit (Image Signal Processor, with native full-range HDR and tile processing support)
  • Video processor for 1.2 GPix/s encoding and 1.8 GPix/s decode[139] including 8k video support[135]
  • MIPI-CSI-3 with 16 lanes[142][143]
  • 1 Gbit/s Ethernet
  • 10 Gbit/s Ethernet
Model number | SoC variant | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | GFLOPS (FP32) | GFLOPS (FP16) | Memory type | Amount | Bus width | Bandwidth | Availability
T194[144] | Unknown | Carmel | 8 | up to 2.26 GHz | GV10B[145] (Volta) | 512:32:16 (8, 64)[146] | 854–1377 MHz | 874–1410 | 1748–2820 | LPDDR4X | 16 GB | 256-bit | 137 GB/s | March 2019
T194, NX (15 W mode) | TE860M-A2 | Carmel | 2, 4 or 6 | up to 1.4 GHz (hexa-/quad-core) or up to 1.9 GHz (dual-core) | GV10B (Volta) | 384:24:16 (6, 48)[147] | 1100 MHz | 845 | 1690 | LPDDR4X | 8 GB | 128-bit | 51.2 GB/s | March 2020
T194, NX (10 W mode) | TE860M-A2 | Carmel | 2 or 4 | up to 1.2 GHz (quad-core) or up to 1.5 GHz (dual-core) | GV10B (Volta) | 384:24:16 (6, 48) | 800 MHz | 614 | 1229 | LPDDR4X | 8 GB | 128-bit | 51.2 GB/s | March 2020

1 Unified Shaders : Texture mapping units : Render output units (SM count, Tensor Cores)

Devices

Model | SoC variant | Devices
T194 | Unknown | Nvidia Drive Xavier (Drive PX series)[148] (formerly named Xavier AI Car Supercomputer)
T194 | Unknown | Nvidia Drive Pegasus (Drive PX series)[148]
T194 | Unknown | Nvidia Drive AGX Xavier Developer Kit[149]
T194 | Unknown | Nvidia Jetson AGX Xavier Developer Kit[150]
T194 | Unknown | Nvidia Jetson Xavier[150]
T194 | TE860M-A2 | Nvidia Jetson Xavier NX[151]
T194 | Unknown | Nvidia Clara AGX[152] ("Clara AGX is based on NVIDIA Xavier and NVIDIA Turing GPUs."[153][unreliable source?])
T194 | Unknown | Bosch and Nvidia designed self-driving system[154]
T194 | Unknown | ZF ProAI[155][156]

On the Linux kernel mailing list, a Tegra194-based development board with type ID "P2972-0000" was reported; the board consists of the P2888 compute module and the P2822 baseboard.[157]

Orin

Nvidia announced the next-generation SoC, codenamed Orin, on March 27, 2018, at GPU Technology Conference 2018.[158] It contains 17 billion transistors and 12 ARM Hercules cores and is capable of 200 INT8 TOPS at 65 W.[159]

The Drive AGX Orin board family was announced on December 18, 2019, at GTC China 2019. Nvidia provided material to the press documenting that, through the clock and voltage scaling already known from the Xavier series and by pairing multiple chips, the resulting board concepts can cover a wider range of applications.[160] In early 2021, Nvidia announced that the Chinese vehicle company NIO would be using an Orin-based chip in its cars.[161]

The so far published specifications for Orin are:

  • CPU: 12× Arm Cortex-A78AE (Hercules) ARMv8.2-A (64-bit)[162][163]
  • GPU: Ampere-based, 2048[164] CUDA cores and 64 tensor cores¹; "with up to 131 Sparse TOPs of INT8 Tensor compute, and up to 5.32 FP32 TFLOPs of CUDA compute."[165]
    • 5.3 CUDA TFLOPs (FP32)[166]
    • 10.6 CUDA TFLOPs (FP16)[166]
  • Samsung 8 nm process[166]
  • 275 TOPS (INT8) DL[166]
    • 170 TOPS DL (INT8) via the GPU
    • 105 TOPS DL (INT8) via the 2x NVDLA 2.0 units (DLA, Deep Learning Accelerator)
  • 85 TOPS DL (FP16)[166]
  • 5 TOPS in the PVA v2.0 unit (Programmable Vision Accelerator for Feature Tracking)
  • 1.85 GPix/s in the ISP unit (Image Signal Processor, with native full-range HDR and tile processing support)
  • Video processor for ? GPix/s encoding and ? GPix/s decode
  • 4× 10 Gbit/s Ethernet, 1× 1 Gbit/s Ethernet

1 Orin uses the double-rate tensor cores in the A100, not the standard tensor cores in consumer Ampere GPUs.

Nvidia announced the latest member of the family, the "Orin Nano", in September 2022 at GPU Technology Conference 2022.[167] The Orin product line now features SoCs and SoMs (system-on-modules) based on the core Orin design and scaled for different uses, from 60 W all the way down to 5 W. While less is known about the exact SoCs being manufactured, Nvidia has publicly shared detailed technical specifications for the entire Jetson Orin SoM product line. These module specifications illustrate how Orin scales, providing insight into future devices that contain an Orin-derived SoC.

Module (model) | SoC variant | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration¹ | GPU clock | TFLOPS (FP32) | TFLOPS (FP16) | TOPS (INT8) | Memory type | Amount | Bus width | Bandwidth | Availability | TDP (W)
AGX Orin 64 GB[168][169] | ? | Cortex-A78AE, 9 MB cache[165] | 12 | up to 2.2 GHz[165] | Ampere | 2048:64:8 (16, 8, 2)[165] | up to 1300 MHz[165] | 5.32[165] | 10.649 | up to 275[165] | LPDDR5 | 64 GB | 256-bit | 204.8 GB/s[165] | sample 2021, kit Q1 2022, production Dec 2022[170] | 15–60[165]
AGX Orin 32 GB[170] | ? | Cortex-A78AE, 6 MB cache[170] | 8 | up to 2.2 GHz[170] | Ampere | 1792:56:7 (14, 7, 2)[170] | up to 930 MHz[170] | 3.365[165] | 6.73 | up to 200[170] | LPDDR5 | 32 GB[170] | 256-bit[170] | 204.8 GB/s[170] | Oct 2022[170] | 15–40[170]
Orin NX 16 GB[171] | TE980-M[172] | Cortex-A78AE, 6 MB cache[171] | 8 | up to 2 GHz[171] | Ampere | 1024:32:4 (8, 4, 1)[171] | up to 918 MHz[171] | 1.88 | 3.76 | up to 100[171] | LPDDR5 | 16 GB[171] | 128-bit[171] | 102.4 GB/s[171] | Dec 2022[171] | 10–25[171]
Orin NX 8 GB[170] | TE980-M[172] | Cortex-A78AE, 5.5 MB cache[170] | 6 | up to 2 GHz[170] | Ampere | 1024:32:4 (8, 4, 1)[170] | up to 765 MHz[170] | 1.57 | 3.13 | up to 70[170] | LPDDR5 | 8 GB[170] | 128-bit[170] | 102.4 GB/s[170] | Jan 2023[170] | 10–20[170]
Orin Nano 8 GB[170] | ? | Cortex-A78AE, 5.5 MB cache[170] | 6 | up to 1.5 GHz[170] | Ampere | 1024:32:4 (8, 4, 1)[170] | up to 625 MHz[170] | 1.28 | 2.56 | up to 40[170] | LPDDR5 | 8 GB[170] | 128-bit[170] | 68 GB/s[170] | Jan 2023[170] | 7–15[170]
Orin Nano 4 GB[170] | ? | Cortex-A78AE, 5.5 MB cache[170] | 6 | up to 1.5 GHz[170] | Ampere | 512:16:2 (4, 2, 1)[170] | up to 625 MHz[170] | 0.64 | 1.28 | up to 20[170] | LPDDR5 | 4 GB[170] | 64-bit[170] | 34 GB/s[170] | Jan 2023[170] | 5–10[170]

1 CUDA cores : Tensor cores : RT cores (SMs, TPCs, GPCs)

Devices

Model | Devices | Comments
T234[173] | Nvidia Jetson AGX Orin[174][165] | Comes in 32 GB and 64 GB RAM configurations, available as a standalone module or devkit; intended for industrial robotics and/or embedded HPC applications
Unknown | Nvidia Jetson Orin NX[171] | Mid-power SODIMM form-factor Orin-series module, available only as a standalone module; pin-compatible with the Xavier NX carrier
Unknown | Nvidia Jetson Orin Nano[175] | Low-power, cost-effective SODIMM form-factor Orin-series module, available as a standalone module or devkit; intended for entry-level usage
Unknown | Nio Adam[176][177] | Built from 4× Nvidia Drive Orin, totalling 48 CPU cores and 8,192 CUDA cores; used in the ET7 (from March 2022) and ET5 (from September 2022) vehicles

Grace

The Grace CPU is an Nvidia-developed ARM Neoverse CPU platform, targeted at large-scale AI and HPC applications and available within several Nvidia products. The Nvidia OVX platform combines the Grace Superchip (two Grace dies on one board) with desktop Nvidia GPUs in a server form factor, while the Nvidia HGX platform is available with either the Grace Superchip or the Grace Hopper Superchip.[178] The latter is an HPC platform in its own right, combining a Grace CPU with a Hopper-based GPU, and was announced by Nvidia on March 22, 2022.[179] Kernel patchsets indicate that a single Grace CPU is also known as T241, placing it under the Tegra SoC branding even though the chip itself does not include a GPU (a referenced T241 patchset cites impact to "NVIDIA server platforms that use more than two T241 chips...interconnected," pointing to the Grace Superchip design).[180]

Model number | CPU | Cores | Frequency | Cache | TFLOPS (FP64) | Memory type | Amount | Bus width | Bandwidth | Availability
T241[181] | Grace | 72 Arm Neoverse V2 cores (ARMv9)[182] | ? | L1: 64 KB I-cache + 64 KB D-cache per core; L2: 1 MB per core; L3: 117 MB shared[182] | 3.551[182] | LPDDR5X ECC[182] | up to 480 GB¹[182] | ? | 500 GB/s[182] | H2 2023[183]

1 Figures are cut in half from the full Grace Superchip specification.

Atlan

Nvidia announced the next-generation SoC, codenamed Atlan, on April 12, 2021, at GPU Technology Conference 2021.[184][185]

Nvidia announced the cancellation of Atlan on September 20, 2022, and their next SoC will be Thor.[186]

Functional units known so far are:

  • Grace Next CPU[187]
  • Ada Lovelace GPU[188]
  • Bluefield DPU (Data Processing Unit)
  • other Accelerators
  • Security Engine
  • Functional Safety Island
  • On-Chip-Memory
  • External Memory Interface(s)
  • High-Speed-IO Interfaces
Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration | GPU clock | GFLOPS (FP32) | GFLOPS (FP16) | TOPS (INT8) | Memory type | Amount | Bus width | Bandwidth | Availability
T254? | Grace-Next[187] | ? | ? | Ada Lovelace[189] | ? | ? | ? | ? | >1000[190] | ? | ? | ? | ? | Cancelled[191]

Thor

Nvidia announced the next-generation SoC, codenamed Thor, on September 20, 2022, at GPU Technology Conference 2022, replacing the cancelled Atlan.[186] A patchset adding support for Tegra264 to mainline Linux was submitted on May 5, 2023, likely indicating initial support for Thor.[192]

Devices

Model number | CPU | Cores | CPU clock | GPU microarchitecture | Core configuration | GPU clock | GFLOPS (FP32) | GFLOPS (FP16) | TOPS (FP8) | Memory type | Amount | Bus width | Bandwidth | Availability
T264? | Arm Neoverse V3AE[195] | ? | ? | Blackwell | ? | ? | ? | ? | 2000[186] | ? | ? | ? | ? | 2025[186]

Comparison

Each row lists values in generation order; an entry shown once may apply to several consecutive generations.

Generation: Tegra 2 | Tegra 3 | Tegra 4 | Tegra 4i | Tegra K1 | Tegra X1 | Tegra X1+ | Tegra X2 | Xavier | Orin | Drake | Thor
CPU instruction set: ARMv7-A (32-bit) | ARMv8-A (64-bit) | ARMv8.2-A (64-bit) | ARMv9.2-A (64-bit)
CPU cores: 2× A9 | 4+1 A9 | 4+1 A15 | 4+1 A9 | 4+1 A15 or 2× Denver | 4× A53 (disabled) + 4× A57 | 4× A57 | 2× Denver2 + 4× A57 | 8× Carmel | 12× A78AE | 8× A78C | Neoverse V3AE
L1 cache (I/D): 32/32 KB | 128/64 KB | 32/32 KB + 64/32 KB | 128/64 KB + 48/32 KB | 128/64 KB | 64/64 KB | ? | 64/64 KB
L2 cache: 1 MB | 2 MB | 128 KB + 2 MB | 2 MB + 2 MB | 8 MB | 3 MB | ?
L3 cache: N/A | 4 MB | 6 MB | ?
GPU architecture: Vec4 | Kepler | Maxwell | Pascal | Volta | Ampere | Blackwell
CUDA cores: 4+4* | 8+4* | 48+24* | 48+12* | 192 | 256 | 512 | 2048 | 1536 | ?
Tensor cores: N/A | 64 | 48 | ?
RT cores: N/A | 8 | 12 | ?
RAM protocol: DDR2/LPDDR2 | DDR3/LPDDR2 | DDR3/LPDDR3 | LPDDR3/LPDDR4 | LPDDR4/LPDDR4X | LPDDR5 | ?
Max. RAM size: 1 GB | 2 GB | 4 GB | 8 GB | 32 GB | 64 GB | ?
Memory bandwidth: 2.7 GB/s | 6.4 GB/s | 7.5 GB/s | 14.88 GB/s | 25.6 GB/s | 34.1 GB/s | 59.7 GB/s | 136.5 GB/s | 204.8 GB/s | 102.4 GB/s | ?
Process: 40 nm | 28 nm | 20 nm | 16 nm | 12 nm | 8 nm | ? | 4 nm

* VLIW-based Vec4: Pixel shaders + Vertex shaders. Since Kepler, Unified shaders are used.

Software support

FreeBSD

FreeBSD supports a number of different Tegra models and generations, ranging from the Tegra K1[196] to the Tegra 210.[197]

Linux

Nvidia distributes proprietary device drivers for Tegra through OEMs and as part of its "Linux for Tegra" (formerly "L4T") development kit; the JetPack SDK bundles Linux for Tegra together with additional tools. The newer and more powerful devices of the Tegra family are also supported by Nvidia's own Vibrante Linux distribution. Vibrante ships with a larger set of Linux tools plus several Nvidia-provided libraries for accelerated data processing and especially image processing for driving safety and automated driving, up to deep learning and neural networks that make heavy use of the CUDA-capable accelerator blocks and, via OpenCV, of the NEON vector extensions of the ARM cores.
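On boards flashed with Linux for Tegra, the installed release can usually be identified from /etc/nv_tegra_release, a plain-text file installed by the L4T board support package (the path is assumed from common JetPack/L4T installations; it is not a kernel interface). A minimal sketch:

    /* l4t_release.c - print the Linux for Tegra (L4T) release string, if present.
     * /etc/nv_tegra_release is installed by the L4T board support package;
     * its first line typically looks like "# R32 (release), REVISION: 7.1, ...". */
    #include <stdio.h>

    int main(void)
    {
        char line[512];
        FILE *f = fopen("/etc/nv_tegra_release", "r");

        if (!f) {
            fprintf(stderr, "not an L4T system (or release file missing)\n");
            return 1;
        }
        if (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
        return 0;
    }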

As of April 2012, due to different "business needs" from those of their GeForce line of graphics cards, Nvidia and one of their embedded partners, Avionic Design GmbH from Germany, are also working on submitting open-source drivers for Tegra upstream to the mainline Linux kernel.[198][199] Nvidia's co-founder and CEO laid out the Tegra processor roadmap using Ubuntu Unity at GPU Technology Conference 2013.[200][unreliable source?]

By the end of 2018, Nvidia employees had contributed substantial code to make the T186 and T194 models support HDMI display and audio in the then-upcoming official Linux kernel 4.21, due around Q1 2019. The affected software modules are the open-source Nouveau driver and the closed-source Nvidia graphics driver, along with Nvidia's proprietary CUDA interface.[201][unreliable source?]

As of May 2022, Nvidia has open-sourced its GPU kernel modules for both Jetson and desktop platforms, allowing all but the proprietary userspace libraries to be open source on Tegra platforms with official Nvidia drivers, starting with T234 (Orin).[202]

QNX

The Drive PX2 board was announced with QNX RTOS support at the April 2016 GPU Technology Conference.[203]

Similar platforms

SoCs and platforms with comparable specifications (e.g. audio/video input, output and processing capability, connectivity, programmability, entertainment/embedded/automotive capabilities & certifications, power consumption) are:

See also

References

External links