In 1993, a small company called NVIDIA was born in a California home. It was co-founded by Jensen Huang, a Taiwanese-born immigrant who had studied at Stanford University. The letters NV were chosen to denote the "next version", while the rest of the name was inspired by "invidia", the Latin word for envy.
NVIDIA began its humble journey in pursuit of Huang's dream of turning flat computer graphics into 3D shapes. No one imagined at the time that this work would not only revolutionize the video game industry, but would also turn out to be the main driving force behind artificial intelligence programs.
Early difficulties and company philosophy
In pursuit of its goal of developing the graphics processing unit (GPU), NVIDIA faced financial and administrative difficulties in the late 1990s, coming to the brink of bankruptcy and layoffs.
In 1999, the company released what it marketed as the world's first GPU, the GeForce 256. This release not only saved the company but firmly established it on the Silicon Valley map. NVIDIA's graphics hardware also powered the first Xbox game console, released in 2001.
From the beginning, NVIDIA has been in close contact with programmers and the software community in general. It leveraged its relationships with software engineers in fields such as medicine, automotive, gaming, and visual entertainment to understand market needs and solve pressing problems. The company therefore focused its efforts on designing chips rather than manufacturing them, entrusting fabrication to the Taiwanese company TSMC.
Amazing innovation
Many companies entered the field of graphics processing, but over time all of them withdrew except NVIDIA and its competitor AMD. NVIDIA's developers knew from the start that graphics processing required massive computing power, more than central processing units (CPUs) could offer, so their main concern in the early years was developing a standalone graphics processing unit (GPU) that could be installed in a home computer.
A qualitative breakthrough came in 2006 with the release of a toolkit that enabled parallel computing instead of purely sequential computing. This technology became known as CUDA, and the G80 was the first CUDA-capable GPU.
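To make the difference between sequential and parallel computing concrete, here is a minimal, illustrative CUDA sketch (written against the modern CUDA runtime with unified memory, not the original G80-era toolkit; the kernel and variable names are ours): instead of a CPU looping over a million additions one pair at a time, the work is spread across thousands of GPU threads, each handling a single element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements; the loop a CPU would
// execute sequentially is replaced by thousands of threads running at once.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                        // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);                 // unified memory: visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;     // enough blocks to cover all elements
    vecAdd<<<blocks, threads>>>(a, b, c, n);      // launch the kernel on the GPU
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);                // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a GPU with thousands of cores, these additions proceed essentially at the same time; making that model accessible to ordinary programmers is what CUDA did.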
Turning failures into opportunities
CUDA technology opened the door wide to parallel computing, not only in GeForce graphics cards for home computers, but also in GPU boards built to process and store data.
In 2010, NVIDIA tried to enter the world of mobile phones, releasing the Tegra processor for this purpose. It was a miscalculated venture, not because of any shortcoming in Tegra, but because of the complexities of the phone business. The company quickly realized the futility of pressing on in this area, yet it found other uses for the Tegra mobile processor: running thousands of box-moving robots at Amazon, and powering Tesla's Model S and Model 3 cars from 2016 until 2019. NVIDIA turned its failure into success by focusing on what it does best and cutting away whatever holds it back.
Digital revolution
With every new release of GeForce graphics cards, NVIDIA has delivered on its promises. With the release of the GTX 680 in 2012, it became possible to see hair rippling in video games. We remember how, at the time, we would marvel at what we saw and talk for hours about the quality of the graphics and how "realistic" they were. Now that is taken for granted, and we sometimes forget the digital revolution that brought us to this point. We might forget, for example, that the graphics of the 2009 film Avatar would have taken decades to render without parallel computing. By contrast, one of the most prominent reasons for the failure of Zuckerberg's Metaverse project is that it did not use realistic graphics; his virtual world looked more like a children's cartoon.
GPUs have also become a cornerstone of cryptocurrency mining and of solving blockchain algorithms. Although NVIDIA has released a number of dedicated mining cards, miners' preferred options have been powerful gaming cards, especially the RTX lines and some older GTX lines, notably the 1660 series.
In cryptocurrency mining, as in cloud storage services, data centers, and automated driving, the GPU has emerged as a processing power to be reckoned with.
NVIDIA drew on the experience of AlexNet in deep learning and on the development of artificial neural networks (convolutional networks) for recognizing objects and their shapes, and applied this to realistic game graphics. Later, the same principles of deep learning and object recognition were applied to the automated driving of cars as well as to robots.
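For readers wondering what the "convolution" in a convolutional network actually does, the toy sketch below (in CUDA for continuity with the earlier snippet; the image size, filter values, and names are invented for illustration, not taken from AlexNet) slides a small 3x3 filter over an 8x8 image, one GPU thread per output pixel. Learning the values of many such filters from data is, in essence, how networks like AlexNet come to recognize objects and their shapes.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define W 8   // toy image width
#define H 8   // toy image height
#define K 3   // filter size (K x K)

// One thread per output pixel: each thread multiplies a K x K neighbourhood
// of the image by the filter and sums the result -- a single convolution step.
__global__ void conv2d(const float *img, const float *filt, float *out) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;

    float sum = 0.0f;
    for (int fy = 0; fy < K; ++fy)
        for (int fx = 0; fx < K; ++fx) {
            int ix = x + fx - K / 2;              // neighbouring pixel coordinates
            int iy = y + fy - K / 2;
            if (ix >= 0 && ix < W && iy >= 0 && iy < H)
                sum += img[iy * W + ix] * filt[fy * K + fx];
        }
    out[y * W + x] = sum;
}

int main() {
    float *img, *filt, *out;
    cudaMallocManaged(&img,  W * H * sizeof(float));
    cudaMallocManaged(&filt, K * K * sizeof(float));
    cudaMallocManaged(&out,  W * H * sizeof(float));

    for (int i = 0; i < W * H; ++i) img[i] = 1.0f;            // flat toy image
    for (int i = 0; i < K * K; ++i) filt[i] = 1.0f / (K * K); // simple blur filter

    dim3 threads(8, 8);
    dim3 blocks((W + threads.x - 1) / threads.x, (H + threads.y - 1) / threads.y);
    conv2d<<<blocks, threads>>>(img, filt, out);
    cudaDeviceSynchronize();

    printf("centre pixel = %.2f\n", out[(H / 2) * W + W / 2]);  // expect 1.00
    cudaFree(img); cudaFree(filt); cudaFree(out);
    return 0;
}
```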
The GPU and artificial intelligence
Last year, NVIDIA launched the RTX 40 series of cards. Their prices are very high, driven in part by the demand from cryptocurrency miners in recent years, but prices are expected to fall as mining stagnates at this stage of the cryptocurrency market, which rises and falls in successive cycles, each usually lasting about three years.
Of the RTX line, CEO Jensen Huang says that its engineering would not have been possible without artificial intelligence: during development, the engineer processed one pixel and the artificial intelligence took care of the rest.
Alongside the RTX 40 series, NVIDIA also launched one of its most important products, the fastest and most computationally capable in the field of deep learning: the DGX A100 server, which packs eight GPUs onto one board and costs approximately $200,000. This system is considered the beating heart of artificial intelligence programs, most notably ChatGPT.
In the third quarter of the same year, NVIDIA released its ninth generation of data-processing boards, built around the DGX H100 chip. In addition to being up to nine times faster than its predecessor (the A100), it specializes in processing and producing images (rendering).
Competition and geopolitical challenges
NVIDIA is currently preparing to release its first CPU superchip, called Grace, which may be in use in data centers within days. The company says its new product will be the most efficient central processor ever.
But just as NVIDIA expands into CPU architecture, central-processor giant Intel is moving to produce its first GPU card in 2025.
On competition with companies such as AMD and Intel, Jensen Huang responds that competition is always good in creative and engineering fields, and that market needs are always greater than what these leading companies can supply. But something else keeps this CEO, who has been at the helm of NVIDIA for thirty years, awake at night: total reliance on chip manufacturing in Taiwan through TSMC on the one hand, and US bans on exporting certain technologies to China, particularly the DGX A100 boards, on the other, have made the company vulnerable to uncomfortable political pressures.
Huang, however, appears to handle the matter with great professionalism: while complying with his government's decisions and legislation, he continues to serve his huge customer base in China.
As for TSMC, it has already begun work on new factories in the US state of Arizona worth $40 billion, in case China decides to invade and annex Taiwan.