The company is also attacking the machine learning market with new accelerators, and Magic Leap 2 is being supplied with AMD silicon as well.
At CES, in addition to processors, system chips and graphics controllers, AMD also paid attention to more professional needs, and within this, machine learning came to the fore. The company mainly highlighted the XDNA architecture, which is in fact entirely a development of the recently acquired Xilinx; the new owner has merely added the name and broadened its usability, as it is now also being offered to customers.
The company's idea is to have a unified software environment from clients through servers to the cloud, so that no matter which area a piece of software is developed for, it will run wherever the hardware contains an XDNA component.
XDNA itself is extremely scalable: its basic unit is the so-called AI engine with local storage. Many of these can be placed in a tile, depending on how much performance the targeted area requires. What makes the system very efficient is that it can fully adapt to neural networks, whose layers are not necessarily uniform in size. XDNA takes this into account and, depending on layer size, can assign a different number of subunits to each layer, so the design adaptively adjusts to the neural network.
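The size-proportional assignment described above can be illustrated with a toy sketch. This is not AMD's actual scheduler, just a minimal illustration of distributing a fixed pool of AI engines across layers of unequal size; the function name and the example layer sizes are invented for illustration.

```python
# Toy sketch (not AMD's XDNA scheduler): split a fixed pool of AI engines
# across neural-network layers in proportion to each layer's size, the way
# the article describes XDNA adapting to non-uniform layers.

def allocate_engines(layer_sizes, total_engines):
    """Give every layer at least one engine, then hand out the rest
    proportionally to layer size."""
    total = sum(layer_sizes)
    # Start with one engine per layer so no layer is starved.
    alloc = [1] * len(layer_sizes)
    remaining = total_engines - len(layer_sizes)
    # Distribute the remaining engines proportionally (rounded down).
    for i, size in enumerate(layer_sizes):
        alloc[i] += int(remaining * size / total)
    # Rounding leftovers go to the largest layers first.
    leftover = total_engines - sum(alloc)
    by_size = sorted(range(len(layer_sizes)), key=lambda i: -layer_sizes[i])
    for i in by_size[:leftover]:
        alloc[i] += 1
    return alloc

# Four layers of very different widths sharing 32 engines:
print(allocate_engines([512, 2048, 256, 1024], 32))  # -> [5, 16, 2, 9]
```

The big middle layer gets half the pool while the small one still keeps a minimum share, which is the adaptive behaviour the architecture description implies.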
Although AMD has not yet disclosed the structure of the Ryzen AI block in the Phoenix APU presented at CES, its performance is considered limited, as it is designed mainly for the inference phase of machine learning, primarily with home use in mind.
A more serious system will be the Alveo V70, a card for the PCI Express 5.0 interface that also uses the XDNA design. It is still aimed at the inference stage of machine learning, but with far higher performance, which is not surprising, as it is looking for a home in the server market. According to AMD, the hardware, originally designed by Xilinx, gets by with passive cooling at a consumption of 75 watts, delivers 404 and 202 TOPS with the INT8 and BFloat16 data types respectively, and its dedicated component can decode up to 96 channels of Full HD video simultaneously. The Alveo V70 is already available for pre-order and is scheduled to launch this spring.
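For context, the quoted figures can be turned into efficiency numbers with a bit of arithmetic. This only rearranges AMD's own marketing numbers; it is not an independent measurement.

```python
# Sanity arithmetic on the Alveo V70 figures quoted by AMD:
# 404 INT8 TOPS and 202 BFloat16 TOPS at 75 W of consumption.

int8_tops = 404
bf16_tops = 202
power_w = 75

print(f"INT8 efficiency: {int8_tops / power_w:.2f} TOPS/W")  # ~5.39 TOPS/W
print(f"BF16 efficiency: {bf16_tops / power_w:.2f} TOPS/W")  # ~2.69 TOPS/W
# Halving the precision (BF16 -> INT8) exactly doubles the throughput:
print(f"INT8/BF16 ratio: {int8_tops / bf16_tops:.0f}x")      # 2x
```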
AMD is already targeting the training phase of machine learning with the Instinct series, including the Instinct MI300, which will arrive in the second half of this year. Unlike its predecessor, this system is no longer a classic accelerator but a combination of CPU and GPGPU, similar in principle to an APU, only not built on a single chip but made up of chiplets.
The package itself carries nine 5 nm and four 6 nm chiplets in a complex 3D arrangement, along with 128 GB of standard HBM3 memory. So far, AMD has revealed this much about the design, which comprises a total of 146 billion transistors: the GPGPU uses the CDNA 3 architecture, while the CPU is built from Zen 4 cores, of which there will be 24 in total.
The main question might be why the CPU and GPGPU had to be moved into a single package. The answer is that the system can run much more efficiently this way: a good example is that the MI300 will offer eight times the AI performance of the MI250X, and on AI tasks its performance/consumption ratio will be five times better.
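Combining the two claims yields one more figure worth noting: eight times the performance at five times the performance per watt implies roughly 1.6 times the power draw on AI workloads. Again, this only combines AMD's own claims.

```python
# What AMD's two MI300 claims imply when taken together
# (marketing figures, not measurements).

perf_ratio = 8           # MI300 vs MI250X AI performance, per AMD
perf_per_watt_ratio = 5  # MI300 vs MI250X performance/consumption, per AMD

# power = performance / (performance per watt)
implied_power_ratio = perf_ratio / perf_per_watt_ratio
print(f"Implied AI power-draw ratio: {implied_power_ratio:.1f}x")  # 1.6x
```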
Finally, we should mention Magic Leap 2, which is considered the successor to the headset Magic Leap introduced in 2017. In the new system, AMD provides the custom-designed system chip, which uses Zen 2 cores and an IGP based on the RDNA 2 architecture.