This is how Tesla wants to improve its AI

San Francisco, Düsseldorf. Anyone who is not an expert needed a translator when Tesla boss Elon Musk began celebrating the new version of his autonomous driving software a few days ago. “FSD Beta 9.2” features “int8 quantization” and a new “VRU model” with a “12 percent improvement,” Musk enthused.

What the Tesla boss means: “int8 quantization” is a technique that compresses a neural network’s computations into 8-bit integers, so the new Tesla autopilot can process even larger amounts of data even faster. “VRU” stands for “vulnerable road user”, i.e. less protected road users such as pedestrians or cyclists – whom, according to Musk, the Tesla autopilot will in future recognize twelve percent better.
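Tesla’s exact pipeline is not public, but what int8 quantization means in practice can be sketched with PyTorch’s dynamic quantization, which stores a model’s weights as 8-bit integers instead of 32-bit floats (the tiny network here is a stand-in, not Tesla’s architecture):

```python
import torch
import torch.nn as nn

# A stand-in perception head; Tesla's actual network is not public.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),  # e.g. scores for 10 object classes
)

# Dynamic int8 quantization: Linear weights are stored as 8-bit
# integers, shrinking the model and speeding up inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)    # a dummy feature vector
print(quantized(x).shape)  # same output shape, roughly 4x smaller weights
```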

There is a good reason why Elon Musk slips into technical jargon: the Tesla CEO actually wants to impress experts and talent with his tweets. The same goal is usually served by AI Day, which Tesla is holding on Thursday late in the evening, German time. With the event, Musk wants to “convince the best AI talents to work for Tesla”.

For a few days now, the event has had another purpose: to reassure investors and customers. The US authorities are currently investigating several accidents involving the autopilot. According to the US traffic safety authority NHTSA, Tesla vehicles have crashed eleven times since 2018 into police vehicles or ambulances parked at accident scenes. Warning signs, flashing lights – none of it helped: the automated driving system misjudged the situation. After the investigation was announced, Tesla lost around $50 billion in market value.

No lidar or radar

AI plays a crucial role at Tesla. Unlike other carmakers, the electric pioneer does without lidar and radar for automated driving. Its “Tesla Vision” system relies entirely on data from the eight cameras built into every model. “Visual data is much more precise, so it makes a lot more sense to rely a lot more on visual data than on merging different sensors,” Musk said.

Instead of capturing the three-dimensional space around the car with radar and laser, algorithms estimate it from the two-dimensional camera images – experts call this technique pseudo-lidar.
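The underlying geometry can be sketched in a few lines: assuming a neural network estimates a depth value for every camera pixel (the depth model itself is hypothetical here), those pixels can be back-projected into a 3D point cloud using the camera’s intrinsic parameters – the “pseudo-lidar” representation:

```python
import numpy as np

def pseudo_lidar(depth: np.ndarray, fx: float, fy: float,
                 cx: float, cy: float) -> np.ndarray:
    """Back-project a per-pixel depth map (H, W) into an (N, 3) point cloud.

    The depth map would come from a neural network estimating distance
    from 2D camera images; here we simply assume it exists.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pinhole camera model
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example with made-up intrinsics and a fake 4x4 depth map:
cloud = pseudo_lidar(np.full((4, 4), 10.0), fx=1000, fy=1000, cx=2, cy=2)
print(cloud.shape)  # (16, 3) – one 3D point per pixel
```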

But this requires an extremely good neural network – a self-learning AI that evaluates visual data in order to be trained for every conceivable traffic situation. That is what the Tesla autopilot team is working on: 300 hardware and software engineers plus 500 data labelers evaluate and annotate images. The number of these data analysts is set to grow to 1,000, Musk told the Tesla fan portal “CleanTechnica” a year ago. “Nobody is really their boss,” said Musk, who meets with the heads of the autopilot team once a week.

Setting the right labels

“Very few of our developers still write algorithms; most collect and curate data sets,” Tesla’s AI chief Andrej Karpathy said recently in a podcast. These data sets contain millions of images of traffic situations, above all those in which the autopilot was overwhelmed or made wrong decisions – for example because it had trouble recognizing a stop sign or kept too little distance from another car’s open door. Whenever a Tesla driver intervenes and corrects the autopilot, Karpathy’s team gains new training data.
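The article describes this feedback loop only in outline; a hedged sketch of what intervention-triggered data mining could look like follows below (the event format and all names are hypothetical, not Tesla’s actual telemetry API):

```python
from dataclasses import dataclass

@dataclass
class DriveEvent:
    timestamp: float
    autopilot_active: bool
    driver_override: bool   # driver braked or steered against the autopilot
    clip_id: str            # id of the surrounding camera footage

def select_training_clips(events: list[DriveEvent]) -> list[str]:
    """Keep only clips where the driver corrected the autopilot –
    exactly the situations the network got wrong."""
    return [e.clip_id for e in events
            if e.autopilot_active and e.driver_override]

events = [
    DriveEvent(0.0, True, False, "clip-001"),
    DriveEvent(1.5, True, True,  "clip-002"),  # intervention -> training data
]
print(select_training_clips(events))  # ['clip-002']
```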

Andrej Karpathy: Tesla’s head of AI relies on neural networks. (Photo: San Francisco Chronicle via Getty)

The developers then apply the correct labels to the images – stop sign, car door, solid line – so that the autopilot copes better with the situation next time. “Many outsource the labeling to external companies. We have a whole internal organization with highly trained people,” says Karpathy.
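As a purely illustrative data structure, one such human annotation might look like this (a hypothetical schema, not Tesla’s internal format):

```python
from dataclasses import dataclass
from enum import Enum

class ObjectClass(Enum):
    STOP_SIGN = "stop_sign"
    CAR_DOOR = "car_door"
    SOLID_LINE = "solid_line"

@dataclass
class Label:
    image_id: str
    object_class: ObjectClass
    bbox: tuple[float, float, float, float]  # x, y, width, height in pixels

# One annotation a labeler might produce for a difficult frame:
annotation = Label("frame-4711.png", ObjectClass.STOP_SIGN,
                   (312.0, 98.0, 40.0, 40.0))
```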

By now, the neural network independently stitches together the images from all eight cameras and generates a bird’s-eye view of the car and its surroundings. “As long as we keep improving these data sets and have enough computing power, there is no limit to how much better these neural networks can get,” says Karpathy.
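How eight camera streams could be fused into one top-down grid can be shown in heavily simplified form (a toy sketch under assumed shapes, not Tesla’s actual architecture):

```python
import torch
import torch.nn as nn

class ToyBEVFusion(nn.Module):
    """Encode each of 8 camera images and project the features
    onto a shared top-down (bird's-eye-view) grid."""

    def __init__(self, bev_size: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(          # shared per-camera encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Learned projection from camera features into the BEV grid.
        self.to_bev = nn.Linear(16, bev_size * bev_size)
        self.bev_size = bev_size

    def forward(self, cams: torch.Tensor) -> torch.Tensor:
        # cams: (batch, 8, 3, H, W) – one image per camera
        b, n, c, h, w = cams.shape
        feats = self.encoder(cams.view(b * n, c, h, w)).view(b, n, -1)
        bev = self.to_bev(feats).sum(dim=1)    # fuse cameras by summation
        return bev.view(b, self.bev_size, self.bev_size)

model = ToyBEVFusion()
print(model(torch.randn(1, 8, 3, 64, 64)).shape)  # torch.Size([1, 32, 32])
```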

Supercomputer news

The computing power is to be supplied by Tesla’s supercomputer “Dojo”, which Karpathy presented in June. The machine – with 5,760 graphics processors from chip manufacturer Nvidia, probably the fifth-largest supercomputer in the world – is also likely to play a leading role at Tesla’s AI Day.

Dennis Hong, robotics professor at the University of California, Los Angeles (UCLA), tweeted the image of a three-dimensionally integrated chip a few days ago, together with the date of AI Day. Asked whether his robotics institute at UCLA is now working with Tesla, Hong wrote tellingly: “I can’t comment on it yet”, followed by a silent and a happy emoji.

The autonomous-driving expert has probably developed a server chip that is meant to supplement or replace the Nvidia processors in Dojo. Tesla has done something similar before with its FSD chip, which controls the autopilot: in 2016 it recruited Jim Keller for its development, who had, among other things, designed the first iPad chip for Apple. In 2019, Tesla replaced the Nvidia semiconductors in its cars with its own.

Musk had already indicated that chips for both FSD and Dojo would be on the agenda. “We’ll talk about advances in Tesla’s AI software and hardware, both training and inference,” he wrote in a July tweet announcing AI Day. Training the neural networks is the job of the chips in the supercomputer. The FSD chip in the car, on the other hand, is a so-called inference chip that puts what has been learned into practice.
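The division of labor between the two chip types can be illustrated in generic code: gradient-based training runs on the supercomputer, while the car only executes the frozen network (a sketch of the general principle, not Tesla’s software stack):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the driving network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# --- Training (supercomputer): forward, backward, weight updates ---
x, target = torch.randn(32, 10), torch.randn(32, 2)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()    # gradients: the expensive part training chips handle
optimizer.step()

# --- Inference (chip in the car): frozen weights, no gradients ---
model.eval()
with torch.no_grad():  # no backward pass needed on the road
    prediction = model(torch.randn(1, 10))
```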

Avoiding phantom braking

Among experts, however, it is controversial whether camera data and AI are sufficient to develop a self-driving car. The approach will “hit a glass ceiling”, Mobileye boss Amnon Shashua said in January. The Intel subsidiary, which develops driver-assistance systems for BMW and VW among others, worked with Tesla on the autopilot until 2016. Then the partners split up – because Musk trusted the system to do more than Mobileye’s developers did.

Tesla’s AI chief Karpathy described the problems of the Mobileye approach at a conference a few weeks ago. With radar, he said, so-called “phantom braking” occurs. Tesla drivers have reported the phenomenon: with the autopilot activated, the vehicle suddenly brakes for no apparent reason.

According to Karpathy, the radar sensors constantly report objects in the vehicle’s path, which are then cross-checked against the camera data. If the vehicle drives through an underpass, for example, the visual data can appear to confirm the radar report – and unnecessary braking occurs. The purely visual approach solves the problem – provided the AI recognizes the situation correctly.
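The failure mode Karpathy describes can be caricatured in a few lines of deliberately simplified logic (an illustrative sketch, not real autopilot code): under naive fusion, a spurious radar return that the camera weakly “confirms” triggers a brake command, while the vision-only path never sees the radar input at all:

```python
def fused_should_brake(radar_sees_obstacle: bool,
                       camera_confidence: float) -> bool:
    """Naive radar+camera fusion: a radar return (e.g. the steel beams
    of an underpass) needs only weak visual confirmation to brake."""
    return radar_sees_obstacle and camera_confidence > 0.3

def vision_only_should_brake(camera_confidence: float) -> bool:
    """Pure vision: brake only if the camera itself is sure there is
    an obstacle on the road – overhead structures never count."""
    return camera_confidence > 0.9

# Underpass scenario: radar reports a "stationary object", the camera
# is mildly confused by the shadow (confidence 0.4).
print(fused_should_brake(True, 0.4))    # True  -> phantom braking
print(vision_only_should_brake(0.4))    # False -> the car keeps driving
```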

More: 5,760 graphics processors: Tesla’s supercomputer is meant to enable autonomous driving.
