
Language models are growing ever larger

Scale alone, however, will not solve all of the systems' problems.

Artificial intelligence researchers at DeepMind examined today's large language models built with machine learning, which keep growing in size. They concluded that further increases in scale alone will not remedy every problem these systems exhibit.

DeepMind's own language model, Gopher, contains about 280 billion parameters. That provides a useful basis for comparison: GPT-3, created by OpenAI, which is backed by Microsoft and whose technology the Redmond company also uses, has about 175 billion parameters. In addition, two months ago Nvidia and Microsoft unveiled a joint model with about 530 billion parameters.

According to DeepMind researcher Jack Rae, one of the central findings of their work is that the capabilities of language models are still expanding: this is an area that has not yet reached a plateau. At the same time, scale does not address problems such as a model reinforcing stereotypical prejudices or spreading untruths; only additional training routines and the possibility of direct human intervention can help there. This is why improved language model architectures are needed, ones that reduce training energy costs while also making models easier to trace. The latter is one of the most important capabilities for auditing the prejudices a model has learned.

