A 2018 OpenAI report found that the computing power used to train the largest AI models is growing exponentially, doubling roughly every 3.4 months.
New specialized hardware, such as Cerebras Systems' Wafer Scale Engine, makes it possible to replace many powerful computers with a single chip designed for AI training. However, it cannot yet be mass-produced, and researchers note that it would cost several million dollars.
“This is a fantastic achievement that will enable thousands of independent researchers to achieve the same powerful results as the teams of huge corporations. Additionally, the computing resources typically required to conduct this type of research leave a large carbon footprint. Now we have managed to eliminate it.”
University of Southern California researchers took a different approach: dramatically speeding up reinforcement learning on a single machine.
In this method, the AI agent is placed in a simulated environment that provides rewards for achieving certain goals. These rewards serve as feedback for further learning. The approach involves three main computational tasks: simulating the environment and the agent; deciding what to do next based on learned rules; and using the results of these actions to update the agent's behavior.
This led to a significant acceleration in learning compared with other approaches. Using a single computer equipped with a 36-core processor, the researchers were able to process approximately 140,000 frames per second while training on Atari and Doom video games. In the 3D training environment of DeepMind Lab, they reached 40,000 frames per second, about 15% better than previous methods.