A 2018 OpenAI Report Showed That the Computing Power Used to Train the Largest AI Models Is Growing
New specialized hardware, Cerebras Systems' Wafer Scale Engine, makes it possible to replace many powerful computers with a single chip suited for AI training. However, it cannot be mass-produced yet, and the researchers note that it costs several million dollars.
"This is a fantastic achievement that will allow thousands of independent researchers to achieve the same powerful results as teams at huge corporations. In addition, the computational resources normally required for this type of research leave a large carbon footprint. Now we have managed to get rid of it."
University of Southern California researchers have developed a method that accelerates AI training on a single computer
In this method, the AI agent is placed in a simulated environment that provides rewards for achieving certain goals. These rewards are used as feedback for further learning. The approach involves three main computational tasks: simulating the environment and the agent; deciding what to do next based on the learned rules; and using the results of those actions to update the agent's behavior.
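The three-stage loop described above (simulate, decide, update) can be sketched with a toy example. This is a minimal illustration, not the USC researchers' actual system: the environment, policy, and all names here are hypothetical, and the learning rule is a deliberately simple preference update rather than a real RL algorithm.

```python
import random

class ToyEnv:
    """Toy environment: the agent is rewarded for moving right toward position 10."""
    def __init__(self):
        self.pos = 0

    def step(self, action):            # stage 1: environment simulation
        self.pos += 1 if action == "right" else -1
        reward = 1.0 if action == "right" else -1.0
        done = self.pos >= 10          # goal reached
        return self.pos, reward, done

class ToyPolicy:
    """Keeps a numeric preference per action, updated from observed rewards."""
    def __init__(self):
        self.pref = {"left": 0.0, "right": 0.0}

    def act(self):                     # stage 2: decide based on learned rules
        if self.pref["left"] == self.pref["right"]:
            return random.choice(["left", "right"])  # break ties randomly
        return max(self.pref, key=self.pref.get)

    def update(self, action, reward):  # stage 3: use the result to update behavior
        self.pref[action] += 0.1 * reward

env, policy = ToyEnv(), ToyPolicy()
frames, done = 0, False
while not done:
    action = policy.act()
    _, reward, done = env.step(action)
    policy.update(action, reward)
    frames += 1
print(frames)  # number of simulated frames until the goal was reached
```

Real high-throughput systems run these three stages asynchronously across many parallel environment instances, which is where the frame rates quoted below come from; the loop above keeps them sequential only for clarity.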
This led to a significant acceleration in learning compared with other approaches. Using a single computer equipped with a 36-core processor, the researchers were able to process approximately 140,000 frames per second while training on the Atari and Doom video games. In the 3D training environment of DeepMind Lab, they achieved a rate of 40,000 frames per second, which is 15% better than usual.