
The DeepMind blog post surveys their research on Gopher, a 280-billion-parameter language model, covering both its capabilities and its ethical risks. Gopher performs strongly on knowledge-intensive tasks such as reading comprehension, but its gains are smaller on logical and mathematical reasoning. The accompanying ethics work examines risks such as misinformation and the reinforcement of stereotypes. The post also introduces RETRO (Retrieval-Enhanced Transformer), a more efficient model that conditions its predictions on text retrieved from a database of trillions of tokens, reducing training cost and making predictions easier to interpret by tracing them back to retrieved sources.
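The core idea behind retrieval augmentation can be illustrated with a toy sketch. The snippet below is purely illustrative and is not DeepMind's implementation: RETRO uses a learned encoder and approximate nearest-neighbour search over chunked text, whereas here a bag-of-words count vector stands in for the embedding, cosine similarity stands in for the neighbour search, and the `database` list is a hypothetical stand-in for the real trillions-of-tokens corpus.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding": whitespace-token counts
    # (a stand-in for RETRO's learned chunk encoder).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical retrieval database; in RETRO this is a corpus of
# trillions of tokens, chunked and indexed for fast neighbour lookup.
database = [
    "Gopher is a 280 billion parameter language model.",
    "RETRO conditions its predictions on retrieved text chunks.",
    "Paris is the capital of France.",
]

def retrieve(query, k=1):
    # Rank stored chunks by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(database, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def augmented_prompt(query):
    # Prepend the retrieved neighbours so a language model can
    # condition its next-token predictions on them.
    context = " ".join(retrieve(query))
    return f"Context: {context}\nQuestion: {query}"

print(augmented_prompt("How many parameters does Gopher have?"))
```

Because the retrieved chunk is visible in the prompt, a prediction can be traced back to the source text that influenced it, which is the interpretability benefit the post highlights.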