A Stanford study has uncovered thousands of images of child sexual abuse in LAION-5B, the largest open image-text dataset used to train AI models, including Stable Diffusion. Following this revelation, LAION has temporarily taken its datasets offline to ensure they are safe before republishing them.