VaultGemma: The world's most capable differentially private LLM
https://deepmind.google/discover/blog/vaultgemma-the-worlds-most-capable-differentially-private-llm/ (deepmind.google)

New research establishes scaling laws that model the compute-privacy-utility trade-offs of applying differential privacy (DP) to large language models. While DP offers a mathematically rigorous way to prevent memorization by adding noise during training, it can reduce training stability and increase computational cost. Guided by these scaling laws, VaultGemma is introduced as the largest (1-billion-parameter) open model trained from scratch with differential privacy. The model weights are being released on Hugging Face and Kaggle to advance privacy-preserving AI. A rough sketch of the kind of noisy training step involved follows below.
0 points • by chrisf • 1 month ago
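For context, DP training of this sort typically follows the DP-SGD recipe: clip each example's gradient to a fixed norm, average over the batch, and add calibrated Gaussian noise before the parameter update. The sketch below is illustrative only, not VaultGemma's actual training code; the function `dp_sgd_step` and the parameters `clip_norm` and `noise_multiplier` are assumed names for this toy example.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    """One DP-SGD-style step: clip each per-example gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each example's gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(clipped)
    avg = np.mean(clipped, axis=0)
    # Noise scale is proportional to the clipping norm and the noise multiplier,
    # divided by the batch size because we noise the averaged gradient.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch, size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage: noisy linear regression on a tiny synthetic batch.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=8)
w = np.zeros(3)
for _ in range(100):
    per_ex = [2 * (X[i] @ w - y[i]) * X[i] for i in range(len(X))]  # per-example gradients
    w = dp_sgd_step(w, per_ex, clip_norm=1.0, noise_multiplier=0.5, lr=0.05)
print(w)  # roughly recovers [1.0, -2.0, 0.5], blurred by the injected noise
```

The compute-privacy-utility trade-off the post describes shows up directly in these knobs: larger noise multipliers give stronger privacy guarantees but noisier updates, which is what the new scaling laws aim to model at LLM scale.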