Introducing Gemma 3 270M: The compact model for hyper-efficient AI

This content originally appeared on Google Developers Blog and was authored by Google Developers Blog

Google’s new Gemma 3 270M is a compact, 270-million-parameter model offering energy efficiency, production-ready quantization, and strong instruction-following, making it a powerful solution for task-specific fine-tuning in on-device and research settings.
