Google Announces Gemma: Open Model Generative AI
Google announces the release of Gemma, a family of open models for generative AI. (image source: Google)

Google has announced the public release of Gemma, a family of lightweight, state-of-the-art open models. Gemma is built from the same research and technology used to create the Gemini models, developed by Google DeepMind and other teams across Google. Gemma is a text-only large language model and currently supports only English. The models were created in accordance with the company’s Responsible AI guidelines and ethics.

Gemma can be deployed on Google Cloud, any Google Kubernetes Engine environment, or even on a laptop or workstation.

Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.”

Sam Witteveen walks viewers through Google Gemma open models. (source: YouTube)

Google Gemma: Open Models vs. Open Source

It’s important to note that Google is not releasing Gemma as open source but as an open model.

The distinction, as described by Google:

“Open models feature free access to the model weights, but terms of use, redistribution, and variant ownership vary according to a model’s specific terms of use, which may not be based on an open-source license. The Gemma models’ terms of use make them freely available for individual developers, researchers, and commercial users for access and redistribution. Users are also free to create and publish model variants. In using Gemma models, developers agree to avoid harmful uses, reflecting our commitment to developing AI responsibly while increasing access to this technology.”

How to access Google Gemma models

Google has made the Gemma models available on Kaggle, Hugging Face, and, of course, its own Google Cloud platform. Gemma is ready to use within Colab and Kaggle notebooks.

Google has released the model weights in two sizes: Gemma 2B and Gemma 7B. Each size is released with pre-trained and instruction-tuned variants to enable research and development.

Essentially, the models provide the foundational building blocks for generative AI text applications and can support many robust use cases. How you ultimately decide to train the models is out of Google’s hands, but the company supports training and fine-tuning through frameworks such as JAX, PyTorch, Keras 3.0, and Hugging Face Transformers.
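As an illustration of the Hugging Face route, the sketch below loads the instruction-tuned 7B weights and generates a short completion. It is not an official Google example: the `google/gemma-7b-it` model ID, the bfloat16 precision, and the prompt are assumptions, and downloading the weights requires accepting Gemma’s terms of use on the Hugging Face Hub.

```python
# Minimal sketch: load an instruction-tuned Gemma checkpoint with Hugging Face
# Transformers and generate text. Assumes the "google/gemma-7b-it" model ID and
# that you have accepted Gemma's terms of use and authenticated with the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-7b-it"  # assumed ID for the instruction-tuned 7B variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Tokenize a prompt and generate a short completion.
inputs = tokenizer("Explain what an open model is in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```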

Full documentation and quickstart guides are available on Google’s AI Developer portal. A Gemma model card is also available on Kaggle.

Google Gemma model use cases

Gemma is fully supported on Google’s Vertex AI platform for artificial intelligence and machine learning operations. With Gemma on Vertex AI, developers can:

  • Build generative AI apps for lightweight tasks such as text generation, summarization, and Q&A
  • Enable research and development using lightweight but customized models for exploration and experimentation
  • Support real-time generative AI use cases that require low latency, such as streaming text (see the sketch after this list)
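As a rough illustration of the streaming point, the following sketch uses the Transformers TextStreamer utility to print tokens as they are generated instead of waiting for the full completion. The `google/gemma-2b-it` model ID and the prompt are assumptions, and a production deployment would more likely serve the model through Vertex AI than from a local process.

```python
# Minimal sketch of low-latency streaming output, assuming the same Hugging Face
# Transformers stack as above and the "google/gemma-2b-it" model ID.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "google/gemma-2b-it"  # assumed ID for the smaller, quicker-to-serve variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TextStreamer writes tokens to stdout as they are produced, which is the
# behavior a streaming text UI needs.
streamer = TextStreamer(tokenizer, skip_prompt=True)
inputs = tokenizer("Write a haiku about open models.", return_tensors="pt")
model.generate(**inputs, streamer=streamer, max_new_tokens=48)
```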

Additional technical documentation on Gemma can be found in the Google DeepMind public technical report (PDF link).

Google is moving quickly to remain competitive with its peers: Gemini 1.5 Pro was released last week, Gemini 1.0 Ultra reached general availability, and generative AI is being rolled out across its portfolio of products.

Disclaimer: The author of this article is a current employee of Google. This article does not represent the views or opinions of his employer and is not meant to be an official statement for Google, or Google Cloud.
