We’re excited to announce that Google’s Gemma 3 models are coming to Databricks, starting with the Gemma 3 12B model, which is now natively available across all clouds. Text capabilities are available today, with multimodal support rolling out soon.
Gemma 3 12B is optimized for enterprise workloads, striking the best balance between capability and computational efficiency. It excels at core use cases like document processing, content analysis, code generation, and conversational AI, making it a strong fit for production-grade applications.
Databricks has long been the platform where enterprises manage and analyze unstructured data at scale. As enterprises connect that data with large language models to build AI agents, the need for efficient, high-quality models at a reasonable price point has grown rapidly. Gemma 3 12B fills a critical gap, offering open, high-quality multimodal capabilities that power document AI and visual question answering use cases. Combined with Databricks’ unified platform for unstructured data and model development, teams can build and deploy production-grade AI faster and more affordably.
Enhanced quality and multimodal capabilities
Gemma 3 12B provides an attractive balance of size and quality:
- Broad capabilities: Gemma 3 12B has a large 128K context window that supports long documents as inputs. It also supports over 140 languages, making it ideal for global businesses.
- High quality for complex text-based tasks: Gemma 3 12B excels in language understanding and mathematical problem-solving. This makes it well suited for enterprise applications that require sophisticated text analysis and logical reasoning.
- Empowers document AI and visual question answering use cases: The additional image modality unlocks use cases that require input from unstructured data, such as extracting information from tables in documents, receipt images, and more.
Get started with Gemma 3 12B on Databricks
To get a sense of whether Gemma 3 12B would fit your use case, try it out in AI Playground.
You’ll be able to question the mannequin serving endpoint as effectively. For each AI Playground and the mannequin serving endpoint, multimodal capabilities are coming quickly.
Additionally, the newly released MLflow 3 lets you evaluate the model more comprehensively across your specific datasets.
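The sketch below illustrates one way such an evaluation could look. It assumes MLflow 3’s `mlflow.genai.evaluate` API with its built-in `Correctness` and `Safety` scorers, plus the same assumed endpoint name as above; check the MLflow documentation for the exact scorer names and data format in your version.

```python
# A minimal sketch of an MLflow 3 GenAI evaluation run on a small, hand-built dataset.
# The scorer names, data format, and endpoint name are assumptions; adjust as needed.
import mlflow
from mlflow.genai.scorers import Correctness, Safety

# Illustrative eval set: each record pairs an input with an expected answer.
eval_data = [
    {
        "inputs": {"question": "What file formats does the invoice parser accept?"},
        "expectations": {"expected_response": "PDF and PNG."},
    },
    {
        "inputs": {"question": "How many languages does Gemma 3 12B support?"},
        "expectations": {"expected_response": "Over 140 languages."},
    },
]

def predict_fn(question: str) -> str:
    """Call the Gemma 3 12B serving endpoint for a single question."""
    from mlflow.deployments import get_deploy_client

    client = get_deploy_client("databricks")
    resp = client.predict(
        endpoint="databricks-gemma-3-12b",  # assumed endpoint name
        inputs={"messages": [{"role": "user", "content": question}], "max_tokens": 128},
    )
    return resp["choices"][0]["message"]["content"]

results = mlflow.genai.evaluate(
    data=eval_data,
    predict_fn=predict_fn,
    scorers=[Correctness(), Safety()],
)
print(results.metrics)
```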
You can also run scalable batch inference by sending a SQL query against your table, as in the sketch below.
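Here is a minimal sketch of batch inference using the `ai_query` SQL function from a Databricks notebook, where `spark` and `display` are predefined. The table `main.docs.support_tickets`, its columns, and the endpoint name are placeholders for your own catalog objects.

```python
# Batch inference sketch: apply the Gemma 3 12B endpoint to every row of a table
# with ai_query. Table, column, and endpoint names are placeholders.
summaries = spark.sql("""
    SELECT
      ticket_id,
      ai_query(
        'databricks-gemma-3-12b',
        CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
      ) AS summary
    FROM main.docs.support_tickets
""")
display(summaries)
```

Because `ai_query` runs inside the SQL engine, the same statement scales from a quick notebook check to a scheduled job over millions of rows without changing the inference code.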