Google has launched its Gemma 3n AI model, positioned as an advancement for on-device AI, bringing multimodal capabilities and better performance to edge devices.
Previewed in May, Gemma 3n is multimodal by design, with native support for image, audio, video, and text inputs and outputs, Google said. Optimized for edge devices such as phones, tablets, laptops, desktops, or single cloud accelerators, Gemma 3n models are available in two sizes based on “effective” parameters, E2B and E4B. While the raw parameter counts for E2B and E4B are 5B and 8B, respectively, these models run with a memory footprint comparable to traditional 2B and 4B models, operating with as little as 2GB and 3GB of memory, Google said.
Announced as a production release on June 26, Gemma 3n models can be downloaded from Hugging Face and Kaggle. Developers can also try out Gemma 3n in Google AI Studio.
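For developers pulling the weights from Hugging Face, a minimal sketch of running a text prompt through the model with the transformers library might look like the following; the checkpoint name "google/gemma-3n-E2B-it" is an assumption, so check the Hugging Face model page for the exact identifier and hardware requirements.

```python
from transformers import pipeline

# Assumed checkpoint ID for the instruction-tuned E2B variant; verify on Hugging Face.
generator = pipeline("text-generation", model="google/gemma-3n-E2B-it")

# Simple text-only prompt; Gemma 3n also accepts image, audio, and video inputs
# through its multimodal interfaces.
result = generator(
    "Explain in one sentence what an on-device AI model is.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```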