The new Gemma 4 models are out, and they look like they'd be a lot of fun to experiment with from a local, on-device perspective.
Would it be possible to add the E2B, E4B, 26B A4B, and maybe even the 31B model (or a quantized variant) to the supported model list?
https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/