We benchmarked Gemma 4 26B-A4B GGUFs to identify the best performing quants. Unsloth ranks first in ALL 22 of 22 model sizes on mean KL divergence, making them SOTA. GGUFs: https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF https://twitter.com/UnslothAI/status/2046242156122358168/photo/1
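The KL-divergence ranking above compares each quant's next-token distribution against the full-precision model's. A minimal sketch of the metric in pure Python — the function names and per-token logit layout here are illustrative, not Unsloth's actual benchmark harness:

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution over the vocabulary."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) for two discrete distributions over the same vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def mean_kl(fp_logits_per_token, quant_logits_per_token):
    """Average KL divergence across token positions.

    Lower mean KL means the quant's output distribution stays closer
    to the full-precision model's, which is what the ranking measures.
    """
    kls = [kl_divergence(softmax(fp), softmax(q))
           for fp, q in zip(fp_logits_per_token, quant_logits_per_token)]
    return sum(kls) / len(kls)
```

In practice the two sets of logits would come from running the full-precision and quantized models over the same evaluation text; a quant that matches the original exactly scores a mean KL of 0.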
We ran Qwen3.6-35B-A3B GGUF performance benchmarks to help you choose the best quant. Unsloth ranks first in 21 of 22 model sizes on mean KL divergence, making them SOTA. GGUFs: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF https://twitter.com/UnslothAI/status/2045167861942063428/photo/1
We ran Qwen3.6-35B-A3B GGUF performance benchmarks to help you choose the best quant. Unsloth ranks first in 21 of 22 model sizes on mean KL divergence, making them SOTA. GGUFs: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF https://twitter.com/UnslothAI/status/2045166994622988522/photo/1
RT @Hesamation: WAIT WHAT?! 2-bit Qwen3.6-35B-A3B is lightning fast and it only needs 13 GB RAM. “did a complete repo bug hunt with evid…
2-bit Qwen3.6-35B-A3B did a complete repo bug hunt with evidence, repro, fixes, tests and a PR writeup. 🔥 Run it locally in Unsloth Studio with just 13GB RAM. 2-bit Qwen3.6 GGUF made 30+ tool calls, searched 20 sites and executed Python code. GitHub: https://github.com/unslothai/unsloth https://twitter.com/UnslothAI/status/2044858346948464743/video/1
Qwen3.6-35B-A3B can now be run locally!💜 The model is the strongest mid-sized LLM on nearly all benchmarks. Run on 23GB RAM via Unsloth Dynamic GGUFs. GGUFs to run: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF Guide: https://unsloth.ai/docs/models/qwen3.6 https://twitter.com/UnslothAI/status/2044786492451778988/photo/1
You can now train Gemma 4 with RL in our free notebook! You need just 9GB VRAM to run RL on Gemma 4 locally! Gemma 4 will learn to solve sudoku autonomously via GRPO. RL Guide: https://unsloth.ai/docs/get-started/reinforcement-learning-rl-guide GitHub: https://github.com/unslothai/unsloth Gemma 4 Colab: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma4_(E2B)_Reinforcement_Learning_Sudoku_Game.ipynb https://twitter.com/UnslothAI/status/2044428909635359166/photo/1
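GRPO needs a verifiable reward signal, and sudoku is a good fit because a candidate grid can be scored mechanically. A hypothetical sketch of the kind of reward function such a notebook could use (this is an illustrative shape, not the notebook's actual reward):

```python
def sudoku_reward(grid):
    """Score a 9x9 sudoku candidate: +1 for each conflict-free, fully
    filled row, column, and 3x3 box. A perfect solution scores 27,
    giving the RL policy a dense, gradually improvable signal."""
    def ok(cells):
        vals = [v for v in cells if v != 0]  # 0 = empty cell
        return len(vals) == 9 and len(set(vals)) == 9
    score = 0
    for i in range(9):
        score += ok(grid[i])                          # row i
        score += ok([grid[r][i] for r in range(9)])   # column i
    for br in range(0, 9, 3):                         # 3x3 boxes
        for bc in range(0, 9, 3):
            score += ok([grid[br + dr][bc + dc]
                         for dr in range(3) for dc in range(3)])
    return score
```

In a GRPO loop, the model's text output would be parsed into a grid and this score used to rank the sampled completions within each group.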
MiniMax 2.7 can now be run locally!🔥 MiniMax-M2.7 is a new 230B parameter open model with SOTA on SWE-Pro and Terminal Bench 2. Run the Dynamic 4-bit MoE model on 128GB Mac or RAM/VRAM setups. Guide: https://unsloth.ai/docs/models/minimax-m27 GGUF: https://huggingface.co/unsloth/MiniMax-M2.7-GGUF https://twitter.com/UnslothAI/status/2043295159941738547/photo/1
Google DeepMind is hosting a Gemma 4 hackathon with a $10,000 Unsloth prize! 🦥 Show off your best fine-tuned Gemma 4 model built with Unsloth. There are $200,000 in total prizes to be won. Challenge info + Notebook: https://www.kaggle.com/code/danielhanchen/gemma4-31b-unsloth https://twitter.com/UnslothAI/status/2042599142560796991/photo/1
You can now fine-tune Gemma 4 31B for free in our notebook! 🚀 Training the 31B parameter model is completely free with Kaggle and Unsloth. GitHub: https://github.com/unslothai/unsloth Guide: https://unsloth.ai/docs/models/gemma-4/train Gemma-4-31B Notebook: https://www.kaggle.com/code/danielhanchen/gemma4-31b-unsloth https://twitter.com/UnslothAI/status/2042269554253140048/photo/1
GLM-5.1 can now be run locally!🔥 GLM-5.1 is a new open model for SOTA agentic coding & chat. We shrank the 744B model from 1.65TB to 220GB (-86%) via Dynamic 2-bit. Runs on a 256GB Mac or RAM/VRAM setups. Guide: https://unsloth.ai/docs/models/glm-5.1 GGUF: https://huggingface.co/unsloth/GLM-5.1-GGUF https://twitter.com/UnslothAI/status/2041552121259249850/photo/1
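The size reduction quoted above is straightforward arithmetic; a quick sketch, treating 1.65TB as 1650GB:

```python
def reduction_pct(original_gb, quantized_gb):
    """Percent size reduction achieved by quantization."""
    return 100.0 * (1 - quantized_gb / original_gb)

# 1.65TB -> 220GB via Dynamic 2-bit:
print(round(reduction_pct(1650, 220)))  # -> 87 (the tweet rounds to -86%)
```

The same function applies to any quant: plug in the original and quantized file sizes to see how much RAM/disk a given GGUF saves.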
You can now fine-tune Gemma 4 with our free notebooks! 🔥 You just need 8GB VRAM to train Gemma 4 locally! Unsloth trains Gemma4 1.5x faster with 50% less VRAM. GitHub: https://github.com/unslothai/unsloth Guide: https://unsloth.ai/docs/models/gemma-4/train Gemma-4-E4B Colab: https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb https://twitter.com/UnslothAI/status/2041513619339575762/photo/1
RT @itsPaulAi: You can now fine-tune Gemma 4 (and 500 other open source models) in a free Google Colab 🔥 1. Open the Colab notebook below…
You can now train and run 500+ models in our free notebook!✨ GitHub repo: https://github.com/unslothai/unsloth Colab Notebook: https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb https://twitter.com/UnslothAI/status/2041177756848083266/video/1
You can now train and run models with a UI in our free notebook.✨ Unsloth Studio trains 500+ models 2x faster with 70% less VRAM. Run GGUFs, audio, vision models and compare. GitHub repo: https://github.com/unslothai/unsloth Colab Notebook: https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb https://twitter.com/UnslothAI/status/2041171029230539133/video/1
You can now train and run models with a UI in our free notebook.✨ Unsloth Studio trains 500+ models 2x faster with 70% less VRAM. Run GGUFs, audio, vision models and compare. GitHub repo: https://github.com/unslothai/unsloth Notebook: https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb https://twitter.com/UnslothAI/status/2041170395513098594/video/1
RT @ivanfioravanti: Wait, what? @UnslothAI is starting to upload MLX Dynamic Quants! I have to test them ASAP! Thanks you really rock! 🚀 h…
Gemma 4 E4B (4-bit) completed a full repo audit by executing Bash code and tool calls locally. Runs on just 6GB RAM. https://twitter.com/UnslothAI/status/2040161518898319728/video/1
Gemma 4 E4B (4-bit) completed a full repo audit by executing Bash code and tool calls locally. Runs on just 6GB RAM. It inspected files, git history, cross-checked metrics, and showcased evidence-backed candidates. https://twitter.com/UnslothAI/status/2040158945189466319/video/1
RT @NVIDIA_AI_PC: .@UnslothAI supports @GoogleGemma 4 models, optimized for RTX GPUs. 🦥 Run & fine-tune locally in Unsloth Studio.
Gemma 4 E4B was able to search and cite 10+ websites and execute code to find the best answer! 🔥 You only need 6GB RAM to try this in Unsloth Studio. GitHub repo: https://github.com/unslothai/unsloth https://twitter.com/UnslothAI/status/2039791545986191562/video/1
Google releases Gemma 4. ✨ Gemma 4 introduces 4 models: E2B, E4B, 26B-A4B, 31B. The multimodal reasoning models are under Apache 2.0. Run E2B and E4B on ~6GB RAM, and on phones. Run 26B-A4B and 31B on ~18GB. GGUFs: https://huggingface.co/collections/unsloth/gemma-4 Guide: https://unsloth.ai/docs/models/gemma-4 https://twitter.com/UnslothAI/status/2039739190536286313/photo/1