Turns out adding 0 helps :)
Today we’re introducing Ternary Bonsai 🌳, a family of end-to-end 1.58-bit language models in 8B, 4B, and 1.7B sizes.
Ternary Bonsai 8B comes within 5% of Qwen 3 8B while using 9x less memory.
Still tiny. Noticeably smarter.
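The "adding 0" in the opener refers to ternary weights: each weight takes one of {-1, 0, +1}, i.e. log2(3) ≈ 1.58 bits. A minimal sketch of one common ternary scheme (absmean quantization in the style of BitNet b1.58; the announcement doesn't specify Bonsai's exact method):

```python
import numpy as np

def ternary_quantize(w: np.ndarray):
    """Sketch of absmean ternary quantization (BitNet b1.58 style;
    assumed here, not confirmed as Bonsai's scheme):
    scale by the mean absolute weight, then round to {-1, 0, +1}."""
    scale = np.mean(np.abs(w)) + 1e-8  # epsilon guards against all-zero weights
    q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return q, scale

w = np.array([0.9, -0.4, 0.05, -1.2])
q, scale = ternary_quantize(w)
# small weights snap to 0, the rest to ±1; q * scale approximates w
```

Keeping 0 in the codebook (versus binary {-1, +1}) lets the model prune weak connections outright, which is one intuition for why it "helps".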