Google Open-Sources Gemma 3: Rivals DeepSeek with Drastically Lower Computing-Power Requirements
Golden Finance News, March 13: Google (GOOG.O) CEO Sundar Pichai announced last night the open-sourcing of Gemma 3, the company's latest multimodal large model, designed for low cost and high performance. Gemma 3 comes in four parameter sizes: 1 billion, 4 billion, 12 billion, and 27 billion. Even the largest 27-billion-parameter version needs only a single H100 GPU for efficient inference, requiring at least 10 times less computing power than comparable models to achieve the same results, making it currently the strongest small-parameter model. According to blind-test data from the LMSYS Chatbot Arena, Gemma 3 ranks second only to DeepSeek's R1-671B, scoring above OpenAI's o3-mini, Llama 3-405B, and other well-known models.
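For context on the single-H100 claim: 27 billion parameters stored in bfloat16 occupy roughly 54 GB, which fits within an H100's 80 GB of memory. The sketch below shows how such a model could be loaded for single-GPU text inference with Hugging Face Transformers; the checkpoint name google/gemma-3-27b-it and the generation settings are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: single-GPU inference with a Gemma 3-class model via
# Hugging Face Transformers. The checkpoint name below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-27b-it"  # assumed instruction-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~2 bytes/param: ~54 GB of weights for 27B
    device_map="auto",           # place the weights on the available GPU
)

prompt = "Summarize in one sentence why smaller models are cheaper to serve."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```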