url: "https://arxiv.org/abs/2401.04088"
title: "[2401.04088] Mixtral of Experts"
date_saved: 2026-04-16
category: ai
tags: [machinelearning, nlp, language-models]
source: direct
reminder: false
cross_skills: [job-radar, ai-news-feed, github-ai-digest]
session_mention: never
url_hash: "31ff4e3be2e0"
[2401.04088] Mixtral of Experts
**Summary**: Introduces Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model that matches or outperforms Llama 2 70B and GPT-3.5 across evaluated benchmarks.
Key Points
- Sparse Mixture of Experts (SMoE) language model: 8 feedforward experts per layer, 2 selected per token (47B total parameters, 13B active at inference)
- Outperforms or matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks
- Base and instruct models released under the Apache 2.0 license
Content
arXiv:2401.04088 (cs) [Submitted on 8 Jan 2024]

Title: Mixtral of Experts

Authors: Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed

Abstract: We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and combine their outputs. Even though each token only sees two experts, the selected experts can be different at each timestep. As a result, each token has access to 47B parameters, but only uses 13B active parameters during inference. Mixtral was trained with a context size of 32k tokens and it outperforms or matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks. We also provide a model fine-tuned to follow instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo, Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both the base and instruct models are released under the Apache 2.0 license.

Comments: See more details at this https URL
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL)
Cite as: arXiv:2401.04088 [cs.LG] (arXiv:2401.04088v1 for this version), https://doi.org/10.48550/arXiv.2401.04088
Submission history: [v1] Mon, 8 Jan 2024 18:47:34 UTC (2,811 KB), submitted by Devendra Singh Chaplot
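The routing scheme the abstract describes, a per-layer router that picks the top 2 of 8 feedforward experts for each token and mixes their outputs, can be sketched in a few lines. This is a minimal illustrative PyTorch sketch, not the authors' implementation; the class name, hidden/FFN dimensions, and SiLU expert MLPs are assumptions for readability rather than Mixtral's actual configuration.

```python
# Minimal sketch (assumed, not Mixtral's real code) of top-2 sparse MoE routing:
# a linear router scores 8 experts per token, the top 2 run, and their outputs
# are combined with softmax-normalized gate weights over the selected experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, hidden_dim=64, ffn_dim=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_dim, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, ffn_dim), nn.SiLU(),
                          nn.Linear(ffn_dim, hidden_dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                        # x: (tokens, hidden_dim)
        logits = self.router(x)                  # (tokens, num_experts)
        top_vals, top_idx = logits.topk(self.top_k, dim=-1)
        gates = F.softmax(top_vals, dim=-1)      # normalize over selected experts only
        out = torch.zeros_like(x)
        for k in range(self.top_k):              # only the chosen experts run per token
            for e in range(len(self.experts)):
                mask = top_idx[:, k] == e        # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += gates[mask, k:k+1] * self.experts[e](x[mask])
        return out

# Usage: route a batch of 10 token states through the layer.
layer = SparseMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Because only the selected experts run for a given token, the parameter count grows with the number of experts while the per-token compute stays close to that of a single dense feedforward block, which is how the model can hold 47B parameters yet use only about 13B per token.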
Images
Related Skills
- [[skills/job-radar/job-radar|Job Radar]]
- [[skills/ai-news-feed/ai-news-feed|AI News Feed]]
- [[skills/github-ai-digest/github-ai-digest|GitHub AI Digest]]