diff --git a/1701.06538v1.md b/1701.06538v1.md new file mode 100644 index 0000000000000000000000000000000000000000..71650b17ac68e153737a2fe15e1ee230d78e2e87 --- /dev/null +++ b/1701.06538v1.md @@ -0,0 +1,521 @@
# Outrageously Large Neural Networks: The Sparsely-Gated Mixture-Of-Experts Layer

Noam Shazeer1, Azalia Mirhoseini∗†1, Krzysztof Maziarz∗2, Andy Davis1, Quoc Le1, Geoffrey Hinton1 and Jeff Dean1
1Google Brain, {noam,azalia,andydavis,qvl,geoffhinton,jeff}@google.com
2Jagiellonian University, Cracow, krzysztof.maziarz@student.uj.edu.pl

## Abstract

The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.

## 1 Introduction And Related Work

## 1.1 Conditional Computation

Exploiting scale in both training data and model size has been central to the success of deep learning. When datasets are sufficiently large, increasing the capacity (number of parameters) of neural networks can give much better prediction accuracy. This has been shown in domains such as text (Sutskever et al., 2014; Bahdanau et al., 2014; Jozefowicz et al., 2016; Wu et al., 2016), images (Krizhevsky et al., 2012; Le et al., 2012), and audio (Hinton et al., 2012; Amodei et al., 2015). For typical deep learning models, where the entire model is activated for every example, this leads to a roughly quadratic blow-up in training costs, as both the model size and the number of training examples increase. Unfortunately, the advances in computing power and distributed computation fall short of meeting such demand.

Various forms of conditional computation have been proposed as a way to increase model capacity without a proportional increase in computational costs (Davis & Arel, 2013; Bengio et al., 2013; Eigen et al., 2013; Ludovic Denoyer, 2014; Cho & Bengio, 2014; Bengio et al., 2015; Almahairi et al., 2015). In these schemes, large parts of a network are active or inactive on a per-example basis. The gating decisions may be binary or sparse and continuous, stochastic or deterministic. Various forms of reinforcement learning and back-propagation have been proposed for training the gating decisions.
+ +![1_image_0.png](1_image_0.png) + +While these ideas are promising in theory, no work to date has yet demonstrated massive improvements in model capacity, training time, or model quality. We blame this on a combination of the following challenges: + +- Modern computing devices, especially GPUs, are much faster at arithmetic than at branching. Most of the works above recognize this and propose turning on/off large chunks of the network with each gating decision. + +- Large batch sizes are critical for performance, as they amortize the costs of parameter transfers and updates. Conditional computation reduces the batch sizes for the conditionally active chunks of the network. + +- Network bandwidth can be a bottleneck. A cluster of GPUs may have computational power thousands of times greater than the aggregate inter-device network bandwidth. To be computationally efficient, the relative computational versus network demands of an algorithm must exceed this ratio. Embedding layers, which can be seen as a form of conditional computation, are handicapped by this very problem. Since the embeddings generally need to be sent across the network, the number of (example, parameter) interactions is limited by network bandwidth instead of computational capacity. + +- Depending on the scheme, loss terms may be necessary to achieve the desired level of sparsity per-chunk and/or per example. Bengio et al. (2015) use three such terms. These issues can affect both model quality and load-balancing. + +- Model capacity is most critical for very large data sets. The existing literature on conditional computation deals with relatively small image recognition data sets consisting of up to 600,000 images. It is hard to imagine that the labels of these images provide a sufficient signal to adequately train a model with millions, let alone billions of parameters. +In this work, we for the first time address all of the above challenges and finally realize the promise of conditional computation. We obtain greater than 1000x improvements in model capacity with only minor losses in computational efficiency and significantly advance the state-of-the-art results on public language modeling and translation data sets. + +1.2 OUR APPROACH: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER +Our approach to conditional computation is to introduce a new type of general purpose neural network component: a Sparsely-Gated Mixture-of-Experts Layer (MoE). The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input (see Figure 1). All parts of the network are trained jointly by back-propagation. + +While the introduced technique is generic, in this paper we focus on language modeling and machine translation tasks, which are known to benefit from very large models. In particular, we apply a MoE +convolutionally between stacked LSTM layers (Hochreiter & Schmidhuber, 1997), as in Figure 1. + +The MoE is called once for each position in the text, selecting a potentially different combination of experts at each position. The different experts tend to become highly specialized based on syntax and semantics (see Appendix E Table 9). On both language modeling and machine translation benchmarks, we improve on best published results at a fraction of the computational cost. 
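As a concrete illustration of the layer just described, the following is a minimal NumPy sketch (not the paper's TensorFlow implementation) of a sparsely gated MoE forward pass for a single input vector: a gating network scores the experts, only the top k are evaluated, and their outputs are combined with the renormalized gate values. The dimensions, the number of experts, and the omission of the noise term from Section 2.1 are simplifications for illustration.

```python
# Minimal NumPy sketch of a sparsely gated MoE forward pass (illustrative sizes;
# the noise term of the full noisy-top-k gate in Section 2.1 is omitted).
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, n_experts, k = 8, 16, 4, 2

# Each expert is a one-hidden-layer ReLU network, as in the paper's experiments.
experts = [
    {"w1": rng.normal(scale=0.1, size=(d_model, d_hidden)),
     "w2": rng.normal(scale=0.1, size=(d_hidden, d_model))}
    for _ in range(n_experts)
]
w_gate = rng.normal(scale=0.1, size=(d_model, n_experts))


def expert_forward(e, x):
    return np.maximum(x @ e["w1"], 0.0) @ e["w2"]


def moe_forward(x):
    """y = sum_i G(x)_i * E_i(x), with G(x) sparse (keep top-k logits, then softmax)."""
    logits = x @ w_gate                       # one gate logit per expert
    top = np.argsort(logits)[-k:]             # indices of the k largest logits
    masked = np.full_like(logits, -np.inf)
    masked[top] = logits[top]                 # KeepTopK: everything else -> -inf
    gates = np.exp(masked - masked[top].max())
    gates /= gates.sum()                      # softmax over the surviving logits
    # Only the k selected experts are evaluated; the rest are skipped entirely.
    return sum(gates[i] * expert_forward(experts[i], x) for i in top)


y = moe_forward(rng.normal(size=d_model))
print(y.shape)  # (8,)
```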
## 1.3 Related Work On Mixtures Of Experts

Since its introduction more than two decades ago (Jacobs et al., 1991; Jordan & Jacobs, 1994), the mixture-of-experts approach has been the subject of much research. Different types of expert architectures have been proposed, such as SVMs (Collobert et al., 2002), Gaussian Processes (Tresp, 2001; Theis & Bethge, 2015; Deisenroth & Ng, 2015), Dirichlet Processes (Shahbaba & Neal, 2009), and deep networks. Other work has focused on different expert configurations, such as a hierarchical structure (Yao et al., 2009), infinite numbers of experts (Rasmussen & Ghahramani, 2002), and adding experts sequentially (Aljundi et al., 2016). Garmash & Monz (2016) suggest an ensemble model in the format of mixture of experts for machine translation. The gating network is trained on a pre-trained ensemble NMT model.

The works above concern top-level mixtures of experts: the mixture of experts is the whole model. Eigen et al. (2013) introduce the idea of using multiple MoEs with their own gating networks as parts of a deep model. It is intuitive that the latter approach is more powerful, since complex problems may contain many sub-problems, each requiring different experts. They also allude in their conclusion to the potential to introduce sparsity, turning MoEs into a vehicle for conditional computation.

Our work builds on this use of MoEs as a general purpose neural network component. While Eigen et al. (2013) use two stacked MoEs allowing for two sets of gating decisions, our convolutional application of the MoE allows for different gating decisions at each position in the text. We also realize sparse gating and demonstrate its use as a practical way to massively increase model capacity.

## 2 The Structure Of The Mixture-Of-Experts Layer

The Mixture-of-Experts (MoE) layer consists of a set of n "expert networks" E_1, ..., E_n, and a "gating network" G whose output is a sparse n-dimensional vector. Figure 1 shows an overview of the MoE module. The experts are themselves neural networks, each with their own parameters. Although in principle we only require that the experts accept the same sized inputs and produce the same-sized outputs, in our initial investigations in this paper, we restrict ourselves to the case where the models are feed-forward networks with identical architectures, but with separate parameters.

Let us denote by G(x) and E_i(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module can be written as follows:

$$y=\sum_{i=1}^{n}G(x)_{i}E_{i}(x)\tag{1}$$

We save computation based on the sparsity of the output of G(x). Wherever G(x)_i = 0, we need not compute E_i(x). In our experiments, we have up to thousands of experts, but only need to evaluate a handful of them for every example. If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network. In the following we focus on ordinary MoEs. We provide more details on hierarchical MoEs in Appendix B.

Our implementation is related to other models of conditional computation. A MoE whose experts are simple weight matrices is similar to the parameterized weight matrix proposed in (Cho & Bengio, 2014).
A MoE whose experts have one hidden layer is similar to the block-wise dropout described in (Bengio et al., 2015), where the dropped-out layer is sandwiched between fully-activated layers.

## 2.1 Gating Network

Softmax Gating: A simple choice of non-sparse gating function (Jordan & Jacobs, 1994) is to multiply the input by a trainable weight matrix W_g and then apply the *Softmax* function.

$$G_{\sigma}(x)=Softmax(x\cdot W_{g})\tag{2}$$

Noisy Top-K Gating: We add two components to the Softmax gating network: sparsity and noise. Before taking the softmax function, we add tunable Gaussian noise, then keep only the top k values, setting the rest to −∞ (which causes the corresponding gate values to equal 0). The sparsity serves to save computation, as described above. While this form of sparsity creates some theoretically scary discontinuities in the output of the gating function, we have not yet observed this to be a problem in practice. The noise term helps with load balancing, as will be discussed in Appendix A. The amount of noise per component is controlled by a second trainable weight matrix W_noise.

$$G(x)=Softmax(KeepTopK(H(x),k))\tag{3}$$

$$H(x)_{i}=(x\cdot W_{g})_{i}+StandardNormal()\cdot Softplus((x\cdot W_{noise})_{i})\tag{4}$$

$$KeepTopK(v,k)_{i}=\begin{cases}v_{i}&\text{if }v_{i}\text{ is in the top }k\text{ elements of }v.\\ -\infty&\text{otherwise.}\end{cases}\tag{5}$$

Training the Gating Network: We train the gating network by simple back-propagation, along with the rest of the model. If we choose k > 1, the gate values for the top k experts have nonzero derivatives with respect to the weights of the gating network. This type of occasionally-sensitive behavior is described in (Bengio et al., 2013) with respect to noisy rectifiers. Gradients also back-propagate through the gating network to its inputs. Our method differs here from (Bengio et al., 2015), who use boolean gates and a REINFORCE-style approach to train the gating network.

## 3 Addressing Performance Challenges

## 3.1 The Shrinking Batch Problem

On modern CPUs and GPUs, large batch sizes are necessary for computational efficiency, so as to amortize the overhead of parameter loads and updates. If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n ≪ b examples. This causes a naive MoE implementation to become very inefficient as the number of experts increases. The solution to this shrinking batch problem is to make the original batch size as large as possible. However, batch size tends to be limited by the memory necessary to store activations between the forwards and backwards passes. We propose the following techniques for increasing the batch size:

Mixing Data Parallelism and Model Parallelism: In a conventional distributed training setting, multiple copies of the model on different devices asynchronously process distinct batches of data, and parameters are synchronized through a set of parameter servers. In our technique, these different batches run synchronously so that they can be combined for the MoE layer. We distribute the standard layers of the model and the gating network according to conventional data-parallel schemes, but keep only one shared copy of each expert.
Each expert in the MoE layer receives a combined batch consisting of the relevant examples from all of the data-parallel input batches. The same set of devices function as data-parallel replicas (for the standard layers and the gating networks) and as model-parallel shards (each hosting a subset of the experts). If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, we achieve a factor of d improvement in expert batch size.

In the case of a hierarchical MoE (Section B), the primary gating network employs data parallelism, and the secondary MoEs employ model parallelism. Each secondary MoE resides on one device.

This technique allows us to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster. The total batch size increases, keeping the batch size per expert constant. The memory and bandwidth requirements per device also remain constant, as do the step times, as does the amount of time necessary to process a number of training examples equal to the number of parameters in the model. It is our goal to train a trillion-parameter model on a trillion-word corpus. We have not scaled our systems this far as of the writing of this paper, but it should be possible by adding more hardware.

Taking Advantage of Convolutionality: In our language models, we apply the same MoE to each time step of the previous layer. If we wait for the previous layer to finish, we can apply the MoE to all the time steps together as one big batch. Doing so increases the size of the input batch to the MoE layer by a factor of the number of unrolled time steps.

Increasing Batch Size for a Recurrent MoE: We suspect that even more powerful models may involve applying a MoE recurrently. For example, the weight matrices of a LSTM or other RNN could be replaced by a MoE. Sadly, such models break the convolutional trick from the last paragraph, since the input to the MoE at one timestep depends on the output of the MoE at the previous timestep. Gruslys et al. (2016) describe a technique for drastically reducing the number of stored activations in an unrolled RNN, at the cost of recomputing forward activations. This would allow for a large increase in batch size.

## 3.2 Network Bandwidth

Another major performance concern in distributed computing is network bandwidth. Since the experts are stationary (see above) and the number of gating parameters is small, most of the communication involves sending the inputs and outputs of the experts across the network. To maintain computational efficiency, the ratio of an expert's computation to the size of its input and output must exceed the ratio of computational to network capacity of the computing device. For GPUs, this may be thousands to one. In our experiments, we use experts with one hidden layer containing thousands of RELU-activated units. Since the weight matrices in the expert have sizes input_size × hidden_size and hidden_size × output_size, the ratio of computation to input and output is equal to the size of the hidden layer. Conveniently, we can increase computational efficiency simply by using a larger hidden layer, or more hidden layers.

## 4 Balancing Expert Utilization

We have observed that the gating network tends to converge to a state where it always produces large weights for the same few experts.
This imbalance is self-reinforcing, as the favored experts are trained more rapidly and thus are selected even more by the gating network. Eigen et al. (2013) describe the same phenomenon, and use a hard constraint at the beginning of training to avoid this local minimum. Bengio et al. (2015) include a soft constraint on the batch-wise average of each gate.1

We take a soft constraint approach. We define the importance of an expert relative to a batch of training examples to be the batchwise sum of the gate values for that expert. We define an additional loss L_importance, which is added to the overall loss function for the model. This loss is equal to the square of the coefficient of variation of the set of importance values, multiplied by a hand-tuned scaling factor w_importance. This additional loss encourages all experts to have equal importance.

$$Importance(X)=\sum_{x\in X}G(x)\tag{6}$$

$$L_{importance}(X)=w_{importance}\cdot CV(Importance(X))^{2}\tag{7}$$

While this loss function can ensure equal importance, experts may still receive very different numbers of examples. For example, one expert may receive a few examples with large weights, and another may receive many examples with small weights. This can cause memory and performance problems on distributed hardware. To solve this problem, we introduce a second loss function, L_load, which ensures balanced loads. Appendix A contains the definition of this function, along with experimental results.

## 5 Experiments

## 5.1 1 Billion Word Language Modeling Benchmark

Dataset: This dataset, introduced by (Chelba et al., 2013), consists of shuffled unique sentences from news articles, totaling approximately 829 million words, with a vocabulary of 793,471 words.

Previous State-of-the-Art: The best previously published results (Jozefowicz et al., 2016) use models consisting of one or more stacked Long Short-Term Memory (LSTM) layers (Hochreiter & Schmidhuber, 1997; Gers et al., 2000). The number of parameters in the LSTM layers of these models varies from 2 million to 151 million. Quality increases greatly with parameter count, as do computational costs. Results for these models form the top line of Figure 2-right.

MoE Models: Our models consist of two stacked LSTM layers with a MoE layer between them (see Figure 1). We vary the sizes of the layers and the number of experts. For full details on model architecture, training regimen, additional baselines and results, see Appendix C.

Low Computation, Varied Capacity: To investigate the effects of adding capacity, we trained a series of MoE models all with roughly equal computational costs: about 8 million multiply-and-adds per training example per timestep in the forwards pass, excluding the softmax layer. We call this metric (ops/timestep). We trained models with flat MoEs containing 4, 32, and 256 experts, and models with hierarchical MoEs containing 256, 1024, and 4096 experts. Each expert had about 1 million parameters. For all the MoE layers, 4 experts were active per input.

The results of these models are shown in Figure 2-left. The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an impressive 24% lower perplexity on the test set.

![5_image_0.png](5_image_0.png)

Figure 2: Model comparison on 1-Billion-Word Language-Modeling Benchmark.
On the left, we plot test perplexity as a function of model capacity for models with similar computational budgets of approximately 8-million-ops-per-timestep. On the right, we plot test perplexity as a function of computational budget. The top line represents the LSTM models from (Jozefowicz et al., 2016). The bottom line represents 4-billion parameter MoE models with different computational budgets.

Varied Computation, High Capacity: In addition to the largest model from the previous section, we trained two more MoE models with similarly high capacity (4 billion parameters), but higher computation budgets. These models had larger LSTMs, and fewer but larger experts. Details can be found in Appendix C.2. Results of these three models form the bottom line of Figure 2-right.

| Model | Test Perplexity (10 epochs) | Test Perplexity (100 epochs) | #Parameters excluding embedding and softmax layers | ops/timestep | Training Time (10 epochs) | TFLOPS/GPU |
|---|---|---|---|---|---|---|
| Best Published Results | 34.7 | 30.6 | 151 million | 151 million | 59 hours, 32 k40s | 1.09 |
| Low-Budget MoE Model | 34.1 | | 4303 million | 8.9 million | 15 hours, 16 k40s | 0.74 |
| Medium-Budget MoE Model | 31.3 | | 4313 million | 33.8 million | 17 hours, 32 k40s | 1.22 |
| High-Budget MoE Model | 28.0 | | 4371 million | 142.7 million | 47 hours, 32 k40s | 1.56 |

Table 1: Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results (Jozefowicz et al., 2016). Details in Appendix C.

Table 1 compares the results of these models to the best previously-published result on this dataset. Even the fastest of these models beats the best published result (when controlling for the number of training epochs), despite requiring only 6% of the computation.

Computational Efficiency: We trained our models using TensorFlow (Abadi et al., 2016) on clusters containing 16-32 Tesla K40 GPUs. For each of our models, we determine computational efficiency in TFLOPS/GPU by dividing the number of floating point operations required to process one training batch by the observed step time and the number of GPUs in the cluster. The operation counts used here are higher than the ones we report in our ops/timestep numbers in that we include the backwards pass, we include the importance-sampling-based training of the softmax layer, and we count a multiply-and-add as two separate operations. For all of our MoE models, the floating point operations involved in the experts represent between 37% and 46% of the total.

For our baseline models with no MoE, observed computational efficiency ranged from 1.07-1.29 TFLOPS/GPU. For our low-computation MoE models, computational efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model, which did not make full use of the available parallelism. Our highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices. These numbers represent a significant fraction of the theoretical maximum of 4.29 TFLOPS/GPU claimed by NVIDIA. Detailed results are in Appendix C, Table 7.
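The TFLOPS/GPU metric above is straightforward to compute. The sketch below shows the arithmetic with made-up inputs; the operation count, step time, and GPU count are illustrative placeholders, not measurements from the paper.

```python
# Back-of-the-envelope version of the TFLOPS/GPU metric described above:
# floating point operations per training batch, divided by the observed
# step time and the number of GPUs, expressed in teraFLOPS.
def tflops_per_gpu(flops_per_batch: float, step_time_s: float, num_gpus: int) -> float:
    return flops_per_batch / (step_time_s * num_gpus) / 1e12


# Hypothetical batch requiring 1.2e14 floating point operations,
# processed in 3.0 seconds on 32 GPUs:
print(f"{tflops_per_gpu(1.2e14, 3.0, 32):.2f} TFLOPS/GPU")  # 1.25 TFLOPS/GPU
```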
## 5.2 100 Billion Word Google News Corpus

![6_image_0.png](6_image_0.png)

On the 1-billion-word corpus, adding additional capacity seems to produce diminishing returns as the number of parameters in the MoE layer exceeds 1 billion, as can be seen in Figure 2-left. We hypothesized that for a larger training set, even higher capacities would produce significant quality improvements. We constructed a similar training set consisting of shuffled unique sentences from Google's internal news corpus, totalling roughly 100 billion words. Similarly to the previous section, we tested a series of models with similar computational costs of about 8 million ops/timestep. In addition to a baseline LSTM model, we trained models augmented with MoE layers containing 32, 256, 1024, 4096, 16384, 65536, and 131072 experts. This corresponds to up to 137 billion parameters in the MoE layer. Details on architecture, training, and results are given in Appendix D.

Results: Figure 3 shows test perplexity as a function of capacity after training on 10 billion words (top line) and 100 billion words (bottom line). When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline, but degrades at 131072 experts, possibly a result of too much sparsity. The widening gap between the two lines demonstrates (unsurprisingly) that increased model capacity helps more on larger training sets. Even at 65536 experts (99.994% layer sparsity), computational efficiency for the model stays at a respectable 0.72 TFLOPS/GPU.

## 5.3 Machine Translation (Single Language Pair)

Model Architecture: Our model was a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decreased the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We inserted MoE layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters, adding a total of about 8 billion parameters to the models. Further details on model architecture, testing procedure and results can be found in Appendix E.

Datasets: We benchmarked our method on the WMT'14 En→Fr and En→De corpora, whose training sets have 36M sentence pairs and 5M sentence pairs, respectively. The experimental protocols were also similar to those in (Wu et al., 2016): newstest2014 was used as the test set to compare against previous work (Luong et al., 2015a; Zhou et al., 2016; Wu et al., 2016), while the combination of newstest2012 and newstest2013 was used as the development set. We also tested the same model on Google's production English→French dataset.

Table 2: Results on WMT'14 En→Fr newstest2014 (bold values represent best results).

| Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|
| MoE with 2048 Experts | 2.69 | 40.35 | 85M | 8.7B | 3 days/64 k40s |
| MoE with 2048 Experts (longer training) | 2.63 | 40.56 | 85M | 8.7B | 6 days/64 k40s |
| GNMT (Wu et al., 2016) | 2.79 | 39.22 | 214M | 278M | 6 days/96 k80s |
| GNMT+RL (Wu et al., 2016) | 2.96 | 39.92 | 214M | 278M | 6 days/96 k80s |
| PBMT (Durrani et al., 2014) | | 37.0 | | | |
| LSTM (6-layer) (Luong et al., 2015b) | | 31.5 | | | |
| LSTM (6-layer+PosUnk) (Luong et al., 2015b) | | 33.1 | | | |
| DeepAtt (Zhou et al., 2016) | | 37.7 | | | |
| DeepAtt+PosUnk (Zhou et al., 2016) | | 39.2 | | | |

Table 3: Results on WMT'14 En→De newstest2014 (bold values represent best results).

| Model | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|
| MoE with 2048 Experts | 4.64 | 26.03 | 85M | 8.7B | 1 day/64 k40s |
| GNMT (Wu et al., 2016) | 5.25 | 24.91 | 214M | 278M | 1 day/96 k80s |
| GNMT+RL (Wu et al., 2016) | 8.08 | 24.66 | 214M | 278M | 1 day/96 k80s |
| PBMT (Durrani et al., 2014) | | 20.7 | | | |
| DeepAtt (Zhou et al., 2016) | | 20.6 | | | |

Table 4: Results on the Google Production En→Fr dataset (bold values represent best results).

| Model | Eval Perplexity | Eval BLEU | Test Perplexity | Test BLEU | ops/timestep | Total #Parameters | Training Time |
|---|---|---|---|---|---|---|---|
| MoE with 2048 Experts | 2.60 | 37.27 | 2.69 | 36.57 | 85M | 8.7B | 1 day/64 k40s |
| GNMT (Wu et al., 2016) | 2.78 | 35.80 | 2.87 | 35.56 | 214M | 278M | 6 days/96 k80s |

Results: Tables 2, 3, and 4 show the results of our largest models, compared with published results. Our approach achieved BLEU scores of 40.56 and 26.03 on the WMT'14 En→Fr and En→De benchmarks. As our models did not use RL refinement, these results constitute significant gains of 1.34 and 1.12 BLEU score on top of the strong baselines in (Wu et al., 2016). The perplexity scores are also better.2 On the Google Production dataset, our model achieved a 1.01 higher test BLEU score even after training for only one sixth of the time.

## 5.4 Multilingual Machine Translation

Dataset: (Johnson et al., 2016) train a single GNMT (Wu et al., 2016) model on a very large combined dataset of twelve language pairs. Results are somewhat worse than those for 12 separately trained single-pair GNMT models.
This is not surprising, given that the twelve models have 12 times the capacity and twelve times the aggregate training of the one model. We repeat this experiment with a single MoE-augmented model. See Appendix E for details on model architecture. We train our model on the same dataset as (Johnson et al., 2016) and process the same number of training examples (about 3 billion sentence pairs). Our training time was shorter due to the lower computational budget of our model.

Results: Results for the single-pair GNMT models, the multilingual GNMT model and the multilingual MoE model are given in Table 5. The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model. On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12 language pairs. The poor performance on English → Korean seems to be a result of severe overtraining, as for the rarer language pairs a small number of real examples were highly oversampled in the training corpus.

Table 5: Multilingual Machine Translation (bold values represent best results).

| | GNMT-Mono | GNMT-Multi | MoE-Multi | MoE-Multi vs. GNMT-Multi |
|---|---|---|---|---|
| Parameters | 278M / model | 278M | 8.7B | |
| ops/timestep | 212M | 212M | 102M | |
| training time, hardware | various | 21 days, 96 k20s | 12 days, 64 k40s | |
| Perplexity (dev) | | 4.14 | 3.35 | -19% |
| French → English Test BLEU | 36.47 | 34.40 | 37.46 | +3.06 |
| German → English Test BLEU | 31.77 | 31.17 | 34.80 | +3.63 |
| Japanese → English Test BLEU | 23.41 | 21.62 | 25.91 | +4.29 |
| Korean → English Test BLEU | 25.42 | 22.87 | 28.71 | +5.84 |
| Portuguese → English Test BLEU | 44.40 | 42.53 | 46.13 | +3.60 |
| Spanish → English Test BLEU | 38.00 | 36.04 | 39.39 | +3.35 |
| English → French Test BLEU | 35.37 | 34.00 | 36.59 | +2.59 |
| English → German Test BLEU | 26.43 | 23.15 | 24.53 | +1.38 |
| English → Japanese Test BLEU | 23.66 | 21.10 | 22.78 | +1.68 |
| English → Korean Test BLEU | 19.75 | 18.41 | 16.62 | -1.79 |
| English → Portuguese Test BLEU | 38.40 | 37.35 | 37.90 | +0.55 |
| English → Spanish Test BLEU | 34.50 | 34.25 | 36.21 | +1.96 |

## 6 Conclusion

This work is the first to demonstrate major wins from conditional computation in deep networks. We carefully identified the design considerations and challenges of conditional computing and addressed them with a combination of algorithmic and engineering solutions. While we focused on text, conditional computation may help in other domains as well, provided sufficiently large training sets. We look forward to seeing many novel implementations and applications of conditional computation in the years to come.
+ +## Acknowledgments + +We would like to thank all of the members of the Google Brain and Google Translate teams who helped us with this project, in particular Zhifeng Chen, Yonghui Wu, and Melvin Johnson. Thanks also to our anonymous ICLR reviewers for the helpful suggestions on making this paper better. + +2Reported perplexities relative to the tokenization used by both our models and GNMT. + +## References + +Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: +Large-scale machine learning on heterogeneous distributed systems. *CoRR*, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467. + +Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. *CoRR*, abs/1611.06194, 2016. URL http://arxiv.org/abs/1611. + +06194. + +A. Almahairi, N. Ballas, T. Cooijmans, Y. Zheng, H. Larochelle, and A. Courville. Dynamic Capacity Networks. *ArXiv e-prints*, November 2015. + +Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. Deep speech 2: End-to-end speech recognition in english and mandarin. *arXiv preprint arXiv:1512.02595*, 2015. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. *arXiv preprint arXiv:1409.0473*, 2014. + +Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computation in neural networks for faster models. *arXiv preprint arXiv:1511.06297*, 2015. + +Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. *arXiv preprint arXiv:1308.3432*, 2013. + +Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. + +arXiv preprint arXiv:1312.3005, 2013. + +K. Cho and Y. Bengio. Exponentially Increasing the Capacity-to-Computation Ratio for Conditional Computation in Deep Learning. *ArXiv e-prints*, June 2014. +Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of SVMs for very large scale problems. *Neural Computing*, 2002. + +Andrew Davis and Itamar Arel. Low-rank approximations for conditional feedforward computation in deep neural networks. *arXiv preprint arXiv:1312.4461*, 2013. +Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In *ICML*, 2015. + +John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization, 2010. 
+ +Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh's phrase-based machine translation systems for wmt-14. In *Proceedings of the Ninth Workshop on Statistical* Machine Translation, 2014. + +David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. *arXiv preprint arXiv:1312.4314*, 2013. +Ekaterina Garmash and Christof Monz. Ensemble learning for multi-source neural machine translation. In *staff.science.uva.nl/c.monz*, 2016. + +Felix A. Gers, Jürgen A. Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with lstm. *Neural Computation*, 2000. + +Audrunas Gruslys, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. Memory-efficient backpropagation through time. *CoRR*, abs/1606.03401, 2016. URL http://arxiv.org/ +abs/1606.03401. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. *IEEE Conference on Computer Vision and Pattern Recognition*, 2015. + +Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE +Signal Processing Magazine, 2012. +Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*, 1997. + +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. *arXiv preprint arXiv:1502.03167*, 2015. + +Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. *Neural Computing*, 1991. + +Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. + +CoRR, abs/1611.04558, 2016. URL http://arxiv.org/abs/1611.04558. + +Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. + +Neural Computing, 1994. +Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. *arXiv preprint arXiv:1602.02410*, 2016. + +Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR*, 2015. + +Reinhard Kneser and Hermann. Ney. Improved backingoff for m-gram language modeling., 1995. + +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In *NIPS*, 2012. + +Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. Building high-level features using large scale unsupervised learning. In *ICML*, 2012. +Patrick Gallinari Ludovic Denoyer. Deep sequential neural network. *arXiv preprint* arXiv:1410.0510, 2014. + +Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attentionbased neural machine translation. *EMNLP*, 2015a. + +Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015b. + +Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. + +NIPS, 2002. + +Hasim Sak, Andrew W Senior, and Françoise Beaufays. 
Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In *INTERSPEECH*, pp. 338–342, 2014.

Mike Schuster and Kaisuke Nakajima. Japanese and Korean voice search. *ICASSP*, 2012.

Babak Shahbaba and Radford Neal. Nonlinear models using dirichlet process mixtures. *JMLR*, 2009.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In *NIPS*, 2014.

Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In *NIPS*, 2015.

Volker Tresp. Mixtures of Gaussian Processes. In *NIPS*, 2001.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. *arXiv preprint arXiv:1609.08144*, 2016.

Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In *NIPS*, 2009.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. *arXiv preprint arXiv:1409.2329*, 2014.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. *arXiv preprint arXiv:1606.04199*, 2016.

## Appendices

## A Load-Balancing Loss

As discussed in section 4, for load-balancing purposes, we want to define an additional loss function to encourage experts to receive roughly equal numbers of training examples. Unfortunately, the number of examples received by an expert is a discrete quantity, so it can not be used in back-propagation. Instead, we define a smooth estimator Load(X) of the number of examples assigned to each expert for a batch X of inputs. The smoothness allows us to back-propagate gradients through the estimator. This is the purpose of the noise term in the gating function. We define P(x, i) as the probability that G(x)_i is nonzero, given a new random choice of noise on element i, but keeping the already-sampled choices of noise on the other elements. To compute P(x, i), we note that G(x)_i is nonzero if and only if H(x)_i is greater than the k-th-greatest element of H(x) excluding itself. The probability works out to be:

$$P(x,i)=Pr\Big((x\cdot W_{g})_{i}+StandardNormal()\cdot Softplus((x\cdot W_{noise})_{i})>kth\_excluding(H(x),k,i)\Big)\tag{8}$$

where kth_excluding(v, k, i) means the k-th highest component of v, excluding component i. Simplifying, we get:

$$P(x,i)=\Phi\Big(\frac{(x\cdot W_{g})_{i}-kth\_excluding(H(x),k,i)}{Softplus((x\cdot W_{noise})_{i})}\Big)\tag{9}$$

where Φ is the CDF of the standard normal distribution.

$$Load(X)_{i}=\sum_{x\in X}P(x,i)\tag{10}$$

We can now define the load loss to be the square of the coefficient of variation of the load vector, multiplied by a hand-tuned scaling factor w_load.
$$L_{load}(X)=w_{load}\cdot CV(Load(X))^{2}\tag{11}$$

Initial Load Imbalance: To avoid out-of-memory errors, we need to initialize the network in a state of approximately equal expert load (since the soft constraints need some time to work). To accomplish this, we initialize the matrices W_g and W_noise to all zeros, which yields no signal and some noise.

Experiments: We trained a set of models with identical architecture (the MoE-256 model described in Appendix C), using different values of w_importance and w_load. We trained each model for 10 epochs, then measured perplexity on the test set. We also measured the coefficients of variation in Importance and Load, as well as the ratio of the load on the most overloaded expert to the average load. This last value is significant for load balancing purposes on distributed hardware. All of these metrics were averaged over several training batches.

| w_importance | w_load | Test Perplexity | CV(Importance(X)) | CV(Load(X)) | max(Load(X))/mean(Load(X)) |
|---|---|---|---|---|---|
| 0.0 | 0.0 | 39.8 | 3.04 | 3.01 | 17.80 |
| 0.2 | 0.0 | 35.6 | 0.06 | 0.17 | 1.47 |
| 0.0 | 0.2 | 35.7 | 0.22 | 0.04 | 1.15 |
| 0.1 | 0.1 | 35.6 | 0.06 | 0.05 | 1.14 |
| 0.01 | 0.01 | 35.7 | 0.48 | 0.11 | 1.37 |
| 1.0 | 1.0 | 35.7 | 0.03 | 0.02 | 1.07 |

Table 6: Experiments with different combinations of losses.

Results: Results are reported in Table 6. All the combinations containing at least one of the two losses led to very similar model quality, whereas having no loss was much worse. Models with higher values of w_load had lower loads on the most overloaded expert.

## B Hierarchical Mixture Of Experts

If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted combination of "experts", each of which is itself a secondary mixture-of-experts with its own gating network.3 If the hierarchical MoE consists of a groups of b experts each, we denote the primary gating network by G_primary, the secondary gating networks by (G_1, G_2, ..., G_a), and the expert networks by (E_{0,0}, E_{0,1}, ..., E_{a,b}). The output of the MoE is given by:

$$y_{H}=\sum_{i=1}^{a}\sum_{j=1}^{b}G_{primary}(x)_{i}\cdot G_{i}(x)_{j}\cdot E_{i,j}(x)\tag{12}$$

Our metrics of expert utilization change to the following:

$$Importance_{H}(X)_{i,j}=\sum_{x\in X}G_{primary}(x)_{i}\cdot G_{i}(x)_{j}\tag{13}$$

$$Load_{H}(X)_{i,j}=\frac{Load_{primary}(X)_{i}\cdot Load_{i}(X^{(i)})_{j}}{|X^{(i)}|}\tag{14}$$

Load_primary and Load_i denote the Load functions for the primary gating network and i-th secondary gating network respectively. X^(i) denotes the subset of X for which G_primary(x)_i > 0.

It would seem simpler to let Load_H(X)_{i,j} = Load_i(X^(i))_j, but this would not have a gradient with respect to the primary gating network, so we use the formulation above.

## C 1 Billion Word Language Modeling Benchmark - Experimental Details

## C.1 8-Million-Operations-Per-Timestep Models

Model Architecture: Our model consists of five layers: a word embedding layer, a recurrent Long Short-Term Memory (LSTM) layer (Hochreiter & Schmidhuber, 1997; Gers et al., 2000), a MoE layer, a second LSTM layer, and a softmax layer.
The dimensionality of the embedding layer, the number of units in each LSTM layer, and the input and output dimensionality of the MoE layer are all equal to 512. For every layer other than the softmax, we apply dropout (Zaremba et al., 2014) to the layer output, dropping each activation with probability DropProb, otherwise dividing by (1 − DropProb). After dropout, the output of the previous layer is added to the layer output. This residual connection encourages gradient flow (He et al., 2015).

MoE Layer Architecture: Each expert in the MoE layer is a feed forward network with one ReLU-activated hidden layer of size 1024 and an output layer of size 512. Thus, each expert contains [512 ∗ 1024] + [1024 ∗ 512] = 1M parameters. The output of the MoE layer is passed through a sigmoid function before dropout. We varied the number of experts between models, using ordinary MoE layers with 4, 32 and 256 experts and hierarchical MoE layers with 256, 1024 and 4096 experts. We call the resulting models MoE-4, MoE-32, MoE-256, MoE-256-h, MoE-1024-h and MoE-4096-h. For the hierarchical MoE layers, the first level branching factor was 16, corresponding to the number of GPUs in our cluster. We use Noisy-Top-K Gating (see Section 2.1) with k = 4 for the ordinary MoE layers and k = 2 at each level of the hierarchical MoE layers. Thus, each example is processed by exactly 4 experts for a total of 4M ops/timestep. The two LSTM layers contribute 2M ops/timestep each for the desired total of 8M.

3 We have not found the need for deeper hierarchies.

Computationally-Matched Baselines: The MoE-4 model does not employ sparsity, since all 4 experts are always used. In addition, we trained four more computationally-matched baseline models with no sparsity:

- MoE-1-Wide: The MoE layer consists of a single "expert" containing one ReLU-activated hidden layer of size 4096.

- MoE-1-Deep: The MoE layer consists of a single "expert" containing four ReLU-activated hidden layers, each with size 1024.

- 4xLSTM-512: We replace the MoE layer with two additional 512-unit LSTM layers.

- LSTM-2048-512: The model contains one 2048-unit LSTM layer (and no MoE). The output of the LSTM is projected down to 512 dimensions (Sak et al., 2014). The next timestep of the LSTM receives the projected output. This is identical to one of the models published in (Jozefowicz et al., 2016). We re-ran it to account for differences in training regimen, and obtained results very similar to the published ones.

Training: The models were trained on a cluster of 16 K40 GPUs using the synchronous method described in Section 3. Each batch consisted of a set of sentences totaling roughly 300,000 words. In the interest of time, we limited training to 10 epochs (27,000 steps). Training took 12-16 hours for all models, except for MoE-4, which took 18 hours (since all the expert computation was performed on only 4 of 16 GPUs). We used the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 1000 training steps, and decreased after that so as to be proportional to the inverse square root of the step number. The Softmax output layer was trained efficiently using importance sampling similarly to the models in (Jozefowicz et al., 2016). For each model, we performed a hyper-parameter search to find the best dropout probability, in increments of 0.1.

To ensure balanced expert utilization we set w_importance = 0.1 and w_load = 0.1, as described in Section 4 and Appendix A.
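For concreteness, the following is a minimal NumPy sketch of the importance loss from Section 4 that these coefficients scale: the squared coefficient of variation of the per-expert importance (the batchwise sum of gate values), multiplied by the hand-tuned factor. The gate matrix, batch size, and expert count below are illustrative, and the smooth load loss of Appendix A would be computed analogously from the Load estimator.

```python
# Sketch of the CV^2-based importance loss (Section 4), assuming `gates` is a
# dense (batch, n_experts) matrix of gate values with zeros for unselected experts.
import numpy as np


def cv_squared(values: np.ndarray) -> float:
    """Squared coefficient of variation: variance / mean^2 (small eps for stability)."""
    return float(values.var() / (values.mean() ** 2 + 1e-10))


def importance_loss(gates: np.ndarray, w_importance: float = 0.1) -> float:
    importance = gates.sum(axis=0)   # Importance(X): batchwise sum of gate values per expert
    return w_importance * cv_squared(importance)


# Example: a batch of 4 examples routed over 8 experts, 2 nonzero gates per example.
rng = np.random.default_rng(0)
gates = np.zeros((4, 8))
for row in gates:
    idx = rng.choice(8, size=2, replace=False)
    row[idx] = rng.dirichlet(np.ones(2))     # two gate values summing to 1
print(importance_loss(gates))
```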
+ +Results: We evaluate our model using perplexity on the holdout dataset, used by (Chelba et al., +2013; Jozefowicz et al., 2016). We follow the standard procedure and sum over all the words including the end of sentence symbol. Results are reported in Table 7. For each model, we report the test perplexity, the computational budget, the parameter counts, the value of *DropP rob*, and the computational efficiency. + +| Model | Test | Test | ops/timestep #Params excluding | Total | Drop- | TFLOPS | | +|-----------------------|------------|--------------------------|----------------------------------|------------|---------|----------|------| +| Perplexity Perplexity | (millions) | embed. & softmax #Params | P rob | per GPU | | | | +| 10 epochs | (final) | (millions) | (billions) | (observed) | | | | +| Kneser-Ney 5-gram* | 67.6 | 0.00001 | 1.8 | | | | | +| LSTM-512-512* | 54.1 | 2.4 | 2.4 | 0.8 | 0.1 | | | +| LSTM-1024-512* | 48.2 | 4.7 | 4.7 | 0.8 | 0.1 | | | +| LSTM-2048-512* | 45.0 | 43.7 | 9.4 | 9.4 | 0.8 | 0.1 | 0.61 | +| LSTM-2048-512 | 44.7 | 9.4 | 9.4 | 0.8 | 0.1 | 1.21 | | +| 4xLSTM-512 | 46.0 | 8.4 | 8.4 | 0.8 | 0.1 | 1.07 | | +| MoE-1-Wide | 46.1 | 8.4 | 8.4 | 0.8 | 0.1 | 1.29 | | +| MoE-1-Deep | 45.7 | 8.4 | 8.4 | 0.8 | 0.1 | 1.29 | | +| MoE-4 | 45.0 | 8.4 | 8.4 | 0.8 | 0.1 | 0.52 | | +| MoE-32 | 39.7 | 8.4 | 37.8 | 0.9 | 0.1 | 0.87 | | +| MoE-256 | 35.7 | 8.6 | 272.9 | 1.1 | 0.1 | 0.81 | | +| MoE-256-h | 36.0 | 8.4 | 272.9 | 1.1 | 0.1 | 0.89 | | +| MoE-1024-h | 34.6 | 8.5 | 1079.0 | 1.9 | 0.2 | 0.90 | | +| MoE-4096-h | 34.1 | 8.9 | 4303.4 | 5.1 | 0.2 | 0.74 | | +| 2xLSTM-8192-1024* | 34.7 | 30.6 | 151.0 | 151.0 | 1.8 | 0.25 | 1.09 | +| MoE-34M | 31.3 | 33.8 | 4313.9 | 6.0 | 0.3 | 1.22 | | +| MoE-143M | 28.0 | 142.7 | 4371.1 | 6.0 | 0.4 | 1.56 | | + +## C.2 More Expensive Models + +We ran two additional models (MoE-34M and MoE-143M) to investigate the effects of adding more computation in the presence of a large MoE layer. These models have computation budgets of 34M +and 143M ops/timestep. Similar to the models above, these models use a MoE layer between two LSTM layers. The dimensionality of the embedding layer, and the input and output dimensionality of the MoE layer are set to 1024 instead of 512. For MoE-34M, the LSTM layers have 1024 units. + +For MoE-143M, the LSTM layers have 4096 units and an output projection of size 1024 (Sak et al., 2014). MoE-34M uses a hierarchical MoE layer with 1024 experts, each with a hidden layer of size 2048. MoE-143M uses a hierarchical MoE layer with 256 experts, each with a hidden layer of size 8192. Both models have 4B parameters in the MoE layers. We searched for the best *DropP rob* for each model, and trained each model for 10 epochs. + +The two models achieved test perplexity of 31.3 and 28.0 respectively, showing that even in the presence of a large MoE, more computation is still useful. Results are reported at the bottom of Table 7. The larger of the two models has a similar computational budget to the best published model from the literature, and training times are similar. Comparing after 10 epochs, our model has a lower test perplexity by 18%. + +D 100 BILLION WORD GOOGLE NEWS CORPUS - EXPERIMENTAL DETAILS +Model Architecture: The models are similar in structure to the 8-million-operations-per-timestep models described in the previous section. We vary the number of experts between models, using an ordinary MoE layer with 32 experts and hierarchical MoE layers with 256, 1024, 4096, 16384, 65536 and 131072 experts. 
For the hierarchical MoE layers, the first level branching factors are 32, 32, 64, 128, 256 and 256, respectively. + +Training: Models are trained on a cluster of 32 Tesla K40 GPUs, except for the last two models, which are trained on clusters of 64 and 128 GPUs so as to have enough memory for all the parameters. For all models, training batch sizes are approximately 2.5 million words. Models are trained once-through over about 100 billion words. We implement several memory optimizations in order to fit up to 1 billion parameters per GPU. First, we do not store the activations of the hidden layers of the experts, but instead recompute them on the backwards pass. Secondly, we modify the optimizer on the expert parameters to require less auxiliary storage: +The Adam optimizer (Kingma & Ba, 2015) keeps first and second moment estimates of the perparameter gradients. This triples the required memory. To avoid keeping a first-moment estimator, we set β1 = 0. To reduce the size of the second moment estimator, we replace it with a factored approximation. For a matrix of parameters, instead of maintaining a full matrix of second-moment estimators, we maintain vectors of row-wise and column-wise averages of that matrix. At each step, the matrix of estimators is taken to be the outer product of those two vectors divided by the mean of either one. This technique could similarly be applied to Adagrad (Duchi et al., 2010). + +| Model | Test | Test | ops/timestep #Params excluding | Total | TFLOPS | | +|-----------------------|------------|--------------------------|----------------------------------|----------|----------|------| +| Perplexity Perplexity | (millions) | embed. & softmax #Params | per GPU | | | | +| .1 epochs | 1 epoch | (millions) | (billions) (observed) | | | | +| Kneser-Ney 5-gram | 67.1 | 45.3 | 0.00001 | 76.0 | | | +| 4xLSTM-512 | 54.5 | 47.0 | 8.4 | 8.4 | 0.1 | 1.23 | +| MoE-32 | 48.5 | 40.4 | 8.4 | 37.8 | 0.1 | 0.83 | +| MoE-256-h | 42.8 | 35.3 | 8.4 | 272.9 | 0.4 | 1.11 | +| MoE-1024-h | 40.3 | 32.7 | 8.5 | 1079.0 | 1.2 | 1.14 | +| MoE-4096-h | 38.9 | 30.9 | 8.6 | 4303.4 | 4.4 | 1.07 | +| MoE-16384-h | 38.2 | 29.7 | 8.8 | 17201.0 | 17.3 | 0.96 | +| MoE-65536-h | 38.2 | 28.9 | 9.2 | 68791.0 | 68.9 | 0.72 | +| MoE-131072-h | 39.8 | 29.2 | 9.7 | 137577.6 | 137.7 | 0.30 | + +Results: We evaluate our model using perplexity on a holdout dataset. Results are reported in Table 8. Perplexity after 100 billion training words is 39% lower for the 68-billion-parameter MoE +model than for the baseline model. It is notable that the measured computational efficiency of the largest model (0.30 TFLOPS/GPU) is very low compared to the other models. This is likely a result of the fact that, for purposes of comparison to the other models, we did not increase the training batch size proportionally to the number of GPUs. For comparison, we include results for a computationally matched baseline model consisting of 4 LSTMs, and for an unpruned 5-gram model with Kneser-Ney smoothing (Kneser & Ney, 1995).4 + +## E Machine Translation - Experimental Details + +Model Architecture for Single Language Pair MoE Models: Our model is a modified version of the GNMT model described in (Wu et al., 2016). To reduce computation, we decrease the number of LSTM layers in the encoder and decoder from 9 and 8 to 3 and 2 respectively. We insert MoE +layers in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). 
We use an attention mechanism between the encoder and decoder, with the first decoder LSTM receiving output from and providing input for the attention.5 All of the layers in our model have input and output dimensionality of 512. Our LSTM layers have 2048 hidden units, with a 512-dimensional output projection. We add residual connections around all LSTM and MoE layers to encourage gradient flow (He et al., 2015). Similar to GNMT, to effectively deal with rare words, we used subword units (also known as "wordpieces") (Schuster & Nakajima, 2012) for inputs and outputs in our system.

We use a shared source and target vocabulary of 32K wordpieces. We also used the same beam search technique as proposed in (Wu et al., 2016).

We train models with different numbers of experts in the MoE layers. In addition to a baseline model with no MoE layers, we train models with flat MoE layers containing 32 experts, and models with hierarchical MoE layers containing 512 and 2048 experts. The flat MoE layers use k = 4 and the hierarchical MoE models use k = 2 at each level of the gating network. Thus, each input is processed by exactly 4 experts in each MoE layer. Each expert in the MoE layer is a feed-forward network with one hidden layer of size 2048 and ReLU activation. Thus, each expert contains [512 ∗ 2048] + [2048 ∗ 512] = 2M parameters. The output of the MoE layer is passed through a sigmoid function. We use the strictly-balanced gating function described in Appendix F.

Model Architecture for Multilingual MoE Model: We used the same model architecture as for the single-language-pair models, with the following exceptions: We used noisy-top-k gating as described in Section 2.1, not the scheme from Appendix F. The MoE layers in the encoder and decoder are non-hierarchical MoEs with n = 512 experts, and k = 2. Each expert has a larger hidden layer of size 8192. This doubles the amount of computation in the MoE layers, raising the computational budget of the entire model from 85M to 102M ops/timestep.

Training: We trained our networks using the Adam optimizer (Kingma & Ba, 2015). The base learning rate was increased linearly for the first 2000 training steps, held constant for an additional 8000 steps, and decreased after that so as to be proportional to the inverse square root of the step number. For the single-language-pair models, similarly to (Wu et al., 2016), we applied dropout (Zaremba et al., 2014) to the output of all embedding, LSTM and MoE layers, using *DropProb* = 0.4. Training was done synchronously on a cluster of up to 64 GPUs as described in section 3. Each training batch consisted of a set of sentence pairs containing roughly 16000 words per GPU. To ensure balanced expert utilization we set w_importance = 0.01 and w_load = 0.01, as described in Section 4 and Appendix A.

Metrics: We evaluated our models using perplexity and the standard BLEU score metric. We reported tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which was also used in (Luong et al., 2015a).

Results: Tables 2, 3 and 4 in Section 5.3 show comparisons of our results to other published methods. Figure 4 shows test perplexity as a function of the number of words in the training data's source sentences processed, for models with different numbers of experts. As can be seen from the figure, as we increased the number of experts to approach 2048, the test perplexity of our model continued to improve.
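For concreteness, the learning-rate schedule from the Training paragraph above can be sketched as follows. The constant of proportionality in the decay phase is not given in the text, so this sketch simply keeps the schedule continuous at the transition point; the function name is ours.

```python
def moe_translation_learning_rate(step, base_lr, warmup_steps=2000, constant_steps=8000):
    # Linear warmup, a constant plateau, then decay proportional to 1/sqrt(step).
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    decay_start = warmup_steps + constant_steps
    if step < decay_start:
        return base_lr
    # Chosen so the schedule is continuous where the decay begins (an assumption).
    return base_lr * (decay_start / step) ** 0.5
```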
![17_image_0.png](17_image_0.png)

We found that the experts indeed become highly specialized by syntax and/or semantics, as can be seen in Table 9. For example, one expert is used when the indefinite article "a" introduces the direct object in a verb phrase indicating importance or leadership.

| Expert 381 | Expert 752 | Expert 2004 |
|---------------------------------------|--------------------------------------|---------------------------------|
| ... with researchers , ... | ... plays a core ... | ... with rapidly growing ... |
| ... to innovation . | ... plays a critical ... | ... under static conditions ... |
| ... tics researchers . | ... provides a legislative ... | ... to swift ly ... |
| ... the generation of ... | ... play a leading ... | ... to dras tically ... |
| ... technology innovations is ... | ... assume a leadership ... | ... the rapid and ... |
| ... technological innovations , ... | ... plays a central ... | ... the fast est ... |
| ... support innovation throughout ... | ... taken a leading ... | ... the Quick Method ... |
| ... role innovation will ... | ... established a reconciliation ... | ... rec urrent ) ... |
| ... research scienti st ... | ... played a vital ... | ... provides quick access ... |
| ... promoting innovation where ... | ... have a central ... | ... of volatile organic ... |
| ... | ... | ... |

![17_image_1.png](17_image_1.png)

## F Strictly Balanced Gating

Due to some peculiarities in our infrastructure which have since been fixed, at the time we ran some of the machine translation experiments, our models ran faster if every expert received exactly the same batch size. To accommodate this, we used a different gating function which we describe below. Recall that we define the softmax gating function to be:

$$G_{\sigma}(x)=Softmax(x\cdot W_{g})\tag{15}$$

Sparse Gating (alternate formulation): To obtain a sparse gating vector, we multiply Gσ(x) component-wise with a sparse mask M(Gσ(x)) and normalize the output. The mask itself is a function of Gσ(x) and specifies which experts are assigned to each input example:

$$G(x)_{i}=\frac{G_{\sigma}(x)_{i}\,M(G_{\sigma}(x))_{i}}{\sum_{j=1}^{n}G_{\sigma}(x)_{j}\,M(G_{\sigma}(x))_{j}}\tag{16}$$

Top-K Mask: To implement top-k gating in this formulation, we would let M(v) = TopK(v, k), where:

$$TopK(v,k)_{i}=\begin{cases}1&\text{if }v_{i}\text{ is in the top }k\text{ elements of }v\\ 0&\text{otherwise}\end{cases}\tag{17}$$

Batchwise Mask: To force each expert to receive the exact same number of examples, we introduce an alternative mask function, M_batchwise(X, m), which operates over batches of input vectors. Instead of keeping the top k values per example, we keep the top m values per expert across the training batch, where m = k|X|/n, so that each example is sent to an average of k experts.

$$M_{batchwise}(X,m)_{j,i}=\begin{cases}1&\text{if }X_{j,i}\text{ is in the top }m\text{ values for expert }i\\ 0&\text{otherwise}\end{cases}\tag{18}$$

As our experiments suggest, and as also observed in (Ioffe & Szegedy, 2015), using a batchwise function during training (such as M_batchwise) requires modifications to inference when we may not have a large batch of examples. Our solution to this is to train a vector T of per-expert threshold values to approximate the effects of the batchwise mask.
We use the following mask at inference time:

$$M_{threshold}(x,T)_{i}=\begin{cases}1&\text{if }x_{i}>T_{i}\\ 0&\text{otherwise}\end{cases}\tag{19}$$

To learn the threshold values, we apply an additional loss at training time which is minimized when the batchwise mask and the threshold mask are identical.

$$L_{batchwise}(X,T,m)=\sum_{j=1}^{|X|}\sum_{i=1}^{n}(M_{threshold}(x,T)_{i}-M_{batchwise}(X,m)_{j,i})(X_{j,i}-T_{i})\tag{20}$$

## G Attention Function

The attention mechanism described in GNMT (Wu et al., 2016) involves a learned "Attention Function" A(xi, yj) which takes a "source vector" xi and a "target vector" yj, and must be computed for every source time step i and target time step j. In GNMT, the attention function is implemented as a feed-forward neural network with a hidden layer of size n. It can be expressed as:

$$A_{GNMT}(x_{i},y_{j})=\sum_{d=1}^{n}V_{d}\,tanh((x_{i}U)_{d}+(y_{j}W)_{d})\tag{21}$$

where U and W are trainable weight matrices and V is a trainable weight vector.

For performance reasons, in our models, we used a slightly different attention function:

$$A(x_{i},y_{j})=\sum_{d=1}^{n}V_{d}\,tanh((x_{i}U)_{d})\,tanh((y_{j}W)_{d})\tag{22}$$

With our attention function, we can simultaneously compute the attention function on multiple source time steps and multiple target time steps using optimized matrix multiplications. We found little difference in quality between the two functions.

diff --git a/2202.09368v2.md b/2202.09368v2.md
new file mode 100644
index 0000000000000000000000000000000000000000..fcdfb35f880ebe4f62882ba676022448f1953617
--- /dev/null
+++ b/2202.09368v2.md
@@ -0,0 +1,426 @@
# Mixture-Of-Experts With Expert Choice Routing

Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew Dai, Zhifeng Chen, Quoc Le, and James Laudon
Google, Mountain View, CA, USA
{yanqiz, taole, hanxiaol, dunan, huangyp, vzhao, adai, zhifengc, qvl, jlaudon}@google.com

## Abstract

Sparsely-activated Mixture-of-experts (MoE) models allow the number of parameters to greatly increase while keeping the amount of computation for a given token or a given sample unchanged. However, a poor expert routing strategy can cause certain experts to be under-trained, leading to an expert being under- or over-specialized. Prior work allocates a fixed number of experts to each token using a top-k function, regardless of the relative importance of different tokens. To address this, we propose a heterogeneous mixture-of-experts employing an expert choice method. Instead of letting tokens select the top-k experts, we have experts selecting the top-k tokens. As a result, each token can be routed to a variable number of experts and each expert can have a fixed bucket size. We systematically study pre-training speedups, using the same computational resources as the Switch Transformer top-1 and GShard top-2 gating of prior work, and find that our method improves training convergence time by more than 2×. For the same computational cost, our method demonstrates higher performance in fine-tuning 11 selected tasks in the GLUE and SuperGLUE benchmarks. For a smaller activation cost, our method outperforms the T5 dense model in 7 out of the 11 tasks.

## 1 Introduction

Scaling up model capacity, dataset size, and training time has demonstrated huge success in enhancing the performance of computer vision architectures [4, 11, 13, 14] as well as neural language models [2, 20, 26, 27].
The final model quality has been found to have a power-law relationship with the amount of data, model size, and compute time [16, 20]. However, training efficiency, which is defined as the total amount of computation used to achieve superior model quality than the state of the art system [21], should receive greater attention as we increase our efforts towards green AI [29]. + +Sparsely gated mixture-of-experts [31] (MoE) provides an effective way to scale model capacity given a fixed computational cost, and has recently played an important role in increasing the training efficiency of large-scale language models [10, 21]. MoE operate by adopting a number of experts, each as a sub-network, and by activating only one or a few experts for each input token. A gating network must be chosen and optimized in order to route each token to the most suited expert(s). For example, recent work has implemented sparse routing via k-means clustering [12], linear assignment to maximize token-expert affinities [22], or hashing [8, 28]. Many of the prior work use a routing strategy concerning the *token choice*, where each token selects the best one or two experts. + +We argue that the independent token choice of prior work often leads to an imbalanced load of experts, which causes training inefficiency and sub-optimal training of the model. In order to mitigate this + +![1_image_0.png](1_image_0.png) + +issue, previous sparsely gated networks introduce additional auxiliary losses as regularization to prevent too many tokens being routed to a single expert, but the effectiveness is still limited. Recent approaches [8, 22, 28] explore alternative strategies for routing, but they focus on pre-training only and do not demonstrate performance gain on downstream tasks. Moreover, none of the previous methods consider allocating a variable number of experts to each token based on importance, which can be beneficial. + +We propose a very simple yet effective routing method we are calling *expert choice*. Unlike conventional MoE where tokens select one or two top-scoring experts, our method lets each *expert* pick the top-k tokens. Our method guarantees perfect load balancing, allows a variable number of experts for each token, and achieves substantial gains in training efficiency and downstream performance as demonstrated in our experiments. Our major contributions include: + +- We identify common pitfalls in conventional MoE such as load imbalance as described in Section 3.1. We then propose a heterogeneous, expert choice method to provide a fluid allocation of model parameters based on a learnt token-to-expert importance. This method intrinsically guarantees load balance without imposing an auxiliary loss. + +- We show our method provides over 2× faster training convergence in a 8B/64E (8 billion activated parameters, 64 experts) model, compared to the top-1 and top-2 gating counterparts in Switch Transformer [10] and GShard [21]. + +- We show our method demonstrates strong scaling when increasing the number of experts from 16 to 128, evaluated in training perplexity. + +- We show our method demonstrates strong performance on downstream tasks selected from GLUE and SuperGLUE at all the evaluated scales. More specifically, our 8B/64E model outperforms a T5 11B dense model in 7 out of 11 tasks evaluated. + +## 2 Related Work + +Scaling: Various approaches have been proposed to scale up neural network capacity to improve performance. 
Recent works have successfully scaled models to billions of parameters via various forms of model parallelism [2, 21, 26, 27, 33]. Model parallelism [30] splits weights and tensors across multiple cores, while pipeline parallelism [18, 24] splits different layers across devices with micro-batches pipelined to the different layers. To enable continued scaling of neural networks, improving model training and serving efficiency has become a critical research area.

Conditional Computation: Computation decisions can be made dynamically based on the input [23, 25]. Conditional computation has been proposed as a way to increase the capacity of a deep neural network without increasing the amount of computation, by activating certain parameters and computation on demand, on a per-example or per-token basis [3]. Conditional convolution layers [1] with task-specific gating have been used to combat catastrophic forgetting when a sequence of learning problems is optimized. The gating decisions may be binary or sparse and continuous, stochastic or deterministic.

Mixture of Experts: Sparsely-gated MoE [31] is the first model to demonstrate massive improvements in model capacity, training time, or model quality with gating. Switch Transformer [10] simplifies the gating by selecting only the top expert per token using a softmax over the hidden state and demonstrates better scaling than previous work. All the prior work requires an auxiliary loss to explicitly encourage balancing. This loss term has to be carefully weighted so as not to overwhelm the primary loss. However, the auxiliary loss does not guarantee balancing, and a hard capacity factor has to be imposed. As a result, many tokens can still be left unprocessed by the MoE layer. Hard MoE [12] with a single decoding layer can be efficiently trained to good effect on large-scale hashtag prediction tasks.

Base Layers [22] formulate a linear assignment that maximizes token-expert affinities while ensuring each expert receives an equal number of tokens. Hash layers [8, 28] devise hashing techniques on input tokens. However, the evaluations are limited to pre-training perplexity. THOR [? ] randomly activates experts during training and inference and is trained with a consistency regularization loss. THOR has demonstrated strong performance on translation tasks. Different from these prior works, our method is a learnt method that enables heterogeneous MoE and effectively improves downstream fine-tuning performance.

## 3 Method

We first identify a few pitfalls in the routing method of conventional mixture-of-experts (MoE) models and then present our method, which uses expert choice to tackle these problems.

## 3.1 Pitfalls Of Token-Choice Routing

For MoE to be computationally advantageous compared to a dense model, a routing strategy must be used to assign each token to the most-suited experts. Conventional MoE models employ *token-choice* routing, which independently selects the top-k experts for each token [10, 21, 31]. We argue that this strategy has a few pitfalls that lead to sub-optimal training.

Load Imbalance: Token-choice routing often leads to poor load balancing across experts. That is, some experts may be trained with most tokens, leaving the remaining experts under-utilized. Experts can be under-specialized because much of the model capacity in the under-utilized experts is wasted.
On the other hand, some tokens will not be processed, since over-utilized experts can only take a maximum number of tokens at each step in order to avoid running out of memory. Load imbalance can also hurt step latency, and thus inference time, since the step latency is determined by the most loaded expert. Previous methods add an auxiliary loss on load balancing to mitigate the issue. However, this auxiliary loss does not guarantee a balanced load, especially during the important early stages of training. Indeed, **we empirically observe that the over-capacity ratio can reach 20%–40% for some experts in token-choice routing**, indicating that a significant portion of the tokens routed to these experts will be dropped.

Under-Specialization: Each MoE layer uses a gating network to learn token-to-expert affinity. Ideally, the learnt gating network should produce the affinity such that similar or relevant tokens are routed to the same expert. A sub-optimal strategy can produce redundant experts and/or experts that are not sufficiently specialized. Under-specialization may result from imposing a large auxiliary loss which favors more load-balanced but less effective routing. Finding the right balance on the auxiliary loss to promote both load balancing and specialization is challenging for token-choice routing.

Same Compute for Every Token: Finally, in a token-choice strategy each token receives exactly k experts and therefore occupies the same amount of compute. We hypothesize that this is neither necessary nor desirable. Instead, an MoE model should flexibly allocate its compute resources based on the complexity of the input. Motivated by the aforementioned observations, we next describe a simple yet effective method which produces load-balanced assignments based on *expert choice*.

## 3.2 Heterogeneous MoE Via Expert Choice

Different from conventional routing, an expert choice method independently selects the top-k tokens for each expert, where k is a fixed expert capacity (i.e. the number of tokens each expert can take). Despite its simplicity, expert choice achieves perfect load balancing by design. It also enables a more flexible allocation of model compute since tokens can be received by a variable number of experts.

$$k=\frac{n\times c}{e}\tag{1}$$

where n is the total number of tokens in the input batch (such as batch size × sequence length), c is the capacity factor, and e is the number of experts. The capacity factor c denotes on average how many experts are utilized by a token. Given input token representations $X \in \mathbb{R}^{n\times d}$ where d is the model hidden dimension, our method produces a token-to-expert assignment denoted by three output matrices I, G and P. The matrix I is an index matrix where I[i, j] specifies the j-th selected token of the i-th expert. The gating matrix $G \in \mathbb{R}^{e\times k}$ denotes the weight of the expert for the selected token, and $P \in \mathbb{R}^{e\times k\times n}$ refers to a one-hot version of I that will be used to gather tokens for each expert. These matrices are computed using a gating function,

$$\begin{aligned}S&=\mathrm{Softmax}(X\cdot W_{g}),\qquad S\in\mathbb{R}^{n\times e}\\ G,I&=\mathrm{TopK}(S^{\top},k),\qquad P=\mathrm{Onehot}(I)\end{aligned}\tag{2}$$

where S denotes the token-to-expert affinity scores, $W_g \in \mathbb{R}^{d\times e}$ denotes the expert embeddings, and TopK(·) selects the k largest entries for each row of $S^{\top}$.
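A minimal NumPy sketch of the gating computation in Eq. (1) and Eq. (2) is given below; the tensor shapes follow the notation above, while the function name and the dense argsort/one-hot operations are illustrative assumptions rather than the paper's TPU implementation.

```python
import numpy as np

def expert_choice_gating(X, Wg, capacity_factor=2.0):
    # X: [n, d] token representations, Wg: [d, e] expert embeddings.
    n, e = X.shape[0], Wg.shape[1]
    k = int(n * capacity_factor / e)                 # expert capacity, Eq. (1)

    logits = X @ Wg
    logits = logits - logits.max(axis=1, keepdims=True)              # numerical stability
    S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # [n, e], Eq. (2)

    # Each expert independently keeps its top-k tokens (TopK along the token axis).
    I = np.argsort(-S.T, axis=1)[:, :k]              # [e, k] selected token indices
    G = np.take_along_axis(S.T, I, axis=1)           # [e, k] gating weights
    P = np.eye(n)[I]                                 # [e, k, n] one-hot dispatch tensor
    return S, G, I, P
```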
Similar to Switch Transformer [10] and GShard [21], we apply the mixture of experts and the gating function in the dense feed-forward (FFN) layer, as it is the most computationally expensive part of a Transformer-based network. The input to the gated FFN, denoted by $X_{in} \in \mathbb{R}^{e\times k\times d}$, is produced using the permutation matrix P. Here $X_{in}[i] \in \mathbb{R}^{k\times d}$ denotes the input of the i-th expert. Similarly, let W1 and W2 denote the parameters of the gated FFN, in which $W_1[i]$ and $W_2[i] \in \mathbb{R}^{d\times d'}$ denote the parameter matrices of the i-th expert. We compute the output of each expert $X_e[i]$ as follows,

$$\begin{aligned}X_{in}&=P\cdot X\\ \forall i:\ \ X_{e}[i]&=\mathrm{GeLU}(X_{in}[i]\cdot W_{1}[i])\cdot W_{2}[i]^{\top}\end{aligned}\tag{3}$$

We omit the bias terms here for brevity. The final output of the gated FFN layer, $X_{\mathrm{out}} \in \mathbb{R}^{n\times d}$, can be obtained given $X_e$ and the permutation and gating matrices P and G,

$$X_{\mathrm{out}}[l,d]=\sum_{i,j}P[i,j,l]\;G[i,j]\;X_{e}[i,j,d]\tag{4}$$

Both $X_e$ and $X_{\mathrm{out}}$ can be efficiently computed using Einstein summation (einsum) operations.

## 3.3 Expert Choice With Additional Constraint

We also consider regularizing our expert choice routing by limiting the maximum number of experts for each token. We are interested in whether adding this constraint improves pre-training and fine-tuning results. More importantly, it helps in analyzing to what degree using a variable number of experts per token affects the model performance.

Let $A \in \mathbb{R}^{e\times n}$ be a positive matrix where A[i, j] represents whether the i-th expert selects the j-th token. We solve the following entropy-regularized linear programming problem

$$\begin{aligned}\max_{A}\;&\left\langle S^{\top},A\right\rangle+\lambda H(A)\\ \mathbf{s.t.}\;\;&\forall i:\;\sum_{j^{\prime}}A[i,j^{\prime}]=k;\;\;\forall j:\;\sum_{i^{\prime}}A[i^{\prime},j]\leq b;\;\;\forall i,j:\;0\leq A[i,j]\leq1\end{aligned}$$

where $\left\langle S^{\top},A\right\rangle$ denotes the inner product, H(A) is the sum of element-wise entropy1, and b > 0 is an integer that upper bounds the selection for each token. Adding a small entropy term gives a near-integer solution while enabling a fast iterative solver we can run on TPUs. Specifically, the solution space is the intersection of three convex sets, each satisfying one of the linear constraints. We use Dykstra's algorithm [9], which alternately projects the intermediate solution onto one of the convex sets.2 After A is computed, the routing indices I are selected using TopK(A, k) instead.

1 H(A) = Σij −A[i, j] log A[i, j]
2 We use λ = 0.001 and a maximum of 100 iterations.

| Model | Type | nparams | nact-params | L | M | H | nheads | dhead | E |
|-----------|--------|-----------|---------------|-----|-------|--------|----------|---------|-----|
| 0.1B | Dense | 130M | 130M | 12 | 768 | 3,072 | 12 | 64 | - |
| 0.1B/16E | MoE | 548M | 145M | 12 | 768 | 3,072 | 12 | 64 | 16 |
| 0.1B/32E | MoE | 1.0B | 145M | 12 | 768 | 3,072 | 12 | 64 | 32 |
| 0.1B/64E | MoE | 1.9B | 145M | 12 | 768 | 3,072 | 12 | 64 | 64 |
| 0.1B/128E | MoE | 3.7B | 145M | 12 | 768 | 3,072 | 12 | 64 | 128 |
| 8B | Dense | 8.7B | 8.7B | 32 | 4,096 | 16,384 | 32 | 128 | - |
| 8B/64E | MoE | 143B | 9.8B | 32 | 4,096 | 16,384 | 32 | 128 | 64 |

## 3.4 Model Architecture

At the high level, we adopt the idea of sparsely activated Mixture-of-Experts (MoE) [31]. We use a Transformer architecture and replace the feed-forward component of every other Transformer layer with an MoE layer, following recent practice [10, 21].
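Before continuing, here is a rough NumPy illustration of how the dispatch and combine steps in Eq. (3) and Eq. (4) map onto einsum operations. The tensor shapes follow the notation above; the tanh-based GeLU approximation and the function names are assumptions made for the example.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GeLU (an assumption; any GeLU implementation works here).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def gated_ffn(X, P, G, W1, W2):
    # X: [n, d], P: [e, k, n], G: [e, k], W1 and W2: [e, d, d'].
    X_in = np.einsum('ekn,nd->ekd', P, X)                # gather tokens per expert
    H = gelu(np.einsum('ekd,edf->ekf', X_in, W1))        # per-expert hidden layer
    X_e = np.einsum('ekf,edf->ekd', H, W2)               # expert outputs, Eq. (3)
    X_out = np.einsum('ekn,ek,ekd->nd', P, G, X_e)       # weighted scatter back, Eq. (4)
    return X_out
```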
Interleaving regular Transformer layers and MoE layers empirically improves model performance and training efficiency, probably because forcing some shared components in between MoE layers can mitigate the negative effects of skipping tokens. Several additional modifications adopted in recent work have been applied in our experiments. For example, we replace the standard positional embedding with per-layer relative positional bias [5]. + +In the non-MoE feed-forward sub-layers (only every other layers are MoE layers), we replace the first linear projection and the activation function with the Gated Linear Unit [6], which computes the component-wise product of two linear transformation of the input, followed by a Gaussian Error Linear Unit [15] activation function. + +As described earlier, each MoE layer consists of a group of independent feed-forward networks as denoted as "experts". The gating function in Eq. (2) uses a softmax activation function to model a probability distribution over these experts. This distribution denotes the preference over experts of each incoming token, which is computed similarly in a conventional gating network [10, 21, 31]. During training, each MoE layer's learnable gating network described in Eq. (2) is trained to use the input to activate the best subset of experts using a top-k function along the token dimension. An +"shuffle" stage and an "unshuffle" stage are inserted to the MoE layer, where the first stage gathers the tokens to their designated experts while the second stage permutes the tokens back to their original order in the input batch. This step is formulated in Eq. (3) and Eq. (4). + +Similar to conventional MoE method, there are more parameters in the MoE layer. However, the activated model size per token can be comparable to a dense layer because during training or inference, only a limited subset of experts is activated for any given token. For instance, Switch Transformer [10] +has only one activated expert while GShard [21] uses two experts per token. In our method, the number of activated experts can vary for each token but the overall computation is kept the same as the baseline architectures by fixing the capacity factor c in Eq. (1). Unless otherwise specified, we set c = 2 such that our method can be directly compared to the top-2 token-choice gating in GShard. + +We train several variants of our architecture at the 100M scale (i.e. 100M expert size) by increasing the number of experts to understand the scaling effects of our method. We also train a 8B scale MoE model. The large MoE model is partitioned with a 2D sharding algorithm as presented in GSPMD [36], which fully exploits the 2D topology of the TPU cluster [19]. Across different scales and setups, our method outperforms related work and demonstrates strong downstream task performance on selected tasks in GLUE and SuperGLUE. + +## 4 Experiments 4.1 Setup + +Table 1 summarizes the hyperparameter settings of different MoE models. As a reference point, we also include the respective dense model configurations with comparable numbers of activated parameters per-token during inference. To study of the effect of scaling the number of experts, we + +![5_image_0.png](5_image_0.png) + +studied varying the number of experts but fixing the per expert size to 100M parameters. For example, 0.1B/64E represents the architecture of an approximately 100M parameter dense model with every other layer replaced by a 64-expert MoE layer. 
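As an aside on the non-MoE feed-forward sub-layers described earlier in this section, one common reading of the GLU-plus-GELU combination is the GEGLU form sketched below; the exact composition and the names here are our assumptions, since the text leaves them implicit.

```python
import numpy as np

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def geglu_ffn(x, W, V, W_out):
    # Component-wise product of two linear transformations of the input,
    # with a GELU on the gating branch, followed by the output projection.
    return (gelu(x @ W) * (x @ V)) @ W_out
```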
The MoE model degenerates into a dense transformer architecture when each MoE layer only has one expert. While n*params* is the total number of trainable parameters, nact−*params* represents the number of activated parameters per token. L is the total number of Transformer layers, M is the model dimension, H is the hidden dimension after the projection in each transformer layer, n*heads* is the number of attention heads, and d*head* is the hidden dimension of each attention head. + +Dataset: We use the high-quality dataset from GLaM [? ] of 1.6 trillion tokens that are representative of a wide range of natural language use cases. An in-house classifier is trained to classify between a collection of curated text and other webpages and estimate the content quality of a webpage. A +high-quality filtered subset of webpages are combined with books, Wikipedia pages, conversations, forums, and news to create the final dataset. The data and mixture weights can be found in Table 3 in the GLaM paper. + +Model Training: Our model training follows the setups of GLaM [? ] where a maximum sequence length of 1024 tokens is adopted. We use an Adafactor optimizer [32] with first-moment decay β1 = 0 and second-moment decay β2 = 0.99. We keep the learning rate constant for the first 10K +training steps, and then decay it with an inverse square root schedule. Unlike most related works, we do not impose any auxiliary loss for load balance, such as described in Switch Transformer [10] and GShard [21]. We use the SentencePiece subword tokenizer with a vocabulary of size of 256K. The largest model (8B/64E) is trained on 512 TPU V4 chips. We use a dropout rate of 0 during training as the number of tokens in the training data corpus is much greater than the total number of tokens during training. + +Model Evaluation: We mainly focus on evaluating the finetuning performance on the 11 selected tasks from GLUE and SuperGLUE benchmarks [34, 35]. + +## 4.2 Training Efficiency + +We first study training efficiency and convergence. We use expert choice with a capacity factor of 2 +(EC-CF2) to match the activated model size and computational cost on a per token basis in GShard top-2 gating and run both for a fixed number of steps. The results are shown in Fig. 2 (a). Comparing to GShard top-2 gating, which showed stronger performance in both perplexity in the evaluation dataset and fine-tuning on downstream tasks compared to Switch Transformer top-1 gating, EC-CF2 converges more than 2x faster during training. More specifically, EC-CF2 reaches the same perplexity as GShard top-2 in less than half the steps, and with each GShard top-2 step being 20% slower than our method. 
As explained in Section 3.1, the slower step time in top-2 gating is due to load imbalance + +| 100M/128E | 100M/64E | | | | | | | | +|-------------|------------|-------|----------|----------|--------|----------|----------|--------| +| Name | Metric | Split | ST Top-1 | GS Top-2 | EC-CF2 | ST Top-1 | GS Top-2 | EC-CF2 | +| BoolQ | acc | dev | 77.4 | 76.5 | 76.9 | 73.2 | 77.5 | 79.7 | +| CB | acc | dev | 87.5 | 80.9 | 89.1 | 85.9 | 84.4 | 89.1 | +| CoLA | acc | dev | 78.9 | 84.0 | 86.7 | 64.1 | 85.2 | 88.3 | +| MNLI | acc | dev | 82.3 | 83.6 | 84.9 | 80.8 | 85.2 | 86.7 | +| MRPC | acc | dev | 82.6 | 81.0 | 83.1 | 81.3 | 81.3 | 84.4 | +| QNLI | acc | dev | 89.5 | 88.6 | 89.0 | 89.4 | 89.7 | 91.3 | +| QQP | acc | dev | 90.6 | 90.3 | 90.4 | 88.9 | 90.5 | 91.0 | +| RTE | acc | dev | 77.0 | 78.9 | 78.5 | 74.1 | 79.3 | 81.6 | +| SST2 | acc | dev | 92.0 | 94.5 | 94.6 | 91.8 | 95.1 | 95.1 | +| WiC | acc | dev | 67.8 | 65.5 | 68.1 | 64.4 | 67.8 | 65.6 | +| WNLI | acc | dev | 65.6 | 70.3 | 67.2 | 68.8 | 68.8 | 71.7 | +| Avg | - | - | 81.0 | 81.3 | 82.6 | 78.4 | 82.2 | 84.0 | +| 100M/32E | 8B/64E | | | | | | | | +| Name | Metric | Split | ST Top-1 | GS Top-2 | EC-CF2 | ST Top-1 | GS Top-2 | EC-CF2 | +| BoolQ | acc | dev | 74.5 | 79.0 | 79.3 | 89.1 | 89.5 | 89.2 | +| CB | acc | dev | 80.6 | 81.3 | 92.2 | 93.8 | 96.7 | 100 | +| CoLA | acc | dev | 87.5 | 92.2 | 93.8 | 88.3 | 87.5 | 89.1 | +| MNLI | acc | dev | 83.1 | 87.8 | 88.0 | 90.7 | 91.4 | 91.1 | +| MRPC | acc | dev | 82.3 | 85.2 | 84.4 | 89.3 | 91.7 | 90.6 | +| QNLI | acc | dev | 91.6 | 91.9 | 92.5 | 94.5 | 94.9 | 95.0 | +| QQP | acc | dev | 90.1 | 91.5 | 92.0 | 92.1 | 92.5 | 93.8 | +| RTE | acc | dev | 75.0 | 79.1 | 78.1 | 91.0 | 92.2 | 95.2 | +| SST2 | acc | dev | 93.3 | 94.4 | 95.4 | 97.1 | 98.0 | 97.7 | +| WiC | acc | dev | 62.5 | 65.9 | 69.8 | 74.5 | 76.4 | 83.8 | +| WNLI | acc | dev | 65.6 | 64.1 | 68.8 | 78.1 | 82.8 | 92.8 | +| Avg | - | - | 80.6 | 83.5 | 85.0 | 88.9 | 90.3 | 92.6 | + +Table 2: Expert choice with capacity factor of 2 (EC-CF2) outperforms Top-1 gating in Switch Transformer (ST) and top-2 gating in GShard (GS) on GLUE and SuperGLUE tasks. Note that with an expert size of 100M parameters, 100M/32E works best for our method and Ghard Top-2 while 100M/128E works better for Switch Transformer Top-1. Our method consistently outperforms the others across all the scales. + +where some experts can receive a lot more tokens than the desired capacity. As a result, the step latency will be bottlenecked by the most loaded expert. + +## 4.3 Scaling The Number Of Experts 7 + +As presented in Table 1, increasing the number of experts effectively increases model capacity without increasing activated model size. We scale the number of experts while fixing the expert size to 100M +parameters for both expert choice (EC) and GShard (Top-2) methods and find both methods work well in terms of perplexity on the evaluation dataset during pre-training. As demonstrated in Fig. 2 +(b), having more experts consistently improves training perplexity. + +## 4.4 Fine-Tuning On Glue And Superglue + +To validate whether improved perplexity directly translates to better performance in downstream tasks, we perform fine-tuning on 11 selected tasks from GLUE and SuperGLUE. We compare three MoE +methods including Switch Transformer top-1 gating (ST Top-1), GShard top-2 gating (GS Top-2) +and our method (EC-CF2) that matches the activation memory size and computational cost of GS +Top-2. 
Indicated by the results in Table 2, our EC-CF2 method consistently outperforms the related methods and yields more than 2% average accuracy increase in a large 8B/64E setting. Table 3 further compares our 8B/64E model against its dense counterpart. Again, our method achieves stronger fine-tuning results, increasing the average score by 3.4 point. + +Interestingly, we observe the 100M/32E model setting works the best for both GS Top-2 and EC-CF2, even though the effective model capacity is smaller than that of 100M/64E and 100M/128E. This result indicates that a good training perplexity does not always translate to better performance of downstream tasks. + +| Model | BoolQ | CB | CoLA MNLI MRPC QNLI QQP RTE SST2 WiC WNLI | Avg | | | | | | | | | +|--------------------|---------|------|---------------------------------------------|-------|------|------|------|------|------|------|------|------| +| Dense 8B | 88.2 | 100 | 86.4 | 91.3 | 86.7 | 94.7 | 91.2 | 92.2 | 97.2 | 75.6 | 78.1 | 89.2 | +| EC-CF2 8B/64E 89.2 | 100 | 89.1 | 91.1 | 90.6 | 95.0 | 93.8 | 95.2 | 97.7 | 83.8 | 92.8 | 92.6 | | + +Table 3: Comparison between Dense 8B and Expert Choice (EC-CF2) 8B/64E models: Our method significantly outperforms the dense model in downstream tasks. + +Figure 3: Distribution of the number of experts routed to per token in a 100M/64E model. + +| Layer. Method | Max # of Experts | Avg acc. | +|-----------------|--------------------|------------| +| EC-CAP2 | 2 | 83.2 ± 0.4 | +| EC-CAP3 | 3 | 84.0 ± 0.4 | +| EC-CF2 | - | 84.0 ± 0.2 | +| Hash Layer | - | 81.3 ± 0.1 | + +![7_image_0.png](7_image_0.png) + +![7_image_1.png](7_image_1.png) + +## 4.5 Heterogeneity Matters + +Capped Expert Choice: We regularized expert choice by limiting the maximum number of experts for each token, using the method described in Section 3.3. Table 4 reports the average accuracy on the 11 selected datasets. EC-CAP2 is the variant of our expert choice method by limiting the number of experts of each token to 2. This decreases the fine-tuning accuracy by 0.8 points on average. In addition, EC-CAP3 allows a maximum of 3 experts per token and achieves on par results compared to the vanilla expert choice method. This ablation study confirms that **allowing variable number of** +experts per token is indeed helpful. + +Variable Experts per Token: We compute statistics on token-to-expert routing, particularly on the ratio of tokens that have been routed to a certain number of experts. According to Fig. 3, a majority of tokens have been routed to one or two experts while 23% have been routed to three or four experts and only about 3% tokens have been routed to more than 4 experts. This plot verifies our hypothesis that our method learns to allocate a variable number experts to tokens, which can be beneficial for important tokens. + +## 4.6 Comparison With Hash Layer + +In this section, we compare our method with Hash Layers [28]. We use mod x to map a token ID +to an expert ID. This ensures load balance and generates specialized experts. The fine-tuning results are presented in the last row in Table 4. Hashing based routing performs worse than expert choice in terms of average scores and variance. **This indicates that load balancing alone does not generate** all the benefits. + +## 4.7 Ablation + +Capacity Factor: We study the capacity factor in our expert choice method and compare the training perplexity with the baseline top-1 gating method used in Switch Transformer. As described in Eq. 
(1), +the capacity factor determines how many experts on average each token can be routed to, thus the bucket size k of each expert. In all our previous experiments, we use a capacity factor of 2, which matches the computational footprint of the top-2 gating used in GShard method. To match the computation cost on a per-token basis fairly with top-1 gating used in Switch Transformer, we reduce the capacity factor to 1 and plot the training perplexity in Fig. 4 (a). Not surprisingly, using a smaller capacity factor yields higher perplexity, but our method still significantly outperforms top-1 gating. + +We further push the capacity factor down to 0.5, and observe that it still outperforms the top-1 gating. + +Comparison with Dense Models on Pre-training: We compare our method with dense models on pre-training. As shown in Fig. 4 (b), our method consistently outperforms the dense method in + +![8_image_0.png](8_image_0.png) + +perplexity and convergence time. For a small expert size of 100M parameters, the benefit of sparse gating is even more significant. Orthogonal to results presented in Fig. 2 (b), where scaling the number of experts improves model performance, Fig. 4 (b) shows that increasing expert capacity also significantly increases model performance. + +## 5 Conclusion + +We propose a new routing method for sparsely activated mixture-of-experts (MoE) models. This method addresses load imbalance and under-utilization of experts in conventional MoE methods, and enables selecting different numbers of experts for each token. Our model demonstrates more than 2x training efficiency improvements when compared to the state-of-the-art GShard and Switch Transformer models, and also achieves strong gains when finetuning on 11 datasets in the GLUE and SuperGLUE benchmark. + +## 6 Limitations + +The expert choice method might not immediately apply to auto-regressive text generation as our current implementation takes in the past and future tokens to perform the top-k selection. One possible solution is to collect a large batch of input sequences, dispatch tokens of the same sequence into separate groups, and perform expert choice routing for each group. Another scenario where the expert choice method does not immediately apply is when the batch size becomes very small during serving or inference. A global top-k can be selected instead and we can cap the number of times each expert or token gets selected. We leave these possible improvements for future work. + +Another long-standing issue with MoE has been the large memory footprint. Even though computational cost can be reduced using sparsely gated networks, the total number of parameters increases linearly or sub-linearly with the number of experts. Increasing the number of experts requires reservation of a large number of hardware devices. Therefore, dynamic (used) power is saved while static +(reserved) power is not. Power saving techniques such as the ability to put hardware devices into low power states while not in use [17] can help with reducing the reserved power requirements. + +## References + +[1] Davide Abati, Jakub Tomczak, Tijmen Blankevoort, Simone Calderara, Rita Cucchiara, and Babak Ehteshami Bejnordi. Conditional channel gated networks for task-aware continual learning. In *CVPR*, pages 3930–3939. Computer Vision Foundation / IEEE, 2020. 
+ +[2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. In *Advances in Neural Information Processing Systems*. + +[3] Kyunghyun Cho and Yoshua Bengio. Exponentially increasing the capacity-to-computation ratio for conditional computation in deep learning, 2014. + +[4] Zihang Dai, Hanxiao Liu, Quoc V. Le, and Mingxing Tan. CoAtNet: Marrying convolution and attention for all data sizes. In *Advances in Neural Information Processing Systems*, 2021. + +[5] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. + +Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019. Association for Computational Linguistics. + +[6] Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, page 933–941. JMLR.org, 2017. + +[7] Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathy Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. Glam: Efficient scaling of language models with mixtureof-experts, 2021. + +[8] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, and Angela Fan. + +Tricks for training sparse translation models, 2021. + +[9] Richard L Dykstra. An iterative procedure for obtaining i-projections onto the intersection of convex sets. *The annals of Probability*, pages 975–984, 1985. + +[10] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity, 2021. + +[11] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. NAS-FPN: learning scalable feature pyramid architecture for object detection. In *CVPR*, pages 7036–7045. Computer Vision Foundation / +IEEE, 2019. + +[12] Sam Gross, Marc'Aurelio Ranzato, and Arthur Szlam. Hard mixtures of experts for large scale weakly supervised vision, 2017. + +[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, +pages 770–778, 2016. + +[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, Computer Vision – +ECCV 2016, pages 630–645, Cham, 2016. Springer International Publishing. + +[15] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs), 2016. + +[16] Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically, 2017. 
+ +[17] Ping Huang, Zuocheng Xing, Tianran Wang, Qiang Wei, Hongyan Wang, and Guitao Fu. A +brief survey on power gating design. In *2010 10th IEEE International Conference on Solid-State* and Integrated Circuit Technology, pages 788–790, 2010. + +[18] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, *Advances in Neural Information Processing Systems 32: Annual Conference on Neural* Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, +Canada, pages 103–112, 2019. + +[19] Norman P. Jouppi, Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, and David A. Patterson. A domain-specific supercomputer for training deep neural networks. *Commun. ACM*, 63(7):67–78, 2020. + +[20] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. + +[21] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In *International Conference on Learning Representations*, 2021. + +[22] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. Base layers: Simplifying training of large, sparse models. In Marina Meila and Tong Zhang, editors, *Proceedings of the 38th International Conference on Machine Learning*, volume 139 of Proceedings of Machine Learning Research, pages 6265–6274. PMLR, 18–24 Jul 2021. + +[24] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: Generalized pipeline parallelism for dnn training. New York, NY, USA, 2019. Association for Computing Machinery. + +[25] Joan Puigcerver, Carlos Riquelme Ruiz, Basil Mustafa, Cédric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. Scalable transfer learning with expert models. In *ICLR*. OpenReview.net, 2021. + +[26] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018. + +[27] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21:140:1–140:67, 2020. + +[28] Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason Weston. Hash layers for large sparse models, 2021. + +[30] Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. Mesh-tensorflow: Deep learning for supercomputers. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, page 10435–10444, Red Hook, NY, USA, 2018. Curran Associates Inc. + +[31] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. + +Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-ofexperts layer. In *ICLR (Poster)*. 
OpenReview.net, 2017. + +[32] Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In Jennifer Dy and Andreas Krause, editors, *Proceedings of the 35th International* Conference on Machine Learning, volume 80 of *Proceedings of Machine Learning Research*, +pages 4596–4604. PMLR, 10–15 Jul 2018. +[23] Min Lin, Jie Fu, and Yoshua Bengio. Conditional computation for continual learning, 2019. + +[29] Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green ai, 2019. + +[33] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020. + +[34] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, *Advances in Neural Information Processing Systems*. Curran Associates, Inc. + +[35] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. + +GLUE: A multi-task benchmark and analysis platform for natural language understanding. + +In *Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting* Neural Networks for NLP, Brussels, Belgium, November 2018. Association for Computational Linguistics. + +[36] Yuanzhong Xu, HyoukJoong Lee, Dehao Chen, Blake A. Hechtman, Yanping Huang, Rahul Joshi, Maxim Krikun, Dmitry Lepikhin, Andy Ly, Marcello Maggioni, Ruoming Pang, Noam Shazeer, Shibo Wang, Tao Wang, Yonghui Wu, and Zhifeng Chen. GSPMD: general and scalable parallelization for ML computation graphs. *CoRR*, abs/2105.04663, 2021. + +## 7 Checklist + +(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? Yes +(b) Have you read the ethics review guidelines and ensured that your paper conforms to them? Yes (c) Did you discuss any potential negative societal impacts of your work? **N/A. Not any.** (d) Did you describe the limitations of your work? Yes +(a) Did you include the code, data, and instructions needed to reproduce the main experimental results? **Yes. We include details in the experiment setup to help reproduce the main results.** +(b) Did you specify all the training details? Yes (c) Did you report error bars? Yes +(d) Did you include the amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? Yes +(a) If your work uses existing assets, did you cite the creators? Yes (b) Did you mention the license of the assets? **No. The used dataset is not released yet.** +(c) Did you include any new assets either in the supplemental material or as a URL? **No. The dataset** +is not released yet. + +(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? **No. Not using persons' data.** +(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? **Yes. The dataset does not contain any personally identifiable** +information or offensive content. + +## A Comparison On Fine-Tuning With A Dense Model + +Our 8B MoE model achieves stronger pre-training perplexity than its dense counterpart. However, a better perplexity does not always directly translate to downstream performance as demonstrated in Section 4.4. 
To this end, we compare fine-tuning performance of the 8B dense model and MoE +model in Table 1. As shown in the table, our MoE model using expert choice routing consistently outperforms the dense model across the 11 tasks in GLUE and SuperGLUE. + +| Model | BoolQ | CB | CoLA MNLI MRPC QNLI QQP RTE SST2 WiC WNLI | Avg | | | | | | | | | +|--------------------|---------|------|---------------------------------------------|-------|------|------|------|------|------|------|------|------| +| Dense 8B | 88.2 | 100 | 86.4 | 91.3 | 86.7 | 94.7 | 91.2 | 92.2 | 97.2 | 75.6 | 78.1 | 89.2 | +| EC-CF2 8B/64E 89.2 | 100 | 89.1 | 91.1 | 90.6 | 95.0 | 93.8 | 95.2 | 97.7 | 83.8 | 92.8 | 92.6 | | + +Table 1: Comparison between Dense 8B and Expert Choice (EC-CF2) 8B/64E models: Our method significantly outperforms the dense model in downstream tasks. + +## B Capacity Factor + +We evaluate the downstream task fine-tuning performance by varying the capacity factors. Note that a capacity factor of n indicates on average how many experts each token can be received. EC-CF2 is our baseline expert choice, which matches GShard top-2 gating computational footprint. EC-CF1, however, matches Switch Transformer top-1 gating computational footprint. EC-CF0.5 further verifies that an aggressively lowered capacity factor can provide strong enough performance, that almost matches the top-2 gating baseline. + +| Model | BoolQ | CB | CoLA MNLI MRPC QNLI QQP RTE SST2 WiC WNLI | Avg | | | | | | | | | +|------------------|---------|------|---------------------------------------------|-------|------|------|------|------|------|------|----------|------------| +| Top-2 | 78.1 | 87.0 | 88.3 | 85.0 | 82.6 | 90.1 | 90.7 | 81.6 | 94.7 | 68.2 | 67.2 | 83.0±0.3 | +| EC-CAP2 | 78.2 | 88.0 | 88.5 | 85.7 | 83.0 | 90.8 | 91.1 | 80.0 | 95.4 | 70.4 | 64.1 | 83.2±0.4 | +| EC-CAP3 | 78.5 | 91.7 | 89.3 | 86.3 | 83.5 | 90.9 | 91.1 | 81.8 | 94.9 | 70.0 | 65.6 | 84.0±0.4 | +| EC-CF2 | 79.1 | 89.6 | 89.3 | 86.8 | 84.3 | 91.3 | 91.2 | 81.1 | 95.2 | 68.1 | 68.0 | 84.0±0.2 | +| EC-CF1 | 77.4 | 90.6 | 88.0 | 85.5 | 83.6 | 90.3 | 91.2 | 79.8 | 95.3 | 66.5 | 64.9 | 83.0±0.2 | +| EC-CF0.5 | 77.4 | 89.6 | 86.3 | 85.2 | 82.7 | 91.7 | 91.0 | 79.6 | 94.9 | 67.3 | 63.5 | 83.0 ±0.05 | +| Hash Layers 76.1 | 85.2 | 86.7 | 83.4 | 82.5 | 90.0 | 90.3 | 75.7 | 94.0 | 67.4 | 63.3 | 81.3±1.0 | | + +Table 2: Comparison between different routing methods in fine-tuning of 100M/64E models. We perform 3 independent fine-tuning runs for each method and report the average results. This gives more accurate difference between the variants of expert choice method, since they achieve close fine-tuning results. We do not report averaged results in other experiments. + +## C Capped Expert Choice + +As described in Section 4.5, the maximum number of experts each token is assigned can be capped by an entropy-regularized linear programming. Figure 1 compares the validation perplexity when training the 100M/64E models using the base expert choice method (EC-BASE), expert choice capped by two experts per token (EC-CAP2), expert choice capped by three experts per token (EC-CAP3), +and GShard top-2 gating. + +As shown in the figure, restricting the number of experts to 2 degrades the perplexity compared to the base expert choice method. This suggests that a more flexible allocation of experts (e.g. more than 2 experts for a token) can enhance model expressiveness. On the other hand, our EC-CAP2 and EC-CAP3 methods still outperform the top-2 gating method by a clear margin. 
We believe this confirms the effectiveness of the load-balanced training provided by our method. Finally, EC-CAP3 obtains comparable perplexity to EC-BASE. As indicated by Figure 3, only a small fraction of tokens use more than 3 experts, so we see little or no difference between the EC-BASE and EC-CAP3 variants. We present the fine-tuning results of these methods in Table 2.

![13_image_0.png](13_image_0.png)

## D Comparison With Hash Layer

In this section, we compare our method with Hash Layers [28]. We use mod x to map a token ID to an expert ID. This in some way ensures load balance and generates specialized experts. The fine-tuning results are presented in the last row in Table 2. Hashing-based routing performs much worse than expert choice in terms of average scores and variance.

## E Fine-Tuning Details

We did a hyperparameter search for both the baseline models and the expert choice method. For fine-tuning of the 8B dense model, we use a constant learning rate of 0.0001 and a dropout rate of 0.1. We freeze the attention layer and feed-forward layer while leaving the embedding and layer normalization trainable. This setting has been found optimal for the 8B dense model. For MoE 8B/64E models, including GShard top-2 gating and expert choice, we found that continuing the learning rate from the pre-trained model while using a square root learning rate decay works better. In addition, we do not apply parameter freezing for fine-tuning MoE models. For models with 100M expert size, we use a constant learning rate of 0.0001 and no dropout is used.
\ No newline at end of file
diff --git a/DeepSeekMoE_2.md b/DeepSeekMoE_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..98a9dee12d3c748670523c812db8d89f5f77a871
--- /dev/null
+++ b/DeepSeekMoE_2.md
@@ -0,0 +1,868 @@
# DeepSeekMoE: Towards Ultimate Expert Specialization In Mixture-Of-Experts Language Models

![0_Image_0.Png](0_Image_0.Png)

Damai Dai∗1,2, Chengqi Deng1, Chenggang Zhao∗1,3, R.X. Xu1, Huazuo Gao1, Deli Chen1, Jiashi Li1, Wangding Zeng1, Xingkai Yu∗1,4, Y. Wu1, Zhenda Xie1, Y.K. Li1, Panpan Huang1, Fuli Luo1, Chong Ruan1, Zhifang Sui2, Wenfeng Liang1 1DeepSeek-AI
2**National Key Laboratory for Multimedia Information Processing, Peking University**
3**Institute for Interdisciplinary Information Sciences, Tsinghua University**
4**National Key Laboratory for Novel Software Technology, Nanjing University**
{daidamai, szf}@pku.edu.cn, **{wenfeng.liang}@deepseek.com**
https://github.com/deepseek-ai/DeepSeek-MoE

## Abstract

In the era of large language models, Mixture-of-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-$K$ out of $N$ experts, face challenges in ensuring expert specialization, i.e. each expert acquires non-overlapping and focused knowledge. In response, we propose the **DeepSeekMoE** architecture towards ultimate expert specialization. It involves two principal strategies: (1) finely segmenting the experts into $mN$ ones and activating $mK$ from them, allowing for a more flexible combination of activated experts; (2) isolating $K_s$ experts as shared ones, aiming at capturing common knowledge and mitigating redundancy in routed experts. Starting from a modest scale with 2B parameters, we demonstrate that DeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5× expert parameters and computation.
In addition, DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with the same number of total parameters, which sets the upper bound of MoE models. Subsequently, we scale up DeepSeekMoE to 16B parameters and show that it achieves comparable performance with LLaMA2 7B, with only about 40% of computations.

Further, our preliminary efforts to scale up DeepSeekMoE to 145B parameters consistently validate its substantial advantages over the GShard architecture, and show its performance comparable with DeepSeek 67B, using only 28.5% (maybe even 18.2%) of computations.

## 1. **Introduction**

Recent research and practices have empirically demonstrated that, with sufficient training data available, scaling language models with increased parameters and computational budgets can yield remarkably stronger models (Brown et al., 2020; Hoffmann et al., 2022; OpenAI, 2023; Touvron et al., 2023a). It is imperative to acknowledge, however, that the endeavor to scale models to an extremely large scale is also associated with exceedingly high computational costs. Considering the substantial costs, the Mixture-of-Experts (MoE) architecture (Jacobs et al., 1991; Jordan and Jacobs, 1994; Shazeer et al., 2017) has emerged as a popular solution. It can enable parameter scaling, while concurrently keeping computational costs at a modest level.

![1_image_0.png](1_image_0.png)

Recent applications of MoE architectures in Transformers (Vaswani et al., 2017) have yielded successful attempts at scaling language models to a substantial size (Du et al., 2022; Fedus et al., 2021; Lepikhin et al., 2021; Zoph, 2022), accompanied with remarkable performance. These achievements underscore the considerable potential and promise of MoE language models.

Despite the promising potential of MoE architectures, existing MoE architectures potentially suffer from issues of knowledge hybridity and knowledge redundancy, which limit expert specialization, i.e., the degree to which each expert acquires non-overlapping and focused knowledge. Conventional MoE architectures substitute the Feed-Forward Networks (FFNs) in a Transformer with MoE layers. Each MoE layer consists of multiple experts, with each structurally identical to a standard FFN, and each token is assigned to one (Fedus et al., 2021) or two (Lepikhin et al., 2021) experts.

This architecture manifests two potential issues: (1) **Knowledge Hybridity**: existing MoE practices often employ a limited number of experts (e.g., 8 or 16), and thus tokens assigned to a specific expert are likely to cover diverse knowledge. Consequently, the designated expert will tend to assemble vastly different types of knowledge in its parameters, which are hard to utilize simultaneously. (2) **Knowledge Redundancy**: tokens assigned to different experts may require common knowledge. As a result, multiple experts may converge in acquiring shared knowledge in their respective parameters, thereby leading to redundancy in expert parameters.

These issues collectively hinder the expert specialization in existing MoE practices, preventing them from reaching the theoretical upper-bound performance of MoE models.

In response to the aforementioned issues, we introduce **DeepSeekMoE**, an innovative MoE architecture specifically designed towards ultimate expert specialization.
Our architecture involves two principal strategies: (1) **Fine-Grained Expert Segmentation**: while maintaining the number of parameters constant, we segment the experts into a finer grain by splitting the FFN intermediate hidden dimension. Correspondingly, keeping a constant computational cost, we also activate more fine-grained experts to enable a more flexible and adaptable combination of activated experts. Fine-grained expert segmentation allows diverse knowledge to be decomposed more finely and learned more precisely by different experts, where each expert will retain a higher level of specialization. In addition, the increased flexibility in combining activated experts also contributes to a more accurate and targeted knowledge acquisition. (2) **Shared Expert Isolation**: we isolate certain experts to serve as shared experts that are always activated, aiming at capturing and consolidating common knowledge across varying contexts. Through compressing common knowledge into these shared experts, redundancy among the other routed experts will be mitigated. This can enhance the parameter efficiency and ensure that each routed expert remains specialized by focusing on distinctive aspects. These architectural innovations in DeepSeekMoE offer opportunities to train a parameter-efficient MoE language model where each expert is highly specialized.

Starting from a modest scale with 2B parameters, we validate the advantages of the DeepSeekMoE architecture. We conduct evaluations on 12 zero-shot or few-shot benchmarks spanning diverse tasks. Empirical results indicate that DeepSeekMoE 2B surpasses GShard 2B (Lepikhin et al., 2021) by a substantial margin, and even matches GShard 2.9B, a larger MoE model with 1.5× expert parameters and computation. Remarkably, we find that DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with an equivalent number of parameters, which sets the strict upper bound of MoE language models. In pursuit of deeper insights, we conduct elaborate ablation studies and analysis on the expert specialization of DeepSeekMoE.

These studies validate the effectiveness of fine-grained expert segmentation and shared expert isolation, and provide empirical evidence supporting the assertion that DeepSeekMoE can achieve a high level of expert specialization.

Leveraging our architecture, we subsequently scale up the model parameters to 16B and train DeepSeekMoE 16B on a large-scale corpus with 2T tokens. Evaluation results reveal that with only about 40% of computations, DeepSeekMoE 16B achieves comparable performance with DeepSeek 7B (DeepSeek-AI, 2024), a dense model trained on the same 2T corpus. We also compare DeepSeekMoE with open source models and the evaluations demonstrate that DeepSeekMoE 16B consistently outperforms models with a similar number of activated parameters by a large margin, and achieves comparable performance with LLaMA2 7B (Touvron et al., 2023b), which has approximately 2.5 times the activated parameters. Figure 1 shows the evaluation results on the Open LLM Leaderboard1. Additionally, we conduct supervised fine-tuning (SFT) for alignment, transforming the model into a chat model. Evaluation results show that DeepSeekMoE Chat 16B also achieves comparable performance with DeepSeek Chat 7B and LLaMA2 SFT 7B in the chat setting. Encouraged by these results, we further undertake a preliminary endeavor to scale up DeepSeekMoE to 145B.
The experimental results still validate its substantial advantages over the GShard architecture consistently. In addition, it shows performance comparable with DeepSeek 67B, using only 28.5% (maybe even 18.2%) of computations.

Our contributions are summarized as follows:
- **Architectural Innovation.** We introduce DeepSeekMoE, an innovative MoE architecture aiming at achieving ultimate expert specialization, which employs two principal strategies of fine-grained expert segmentation and shared expert isolation.

- **Empirical Validation.** We conduct extensive experiments to empirically validate the effectiveness of the DeepSeekMoE architecture. Experimental results validate the high level of expert specialization in DeepSeekMoE 2B, and indicate that DeepSeekMoE 2B can nearly approach the upper bound performance for MoE models.

- **Scalability.** We scale up DeepSeekMoE to train a 16B model and show that with only about 40% of computations, DeepSeekMoE 16B achieves comparable performance with DeepSeek 7B and LLaMA2 7B. We also undertake a preliminary endeavor to scale up DeepSeekMoE to 145B, highlighting its consistent advantages over the GShard architecture and showing a comparable performance with DeepSeek 67B.

- **Alignment for MoE.** We successfully perform supervised fine-tuning on DeepSeekMoE 16B to create an aligned chat model, showcasing the adaptability and versatility of DeepSeekMoE 16B.

- **Public Release.** In the spirit of open research, we release the model checkpoint of DeepSeekMoE 16B to the public. Notably, this model can be deployed on a single GPU with 40GB of memory without the need for quantization.

1https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

## 2. **Preliminaries: Mixture-Of-Experts For Transformers**

We first introduce a generic MoE architecture commonly used in Transformer language models. A standard Transformer language model is constructed by stacking $L$ layers of standard Transformer blocks, where each block can be represented as follows:

$$\mathbf{u}_{1:T}^{l}=\text{Self-Att}\left(\mathbf{h}_{1:T}^{l-1}\right)+\mathbf{h}_{1:T}^{l-1},\tag{1}$$
$$\mathbf{h}_{t}^{l}=\text{FFN}\left(\mathbf{u}_{t}^{l}\right)+\mathbf{u}_{t}^{l},\tag{2}$$

where $T$ denotes the sequence length, Self-Att(·) denotes the self-attention module, FFN(·) denotes the Feed-Forward Network (FFN), $\mathbf{u}_{1:T}^{l} \in \mathbb{R}^{T \times d}$ are the hidden states of all tokens after the $l$-th attention module, and $\mathbf{h}_{t}^{l} \in \mathbb{R}^{d}$ is the output hidden state of the $t$-th token after the $l$-th Transformer block. For brevity, we omit the layer normalization in the above formulations.

A typical practice to construct an MoE language model usually substitutes FFNs in a Transformer with MoE layers at specified intervals (Du et al., 2022; Fedus et al., 2021; Lepikhin et al., 2021; Zoph, 2022). An MoE layer is composed of multiple experts, where each expert is structurally identical to a standard FFN. Then, each token will be assigned to one (Fedus et al., 2021) or two (Lepikhin et al., 2021) experts. If the $l$-th FFN is substituted with an MoE layer, the computation for its output hidden state $\mathbf{h}_{t}^{l}$ is expressed as:

$$\mathbf{h}_{t}^{l}=\sum_{i=1}^{N}\left(g_{i,t}\operatorname{FFN}_{i}\left(\mathbf{u}_{t}^{l}\right)\right)+\mathbf{u}_{t}^{l},\tag{3}$$
$$g_{i,t}=\begin{cases}s_{i,t},&s_{i,t}\in\operatorname{Topk}(\{s_{j,t}|1\leqslant j\leqslant N\},K),\\ 0,&\text{otherwise},\end{cases}\tag{4}$$
$$s_{i,t}=\operatorname{Softmax}_{i}\left({\mathbf{u}_{t}^{l}}^{T}\mathbf{e}_{i}^{l}\right),\tag{5}$$

where $N$ denotes the total number of experts, FFN$_i$(·) is the $i$-th expert FFN, $g_{i,t}$ denotes the gate value for the $i$-th expert, $s_{i,t}$ denotes the token-to-expert affinity, Topk(·, $K$) denotes the set comprising the $K$ highest affinity scores among those calculated for the $t$-th token and all $N$ experts, and $\mathbf{e}_{i}^{l}$ is the centroid of the $i$-th expert in the $l$-th layer. Note that $g_{i,t}$ is sparse, indicating that only $K$ out of $N$ gate values are nonzero. This sparsity property ensures computational efficiency within an MoE layer, i.e., each token will be assigned to and computed in only $K$ experts. Also, in the above formulations, we omit the layer normalization operation for brevity.
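To make the routing in Equations (3)–(5) concrete, the following is a minimal single-token NumPy sketch of a top-$K$ gated MoE layer. The names (`top_k_moe_layer`, `expert_centroids`) are illustrative assumptions, not code from the paper; real implementations batch tokens and dispatch them to experts in parallel.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def top_k_moe_layer(u, experts, expert_centroids, k):
    """Sketch of Eq. (3)-(5): route one token's hidden state u to the
    k experts with the highest token-to-expert affinity."""
    s = softmax(expert_centroids @ u)      # affinities s_{i,t} (Eq. 5)
    top = np.argsort(s)[-k:]               # nonzero gates (Eq. 4)
    h = u.copy()                           # residual term in Eq. (3)
    for i in top:
        h = h + s[i] * experts[i](u)       # g_{i,t} * FFN_i(u_t)
    return h

# toy usage: N = 4 experts, hidden size 8, each "expert" is a tiny 2-layer FFN
rng = np.random.default_rng(0)
d, N, k = 8, 4, 2
weights = [(rng.standard_normal((16, d)) * 0.1, rng.standard_normal((d, 16)) * 0.1) for _ in range(N)]
experts = [lambda x, W1=W1, W2=W2: W2 @ np.maximum(W1 @ x, 0.0) for W1, W2 in weights]
expert_centroids = rng.standard_normal((N, d))
print(top_k_moe_layer(rng.standard_normal(d), experts, expert_centroids, k).shape)
```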
![4_image_0.png](4_image_0.png)

## 3. **DeepSeekMoE Architecture**

On top of the generic MoE architecture outlined in Section 2, we introduce DeepSeekMoE, which is specifically designed to exploit the potential of expert specialization. As illustrated in Figure 2, our architecture incorporates two principal strategies: fine-grained expert segmentation and shared expert isolation. Both of these strategies are designed to elevate the level of expert specialization.

## 3.1. **Fine-Grained Expert Segmentation**

In scenarios where the number of experts is limited, tokens assigned to a particular expert will be more likely to cover diverse types of knowledge. As a consequence, the designated expert will tend to learn vastly different types of knowledge in its parameters, which are hard to utilize simultaneously. However, if each token can be routed to more experts, diverse knowledge will gain the potential to be decomposed and learned in different experts respectively. In this context, each expert can still retain a high level of expert specialization, contributing to a more focused knowledge distribution across experts.

In pursuit of this goal, while maintaining a consistent number of expert parameters and computational cost, we segment the experts with a finer grain. The finer expert segmentation enables a more flexible and adaptable combination of activated experts. To be specific, on top of a typical MoE architecture shown in Figure 2(a), we segment each expert FFN into $m$ smaller experts by reducing the FFN intermediate hidden dimension to $\frac{1}{m}$ times its original size. Since each expert becomes smaller, in response, we also increase the number of activated experts to $m$ times to keep the same computation cost, as illustrated in Figure 2(b). With the fine-grained expert segmentation, the output of an MoE layer can be expressed as:

$$\mathbf{h}_{t}^{l}=\sum_{i=1}^{mN}\left(g_{i,t}\operatorname{FFN}_{i}\left(\mathbf{u}_{t}^{l}\right)\right)+\mathbf{u}_{t}^{l},\tag{6}$$
$$g_{i,t}=\begin{cases}s_{i,t},&s_{i,t}\in\operatorname{Topk}(\{s_{j,t}|1\leqslant j\leqslant mN\},mK),\\ 0,&\text{otherwise},\end{cases}\tag{7}$$
$$s_{i,t}=\operatorname{Softmax}_{i}\left({\mathbf{u}_{t}^{l}}^{T}\mathbf{e}_{i}^{l}\right),\tag{8}$$

where the total number of expert parameters is equal to $N$ times the number of parameters in a standard FFN, and $mN$ denotes the total number of fine-grained experts. With the fine-grained expert segmentation strategy, the number of nonzero gates will also increase to $mK$.

From a combinatorial perspective, the fine-grained expert segmentation strategy substantially enhances the combinatorial flexibility of activated experts. As an illustrative example, we consider the case where $N = 16$. A typical top-2 routing strategy can yield $\binom{16}{2} = 120$ possible combinations. By contrast, if each expert is split into 4 smaller experts, the fine-grained routing strategy can yield $\binom{64}{8} = 4{,}426{,}165{,}368$ potential combinations. The surge in combinatorial flexibility enhances the potential for achieving more accurate and targeted knowledge acquisition.
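As a quick sanity check of the combination counts above, the two binomial coefficients can be reproduced directly (a worked example, not code from the paper):

```python
from math import comb

# top-2 routing over 16 experts vs. top-8 routing over 64 fine-grained experts
print(comb(16, 2))   # 120
print(comb(64, 8))   # 4426165368
```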
## 3.2. **Shared Expert Isolation**

With a conventional routing strategy, tokens assigned to different experts may necessitate some common knowledge or information. As a result, multiple experts may converge in acquiring shared knowledge in their respective parameters, thereby resulting in redundancy in expert parameters. However, if there are shared experts dedicated to capturing and consolidating common knowledge across varying contexts, the parameter redundancy among other routed experts will be alleviated. This alleviation of redundancy will contribute to a more parameter-efficient model with more specialized experts.

Towards this objective, in addition to the fine-grained expert segmentation strategy, we further isolate $K_s$ experts to serve as shared experts. Regardless of the router module, each token will be deterministically assigned to these shared experts. In order to maintain a constant computational cost, the number of activated experts among the other routed experts will be decreased by $K_s$, as depicted in Figure 2(c). With the shared expert isolation strategy integrated, an MoE layer in the complete DeepSeekMoE architecture is formulated as follows:

$$\mathbf{h}_{t}^{l}=\sum_{i=1}^{K_{s}}\operatorname{FFN}_{i}\left(\mathbf{u}_{t}^{l}\right)+\sum_{i=K_{s}+1}^{mN}\left(g_{i,t}\operatorname{FFN}_{i}\left(\mathbf{u}_{t}^{l}\right)\right)+\mathbf{u}_{t}^{l},\tag{9}$$
$$g_{i,t}=\begin{cases}s_{i,t},&s_{i,t}\in\operatorname{Topk}(\{s_{j,t}|K_{s}+1\leqslant j\leqslant mN\},mK-K_{s}),\\ 0,&\text{otherwise},\end{cases}\tag{10}$$
$$s_{i,t}=\operatorname{Softmax}_{i}\left({\mathbf{u}_{t}^{l}}^{T}\mathbf{e}_{i}^{l}\right).\tag{11}$$

Finally, in DeepSeekMoE, the number of shared experts is $K_s$, the total number of routed experts is $mN - K_s$, and the number of nonzero gates is $mK - K_s$.

It is worth noting that the prototype of shared expert isolation can be credited to Rajbhandari et al. (2022). The key distinction lies in the fact that they derive this strategy from an engineering perspective, while we approach it from an algorithmic standpoint.
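The following sketch illustrates Equations (9)–(11): the shared experts are always applied, while the remaining gates select among the fine-grained routed experts. It is a minimal single-token illustration under assumed shapes and names, not the training implementation used in the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def deepseekmoe_layer(u, shared_experts, routed_experts, routed_centroids, n_activated_routed):
    """Eq. (9)-(11): K_s shared experts are always active; the token is additionally
    routed to (mK - K_s) of the remaining fine-grained routed experts."""
    h = u.copy()                                   # residual term u_t^l
    for ffn in shared_experts:                     # first sum in Eq. (9)
        h = h + ffn(u)
    s = softmax(routed_centroids @ u)              # affinities s_{i,t} (Eq. 11)
    for i in np.argsort(s)[-n_activated_routed:]:  # nonzero gates (Eq. 10)
        h = h + s[i] * routed_experts[i](u)
    return h

# toy usage: 2 shared experts plus 6 activated out of 64 routed experts (sizes are illustrative)
rng = np.random.default_rng(0)
d = 8

def make_ffn():
    W = rng.standard_normal((d, d)) * 0.05
    return lambda x: W @ np.maximum(x, 0.0)

shared = [make_ffn() for _ in range(2)]
routed = [make_ffn() for _ in range(64)]
print(deepseekmoe_layer(rng.standard_normal(d), shared, routed, rng.standard_normal((64, d)), 6).shape)
```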
+ +7 +where 1 is a hyper-parameter called expert-level balance factor, +′is equal to ( − ) and +′ +is equal to ( − ) for brevity. 1(·) denotes the indicator function. + +Device-Level Balance Loss. In addition to the expert-level balance loss, we introduce a devicelevel balance loss. When aiming to alleviate computation bottlenecks, it becomes unnecessary to enforce strict balance constraints at the expert level, because excessive constraints on load balance will compromise model performance. Instead, our primary objective is to ensure balanced computation across the devices. If we partition all routed experts into groups +{E1, E2, ..., E}, and deploy each group on a single device, the device-level balance loss is computed as follows: + +$$\mathcal{L}_{\text{DevBal}}=\alpha_{2}\sum_{i=1}^{D}f_{i}^{\prime}P_{i}^{\prime},$$ $$f_{i}^{\prime}=\frac{1}{|\mathcal{E}_{i}|}\sum_{j\in\mathcal{E}_{i}}f_{j},$$ $$P_{i}^{\prime}=\sum_{j\in\mathcal{E}_{i}}P_{j},$$ + +(15) $$\begin{array}{l}\small\text{(16)}\end{array}$$ = (17) . + +where 2 is a hyper-parameter called device-level balance factor. In practice, we set a small expert-level balance factor to mitigate the risk of routing collapse, and meanwhile set a larger device-level balance factor to promote balanced computation across the devices. + +## 4. **Validation Experiments** 4.1. **Experimental Setup** 4.1.1. Training Data And Tokenization + +Our training data is sampled from a large-scale multilingual corpus created by DeepSeek-AI. The corpus primarily focuses on English and Chinese but also encompasses other languages. It is derived from diverse sources, including web text, mathematical material, coding scripts, published literature, and various other textual materials. For the purpose of validation experiments, we sample a subset containing 100B tokens from the corpus to train our models. For tokenization, we utilize the HuggingFace Tokenizer2tools to train byte pair encoding (BPE) (Sennrich et al., +2016) tokenizers on a smaller subset of the training corpus. In the validation experiments, we prepare a tokenizer with a vocabulary size of 8K, and the vocabulary size will be scaled up when training larger models. + +## 4.1.2. Infrastructures + +We conduct experiments based on HAI-LLM (High-Flyer, 2023), an efficient and light-weight training framework which integrates multiple parallelism strategies, including tensor parallelism (Korthikanti et al., 2023; Narayanan et al., 2021; Shoeybi et al., 2019), ZeRO data parallelism (Rajbhandari et al., 2020), PipeDream pipeline parallelism (Harlap et al., 2018), and more specifically, expert parallelism (Lepikhin et al., 2021) by combining data and tensor parallelism. + +In order to optimize performance, we develop GPU kernels with CUDA and Triton (Tillet et al., +2019) for gating algorithms and fusing computations across linear layers in different experts. + +All experiments are carried out on clusters equipped with NVIDIA A100 or H800 GPUs. + +Each node in the A100 cluster contains 8 GPUs connected pairwise via the NVLink bridge. + +The H800 cluster also features 8 GPUs per node, interconnected using NVLink and NVSwitch within nodes. For both A100 and H800 clusters, InfiniBand interconnects are utilized to facilitate communication across nodes. + +## 4.1.3. Hyper-Parameters + +Model Settings. In the validation experiments, we set the number of Transformer layers to 9 and the hidden dimension to 1280. 
## 4. **Validation Experiments**

## 4.1. **Experimental Setup**

## 4.1.1. Training Data And Tokenization

Our training data is sampled from a large-scale multilingual corpus created by DeepSeek-AI. The corpus primarily focuses on English and Chinese but also encompasses other languages. It is derived from diverse sources, including web text, mathematical material, coding scripts, published literature, and various other textual materials. For the purpose of validation experiments, we sample a subset containing 100B tokens from the corpus to train our models. For tokenization, we utilize the HuggingFace Tokenizer tools to train byte pair encoding (BPE) (Sennrich et al., 2016) tokenizers on a smaller subset of the training corpus. In the validation experiments, we prepare a tokenizer with a vocabulary size of 8K, and the vocabulary size will be scaled up when training larger models.

## 4.1.2. Infrastructures

We conduct experiments based on HAI-LLM (High-Flyer, 2023), an efficient and light-weight training framework which integrates multiple parallelism strategies, including tensor parallelism (Korthikanti et al., 2023; Narayanan et al., 2021; Shoeybi et al., 2019), ZeRO data parallelism (Rajbhandari et al., 2020), PipeDream pipeline parallelism (Harlap et al., 2018), and more specifically, expert parallelism (Lepikhin et al., 2021) by combining data and tensor parallelism. In order to optimize performance, we develop GPU kernels with CUDA and Triton (Tillet et al., 2019) for gating algorithms and fusing computations across linear layers in different experts.

All experiments are carried out on clusters equipped with NVIDIA A100 or H800 GPUs. Each node in the A100 cluster contains 8 GPUs connected pairwise via the NVLink bridge. The H800 cluster also features 8 GPUs per node, interconnected using NVLink and NVSwitch within nodes. For both A100 and H800 clusters, InfiniBand interconnects are utilized to facilitate communication across nodes.

## 4.1.3. Hyper-Parameters

Model Settings. In the validation experiments, we set the number of Transformer layers to 9 and the hidden dimension to 1280. We employ the multi-head attention mechanism with a total of 10 attention heads, where each head has a dimension of 128. For initialization, all learnable parameters are randomly initialized with a standard deviation of 0.006. We substitute all FFNs with MoE layers, and ensure that the total number of expert parameters equals 16 times that of a standard FFN. Additionally, we keep the activated expert parameters, including shared expert parameters and activated routed expert parameters, as 2 times that of a standard FFN. Under this configuration, each MoE model has approximately 2B total parameters, with the number of activated parameters around 0.3B.

Training Settings. We employ the AdamW optimizer (Loshchilov and Hutter, 2019) with hyper-parameters set to $\beta_1 = 0.9$, $\beta_2 = 0.95$, and weight_decay = 0.1. The learning rate is scheduled using a warmup-and-step-decay strategy. Initially, the learning rate linearly increases from 0 to the maximum value during the first 2K steps. Subsequently, the learning rate is multiplied by 0.316 at 80% of the training steps, and again by 0.316 at 90% of the training steps. The maximum learning rate for validation experiments is set to $1.08 \times 10^{-3}$, and the gradient clipping norm is set to 1.0. The batch size is set to 2K, and with a maximum sequence length of 2K, each training batch contains 4M tokens. Correspondingly, the total number of training steps is set to 25,000 to achieve 100B training tokens. Due to the abundance of training data, we do not use dropout during training. Given the relatively small model size, all parameters, including expert parameters, are deployed on a single GPU device to avoid unbalanced computation. Correspondingly, we do not drop any tokens during training and do not employ the device-level balance loss. In order to prevent routing collapse, we set an expert-level balance factor of 0.01.

For readability, we also present an overview table of hyper-parameters for DeepSeekMoE across different sizes in Appendix A.

## 4.1.4. Evaluation Benchmarks

We conduct evaluations on a wide range of benchmarks covering various types of tasks. We list the benchmarks as follows.

Language Modeling. For language modeling, we evaluate the models on the test set of Pile (Gao et al., 2020), and the evaluation metric is the cross-entropy loss.

Language Understanding and Reasoning. For language understanding and reasoning, we consider HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020), ARC-challenge and ARC-easy (Clark et al., 2018). The evaluation metric for these tasks is accuracy.

Reading Comprehension. For reading comprehension, we use RACE-high and RACE-middle (Lai et al., 2017), and the evaluation metric is accuracy.

Code Generation. For code generation, we evaluate the models on HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021). The evaluation metric is Pass@1, which represents the pass rate for only one generation attempt.

Closed-Book Question Answering. For closed-book question answering, we consider TriviaQA (Joshi et al., 2017) and NaturalQuestions (Kwiatkowski et al., 2019). The evaluation metric is the Exact Match (EM) rate.

## 4.2. **Evaluations**

Baselines. Including DeepSeekMoE, we compare five models for validation experiments. Dense denotes a standard dense Transformer language model with 0.2B total parameters.
**Hash** +Layer (Roller et al., 2021) is an MoE architecture based on top-1 hash routing, with 2.0B total parameters and 0.2B activated parameters, aligned with the dense baseline. **Switch Transformer** (Fedus et al., 2021) is another well-known MoE architecture based on top-1 learnable routing, with total parameters and activated parameters the same as Hash Layer. **GShard** (Lepikhin et al., 2021) employs a top-2 learnable routing strategy, with 2.0B total parameters and 0.3B +activated parameters since one more expert is activated compared to top-1 routing methods. DeepSeekMoE has 1 shared expert and 63 routed experts, where each expert is 0.25 times the size of a standard FFN. Including DeepSeekMoE, all compared models share the same training corpus and training hyper-parameters. All compared MoE models have the same number of total parameters, and GShard has the same number of activated parameters as DeepSeekMoE. + +Results. We present the evaluation results in Table 1. For all demonstrated models, we report the final evaluation results after training on 100B tokens. From the table, we make the following observations: (1) With sparse architectures and more total parameters, Hash Layer + +| Metric | # Shot | Dense | Hash Layer | Switch | GShard | DeepSeekMoE | +|-----------------------|----------|---------|--------------|----------|----------|---------------| +| # Total Params | N/A | 0.2B | 2.0B | 2.0B | 2.0B | 2.0B | +| # Activated Params | N/A | 0.2B | 0.2B | 0.2B | 0.3B | 0.3B | +| FLOPs per 2K Tokens | N/A | 2.9T | 2.9T | 2.9T | 4.3T | 4.3T | +| # Training Tokens | N/A | 100B | 100B | 100B | 100B | 100B | +| Pile (Loss) | N/A | 2.060 | 1.932 | 1.881 | 1.867 | 1.808 | +| HellaSwag (Acc.) | 0-shot | 38.8 | 46.2 | 49.1 | 50.5 | 54.8 | +| PIQA (Acc.) | 0-shot | 66.8 | 68.4 | 70.5 | 70.6 | 72.3 | +| ARC-easy (Acc.) | 0-shot | 41.0 | 45.3 | 45.9 | 43.9 | 49.4 | +| ARC-challenge (Acc.) | 0-shot | 26.0 | 28.2 | 30.2 | 31.6 | 34.3 | +| RACE-middle (Acc.) | 5-shot | 38.8 | 38.8 | 43.6 | 42.1 | 44.0 | +| RACE-high (Acc.) | 5-shot | 29.0 | 30.0 | 30.9 | 30.4 | 31.7 | +| HumanEval (Pass@1) | 0-shot | 0.0 | 1.2 | 2.4 | 3.7 | 4.9 | +| MBPP (Pass@1) | 3-shot | 0.2 | 0.6 | 0.4 | 0.2 | 2.2 | +| TriviaQA (EM) | 5-shot | 4.9 | 6.5 | 8.9 | 10.2 | 16.6 | +| NaturalQuestions (EM) | 5-shot | 1.4 | 1.4 | 2.5 | 3.2 | 5.7 | + +Table 1 | Evaluation results for validation experiments. **Bold** font indicates the best. Compared with other MoE architectures, DeepSeekMoE exhibits a substantial performance advantage. + +and Switch Transformer achieve significantly stronger performance than the dense baseline with the same number of activated parameters. (2) Compared with Hash Layer and Switch Transformer, GShard has more activated parameters and achieves slightly better performance than Switch Transformer. (3) With the same number of total parameters and activated parameters, DeepSeekMoE demonstrates overwhelming advantages over GShard. These results showcase the superiority of our DeepSeekMoE architecture within the existing landscape of MoE architectures. + +## 4.3. **Deepseekmoe Aligns Closely With The Upper Bound Of Moe Models** + +We have demonstrated that DeepSeekMoE outperforms the dense baseline and other MoE architectures. In order to provide a more precise understanding of the performance of DeepSeekMoE, +we compare it with larger baselines with more total parameters or activated parameters. 
The comparisons enable us to estimate the required model size of GShard or dense baselines to achieve equivalent performance to DeepSeekMoE.

Comparison with GShard×1.5. Table 2 shows the comparison between DeepSeekMoE and a larger GShard model with 1.5 times the expert size, which results in 1.5 times both expert parameters and expert computation. Overall, we observe that DeepSeekMoE achieves comparable performance with GShard×1.5, underscoring the significant advantage inherent in the DeepSeekMoE architecture. In addition to the comparison with GShard×1.5, we also show the comparison with GShard×1.2 in Appendix B.

Furthermore, we increase the number of total parameters of DeepSeekMoE to 13.3B and compare it with GShard×1.2 and GShard×1.5 with 15.9B and 19.8B total parameters, respectively. We find that at a larger scale, DeepSeekMoE can even outperform GShard×1.5 distinctly. These results are also provided in Appendix B.

| Metric | # Shot | GShard×1.5 | Dense×16 | DeepSeekMoE |
|---|---|---|---|---|
| Relative Expert Size | N/A | 1.5 | 1 | 0.25 |
| # Experts | N/A | 0 + 16 | 16 + 0 | 1 + 63 |
| # Activated Experts | N/A | 0 + 2 | 16 + 0 | 1 + 7 |
| # Total Expert Params | N/A | 2.83B | 1.89B | 1.89B |
| # Activated Expert Params | N/A | 0.35B | 1.89B | 0.24B |
| FLOPs per 2K Tokens | N/A | 5.8T | 24.6T | 4.3T |
| # Training Tokens | N/A | 100B | 100B | 100B |
| Pile (Loss) | N/A | 1.808 | 1.806 | 1.808 |
| HellaSwag (Acc.) | 0-shot | 54.4 | 55.1 | 54.8 |
| PIQA (Acc.) | 0-shot | 71.1 | 71.9 | 72.3 |
| ARC-easy (Acc.) | 0-shot | 47.3 | 51.9 | 49.4 |
| ARC-challenge (Acc.) | 0-shot | 34.1 | 33.8 | 34.3 |
| RACE-middle (Acc.) | 5-shot | 46.4 | 46.3 | 44.0 |
| RACE-high (Acc.) | 5-shot | 32.4 | 33.0 | 31.7 |
| HumanEval (Pass@1) | 0-shot | 3.0 | 4.3 | 4.9 |
| MBPP (Pass@1) | 3-shot | 2.6 | 2.2 | 2.2 |
| TriviaQA (EM) | 5-shot | 15.7 | 16.5 | 16.6 |
| NaturalQuestions (EM) | 5-shot | 4.7 | 6.3 | 5.7 |

Table 2 | Comparisons among DeepSeekMoE, larger GShard models, and larger dense models. In the line of "\# Experts", "a + b" denotes a shared experts and b routed experts. In the line of "\# Activated Experts", "a + b" denotes a activated shared experts and b activated routed experts. DeepSeekMoE achieves comparable performance with a GShard model containing 1.5 times expert parameters and computation. In addition, DeepSeekMoE nearly approaches the performance of a dense model with 16 times FFN parameters, which sets the upper bound for MoE models in terms of the model capacity.

Comparison with Dense×16. Table 2 also shows the comparison between DeepSeekMoE and larger dense models. For a fair comparison, we do not use the widely used ratio (1:2) between the attention and FFN parameters. Instead, we configure 16 shared experts where each expert has the same number of parameters as a standard FFN. This architecture mimics a dense model with 16 times standard FFN parameters. From the table, we find that DeepSeekMoE nearly approaches the performance of Dense×16, which sets the strict upper bound of MoE models in terms of the model capacity. These results suggest that, at least at the scale of about 2B parameters and 100B training tokens, the performance of DeepSeekMoE aligns closely with the theoretical upper bound of MoE models. Also, we provide additional comparisons with Dense×4 in Appendix B.
## 4.4. **Ablation Studies**

In order to substantiate the effectiveness of the fine-grained expert segmentation and shared expert isolation strategies, we conduct ablation studies for DeepSeekMoE and present the results in Figure 3. For a fair comparison, we ensure all models included in the comparison have the same number of total parameters and activated parameters.

![11_image_0.png](11_image_0.png)

Shared Expert Isolation. In order to evaluate the influence of the shared expert isolation strategy, we isolate one expert as the shared one based on GShard. From Figure 3, we observe that compared with GShard, the intentional isolation of a shared expert yields improved performance across a majority of benchmarks. These results support the proposition that the shared expert isolation strategy contributes to a stronger model performance.

Fine-Grained Expert Segmentation. In order to assess the effectiveness of the fine-grained expert segmentation strategy, we conduct a more detailed comparison by further segmenting the experts into a finer grain. To be specific, we segment each expert into 2 or 4 smaller experts, resulting in a total of 32 (1 shared + 31 routed) or 64 (1 shared + 63 routed) experts. Figure 3 reveals a consistent trend that the continuous refinement of expert segmentation granularity corresponds to a continuous enhancement in overall model performance. These findings provide empirical substantiation for the effectiveness of the fine-grained expert segmentation strategy.

Ratios Between Shared and Routed Experts. In addition, we investigate the best ratio of shared experts and routed experts. Based on the finest granularity with 64 total experts and keeping the number of total experts and activated experts constant, we attempt to isolate 1, 2, and 4 experts as shared ones. We find that different ratios of the shared experts and routed experts do not significantly impact the performance, and 1, 2, and 4 shared experts achieve a Pile loss of 1.808, 1.806, and 1.811, respectively. Considering that the ratio of 1:3 yields a marginally better Pile loss, when scaling up DeepSeekMoE, we keep the ratio between shared experts and activated routed experts as 1:3.

## 4.5. **Analysis On Expert Specialization**

In this section, we conduct an empirical analysis on the expert specialization of DeepSeekMoE 2B. DeepSeekMoE 2B in this section refers to the model reported in Table 1, i.e., comprising 2.0B total parameters, with 1 shared expert and 7 out of 63 routed experts being activated.

![12_image_0.png](12_image_0.png)

DeepSeekMoE Exhibits Lower Redundancy Among Routed Experts. In order to assess the redundancy among routed experts, we disable varying ratios of top routed experts and evaluate the Pile loss. To be specific, for each token, we mask a certain ratio of experts with the highest routing probability, and then select top-K experts from the remaining routed experts. For fairness, we compare DeepSeekMoE with GShard×1.5 since they have the same Pile loss when no experts are disabled. As shown in Figure 4, compared with GShard×1.5, DeepSeekMoE is more sensitive to the disabling of top routed experts. This sensitivity suggests a lower level of parameter redundancy in DeepSeekMoE, since each routed expert is more irreplaceable. In contrast, GShard×1.5 exhibits greater redundancy among its expert parameters, so it can buffer the performance drop when top routed experts are disabled.
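A small sketch of the masking procedure used in this redundancy analysis (an illustrative NumPy reimplementation under assumed names, not the paper's evaluation code): the experts with the highest routing probabilities are disabled for each token, and the top-K selection is then performed over the remaining routed experts.

```python
import numpy as np

def route_with_top_experts_disabled(affinities, disable_ratio, k):
    """For one token: mask the `disable_ratio` fraction of routed experts with the
    highest affinity, then pick the top-k experts among the remaining ones."""
    n_experts = affinities.shape[0]
    n_disabled = int(np.ceil(disable_ratio * n_experts))
    order = np.argsort(affinities)              # ascending
    masked = affinities.copy()
    if n_disabled > 0:
        masked[order[-n_disabled:]] = -np.inf   # disable the top routed experts
    return np.argsort(masked)[-k:]              # top-k among the remaining experts

# toy usage: 63 routed experts, disable the top 10%, then route to 7 experts
rng = np.random.default_rng(0)
print(route_with_top_experts_disabled(rng.random(63), disable_ratio=0.1, k=7))
```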
Shared Experts Are Irreplaceable by Routed Experts. In order to investigate the role of the shared expert in DeepSeekMoE, we disable it and activate one more routed expert. The evaluation on Pile shows a significant increase in the Pile loss, rising from 1.808 to 2.414, even though we maintain the same computational cost. This result highlights the crucial function of the shared expert and indicates that the shared expert captures fundamental and essential knowledge not shared with routed experts, making it irreplaceable by routed ones.

DeepSeekMoE Acquires Knowledge More Accurately. In order to validate our claim that higher flexibility in combining activated experts contributes to a more accurate and targeted knowledge acquisition, we investigate whether DeepSeekMoE can acquire requisite knowledge with fewer activated experts. To be specific, we vary the number of activated routed experts from 3 to 7 and evaluate the resulting Pile loss. As demonstrated in Figure 5, even with only 4 routed experts activated, DeepSeekMoE achieves a Pile loss comparable with GShard. This observation supports the proposition that DeepSeekMoE can acquire requisite knowledge more accurately and efficiently.

![13_image_0.png](13_image_0.png)

![13_image_1.png](13_image_1.png)

Encouraged by these findings, in order to validate the expert specialization and accurate knowledge acquisition of DeepSeekMoE more rigorously, we train a new model from scratch. This model comprises 1 shared expert and 63 routed experts, where only 3 routed experts are activated. The evaluation results shown in Figure 6 demonstrate that, even with the same total expert parameters and only half of the activated expert parameters, DeepSeekMoE still outperforms GShard. This highlights the ability of DeepSeekMoE to leverage expert parameters more efficiently, i.e., the proportion of effective parameters in the activated experts is much higher than that of GShard.

## 5. **Scaling Up To DeepSeekMoE 16B**

With the DeepSeekMoE architecture, we scale up our MoE model to a larger scale with 16B total parameters and train it on 2T tokens. Our results demonstrate that compared with LLaMA2 7B, DeepSeekMoE 16B achieves superior performance with only about 40% of computations.

## 5.1. **Experimental Setup**

## 5.1.1. Training Data And Tokenization

We sample the training data from the same corpus as described in Section 4.1.1. Different from the validation experiments, we sample a larger amount of data with 2T tokens, aligning with the number of training tokens of LLaMA2 7B. We also use the HuggingFace Tokenizer tools to train a BPE tokenizer, but the vocabulary size is set to 100K for DeepSeekMoE 16B.

## 5.1.2. Hyper-Parameters

Model Settings. For DeepSeekMoE 16B, we set the number of Transformer layers to 28 and the hidden dimension to 2048. We employ the multi-head attention mechanism with a total of 16 attention heads, where each head has a dimension of 128. As for initialization, all learnable parameters are randomly initialized with a standard deviation of 0.006. We substitute all FFNs except for the first layer with MoE layers, since we observe that the load balance status converges especially slowly for the first layer. Each MoE layer consists of 2 shared experts and 64 routed experts, where each expert is 0.25 times the size of a standard FFN. Each token will be routed to these 2 shared experts and 6 out of 64 routed experts. An even finer expert segmentation granularity is not employed due to the potential reduction in computational efficiency associated with excessively small expert sizes. At a larger scale over 16B, a finer granularity can still be employed. Under our configuration, DeepSeekMoE 16B has approximately 16.4B total parameters, with the number of activated parameters around 2.8B.
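The model settings above can be summarized in a small configuration sketch. This is a hypothetical dataclass for illustration only; the field names are not from the paper's codebase, and the values are taken directly from the Model Settings paragraph.

```python
from dataclasses import dataclass

@dataclass
class DeepSeekMoE16BConfig:
    num_layers: int = 28
    hidden_dim: int = 2048
    num_heads: int = 16
    head_dim: int = 128
    init_std: float = 0.006
    first_layer_dense: bool = True      # the first FFN is kept dense
    num_shared_experts: int = 2
    num_routed_experts: int = 64
    num_activated_routed: int = 6
    expert_size_ratio: float = 0.25     # each expert is 0.25x a standard FFN

cfg = DeepSeekMoE16BConfig()
print(cfg.num_shared_experts + cfg.num_activated_routed, "experts computed per token")
```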
Training Settings. We employ the AdamW optimizer (Loshchilov and Hutter, 2019) with hyper-parameters set to $\beta_1 = 0.9$, $\beta_2 = 0.95$, and weight_decay = 0.1. The learning rate is also scheduled using a warmup-and-step-decay strategy. Initially, the learning rate linearly increases from 0 to the maximum value during the first 2K steps. Subsequently, the learning rate is multiplied by 0.316 at 80% of the training steps, and again by 0.316 at 90% of the training steps. The maximum learning rate for DeepSeekMoE 16B is set to $4.2 \times 10^{-4}$, and the gradient clipping norm is set to 1.0. The batch size is set to 4.5K, and with a maximum sequence length of 4K, each training batch contains 18M tokens. Correspondingly, the total number of training steps is set to 106,449 to achieve 2T training tokens. Due to the abundance of training data, we do not use dropout during training. We leverage pipeline parallelism to deploy different layers of a model on different devices, and for each layer, all the experts will be deployed on the same device. Therefore, we also do not drop any tokens during training and do not employ the device-level balance loss. In order to prevent routing collapse, we set a quite small expert-level balance factor of 0.001, because we find that under our parallelization strategy, a higher expert-level balance factor cannot increase the computation efficiency, but instead, it will compromise the model performance.

## 5.1.3. Evaluation Benchmarks

In addition to the benchmarks used in the validation experiments, we incorporate additional benchmarks for a more comprehensive evaluation. We introduce the distinctions from the benchmarks used in validation experiments as follows.

Language Modeling. For language modeling, we also evaluate the models on the test set of Pile (Gao et al., 2020). Since the tokenizer used in DeepSeekMoE 16B is different from that used in LLaMA2 7B, we use bits per byte (BPB) as the evaluation metric for a fair comparison.

Reading Comprehension. For reading comprehension, we additionally consider DROP (Dua et al., 2019). The evaluation metric is the Exact Match (EM) rate.

Math Reasoning. For math reasoning, we additionally incorporate GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), using EM as the evaluation metric.

Multi-Subject Multiple-Choice. For multi-subject multiple-choice, we additionally evaluate the models on MMLU (Hendrycks et al., 2020). The evaluation metric is accuracy.

Disambiguation. For disambiguation, we additionally consider WinoGrande (Sakaguchi et al., 2019) and the evaluation metric is accuracy.

Chinese Benchmarks. Since DeepSeekMoE 16B is pretrained on a bilingual corpus, we also evaluate it on four Chinese benchmarks. CLUEWSC (Xu et al., 2020) is a Chinese disambiguation benchmark. CEval (Huang et al., 2023) and CMMLU (Li et al., 2023) are two Chinese multi-subject multiple-choice benchmarks with a similar form to MMLU. CHID (Zheng et al., 2019) is a Chinese idiom completion benchmark, aiming to evaluate the understanding of Chinese culture. The evaluation metrics for the aforementioned Chinese benchmarks are accuracy or EM.
+ +Open LLM Leaderboard. We evaluate all of the aforementioned benchmarks based on our internal evaluation framework. In order to compare DeepSeekMoE 16B with open source models fairly and conveniently, we additionally evaluate DeepSeekMoE 16B on the Open LLM Leaderboard. The Open LLM Leaderboard is a public leaderboard supported by HuggingFace, it consists of six tasks: ARC (Clark et al., 2018), HellaSwag (Zellers et al., 2019), +MMLU (Hendrycks et al., 2020), TruthfulQA (Lin et al., 2022), Winogrande (Sakaguchi et al., +2019), and GSM8K (Cobbe et al., 2021). + +## 5.2. **Evaluations** 5.2.1. Internal Comparison With Deepseek 7B + +We first conduct an internal comparison between DeepSeekMoE 16B and DeepSeek 7B (DeepSeekAI, 2024), a dense language model with 6.9B parameters. Ensuring fairness, both models are trained on the same corpus with 2T tokens. This enables an accurate assessment of the effectiveness of our MoE architecture, independent of the influence of the training data. + +| Metric | # Shot | DeepSeek 7B (Dense) | DeepSeekMoE 16B | +|-----------------------|----------|-----------------------|-------------------| +| # Total Params | N/A | 6.9B | 16.4B | +| # Activated Params | N/A | 6.9B | 2.8B | +| FLOPs per 4K Tokens | N/A | 183.5T | 74.4T | +| # Training Tokens | N/A | 2T | 2T | +| Pile (BPB) | N/A | 0.75 | 0.74 | +| HellaSwag (Acc.) | 0-shot | 75.4 | 77.1 | +| PIQA (Acc.) | 0-shot | 79.2 | 80.2 | +| ARC-easy (Acc.) | 0-shot | 67.9 | 68.1 | +| ARC-challenge (Acc.) | 0-shot | 48.1 | 49.8 | +| RACE-middle (Acc.) | 5-shot | 63.2 | 61.9 | +| RACE-high (Acc.) | 5-shot | 46.5 | 46.4 | +| DROP (EM) | 1-shot | 34.9 | 32.9 | +| GSM8K (EM) | 8-shot | 17.4 | 18.8 | +| MATH (EM) | 4-shot | 3.3 | 4.3 | +| HumanEval (Pass@1) | 0-shot | 26.2 | 26.8 | +| MBPP (Pass@1) | 3-shot | 39.0 | 39.2 | +| TriviaQA (EM) | 5-shot | 59.7 | 64.8 | +| NaturalQuestions (EM) | 5-shot | 22.2 | 25.5 | +| MMLU (Acc.) | 5-shot | 48.2 | 45.0 | +| WinoGrande (Acc.) | 0-shot | 70.5 | 70.2 | +| CLUEWSC (EM) | 5-shot | 73.1 | 72.1 | +| CEval (Acc.) | 5-shot | 45.0 | 40.6 | +| CMMLU (Acc.) | 5-shot | 47.2 | 42.5 | +| CHID (Acc.) | 0-shot | 89.3 | 89.4 | + +Table 3 | Comparison between DeepSeek 7B and DeepSeekMoE 16B. **Bold** font indicates the best or near the best. With only 40.5% of computations, DeepSeekMoE 16B achieves comparable performance with DeepSeek 7B. + +The evaluation results are presented in Table 3, yielding the following observations: (1) On the whole, with about only 40% of the computations, DeepSeekMoE 16B achieves comparable performance with DeepSeek 7B. (2) DeepSeekMoE 16B exhibits notable strengths in language modeling and knowledge-intensive tasks such as Pile, HellaSwag, TriviaQA, and NaturalQuestions. Given that in an MoE model, FFN parameters are much heavier than attention parameters, these outcomes align with the proposition that FFNs in Transformers exhibit the capability for knowledge memorization (Dai et al., 2022a). (3) Compared with the excellent performance on other tasks, DeepSeekMoE exhibits limitations in addressing multiple-choice tasks. This inadequacy stems from the limited attention parameters in DeepSeekMoE 16B (DeepSeekMoE +16B has only about 0.5B attention parameters, while DeepSeek 7B has 2.5B attention parameters). + +Our earlier investigation on DeepSeek 7B reveals a positive correlation between the attention capacity and performance on multiple-choice tasks. 
For example, DeepSeek 7B MQA, which is equipped with the multi-query attention mechanism (Shazeer, 2019), also struggled in MMLUlike tasks. In addition, for a more comprehensive understanding of the training process of DeepSeekMoE 16B, we also provide the benchmark curves of DeepSeekMoE 16B and DeepSeek 7B (Dense) during training in Appendix C for reference. + +Critically, due to the modest number of parameters in DeepSeekMoE 16B, it enables singledevice deployment on a GPU with 40GB of memory. With appropriate operator optimizations, it can achieve nearly 2.5 times the inference speed of a 7B dense model. + +| Metric | # Shot | LLaMA2 7B | DeepSeekMoE 16B | +|-----------------------|----------|-------------|-------------------| +| # Total Params | N/A | 6.7B | 16.4B | +| # Activated Params | N/A | 6.7B | 2.8B | +| FLOPs per 4K Tokens | N/A | 187.9T | 74.4T | +| # Training Tokens | N/A | 2T | 2T | +| Pile (BPB) | N/A | 0.76 | 0.74 | +| HellaSwag (Acc.) | 0-shot | 75.6 | 77.1 | +| PIQA (Acc.) | 0-shot | 78.0 | 80.2 | +| ARC-easy (Acc.) | 0-shot | 69.1 | 68.1 | +| ARC-challenge (Acc.) | 0-shot | 49.0 | 49.8 | +| RACE-middle (Acc.) | 5-shot | 60.7 | 61.9 | +| RACE-high (Acc.) | 5-shot | 45.8 | 46.4 | +| DROP (EM) | 1-shot | 34.0 | 32.9 | +| GSM8K (EM) | 8-shot | 15.5 | 18.8 | +| MATH (EM) | 4-shot | 2.6 | 4.3 | +| HumanEval (Pass@1) | 0-shot | 14.6 | 26.8 | +| MBPP (Pass@1) | 3-shot | 21.8 | 39.2 | +| TriviaQA (EM) | 5-shot | 63.8 | 64.8 | +| NaturalQuestions (EM) | 5-shot | 25.5 | 25.5 | +| MMLU (Acc.) | 5-shot | 45.8 | 45.0 | +| WinoGrande (Acc.) | 0-shot | 69.6 | 70.2 | +| CLUEWSC (EM) | 5-shot | 64.0 | 72.1 | +| CEval (Acc.) | 5-shot | 33.9 | 40.6 | +| CMMLU (Acc.) | 5-shot | 32.6 | 42.5 | +| CHID (Acc.) | 0-shot | 37.9 | 89.4 | + +Table 4 | Comparison between LLaMA2 7B and DeepSeekMoE 16B. With only 39.6% of computations, DeepSeekMoE 16B outperforms LLaMA2 7B on the majority of benchmarks. + +## 5.2.2. Comparison With Open Source Models + +Internal Comparison with LLaMA2 7B. In the realm of open source models, we mainly compare DeepSeekMoE 16B with LLaMA2 7B (Touvron et al., 2023b), a well-known and strong open source language model with 6.7B parameters. Both DeepSeekMoE 16B and LLaMA2 7B are pretrained on 2T tokens. Compared with LLaMA2 7B, DeepSeekMoE has 245% of total parameters but only needs 39.6% of computations. The results on our internal benchmarks are presented in Table 4, leading to the following observations. (1) Among the evaluated benchmarks, with only about 40% of computations, DeepSeekMoE 16B outperforms LLaMA2 7B on the majority of benchmarks. (2) The math reasoning and code generation capabilities of DeepSeekMoE 16B +are stronger than LLaMA2 7B, attributed to the enriched presence of mathematical and coderelated text in our pretraining corpus. (3) Given the presence of Chinese texts in our pretraining corpus, DeepSeekMoE 16B exhibits a substantial performance advantage over LLaMA2 7B +on Chinese benchmarks. (4) Despite being trained on fewer English texts, DeepSeekMoE 16B +achieves comparable or better performance compared with LLaMA2 7B on English understanding or knowledge-intensive benchmarks, which demonstrates the exceptional capabilities of DeepSeekMoE 16B. + +Evaluation on Open LLM Leaderboard. Beyond our internal evaluations, we also evaluate DeepSeekMoE 16B on the Open LLM Leaderboard and compare it with other open source models. 
In addition to LLaMA2 7B, we take a broader set of open source models into consideration, including LLaMA 7B (Touvron et al., 2023a), Falcon 7B (Almazrouei et al., 2023), GPT-J 6B (Wang and Komatsuzaki, 2021), RedPajama-INCITE 7B and 3B (Together-AI, 2023), Open LLaMA 7B and 3B (Geng and Liu, 2023), OPT 2.7B (Zhang et al., 2022), Pythia 2.8B (Biderman et al., 2023), GPT-neo 2.7B (Black et al., 2021), and BLOOM 3B (Scao et al., 2022). The evaluation results, as presented in Figure 1, show that DeepSeekMoE 16B consistently outperforms models with similar activated parameters by a large margin. Moreover, it achieves comparable performance with LLaMA2 7B, which has approximately 2.5 times the activated parameters.

## 6. **Alignment For DeepSeekMoE 16B**

Previous research indicates that MoE models typically do not show significant gains from fine-tuning (Artetxe et al., 2022; Fedus et al., 2021). However, Shen et al. (2023) present findings suggesting that MoE models can indeed benefit from instruction tuning. In order to assess whether DeepSeekMoE 16B can benefit from fine-tuning, we conduct supervised fine-tuning to construct a chat model based on DeepSeekMoE 16B. The experimental results reveal that DeepSeekMoE Chat 16B also achieves comparable performance with LLaMA2 SFT 7B and DeepSeek Chat 7B.

## 6.1. **Experimental Setup**

Training Data. For training the chat model, we conduct supervised fine-tuning (SFT) on our in-house curated data, comprising 1.4M training examples. This dataset spans a broad range of categories including math, code, writing, question answering, reasoning, summarization, and more. The majority of our SFT training data is in English and Chinese, rendering the chat model versatile and applicable in bilingual scenarios.

Hyper-Parameters. During supervised fine-tuning, we set the batch size to 1024 examples and conduct training over 8 epochs using the AdamW optimizer (Loshchilov and Hutter, 2019). We employ a maximum sequence length of 4K, and pack the training examples as densely as possible until reaching the sequence length limit. We do not use dropout for supervised fine-tuning, and simply set a constant learning rate of $10^{-5}$ without incorporating any learning rate scheduling strategy.

Evaluation Benchmarks. For the evaluation of the chat models, we employ benchmarks similar to those used in Section 5.1.3, with the following adjustments: (1) We exclude Pile (Gao et al., 2020) since chat models are seldom employed for pure language modeling. (2) We exclude CHID (Zheng et al., 2019) due to the observed instability of results, hindering the derivation of solid conclusions. (3) We additionally include BBH (Suzgun et al., 2022) to provide a more comprehensive assessment of the reasoning ability of the chat models.
| Metric | # Shot | LLaMA2 SFT 7B | DeepSeek Chat 7B | DeepSeekMoE Chat 16B |
|---|---|---|---|---|
| # Total Params | N/A | 6.7B | 6.9B | 16.4B |
| # Activated Params | N/A | 6.7B | 6.9B | 2.8B |
| FLOPs per 4K Tokens | N/A | 187.9T | 183.5T | 74.4T |
| HellaSwag (Acc.) | 0-shot | 67.9 | 71.0 | 72.2 |
| PIQA (Acc.) | 0-shot | 76.9 | 78.4 | 79.7 |
| ARC-easy (Acc.) | 0-shot | 69.7 | 70.2 | 69.9 |
| ARC-challenge (Acc.) | 0-shot | 50.8 | 50.2 | 50.0 |
| BBH (EM) | 3-shot | 39.3 | 43.1 | 42.2 |
| RACE-middle (Acc.) | 5-shot | 63.9 | 66.1 | 64.8 |
| RACE-high (Acc.) | 5-shot | 49.6 | 50.8 | 50.6 |
| DROP (EM) | 1-shot | 40.0 | 41.7 | 33.8 |
| GSM8K (EM) | 0-shot | 63.4 | 62.6 | 62.2 |
| MATH (EM) | 4-shot | 13.5 | 14.7 | 15.2 |
| HumanEval (Pass@1) | 0-shot | 35.4 | 45.1 | 45.7 |
| MBPP (Pass@1) | 3-shot | 27.8 | 39.0 | 46.2 |
| TriviaQA (EM) | 5-shot | 60.1 | 59.5 | 63.3 |
| NaturalQuestions (EM) | 0-shot | 35.2 | 32.7 | 35.1 |
| MMLU (Acc.) | 0-shot | 50.0 | 49.7 | 47.2 |
| WinoGrande (Acc.) | 0-shot | 65.1 | 68.4 | 69.0 |
| CLUEWSC (EM) | 5-shot | 48.4 | 66.2 | 68.2 |
| CEval (Acc.) | 0-shot | 35.1 | 44.7 | 40.0 |
| CMMLU (Acc.) | 0-shot | 36.9 | 51.2 | 49.3 |

Table 5 | Comparison among LLaMA2 SFT 7B, DeepSeek Chat 7B and DeepSeekMoE Chat 16B, with all of these three models fine-tuned on the same SFT data. Compared with both 7B dense models, DeepSeekMoE Chat 16B still achieves comparable or better performance on the majority of benchmarks with only 40% of computations.

## 6.2. **Evaluations**

Baselines. In order to validate the potential of DeepSeekMoE 16B after alignment, we conduct supervised fine-tuning for LLaMA2 7B, DeepSeek 7B, and DeepSeekMoE 16B, where we utilize exactly the same fine-tuning data to ensure fairness. Correspondingly, we construct three chat models, including LLaMA2 SFT 7B3, DeepSeek Chat 7B, and DeepSeekMoE Chat 16B. Subsequently, we compare DeepSeekMoE Chat 16B with the other two dense chat models (with about 2.5 times the FLOPs) across a wide range of downstream tasks.

Results. The evaluation results are presented in Table 5. Our key observations include: (1) DeepSeekMoE Chat 16B, while consuming nearly 40% of computations, achieves comparable performance with 7B dense models across language understanding and reasoning (PIQA, ARC, BBH), machine reading comprehension (RACE), mathematical reasoning (GSM8K, MATH), and knowledge-intensive tasks (TriviaQA, NaturalQuestions). (2) On code generation tasks, DeepSeekMoE Chat 16B significantly outperforms LLaMA2 SFT 7B, demonstrating notable improvements on HumanEval and MBPP. In addition, it also surpasses DeepSeek Chat 7B. (3) On multiple-choice question answering benchmarks including MMLU, CEval, and CMMLU, DeepSeekMoE Chat 16B still falls behind DeepSeek Chat 7B, consistent with the observations for the base model (Section 5.2.1). However, it is worth noting that, after supervised fine-tuning, the performance gap between DeepSeekMoE 16B and DeepSeek 7B is narrowed. (4) Benefiting from the pretraining on a bilingual corpus, DeepSeekMoE Chat 16B notably outperforms LLaMA2 SFT 7B on all Chinese benchmarks. These results demonstrate the balanced capabilities of DeepSeekMoE 16B in both Chinese and English, enhancing its versatility and applicability in diverse scenarios. In conclusion, the evaluation for the chat models highlights the potential of DeepSeekMoE 16B in benefiting from alignment, and validates its consistent advantages in achieving comparable performance with dense models while using only about 40% of computations.

## 7. **DeepSeekMoE 145B Ongoing**

Encouraged by the outstanding performance of DeepSeekMoE 16B, we further undertake a preliminary endeavor to scale up DeepSeekMoE to 145B.
In this initial study, DeepSeekMoE 145B is trained on 245B tokens, but it has already demonstrated consistent advantages over the GShard architecture and shown promise to match or exceed the performance of DeepSeek 67B (Dense). Furthermore, upon the completion of the final version and full training of DeepSeekMoE 145B, we also plan to make it publicly available.

## 7.1. **Experimental Setup**

Training Data and Tokenization. For DeepSeekMoE 145B, we employ exactly the same training corpus and tokenizer as DeepSeekMoE 16B, with the only difference being that DeepSeekMoE 145B is trained on 245B tokens for an initial study.

Model Settings. For DeepSeekMoE 145B, we set the number of Transformer layers to 62 and the hidden dimension to 4096. We employ the multi-head attention mechanism with a total of 32 attention heads, where each head has a dimension of 128. As for initialization, all learnable parameters are randomly initialized with a standard deviation of 0.006. As in DeepSeekMoE 16B, we also substitute all FFNs except for the first layer with MoE layers. Each MoE layer consists of 4 shared experts and 128 routed experts, where each expert is 0.125 times the size of a standard FFN. Each token will be routed to these 4 shared experts and 12 out of 128 routed experts. Under this configuration, DeepSeekMoE 145B has approximately 144.6B total parameters, with the number of activated parameters around 22.2B.

Training Settings. We employ the AdamW optimizer (Loshchilov and Hutter, 2019) with hyper-parameters set to $\beta_1 = 0.9$, $\beta_2 = 0.95$, and weight_decay = 0.1. For the preliminary study of DeepSeekMoE 145B, we employ a warmup-and-constant learning rate scheduler. Initially, the learning rate linearly increases from 0 to the maximum value during the first 2K steps. Subsequently, the learning rate is kept constant during the remaining training process. The maximum learning rate for DeepSeekMoE 145B is set to $3.0 \times 10^{-4}$, and the gradient clipping norm is set to 1.0. The batch size is set to 4.5K, and with a maximum sequence length of 4K, each training batch contains 18M tokens. We train DeepSeekMoE 145B for 13,000 steps, achieving 245B training tokens. Also, we do not use dropout during training. We leverage pipeline parallelism to deploy different layers of a model on different devices, and for each layer, all the routed experts will be uniformly deployed on 4 devices (i.e., expert parallelism combined with data parallelism). Since we employ expert parallelism for DeepSeekMoE 145B, the device-level load balance should be considered to reduce the computational bottleneck. In response, we set the device-level balance factor to 0.05 to encourage balanced computation across devices. Also, we still set a small expert-level balance factor of 0.003 to prevent routing collapse.
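The two learning rate schedules used in this paper (warmup-and-step-decay for the 2B/16B runs, warmup-and-constant for this preliminary 145B run) reduce to a simple step-to-rate function. The following is an illustrative sketch under the stated settings, not the actual training code:

```python
def warmup_step_decay_lr(step, max_lr, total_steps, warmup_steps=2000, decay=0.316):
    """Warmup-and-step-decay: linear warmup, then x0.316 at 80% and again at 90% of training."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    lr = max_lr
    if step >= 0.8 * total_steps:
        lr *= decay
    if step >= 0.9 * total_steps:
        lr *= decay
    return lr

def warmup_constant_lr(step, max_lr, warmup_steps=2000):
    """Warmup-and-constant schedule used for the preliminary DeepSeekMoE 145B run."""
    return max_lr * step / warmup_steps if step < warmup_steps else max_lr

# e.g. the 145B setting: maximum learning rate 3.0e-4, 13,000 total steps
print(warmup_constant_lr(13_000, 3.0e-4))
```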
## 7.2. **Evaluations**

Baselines. Apart from **DeepSeekMoE 145B**, we consider three additional models for comparison. **DeepSeek 67B (Dense)** is a dense model with 67.4B total parameters (refer to DeepSeek-AI (2024) for the model and training details). **GShard 137B** shares the same hidden dimension and number of layers as DeepSeekMoE 145B, but follows the GShard architecture. Note that DeepSeekMoE 145B aligns the intermediate hidden dimension in each expert to a multiple of 64 for computational efficiency, so its model size is 6% larger than GShard 137B. **DeepSeekMoE 142B (Half Activated)** has a similar architecture to DeepSeekMoE 145B, but it contains only 2 shared experts, and only 6 out of 128 routed experts are activated. It is noteworthy that all compared models, including DeepSeekMoE 145B, share the same training corpus. In addition, all MoE models in the comparison are trained from scratch with the same training hyper-parameters.

Results. From the evaluation results presented in Table 6, we have the following observations: (1) Despite having comparable total parameters and computation, DeepSeekMoE 145B significantly outperforms GShard 137B, once again highlighting the advantages of the DeepSeekMoE architecture. (2) Overall, with only 28.5% of the computation, DeepSeekMoE 145B achieves performance comparable with DeepSeek 67B (Dense). Consistent with the findings for DeepSeekMoE 16B, DeepSeekMoE 145B exhibits remarkable strengths in language modeling and knowledge-intensive tasks, but shows limitations on multiple-choice tasks. (3) At this larger scale, DeepSeekMoE 142B (Half Activated) does not lag far behind DeepSeekMoE 145B. Moreover, despite having only half of the activated expert parameters, DeepSeekMoE 142B (Half Activated) still matches the performance of DeepSeek 67B (Dense) with only 18.2% of the computation. It also outperforms GShard 137B, which aligns with the conclusion of Section 4.5.

| Metric | # Shot | DeepSeek 67B (Dense) | GShard 137B | DeepSeekMoE 145B | DeepSeekMoE 142B (Half Activated) |
|-----------------------|--------|----------------------|-------------|------------------|-----------------------------------|
| # Total Params | N/A | 67.4B | 136.5B | 144.6B | 142.3B |
| # Activated Params | N/A | 67.4B | 21.6B | 22.2B | 12.2B |
| Relative Expert Size | N/A | N/A | 1 | 0.125 | 0.125 |
| # Experts | N/A | N/A | 0 + 16 | 4 + 128 | 2 + 128 |
| # Activated Experts | N/A | N/A | 0 + 2 | 4 + 12 | 2 + 6 |
| FLOPs per 4K Tokens | N/A | 2057.5T | 572.7T | 585.6T | 374.6T |
| # Training Tokens | N/A | 245B | 245B | 245B | 245B |
| Pile (Loss) | N/A | 1.905 | 1.961 | 1.876 | 1.888 |
| HellaSwag (Acc.) | 0-shot | 74.8 | 72.0 | 75.8 | 74.9 |
| PIQA (Acc.) | 0-shot | 79.8 | 77.6 | 80.7 | 80.2 |
| ARC-easy (Acc.) | 0-shot | 69.0 | 64.0 | 69.7 | 67.9 |
| ARC-challenge (Acc.) | 0-shot | 50.4 | 45.8 | 48.8 | 49.0 |
| RACE-middle (Acc.) | 5-shot | 63.2 | 59.2 | 62.1 | 59.5 |
| RACE-high (Acc.) | 5-shot | 46.9 | 43.5 | 45.5 | 42.6 |
| DROP (EM) | 1-shot | 27.5 | 21.6 | 27.8 | 28.9 |
| GSM8K (EM) | 8-shot | 11.8 | 6.4 | 12.2 | 13.8 |
| MATH (EM) | 4-shot | 2.1 | 1.6 | 3.1 | 2.8 |
| HumanEval (Pass@1) | 0-shot | 23.8 | 17.7 | 19.5 | 23.2 |
| MBPP (Pass@1) | 3-shot | 33.6 | 27.6 | 33.2 | 32.0 |
| TriviaQA (EM) | 5-shot | 57.2 | 52.5 | 61.1 | 59.8 |
| NaturalQuestions (EM) | 5-shot | 22.6 | 19.0 | 25.0 | 23.5 |
| MMLU (Acc.) | 5-shot | 45.1 | 26.3 | 39.4 | 37.5 |
| WinoGrande (Acc.) | 0-shot | 70.7 | 67.6 | 71.9 | 70.8 |
| CLUEWSC (EM) | 5-shot | 69.1 | 65.7 | 71.9 | 72.6 |
| CEval (Acc.) | 5-shot | 40.3 | 26.2 | 37.1 | 32.8 |
| CMMLU (Acc.) | 5-shot | 40.6 | 25.4 | 35.9 | 31.9 |
| CHID (Acc.) | 0-shot | 88.5 | 86.9 | 90.3 | 88.3 |

Table 6 | Comparison among DeepSeek 67B (Dense) and MoE models at the scale of about 140B total parameters. In the rows "# Experts" and "# Activated Experts", an entry of the form x + y denotes x shared experts and y routed experts. **Bold** font indicates the best or near-best performance excluding the last column. DeepSeekMoE 145B, and even DeepSeekMoE 142B (Half Activated) with only half of the activated expert parameters, outperform GShard 137B by a large margin. Moreover, with 28.5% of the computation, DeepSeekMoE 145B achieves performance comparable with DeepSeek 67B.

## 8. **Related Work**

The Mixture of Experts (MoE) technique was first proposed by Jacobs et al. (1991) and Jordan and Jacobs (1994) to handle different samples with independent expert modules. Shazeer et al. (2017) introduce MoE into language model training and build large-scale LSTM-based (Hochreiter and Schmidhuber, 1997) MoE models. As the Transformer became the most popular architecture
for NLP, many subsequent works extend the FFNs in Transformers into MoE layers to build MoE language models. GShard (Lepikhin et al., 2021) and Switch Transformer (Fedus et al., 2021) are pioneers that employ learnable top-2 or top-1 routing strategies to scale MoE language models to an extremely large size. Hash Layer (Roller et al., 2021) and StableMoE (Dai et al., 2022b) use fixed routing strategies for more stable routing and training. Zhou et al. (2022) propose an expert-choice routing strategy, where each token can be assigned to different numbers of experts. Zoph (2022) focuses on the issues of training instability and fine-tuning difficulty in MoE models, and proposes ST-MoE to overcome these challenges. In addition to research on MoE architectures and training strategies, recent years have also witnessed the emergence of numerous large-scale language or multimodal models (Du et al., 2022; Lin et al., 2021; Ren et al., 2023; Xue et al., 2023) based on existing MoE architectures. By and large, most previous MoE models are based on conventional top-1 or top-2 routing strategies, leaving large room for improving expert specialization. In response, our DeepSeekMoE architecture aims to improve expert specialization to the utmost extent.

## 9. **Conclusion**

In this paper, we introduce the DeepSeekMoE architecture for MoE language models, with the objective of achieving ultimate expert specialization. Through fine-grained expert segmentation and shared expert isolation, DeepSeekMoE achieves significantly higher expert specialization and performance than prevailing MoE architectures. Starting from a modest scale of 2B parameters, we validate the advantages of DeepSeekMoE, demonstrating its capability to approach the upper-bound performance for MoE models. Furthermore, we provide empirical evidence that DeepSeekMoE has a higher level of expert specialization than GShard.

Scaling up to 16B total parameters, we train DeepSeekMoE 16B on 2T tokens and demonstrate outstanding performance comparable with DeepSeek 7B and LLaMA2 7B, with only about 40% of the computation. Additionally, we conduct supervised fine-tuning for alignment to construct an MoE chat model based on DeepSeekMoE 16B, further showing its adaptability and versatility. Finally, we perform a preliminary exploration of scaling DeepSeekMoE up to 145B parameters. We find that DeepSeekMoE 145B retains substantial advantages over the GShard architecture and demonstrates performance comparable with DeepSeek 67B, using only 28.5% of the computation (or as little as 18.2% for the half-activated variant).

For research purposes, we release the model checkpoint of DeepSeekMoE 16B to the public, which can be deployed on a single GPU with 40GB of memory.
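As a rough sanity check of this single-GPU claim (not a figure reported in the paper), the weight memory of a 16.4B-parameter checkpoint (the total parameter count listed in Table 7 of Appendix A) fits comfortably within 40GB when the weights are stored in 16-bit precision, which is the assumption in the sketch below; activation and KV-cache memory are ignored for simplicity.

```python
# Back-of-the-envelope memory estimate for deploying DeepSeekMoE 16B on one GPU.
# Assumes 16-bit (bfloat16/float16) weight storage; not an official figure.

TOTAL_PARAMS = 16.4e9     # total parameters of DeepSeekMoE 16B (Table 7)
BYTES_PER_PARAM = 2       # assumption: bfloat16/float16 weights

weight_memory_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
print(f"approximate weight memory: {weight_memory_gb:.1f} GB")  # ~32.8 GB, under 40 GB
```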
We aspire for this work to provide valuable insights for both academia and industry, and contribute to the accelerated advancement of large-scale language models. + +## References + +E. Almazrouei, H. Alobeidli, A. Alshamsi, A. Cappelli, R. Cojocaru, M. Debbah, E. Goffinet, D. Heslow, J. Launay, Q. Malartic, B. Noune, B. Pannier, and G. Penedo. Falcon-40B: an open large language model with state-of-the-art performance, 2023. + +M. Artetxe, S. Bhosale, N. Goyal, T. Mihaylov, M. Ott, S. Shleifer, X. V. Lin, J. Du, S. Iyer, R. Pasunuru, G. Anantharaman, X. Li, S. Chen, H. Akin, M. Baines, L. Martin, X. Zhou, P. S. Koura, B. O'Horo, J. Wang, L. Zettlemoyer, M. T. Diab, Z. Kozareva, and V. Stoyanov. + +Efficient large scale language modeling with mixtures of experts. In Y. Goldberg, Z. Kozareva, and Y. Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 11699–11732. Association for Computational Linguistics, 2022. doi: 10.18653/V1/2022 +.EMNLP-MAIN.804. URL https://doi.org/10.18653/v1/2022.emnlp-main.804. + +J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. +S. Biderman, H. Schoelkopf, Q. G. Anthony, H. Bradley, K. O'Brien, E. Hallahan, M. A. Khan, S. Purohit, U. S. Prashanth, E. Raff, A. Skowron, L. Sutawika, and O. van der Wal. Pythia: +A suite for analyzing large language models across training and scaling. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 2397–2430. PMLR, 2023. URL https: +//proceedings.mlr.press/v202/biderman23a.html. + +Y. Bisk, R. Zellers, R. L. Bras, J. Gao, and Y. Choi. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI +2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI +2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI +2020, New York, NY, USA, February 7-12, 2020, pages 7432–7439. AAAI Press, 2020. doi: +10.1609/aaai.v34i05.6239. URL https://doi.org/10.1609/aaai.v34i05.6239. + +S. Black, L. Gao, P. Wang, C. Leahy, and S. Biderman. GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow, Mar. 2021. URL https://doi.org/10.5281/ zenodo.5297715. If you use this misc, please cite it using these metadata. + +T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020. URL +https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8 ac142f64a-Abstract.html. + +M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. 
Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. + +URL https://arxiv.org/abs/2107.03374. + +P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457, 2018. URL http://arxiv.org/abs/1803.05457. + +K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. + +D. Dai, L. Dong, Y. Hao, Z. Sui, B. Chang, and F. Wei. Knowledge neurons in pretrained transformers. In S. Muresan, P. Nakov, and A. Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8493–8502. Association for Computational Linguistics, 2022a. doi: 10.18653/V1/2022.ACL-LONG.581. URL https://doi.org/10.1 8653/v1/2022.acl-long.581. + +D. Dai, L. Dong, S. Ma, B. Zheng, Z. Sui, B. Chang, and F. Wei. Stablemoe: Stable routing strategy for mixture of experts. In S. Muresan, P. Nakov, and A. Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics +(Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7085–7095. + +Association for Computational Linguistics, 2022b. doi: 10.18653/V1/2022.ACL-LONG.489. + +URL https://doi.org/10.18653/v1/2022.acl-long.489. + +DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024. + +N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat, B. Zoph, L. Fedus, M. P. Bosma, Z. Zhou, T. Wang, Y. E. Wang, K. Webster, M. Pellat, K. Robinson, K. S. Meier-Hellstern, T. Duke, L. Dixon, K. Zhang, Q. V. Le, Y. Wu, Z. Chen, and C. Cui. Glam: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, +volume 162 of Proceedings of Machine Learning Research, pages 5547–5569. PMLR, 2022. + +URL https://proceedings.mlr.press/v162/du22c.html. + +D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In J. Burstein, C. Doran, and T. Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2368– +2378. Association for Computational Linguistics, 2019. doi: 10.18653/V1/N19-1246. URL +https://doi.org/10.18653/v1/n19-1246. +W. Fedus, B. Zoph, and N. Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. CoRR, abs/2101.03961, 2021. URL https://arxiv.org/ +abs/2101.03961. + +L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. 
Foster, J. Phang, H. He, A. Thite, N. Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. + +X. Geng and H. Liu. Openllama: An open reproduction of llama, May 2023. URL https: +//github.com/openlm-research/open_llama. + +A. Harlap, D. Narayanan, A. Phanishayee, V. Seshadri, N. R. Devanur, G. R. Ganger, and P. B. + +Gibbons. Pipedream: Fast and efficient pipeline parallel DNN training. CoRR, abs/1806.03377, 2018. URL http://arxiv.org/abs/1806.03377. +D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. + +D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. + +Measuring mathematical problem solving with the math dataset, 2021. + +High-Flyer. Hai-llm: An efficient and lightweight tool for training large models, 2023. URL +https://www.high-flyer.cn/en/blog/hai-llm. + +S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computing, 9(8):1735–1780, 1997. URL https://doi.org/10.1162/neco.1997.9.8.1735. + +J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre. + +Training compute-optimal large language models. CoRR, abs/2203.15556, 2022. doi: 10.48550 +/arXiv.2203.15556. URL https://doi.org/10.48550/arXiv.2203.15556. + +Y. Huang, Y. Bai, Z. Zhu, J. Zhang, J. Zhang, T. Su, J. Liu, C. Lv, Y. Zhang, J. Lei, et al. C-Eval: A +multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023. + +R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. + +Neural Computing, 3(1):79–87, 1991. URL https://doi.org/10.1162/neco.1991.3.1. + +79. + +M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 6(2):181–214, 1994. URL https://doi.org/10.1162/neco.1994.6.2.181. + +M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. triviaqa: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. arXiv e-prints, art. arXiv:1705.03551, 2017. + +V. A. Korthikanti, J. Casper, S. Lym, L. McAfee, M. Andersch, M. Shoeybi, and B. Catanzaro. + +Reducing activation recomputation in large transformer models. Proceedings of Machine Learning and Systems, 5, 2023. + +T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W. Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov. Natural questions: a benchmark for question answering research. + +Transactions of the Association of Computational Linguistics, 2019. + +G. Lai, Q. Xie, H. Liu, Y. Yang, and E. H. Hovy. RACE: large-scale reading comprehension dataset from examinations. In M. Palmer, R. Hwa, and S. Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 785–794. Association for Computational Linguistics, 2017. doi: 10.18653/V1/D17-1082. URL https://doi.org/10.18653/v1/d1 7-1082. + +D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen. 
+ +Gshard: Scaling giant models with conditional computation and automatic sharding. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, 2021. + +URL https://openreview.net/forum?id=qrwe7XHTmYb. + +H. Li, Y. Zhang, F. Koto, Y. Yang, H. Zhao, Y. Gong, N. Duan, and T. Baldwin. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212, 2023. + +J. Lin, R. Men, A. Yang, C. Zhou, M. Ding, Y. Zhang, P. Wang, A. Wang, L. Jiang, X. Jia, J. Zhang, J. Zhang, X. Zou, Z. Li, X. Deng, J. Liu, J. Xue, H. Zhou, J. Ma, J. Yu, Y. Li, W. Lin, J. Zhou, J. Tang, and H. Yang. M6: A chinese multimodal pretrainer. CoRR, abs/2103.00823, 2021. + +URL https://arxiv.org/abs/2103.00823. + +S. Lin, J. Hilton, and O. Evans. Truthfulqa: Measuring how models mimic human falsehoods. In S. Muresan, P. Nakov, and A. Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3214–3252. Association for Computational Linguistics, 2022. + +doi: 10.18653/V1/2022.ACL-LONG.229. URL https://doi.org/10.18653/v1/2022.a cl-long.229. + +I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. + +OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7. + +D. Narayanan, M. Shoeybi, J. Casper, P. LeGresley, M. Patwary, V. Korthikanti, D. Vainbrand, P. Kashinkunti, J. Bernauer, B. Catanzaro, et al. Efficient large-scale language model training on gpu clusters using megatron-lm. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–15, 2021. + +OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/arXiv.2303.08774. + +URL https://doi.org/10.48550/arXiv.2303.08774. + +S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He. Zero: memory optimizations toward training trillion parameter models. In C. Cuicchi, I. Qualters, and W. T. Kramer, editors, Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. + +IEEE/ACM, 2020. doi: 10.1109/SC41405.2020.00024. URL https://doi.org/10.1109/SC +41405.2020.00024. + +S. Rajbhandari, C. Li, Z. Yao, M. Zhang, R. Y. Aminabadi, A. A. Awan, J. Rasley, and Y. He. Deepspeed-moe: Advancing mixture-of-experts inference and training to power next-generation AI scale. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 18332–18346. PMLR, 2022. URL https://proceedings.mlr.press/v162/rajbh andari22a.html. + +X. Ren, P. Zhou, X. Meng, X. Huang, Y. Wang, W. Wang, P. Li, X. Zhang, A. Podolskiy, G. Arshinov, A. Bout, I. Piontkovskaya, J. Wei, X. Jiang, T. Su, Q. Liu, and J. Yao. Pangu-Σ: Towards trillion parameter language model with sparse heterogeneous computing. CoRR, abs/2303.10845, 2023. URL https://doi.org/10.48550/arXiv.2303.10845. + +S. Roller, S. Sukhbaatar, A. Szlam, and J. Weston. Hash layers for large sparse models. CoRR, +abs/2106.04426, 2021. URL https://arxiv.org/abs/2106.04426. + +K. Sakaguchi, R. L. Bras, C. Bhagavatula, and Y. Choi. 
Winogrande: An adversarial winograd schema challenge at scale, 2019. + +T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilic, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, J. Tow, A. M. Rush, S. Biderman, A. Webson, P. S. Ammanamanchi, T. Wang, B. Sagot, N. Muennighoff, A. V. del Moral, O. Ruwase, R. Bawden, S. Bekman, A. McMillan-Major, I. Beltagy, H. Nguyen, L. Saulnier, S. Tan, P. O. Suarez, V. Sanh, H. Laurençon, Y. Jernite, J. Launay, M. Mitchell, C. Raffel, A. Gokaslan, A. Simhi, A. Soroa, A. F. Aji, A. Alfassy, A. Rogers, A. K. Nitzav, C. Xu, C. Mou, C. Emezue, C. Klamm, C. Leong, D. van Strien, D. I. Adelani, and et al. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022. doi: 10.48550/ARXIV.2211.05100. URL https: +//doi.org/10.48550/arXiv.2211.05100. + +R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics, 2016. doi: 10.18653/V1/P16-1162. URL https: +//doi.org/10.18653/v1/p16-1162. + +N. Shazeer. Fast transformer decoding: One write-head is all you need. CoRR, abs/1911.02150, 2019. URL http://arxiv.org/abs/1911.02150. + +N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net, 2017. URL https: +//openreview.net/forum?id=B1ckMDqlg. + +S. Shen, L. Hou, Y. Zhou, N. Du, S. Longpre, J. Wei, H. W. Chung, B. Zoph, W. Fedus, X. Chen, T. Vu, Y. Wu, W. Chen, A. Webson, Y. Li, V. Zhao, H. Yu, K. Keutzer, T. Darrell, and D. Zhou. + +Flan-moe: Scaling instruction-finetuned language models with sparse mixture of experts. + +CoRR, abs/2305.14705, 2023. doi: 10.48550/ARXIV.2305.14705. URL https://doi.org/10 .48550/arXiv.2305.14705. + +M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro. Megatron-lm: +Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019. + +M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi, D. Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022. + +P. Tillet, H. T. Kung, and D. Cox. Triton: An intermediate language and compiler for tiled neural network computations. In Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL 2019, page 10–19, New York, NY, +USA, 2019. Association for Computing Machinery. ISBN 9781450367196. doi: 10.1145/331550 8.3329973. URL https://doi.org/10.1145/3315508.3329973. + +Together-AI. Redpajama-data: An open source recipe to reproduce llama training dataset, April 2023. URL https://github.com/togethercomputer/RedPajama-Data. + +H. Touvron, T. Lavril, G. Izacard, X. Martinet, M. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971, 2023a. doi: 10.48550/arXiv.230 2.13971. URL https://doi.org/10.48550/arXiv.2302.13971. + +H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Canton-Ferrer, M. 
Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M. + +Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and T. Scialom. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023b. doi: 10.48550/arXiv.2307.09288. URL https://doi.org/10.48550/arXiv.2307. 09288. + +A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30: +Annual Conference on Neural Information Processing Systems 2017, pages 5998–6008, 2017. + +URL https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd 053c1c4a845aa-Abstract.html. +B. Wang and A. Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. + +https://github.com/kingoflolz/mesh-transformer-jax, May 2021. + +L. Xu, H. Hu, X. Zhang, L. Li, C. Cao, Y. Li, Y. Xu, K. Sun, D. Yu, C. Yu, Y. Tian, Q. Dong, W. Liu, B. Shi, Y. Cui, J. Li, J. Zeng, R. Wang, W. Xie, Y. Li, Y. Patterson, Z. Tian, Y. Zhang, H. Zhou, S. Liu, Z. Zhao, Q. Zhao, C. Yue, X. Zhang, Z. Yang, K. Richardson, and Z. Lan. CLUE: A chinese language understanding evaluation benchmark. In D. Scott, N. Bel, and C. Zong, editors, Proceedings of the 28th International Conference on Computational Linguistics, COLING +2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4762–4772. International Committee on Computational Linguistics, 2020. doi: 10.18653/V1/2020.COLING-MAIN.419. URL +https://doi.org/10.18653/v1/2020.coling-main.419. + +F. Xue, Z. Zheng, Y. Fu, J. Ni, Z. Zheng, W. Zhou, and Y. You. Openmoe: Open mixture-of-experts language models. https://github.com/XueFuzhao/OpenMoE, 2023. + +R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi. HellaSwag: Can a machine really finish your sentence? In A. Korhonen, D. R. Traum, and L. Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics, 2019. doi: 10.18653/v1/p19-1472. URL https://doi.org/10.18653/v1/p1 9-1472. + +S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang, and L. Zettlemoyer. Opt: Open pre-trained transformer language models, 2022. + +C. Zheng, M. Huang, and A. Sun. Chid: A large-scale chinese idiom dataset for cloze test. In A. Korhonen, D. R. Traum, and L. Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 778–787. Association for Computational Linguistics, 2019. + +doi: 10.18653/V1/P19-1075. URL https://doi.org/10.18653/v1/p19-1075. +Y. Zhou, T. Lei, H. Liu, N. Du, Y. Huang, V. Zhao, A. M. Dai, Z. Chen, Q. V. Le, and J. Laudon. + +Mixture-of-experts with expert choice routing. In NeurIPS, 2022. 
URL http://papers.nip s.cc/paper_files/paper/2022/hash/2f00ecd787b432c1d36f3de9800728eb-Abs tract-Conference.html. + +B. Zoph. Designing effective sparse expert models. In IEEE International Parallel and Distributed Processing Symposium, IPDPS Workshops 2022, Lyon, France, May 30 - June 3, 2022, page 1044. IEEE, 2022. URL https://doi.org/10.1109/IPDPSW55747.2022.0 0171. + +## Appendices A. **Overview Of Hyper-Parameters** + +We present the overview of hyper-parameters for DeepSeekMoE across various sizes in Table 7. + +| # Params # Layers Hidden # Attn # Shared | # Routed | Relative | Sequence Batch Size Learning | | | | | | | +|--------------------------------------------|------------|------------|--------------------------------|-------------|--------------------|------------|------|------|---------| +| Size | Heads | Experts | Experts | Expert Size | Length | (Sequence) | Rate | | | +| 2.0B | 9 | 1280 | 10 | 1 | 63 (7 activated) | 0.25 | 2048 | 2048 | 1.08e-3 | +| 16.4B | 28 | 2048 | 16 | 2 | 64 (6 activated) | 0.25 | 4096 | 4608 | 4.2e-4 | +| 144.6B | 62 | 4096 | 32 | 4 | 128 (12 activated) | 0.125 | 4096 | 4608 | 3.0e-4 | + +Table 7 | Overview of hyper-parameters for DeepSeekMoE across various sizes. The relative expert size is in comparison to a standard FFN. + +## B. **Comparing Deepseekmoe With Larger Models** + +Comparisons among DeepSeekMoE, GShard×1.2, and GShard×1.5 are shown in Table 8. Comparisons among DeepSeekMoE, Dense×4, and Dense×16 are shown in Table 9. + +| Metric | # Shot | GShard×1.2 | GShard×1.5 | DeepSeekMoE | +|---------------------------|----------|--------------|--------------|---------------| +| Relative Expert Size | N/A | 1.2 | 1.5 | 0.25 | +| # Experts | N/A | 0 + 16 | 0 + 16 | 1 + 63 | +| # Activated Experts | N/A | 0 + 2 | 0 + 2 | 1 + 7 | +| # Total Expert Params | N/A | 2.3B | 2.8B | 1.9B | +| # Activated Expert Params | N/A | 0.28B | 0.35B | 0.24B | +| # Training Tokens | N/A | 100B | 100B | 100B | +| Pile (Loss) | N/A | 1.824 | 1.808 | 1.808 | +| HellaSwag (Acc.) | 0-shot | 53.7 | 54.4 | 54.8 | +| PIQA (Acc.) | 0-shot | 71.8 | 71.1 | 72.3 | +| ARC-easy (Acc.) | 0-shot | 46.8 | 47.3 | 49.4 | +| ARC-challenge (Acc.) | 0-shot | 31.7 | 34.1 | 34.3 | +| RACE-middle (Acc.) | 5-shot | 43.7 | 46.4 | 44.0 | +| RACE-high (Acc.) | 5-shot | 31.9 | 32.4 | 31.7 | +| HumanEval (Pass@1) | 0-shot | 3.7 | 3.0 | 4.9 | +| MBPP (Pass@1) | 3-shot | 2.4 | 2.6 | 2.2 | +| TriviaQA (EM) | 5-shot | 15.2 | 15.7 | 16.6 | +| NaturalQuestions (EM) | 5-shot | 4.5 | 4.7 | 5.7 | + +Table 8 | Comparison between DeepSeekMoE and larger GShard models. + +At a larger scale of 13B total parameters, we also compare DeepSeekMoE with GShard×1.2 and GShard×1.5, and show results in Table 10. At a larger scale, DeepSeekMoE even outperforms GShard×1.5 distinctly. + +| Metric | # Shot | Dense×4 | Dense×16 | DeepSeekMoE | +|---------------------------|----------|-----------|------------|---------------| +| Relative Expert Size | N/A | 1 | 1 | 0.25 | +| # Experts | N/A | 4 + 0 | 16 + 0 | 1 + 63 | +| # Activated Experts | N/A | 4 + 0 | 16 + 0 | 1 + 7 | +| # Total Expert Params | N/A | 0.47B | 1.89B | 1.89B | +| # Activated Expert Params | N/A | 0.47B | 1.89B | 0.24B | +| # Training Tokens | N/A | 100B | 100B | 100B | +| Pile (Loss) | N/A | 1.908 | 1.806 | 1.808 | +| HellaSwag (Acc.) | 0-shot | 47.6 | 55.1 | 54.8 | +| PIQA (Acc.) | 0-shot | 70.0 | 71.9 | 72.3 | +| ARC-easy (Acc.) | 0-shot | 43.9 | 51.9 | 49.4 | +| ARC-challenge (Acc.) | 0-shot | 30.5 | 33.8 | 34.3 | +| RACE-middle (Acc.) 
| 5-shot | 42.4 | 46.3 | 44.0 | +| RACE-high (Acc.) | 5-shot | 30.7 | 33.0 | 31.7 | +| HumanEval (Pass@1) | 0-shot | 1.8 | 4.3 | 4.9 | +| MBPP (Pass@1) | 3-shot | 0.2 | 2.2 | 2.2 | +| TriviaQA (EM) | 5-shot | 9.9 | 16.5 | 16.6 | +| NaturalQuestions (EM) | 5-shot | 3.0 | 6.3 | 5.7 | + +Table 9 | Comparison between DeepSeekMoE and larger dense baselines. + +| Metric | # Shot | GShard×1.2 | GShard×1.5 | DeepSeekMoE | +|---------------------------|----------|--------------|--------------|---------------| +| Relative Expert Size | N/A | 1.2 | 1.5 | 0.25 | +| # Experts | N/A | 0 + 16 | 0 + 16 | 1 + 63 | +| # Activated Experts | N/A | 0 + 2 | 0 + 2 | 1 + 7 | +| # Total Expert Params | N/A | 15.9B | 19.8B | 13.3B | +| # Activated Expert Params | N/A | 2.37B | 2.82B | 2.05B | +| # Training Tokens | N/A | 100B | 100B | 100B | +| HellaSwag (Acc.) | 0-shot | 66.6 | 67.7 | 69.1 | +| PIQA (Acc.) | 0-shot | 75.6 | 76.0 | 75.7 | +| ARC-easy (Acc.) | 0-shot | 56.8 | 56.8 | 58.8 | +| ARC-challenge (Acc.) | 0-shot | 39.9 | 37.6 | 38.5 | +| RACE-middle (Acc.) | 5-shot | 51.6 | 50.6 | 52.4 | +| RACE-high (Acc.) | 5-shot | 37.4 | 36.3 | 38.5 | +| HumanEval (Pass@1) | 0-shot | 6.1 | 6.1 | 9.8 | +| MBPP (Pass@1) | 3-shot | 7.0 | 11.6 | 10.6 | +| TriviaQA (EM) | 5-shot | 36.5 | 36.7 | 38.2 | +| NaturalQuestions (EM) | 5-shot | 12.6 | 12.1 | 13.7 | + +Table 10 | Comparison between DeepSeekMoE and larger GShard models at a larger scale. + +## C. **Training Benchmark Curves Of Deepseekmoe 16B** + +We present the benchmark curves during training of DeepSeekMoE 16B and DeepSeek 7B +(Dense) in Figure 7 for reference. + +![32_image_0.png](32_image_0.png) + diff --git a/switch_transformers.md b/switch_transformers.md new file mode 100644 index 0000000000000000000000000000000000000000..11dbe729af42816b5116c4ee2be5bb09d8edfb97 --- /dev/null +++ b/switch_transformers.md @@ -0,0 +1,925 @@ +# Switch Transformers: Scaling To Trillion Parameter Models With Simple And Efficient Sparsity + +William Fedus∗ +liamfedus@google.com Barret Zoph∗ +barretzoph@google.com Noam Shazeer noam@google.com Google, Mountain View, CA 94043, USA +Editor: Alexander Clark + +## Abstract + +In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) models defy this and instead select *different* parameters for each incoming example. The result is a sparsely-activated model—with an outrageous number of parameters—but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability. We address these with the introduction of the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques mitigate the instabilities, and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large (Raffel et al., 2019) to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. 
Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus", and achieve a 4x speedup over the T5-XXL model.12 Keywords: mixture-of-experts, natural language processing, sparsity, large-scale machine learning, distributed computing arXiv:2101.03961v3 [cs.LG] 16 Jun 2022 Contents + +| 1 | Introduction | 3 | | +|-----|-----------------------------------------------------------|-----|----| +| 2 | Switch Transformer | 4 | | +| 2.1 | Simplifying Sparse Routing | | 5 | +| 2.2 | Efficient Sparse Routing | | 6 | +| 2.3 | Putting It All Together: The Switch Transformer | 8 | | +| 2.4 | Improved Training and Fine-Tuning Techniques | 8 | | +| 3 | Scaling Properties | 11 | | +| 3.1 | Scaling Results on a Step-Basis | | 12 | +| 3.2 | Scaling Results on a Time-Basis | 13 | | +| 3.3 | Scaling Versus a Larger Dense Model | 13 | | +| 4 | Downstream Results | 14 | | +| 4.1 | Fine-Tuning | 14 | | +| 4.2 | Distillation | | 16 | +| 4.3 | Multilingual Learning | | 17 | +| 5 | Designing Models with Data, Model, and Expert-Parallelism | 18 | | +| 5.1 | Data Parallelism | | 20 | +| 5.2 | Model Parallelism | | 20 | +| 5.3 | Model and Data Parallelism | 21 | | +| 5.4 | Expert and Data Parallelism | | 22 | +| 5.5 | Expert, Model and Data Parallelism | | 22 | +| 5.6 | Towards Trillion Parameter Models | 22 | | +| 6 | Related Work | 24 | | +| 7 | Discussion | 25 | | +| 8 | Future Work | 26 | | +| 9 | Conclusion | 27 | | +| A | Switch for Attention | 27 | | +| B | Preventing Token Dropping with No-Token-Left-Behind | 29 | | + +C Encouraging Exploration Across Experts 29 D Switch Transformers in Lower Compute Regimes 29 E Relation of Upstream to Downstream Model Performance 32 F Pseudo Code for Switch Transformers 33 + +## 1. Introduction + +Large scale training has been an effective path towards flexible and powerful neural language models (Radford et al., 2018; Kaplan et al., 2020; Brown et al., 2020). Simple architectures— backed by a generous computational budget, data set size and parameter count—surpass more complicated algorithms (Sutton, 2019). An approach followed in Radford et al. (2018); +Raffel et al. (2019); Brown et al. (2020) expands the model size of a densely-activated Transformer (Vaswani et al., 2017). While effective, it is also extremely computationally intensive (Strubell et al., 2019). Inspired by the success of model scale, but seeking greater computational efficiency, we instead propose a *sparsely-activated* expert model: the Switch Transformer. In our case the sparsity comes from activating a *subset* of the neural network weights for each incoming example. + +![2_image_0.png](2_image_0.png) + +Figure 1: Scaling and sample efficiency of Switch Transformers. Left Plot: Scaling properties for increasingly sparse (more experts) Switch Transformers. Right Plot: +Negative log perplexity comparing Switch Transformers to T5 (Raffel et al., 2019) models using the same compute budget. +Sparse training is an active area of research and engineering (Gray et al., 2017; Gale et al., 2020), but as of today, machine learning libraries and hardware accelerators still cater to dense matrix multiplications. To have an efficient sparse algorithm, we start with the Mixture-of-Expert (MoE) paradigm (Jacobs et al., 1991; Jordan and Jacobs, 1994; Shazeer et al., 2017), and simplify it to yield training stability and computational benefits. 
MoE +models have had notable successes in machine translation (Shazeer et al., 2017, 2018; Lepikhin et al., 2020), however, widespread adoption is hindered by complexity, communication costs, and training instabilities. + +We address these issues, and then go beyond translation, to find that these class of algorithms are broadly valuable in natural language. We measure superior scaling on a diverse set of natural language tasks and across three regimes in NLP: pre-training, finetuning and multi-task training. While this work focuses on scale, we also show that the Switch Transformer architecture not only excels in the domain of supercomputers, but is beneficial even with only a few computational cores. Further, our large sparse models can be distilled (Hinton et al., 2015) into small dense versions while preserving 30% of the sparse model quality gain. Our contributions are the following: +- The Switch Transformer architecture, which simplifies and improves over Mixture of Experts. + +- Scaling properties and a benchmark against the strongly tuned T5 model (Raffel et al., +2019) where we measure 7x+ pre-training speedups while still using the same FLOPS per token. We further show the improvements hold even with limited computational resources, using as few as two experts. +- Successful distillation of sparse pre-trained and specialized fine-tuned models into +small dense models. We reduce the model size by up to 99% while preserving 30% of the quality gains of the large sparse teacher. +- Improved pre-training and fine-tuning techniques: (1) selective precision training that +enables training with lower bfloat16 precision (2) an initialization scheme that allows for scaling to a larger number of experts and (3) increased expert regularization that improves sparse model fine-tuning and multi-task training. +- A measurement of the pre-training benefits on multilingual data where we find a +universal improvement across all 101 languages and with 91% of languages benefiting from 4x+ speedups over the mT5 baseline (Xue et al., 2020). +- An increase in the scale of neural language models achieved by efficiently combining +data, model, and expert-parallelism to create models with up to a trillion parameters. These models improve the pre-training speed of a strongly tuned T5-XXL baseline by 4x. + +## 2. Switch Transformer + +The guiding design principle for Switch Transformers is to maximize the parameter count of a Transformer model (Vaswani et al., 2017) in a simple and computationally efficient way. + +The benefit of scale was exhaustively studied in Kaplan et al. (2020) which uncovered powerlaw scaling with model size, data set size and computational budget. Importantly, this work advocates training large models on relatively small amounts of data as the computationally optimal approach. + +Heeding these results, we investigate a fourth axis: increase the *parameter count* while keeping the floating point operations (FLOPs) per example constant. Our hypothesis is that the parameter count, independent of total computation performed, is a separately important axis on which to scale. We achieve this by designing a sparsely activated model that efficiently uses hardware designed for dense matrix multiplications such as GPUs and TPUs. Our work here focuses on TPU architectures, but these class of models may be similarly trained on GPU clusters. In our distributed training setup, our sparsely activated layers split *unique* weights on different devices. 
Therefore, the weights of the model increase with the number of devices, all while maintaining a manageable memory and computational footprint on each device. + +4 + +![4_image_0.png](4_image_0.png) + +Figure 2: Illustration of a Switch Transformer encoder block. We replace the dense feed forward network (FFN) layer present in the Transformer with a sparse Switch FFN layer (light blue). The layer operates independently on the tokens in the sequence. We diagram two tokens (x1 = "More" and x2 = "Parameters" below) +being routed (solid lines) across four FFN experts, where the router independently routes each token. The switch FFN layer returns the output of the selected FFN +multiplied by the router gate value (dotted-line). + +## 2.1 Simplifying Sparse Routing + +Mixture of Expert Routing. Shazeer et al. (2017) proposed a natural language Mixtureof-Experts (MoE) layer which takes as an input a token representation x and then routes this to the best determined top-k experts, selected from a set {Ei(x)} +N +i=1 of N experts. + +The router variable Wr produces logits h(x) = Wr · x which are normalized via a softmax distribution over the available N experts at that layer. The gate-value for expert i is given by, + +$$p_{i}(x)=\frac{e^{h(x)_{i}}}{\sum_{j}^{N}e^{h(x)_{j}}}.\tag{1}$$ +$$(1)$$ + +$$\left(2\right)$$ + +The top-k gate values are selected for routing the token x. If T is the set of selected top-k indices then the output computation of the layer is the linearly weighted combination of each expert's computation on the token by the gate value, + +$$y=\sum_{i\in{\mathcal{T}}}p_{i}(x)E_{i}(x).$$ +pi(x)Ei(x). (2) +Switch Routing: Rethinking Mixture-of-Experts. Shazeer et al. (2017) conjectured that routing to k > 1 experts was necessary in order to have non-trivial gradients to the routing functions. The authors intuited that learning to route would not work without the ability to compare at least two experts. Ramachandran and Le (2018) went further to study the top-k decision and found that higher k-values in lower layers in the model were important for models with many routing layers. Contrary to these ideas, we instead use a simplified strategy where we route to only a *single* expert. We show this simplification preserves model quality, reduces routing computation and performs better. This k = 1 routing strategy is later referred to as a Switch layer. Note that for both MoE and Switch Routing, the gate value pi(x) in Equation 2 permits differentiability of the router. + +The benefits for the Switch layer are three-fold: (1) The router computation is reduced as we are only routing a token to a single expert. (2) The batch size (expert capacity) of each expert can be at least halved since each token is only being routed to a single expert.3 +(3) The routing implementation is simplified and communication costs are reduced. Figure 3 shows an example of routing with different expert capacity factors. + +![5_image_0.png](5_image_0.png) + +Figure 3: Illustration of token routing dynamics. Each expert processes a fixed batch-size of tokens modulated by the *capacity factor*. Each token is routed to the expert with the highest router probability, but each expert has a fixed batch size of +(total tokens / num experts) × capacity factor. If the tokens are unevenly dispatched then certain experts will overflow (denoted by dotted red lines), resulting in these tokens not being processed by this layer. 
A larger capacity factor alleviates this overflow issue, but also increases computation and communication costs (depicted by padded white/empty slots).

## 2.2 Efficient Sparse Routing

We use Mesh-Tensorflow (MTF) (Shazeer et al., 2018), a library with semantics and an API similar to Tensorflow (Abadi et al., 2016) that facilitates efficient distributed data- and model-parallel architectures. It does so by abstracting the physical set of cores to a logical mesh of processors. Tensors and computations may then be sharded per named dimensions, facilitating easy partitioning of models across dimensions. We design our model with TPUs in mind, which require statically declared sizes. Below we describe our distributed Switch Transformer implementation.

3. See Section 2.2 for a technical description.

Distributed Switch Implementation. All of our tensor shapes are statically determined at compilation time, but our computation is *dynamic* due to the routing decisions at training and inference. Because of this, one important technical consideration is how to set the *expert capacity*. The expert capacity—the number of tokens each expert computes—is set by evenly dividing the number of tokens in the batch across the number of experts, and then further expanding by a *capacity factor*,

$$\text{expert capacity} = \left(\frac{\text{tokens per batch}}{\text{number of experts}}\right) \times \text{capacity factor}.\tag{3}$$

A capacity factor greater than 1.0 creates additional buffer to accommodate cases where tokens are not perfectly balanced across experts. If too many tokens are routed to an expert (referred to later as dropped tokens), computation is skipped and the token representation is passed directly to the next layer through the residual connection. Increasing the expert capacity is not without drawbacks, however, since high values will result in wasted computation and memory. This trade-off is explained in Figure 3. Empirically, we find that ensuring low rates of dropped tokens is important for the scaling of sparse expert models. Throughout our experiments we did not notice any dependence of the number of dropped tokens (typically < 1%) on the number of experts. Using the auxiliary load balancing loss (next section) with a high enough coefficient ensured good load balancing. We study the impact that these design decisions have on model quality and speed in Table 1.

A Differentiable Load Balancing Loss. To encourage a balanced load across experts we add an auxiliary loss (Shazeer et al., 2017, 2018; Lepikhin et al., 2020). As in Shazeer et al. (2018) and Lepikhin et al. (2020), Switch Transformers simplifies the original design in Shazeer et al. (2017), which had separate load-balancing and importance-weighting losses. For each Switch layer, this auxiliary loss is added to the total model loss during training. Given N experts indexed by i = 1 to N and a batch B with T tokens, the auxiliary loss is computed as the scaled dot-product between vectors f and P,

$$\operatorname{loss}=\alpha\cdot N\cdot\sum_{i=1}^{N}f_{i}\cdot P_{i}\tag{4}$$

where $f_i$ is the fraction of tokens dispatched to expert $i$,
$$f_{i}=\frac{1}{T}\sum_{x\in\mathcal{B}}\mathbb{1}\{\operatorname{argmax}\,p(x)=i\}\tag{5}$$

and $P_i$ is the fraction of the router probability allocated for expert $i$,

$$P_{i}=\frac{1}{T}\sum_{x\in\mathcal{B}}p_{i}(x).\tag{6}$$

Since we seek uniform routing of the batch of tokens across the N experts, we desire both vectors to have values of 1/N. The auxiliary loss of Equation 4 encourages uniform routing since it is minimized under a uniform distribution. The objective can also be differentiated as the P-vector is differentiable, but the f-vector is not. The final loss is multiplied by expert count N to keep the loss constant as the number of experts varies, since under uniform routing $\sum_{i=1}^{N}(f_i \cdot P_i) = \sum_{i=1}^{N}(\frac{1}{N} \cdot \frac{1}{N}) = \frac{1}{N}$. Finally, a hyper-parameter α is a multiplicative coefficient for these auxiliary losses; throughout this work we use α = 10−2, which was sufficiently large to ensure load balancing while small enough not to overwhelm the primary cross-entropy objective. We swept hyper-parameter ranges of α from 10−1 to 10−5 in powers of 10 and found 10−2 balanced load quickly without interfering with training loss.

## 2.3 Putting It All Together: The Switch Transformer

Our first test of the Switch Transformer starts with pre-training on the "Colossal Clean Crawled Corpus" (C4), introduced in Raffel et al. (2019). For our pre-training objective, we use a masked language modeling task (Taylor, 1953; Fedus et al., 2018; Devlin et al., 2018) where the model is trained to predict missing tokens. In our pre-training setting, as determined in Raffel et al. (2019) to be optimal, we drop out 15% of tokens and then replace the masked sequence with a single sentinel token. To compare our models, we record the negative log perplexity.4 Throughout all tables in the paper, ↑ indicates that a higher value for that metric is better and vice-versa for ↓. A comparison of all the models studied in this work is in Table 9.

A head-to-head comparison of the Switch Transformer and the MoE Transformer is presented in Table 1. Our Switch Transformer model is FLOP-matched to 'T5-Base' (Raffel et al., 2019) (the same amount of computation per token is applied). The MoE Transformer, using top-2 routing, has two experts which each apply a separate FFN to each token, and thus its FLOPs are larger. All models were trained for the same number of steps on identical hardware. Note that the MoE model going from capacity factor 2.0 to 1.25 actually slows down (840 to 790) in the above experiment setup, which is unexpected.5 We highlight three key findings from Table 1: (1) Switch Transformers outperform both carefully tuned dense models and MoE Transformers on a speed-quality basis. For a fixed amount of computation and wall-clock time, Switch Transformers achieve the best result. (2) The Switch Transformer has a smaller computational footprint than the MoE counterpart. If we increase its size to match the training speed of the MoE Transformer, we find that it outperforms all MoE and dense models on a per-step basis as well. (3) Switch Transformers perform better at lower capacity factors (1.0, 1.25). Smaller expert capacities are indicative of the large-model regime, where model memory is very scarce and the capacity factor should be made as small as possible.

## 2.4 Improved Training And Fine-Tuning Techniques

Sparse expert models may introduce training difficulties over a vanilla Transformer.
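To ground the discussion of these difficulties, the routing machinery of Sections 2.1 and 2.2 (top-1 expert selection, expert capacity, and the auxiliary load-balancing loss of Equations 4 through 6) is collected in the following illustrative sketch. It is a simplified, single-device PyTorch rendition written for readability; the ReLU expert FFNs, the per-expert Python loop, and the order in which overflowing tokens are dropped are placeholder choices, not details of the paper's Mesh-TensorFlow implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwitchLayer(nn.Module):
    """Toy top-1 (Switch) routing layer with expert capacity and the auxiliary
    load-balancing loss of Equations 4-6. Single-device and unoptimized."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int,
                 capacity_factor: float = 1.0, alpha: float = 1e-2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts, bias=False)   # W_r
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.num_experts = num_experts
        self.capacity_factor = capacity_factor
        self.alpha = alpha

    def forward(self, x: torch.Tensor):
        # x: [num_tokens, d_model], a flattened batch of token representations.
        num_tokens = x.shape[0]
        probs = F.softmax(self.router(x), dim=-1)        # Equation 1: p_i(x)
        gate, expert_index = probs.max(dim=-1)           # top-1 routing (k = 1)

        # Equation 3: expert capacity (max(1, .) only guards tiny toy batches).
        capacity = max(1, int(self.capacity_factor * num_tokens / self.num_experts))

        # Equations 4-6: aux loss = alpha * N * sum_i f_i * P_i, where f_i is the
        # fraction of tokens dispatched to expert i and P_i is its mean router prob.
        f = torch.bincount(expert_index, minlength=self.num_experts).float() / num_tokens
        P = probs.mean(dim=0)
        aux_loss = self.alpha * self.num_experts * torch.sum(f * P)

        # Dispatch each token to its chosen expert, respecting the capacity.
        # Overflowed ("dropped") tokens produce zeros here; the surrounding
        # Transformer block's residual connection carries them forward unchanged.
        output = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            token_ids = torch.nonzero(expert_index == i, as_tuple=False).squeeze(-1)
            kept = token_ids[:capacity]                  # drop tokens beyond capacity
            if kept.numel() > 0:
                # Equation 2 with a single selected expert, scaled by the gate value.
                output[kept] = gate[kept].unsqueeze(-1) * expert(x[kept])
        return output, aux_loss


if __name__ == "__main__":
    layer = SwitchLayer(d_model=64, d_ff=256, num_experts=8, capacity_factor=1.25)
    tokens = torch.randn(32, 64)
    y, aux = layer(tokens)
    print(y.shape, aux.item())
```

The hard argmax selection visible here is exactly the "hard-switching" decision discussed next, and the distributed implementation of Section 2.2 replaces the per-expert loop with batched dispatch and combine tensors of static shape.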
Instability can result because of the hard-switching (routing) decisions at each of these layers. Further, low-precision formats like bfloat16 (Wang and Kanwar, 2019) can exacerbate issues in the softmax computation for our router. We describe training difficulties here and the methods we use to overcome them to achieve stable and scalable training.

| Model | Capacity Factor | Quality after 100k steps (↑) (Neg. Log Perp.) | Time to Quality Threshold (↓) (hours) | Speed (↑) (examples/sec) |
|------------------|-----------------|-----------------------------------------------|----------------------------------------|--------------------------|
| T5-Base | - | -1.731 | Not achieved† | 1600 |
| T5-Large | - | -1.550 | 131.1 | 470 |
| MoE-Base | 2.0 | -1.547 | 68.7 | 840 |
| Switch-Base | 2.0 | -1.554 | 72.8 | 860 |
| MoE-Base | 1.25 | -1.559 | 80.7 | 790 |
| Switch-Base | 1.25 | -1.553 | 65.0 | 910 |
| MoE-Base | 1.0 | -1.572 | 80.1 | 860 |
| Switch-Base | 1.0 | -1.561 | 62.8 | 1000 |
| Switch-Base+ | 1.0 | -1.534 | 67.6 | 780 |

Table 1: Benchmarking Switch versus MoE. Head-to-head comparison measuring per-step and per-time benefits of the Switch Transformer over the MoE Transformer and T5 dense baselines. We measure quality by the negative log perplexity and the time to reach an arbitrarily chosen quality threshold of Neg. Log Perp. = -1.50. All MoE and Switch Transformer models use 128 experts, with experts at every other feed-forward layer. For Switch-Base+, we increase the model size until it matches the speed of the MoE model by increasing the model hidden-size from 768 to 896 and the number of heads from 14 to 16. All models are trained with the same amount of computation (32 cores) and on the same hardware (TPUv3). Further note that all our models required pre-training beyond 100k steps to reach our threshold of -1.50. † T5-Base did not achieve this negative log perplexity within the 100k steps for which the models were trained.

Selective precision with large sparse models. Model instability hinders the ability to train using efficient bfloat16 precision, and as a result, Lepikhin et al. (2020) train with float32 precision throughout their MoE Transformer. However, we show that by instead selectively casting to float32 precision within a localized part of the model, stability may be achieved without incurring the expensive communication cost of float32 tensors. This technique is in line with modern mixed-precision training strategies, where certain parts of the model and gradient updates are done in higher precision (Micikevicius et al., 2017). Table 2 shows that our approach permits nearly equal speed to bfloat16 training while conferring the training stability of float32.

To achieve this, we cast the router input to float32 precision. The router function takes the tokens as input and produces the dispatch and combine tensors used for the selection and recombination of expert computation (refer to Code Block 15 in the Appendix for details). Importantly, the float32 precision is only used *within* the body of the router function—on computations local to that device. Because the resulting dispatch and combine tensors are recast to bfloat16 precision at the end of the function, no expensive float32 tensors

| Model | Quality | Speed |
|-----------------------------------|----------------------|--------------------|
| (precision) | (Neg. Log Perp.)
(↑) | (Examples/sec) (↑) | +| Switch-Base (float32) | -1.718 | 1160 | +| Switch-Base (bfloat16) | -3.780 [diverged] | 1390 | +| Switch-Base (Selective precision) | -1.716 | 1390 | + +Table 2: Selective precision. We cast the local routing operations to float32 while preserving +bfloat16 precision elsewhere to stabilize our model while achieving nearly equal +speed to (unstable) bfloat16-precision training. We measure the quality of a 32 expert model after a fixed step count early in training its speed performance. For +both Switch-Base in float32 and with Selective prevision we notice similar learning dynamics. +are broadcast through all-to-all communication operations, but we still benefit from the increased stability of float32. + +Smaller parameter initialization for stability. Appropriate initialization is critical to successful training in deep learning and we especially observe this to be true for Switch Transformer. We initialize our weight matrices by drawing elements from a truncated normal distribution with mean µ = 0 and standard deviation σ =ps/n where s is a scale hyper-parameter and n is the number of input units in the weight tensor (e.g. fan-in).6 As an additional remedy to the instability, we recommend reducing the default Transformer initialization scale s = 1.0 by a factor of 10. This both improves quality and reduces the likelihood of destabilized training in our experiments. Table 3 measures the improvement of the model quality and reduction of the variance early in training. We find that + +| Model (Initialization scale) | Average Quality | Std. Dev. of Quality | +|--------------------------------|-------------------|------------------------| +| (Neg. Log Perp.) | (Neg. Log Perp.) | | +| Switch-Base (0.1x-init) | -2.72 | 0.01 | +| Switch-Base (1.0x-init) | -3.60 | 0.68 | + +Table 3: Reduced initialization scale improves stability. Reducing the initialization scale +results in better model quality and more stable training of Switch Transformer. Here we record the average and standard deviation of model quality, measured by the negative log perplexity, of a 32 expert model after 3.5k steps (3 random seeds each). +the average model quality, as measured by the Neg. Log Perp., is dramatically improved and there is a far reduced variance across runs. Further, this same initialization scheme is broadly effective for models spanning several orders of magnitude. We use the same approach to stably train models as small as our 223M parameter baseline to enormous models in excess of one trillion parameters. + +Regularizing large sparse models. Our paper considers the common NLP approach of pre-training on a large corpus followed by fine-tuning on smaller downstream tasks such as summarization or question answering. One issue that naturally arises is overfitting since many fine-tuning tasks have very few examples. During fine-tuning of standard Transformers, Raffel et al. (2019) use dropout (Srivastava et al., 2014) at each layer to prevent overfitting. Our Switch Transformers have significantly more parameters than the FLOP +matched dense baseline, which can lead to more severe overfitting on these smaller downstream tasks. 
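The two stability remedies just described, selective float32 precision inside the router and the smaller (0.1x) initialization scale, can be illustrated with a brief PyTorch-style sketch. This is an assumption-laden rendition for clarity, not the Mesh-TensorFlow code used in the paper, and it sidesteps the details of interacting with an automatic mixed-precision context.

```python
import math
import torch
import torch.nn as nn


def scaled_trunc_normal_(weight: torch.Tensor, scale: float = 0.1) -> None:
    """Truncated-normal init with std = sqrt(s / n), n = fan-in, using the reduced
    scale s = 0.1 recommended above (instead of the default s = 1.0)."""
    fan_in = weight.shape[1]
    std = math.sqrt(scale / fan_in)
    nn.init.trunc_normal_(weight, mean=0.0, std=std, a=-2 * std, b=2 * std)


class SelectivePrecisionRouter(nn.Module):
    """Router whose probabilities are computed locally in float32 and then cast
    back to the activation dtype (e.g. bfloat16) before leaving the router."""

    def __init__(self, d_model: int, num_experts: int):
        super().__init__()
        self.w_r = nn.Linear(d_model, num_experts, bias=False)
        scaled_trunc_normal_(self.w_r.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.w_r(x.float())           # local float32 computation
        probs = torch.softmax(logits, dim=-1)  # stable float32 softmax
        return probs.to(x.dtype)               # recast before any communication


router = SelectivePrecisionRouter(d_model=768, num_experts=128)
tokens = torch.randn(16, 768, dtype=torch.bfloat16)
print(router(tokens).dtype)   # torch.bfloat16
```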
| Model (dropout) | GLUE | CNNDM | SQuAD | SuperGLUE |
|-----------------------------|--------|---------|---------|-------------|
| T5-Base (d=0.1) | 82.9 | 19.6 | 83.5 | 72.4 |
| Switch-Base (d=0.1) | 84.7 | 19.1 | 83.7 | 73.0 |
| Switch-Base (d=0.2) | 84.4 | 19.2 | 83.9 | 73.2 |
| Switch-Base (d=0.3) | 83.9 | 19.6 | 83.4 | 70.7 |
| Switch-Base (d=0.1, ed=0.4) | 85.2 | 19.6 | 83.7 | 73.0 |

Table 4: Fine-tuning regularization results. A sweep of dropout rates while fine-tuning Switch Transformer models pre-trained on 34B tokens of the C4 data set (higher numbers are better). We observe that using a lower standard dropout rate at all non-expert layers, combined with a much larger dropout rate on the expert feed-forward layers, performs best.

We thus propose a simple way to alleviate this issue during fine-tuning: increase the dropout inside the experts, which we name *expert dropout*. During fine-tuning we simply increase the dropout rate by a significant amount only at the interim feed-forward computation at each expert layer. Table 4 shows the results for our expert dropout protocol.

We observe that simply increasing the dropout across all layers leads to worse performance. However, setting a smaller dropout rate (0.1) at non-expert layers and a much larger dropout rate (0.4) at expert layers leads to performance improvements on four smaller downstream tasks.

## 3. Scaling Properties

We present a study of the *scaling properties* of the Switch Transformer architecture during pre-training. Per Kaplan et al. (2020), we consider a regime where the model is not bottlenecked by either the computational budget or the amount of data. To avoid the data bottleneck, we use the large C4 corpus with over 180B target tokens (Raffel et al., 2019), and we train until diminishing returns are observed.

The number of experts is the most efficient dimension for scaling our model. Increasing the number of experts keeps the computational cost approximately fixed, since the model only selects one expert per token, regardless of the number of experts to choose from. The router must compute a probability distribution over more experts; however, this is a lightweight computation of cost O(d_model × num_experts), where d_model is the embedding dimension of tokens passed between the layers. In this section, we consider the scaling properties on a step-basis and a time-basis with a fixed computational budget.

## 3.1 Scaling Results On A Step-Basis

Figure 4 demonstrates consistent scaling benefits with the number of experts when training all models for a fixed number of steps. We observe a clear trend: when keeping the FLOPS per token fixed, having more parameters (experts) speeds up training. The left plot shows consistent scaling properties (with fixed FLOPS per token) between sparse model parameters and test loss, revealing the advantage of scaling along this additional axis of sparse model parameters. The right plot measures the sample efficiency of a dense model variant and four FLOP-matched sparse variants. We find that increasing the number of experts leads to more sample-efficient models. Our Switch-Base 64 expert model achieves at step 60k the same performance that the T5-Base model reaches at step 450k, a 7.5x speedup in terms of step time. In addition, consistent with the findings of Kaplan et al. (2020), we find that larger models are also more *sample efficient*, learning more quickly for a fixed number of observed tokens.
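To make the routing cost discussed above concrete, top-1 routing reduces to a single [d_model, num_experts] matrix product per token, followed by a softmax and an argmax. A minimal NumPy sketch (illustrative only; the names and random weights are placeholders rather than the implementation used here):

```
import numpy as np

tokens, d_model, num_experts = 4, 768, 128

rng = np.random.default_rng(0)
x = rng.standard_normal((tokens, d_model))               # token representations
router_w = rng.standard_normal((d_model, num_experts))   # router weights

logits = x @ router_w                       # cost O(d_model * num_experts) per token
logits -= logits.max(axis=-1, keepdims=True)
probs = np.exp(logits)
probs /= probs.sum(axis=-1, keepdims=True)  # softmax over experts

expert_index = probs.argmax(axis=-1)        # top-1 expert chosen for each token
expert_gate = probs.max(axis=-1)            # gate value used to scale that expert's output
```

Doubling the number of experts only widens `router_w`, leaving the per-token FLOPs of the experts themselves unchanged.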
+ +![11_image_0.png](11_image_0.png) + +Figure 4: Scaling properties of the Switch Transformer. Left Plot: We measure the quality improvement, as measured by perplexity, as the parameters increase by scaling the number of experts. The top-left point corresponds to the T5-Base model with 223M parameters. Moving from top-left to bottom-right, we double the number of experts from 2, 4, 8 and so on until the bottom-right point of a 256 expert model with 14.7B parameters. Despite all models using an equal computational budget, we observe consistent improvements scaling the number of experts. Right Plot: Negative log perplexity per step sweeping over the number of experts. The dense baseline is shown with the purple line and we note improved sample efficiency of our Switch-Base models. + +## 3.2 Scaling Results On A Time-Basis + +Figure 4 demonstrates that on a step basis, as we increase the number of experts, the performance consistently improves. While our models have roughly the same amount of FLOPS per token as the baseline, our Switch Transformers incurs additional communication costs across devices as well as the extra computation of the routing mechanism. Therefore, the increased sample efficiency observed on a step-basis doesn't necessarily translate to a better model quality as measured by wall-clock. This raises the question: +For a fixed training duration and computational budget, should one train a dense or a sparse model? + +![12_image_0.png](12_image_0.png) +Figure 5: Speed advantage of Switch Transformer. All models trained on 32 TPUv3 cores with equal FLOPs per example. For a fixed amount of computation and training time, Switch Transformers significantly outperform the dense Transformer baseline. Our 64 expert Switch-Base model achieves the same quality in *one-seventh* +the time of the T5-Base and continues to improve. + +Figures 5 and 6 address this question. Figure 5 measures the pre-training model quality as a function of time. For a fixed training duration and computational budget, Switch Transformers yield a substantial speed-up. In this setting, our Switch-Base 64 expert model trains in *one-seventh* the time that it would take the T5-Base to get similar perplexity. + +## 3.3 Scaling Versus A Larger Dense Model + +The above analysis shows that a computationally-matched dense model is outpaced by its Switch counterpart. Figure 6 considers a different scenario: what if we instead had allocated our resources to a larger dense model? We do so now, measuring Switch-Base against the next strong baseline, *T5-Large*. But despite T5-Large applying 3.5x more FLOPs per token, Switch-Base is still more sample efficient and yields a 2.5x speedup. Furthermore, more gains can be had simply by designing a new, larger sparse version, Switch-Large, which is FLOP-matched to T5-Large. We do this and demonstrate superior scaling and fine-tuning in the following section. + +![13_image_0.png](13_image_0.png) + +## 4. Downstream Results + +Section 3 demonstrated the superior scaling properties while pre-training, but we now validate that these gains translate to improved language learning abilities on downstream tasks. We begin by fine-tuning on a diverse set of NLP tasks. Next we study reducing the memory footprint of our sparse models by over 90% by distilling into small—and easily deployed—dense baselines. 
Finally, we conclude this section by measuring the improvements in a multi-task, multilingual setting, where we show that Switch Transformers are strong multi-task learners, improving over the multilingual T5-Base model across all 101 languages.

## 4.1 Fine-Tuning

Baseline and Switch models used for fine-tuning. Our baselines are the highly-tuned 223M parameter T5-Base model and the 739M parameter T5-Large model (Raffel et al., 2019). For both versions, we design a FLOP-matched Switch Transformer with many more parameters, which is summarized in Table 9 (FLOPS are calculated for the forward pass, as in Kaplan et al. (2020)). Our baselines differ slightly from those in Raffel et al. (2019) because we pre-train on an improved C4 corpus which removes intra-example text duplication and thus increases its efficacy as a pre-training task (Lee et al., 2021). In our protocol we pre-train with 2^20 (1,048,576) tokens per batch for 550k steps, amounting to 576B total tokens. We then fine-tune across a diverse set of tasks using a dropout rate of 0.1 for all layers except the Switch layers, which use a dropout rate of 0.4 (see Table 4). We fine-tune using a batch size of 1M tokens for 16k steps, and for each task we evaluate model quality every 200 steps and report the peak performance as computed on the validation set.

Fine-tuning tasks and data sets. We select tasks probing language capabilities including question answering, summarization and knowledge about the world. The language benchmarks GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) are handled as composite mixtures, with all of the tasks blended in proportion to the amount of tokens present in each. These benchmarks consist of tasks requiring sentiment analysis (SST-2), word sense disambiguation (WIC), sentence similarity (MRPC, STS-B, QQP), natural language inference (MNLI, QNLI, RTE, CB), question answering (MultiRC, RECORD, BoolQ), coreference resolution (WNLI, WSC), sentence completion (COPA) and sentence acceptability (CoLA). The CNNDM (Hermann et al., 2015) and BBC XSum (Narayan et al., 2018) data sets are used to measure the ability to summarize articles. Question answering is probed with the SQuAD data set (Rajpurkar et al., 2016) and the ARC Reasoning Challenge (Clark et al., 2018). As in Roberts et al. (2020), we evaluate the knowledge of our models by fine-tuning on three closed-book question answering data sets: Natural Questions (Kwiatkowski et al., 2019), Web Questions (Berant et al., 2013) and Trivia QA (Joshi et al., 2017). Closed-book refers to questions posed with no supplemental reference or context material. To gauge the model's common sense reasoning we evaluate it on the Winogrande Schema Challenge (Sakaguchi et al., 2020). Finally, we test our model's natural language inference capabilities on the Adversarial NLI Benchmark (Nie et al., 2019).

Fine-tuning metrics. The following evaluation metrics are used throughout the paper: we report the average scores across all subtasks for GLUE and SuperGLUE. The Rouge-2 metric is used for both CNNDM and XSum. For SQuAD and the closed-book tasks (Web, Natural, and Trivia Questions) we report the percentage of answers exactly matching the target (refer to Roberts et al. (2020) for further details on, and the deficiencies of, this measure). Finally, for ARC Easy, ARC Challenge, ANLI, and Winogrande we report the accuracy of the generated responses.

Fine-tuning results. We observe significant downstream improvements across many natural language tasks.
Notable improvements come from SuperGLUE, where we find FLOP-matched Switch variants improve by 4.4 and 2 percentage points over the T5-Base and T5-Large baselines, respectively as well as large improvements in Winogrande, closed book Trivia QA, and XSum.8In our fine-tuning study, the only tasks where we do not observe gains are on the AI2 Reasoning Challenge (ARC) data sets where the T5-Base outperforms Switch-Base on the challenge data set and T5-Large outperforms Switch-Large on the easy data set. Taken as a whole, we observe significant improvements spanning both reasoning and knowledge-heavy tasks. This validates our architecture, not just as one that pre-trains well, but can translate quality improvements to downstream tasks via fine-tuning. + +| Model | GLUE | SQuAD | SuperGLUE | Winogrande (XL) | +|--------------|-----------|---------------|--------------|-------------------| +| T5-Base | 84.3 | 85.5 | 75.1 | 66.6 | +| Switch-Base | 86.7 | 87.2 | 79.5 | 73.3 | +| T5-Large | 87.8 | 88.1 | 82.7 | 79.1 | +| Switch-Large | 88.5 | 88.6 | 84.7 | 83.0 | +| Model | XSum | ANLI (R3) | ARC Easy | ARC Chal. | +| T5-Base | 18.7 | 51.8 | 56.7 | 35.5 | +| Switch-Base | 20.3 | 54.0 | 61.3 | 32.8 | +| T5-Large | 20.9 | 56.6 | 68.8 | 35.5 | +| Switch-Large | 22.3 | 58.6 | 66.0 | 35.5 | +| Model | CB Web QA | CB Natural QA | CB Trivia QA | | +| T5-Base | 26.6 | 25.8 | 24.5 | | +| Switch-Base | 27.4 | 26.8 | 30.7 | | +| T5-Large | 27.7 | 27.6 | 29.5 | | +| Switch-Large | 31.3 | 29.5 | 36.9 | | + +Table 5: Fine-tuning results. Fine-tuning results of T5 baselines and Switch models across a diverse set of natural language tests (validation sets; higher numbers are better). + +We compare FLOP-matched Switch models to the T5-Base and T5-Large baselines. For most tasks considered, we find significant improvements of the Switchvariants. We observe gains across both model sizes and across both reasoning and knowledge-heavy language tasks. + +## 4.2 Distillation + +Deploying massive neural networks with billions, or trillions, of parameters is inconvenient. + +To alleviate this, we study distilling (Hinton et al., 2015) large sparse models into small dense models. Future work could additionally study distilling large models into smaller sparse models. + +Distillation techniques. In Table 6 we study a variety of distillation techniques. + +These techniques are built off of Sanh et al. (2019), who study distillation methods for BERT models. We find that initializing the dense model with the non-expert weights yields a modest improvement. This is possible since all models are FLOP matched, so non-expert layers will have the same dimensions. Since expert layers are usually only added at every or every other FFN layer in a Transformer, this allows for many of the weights to be initialized with trained parameters. Furthermore, we observe a distillation improvement using a mixture of 0.25 for the teacher probabilities and 0.75 for the ground truth label. By combining both techniques we preserve ≈ 30% of the quality gains from the larger sparse models with only ≈ 1/20th of the parameters. The quality gain refers to the percent of + +| Technique | Parameters | Quality (↑) | +|---------------------------------------------------------------------------------|--------------|---------------| +| T5-Base | 223M | -1.636 | +| Switch-Base | 3,800M | -1.444 | +| Distillation | 223M | (3%) -1.631 | +| + Init. 
non-expert weights from teacher | 223M | (20%) -1.598 | +| + 0.75 mix of hard and soft loss | 223M | (29%) -1.580 | +| Initialization Baseline (no distillation) Init. non-expert weights from teacher | 223M | -1.639 | + +the quality difference between Switch-Base (Teacher) and T5-Base (Student). Therefore, a quality gain of 100% implies the Student equals the performance of the Teacher. + +Table 6: Distilling Switch Transformers for Language Modeling. Initializing T5-Base with the non-expert weights from Switch-Base and using a loss from a mixture of teacher and ground-truth labels obtains the best performance. We can distill 30% of the performance improvement of a large sparse model with 100x more parameters back into a small dense model. For a final baseline, we find no improvement of T5-Base initialized with the expert weights, but trained normally without distillation. + +Achievable compression rates. Using our best distillation technique described in Table 6, we distill a wide variety of sparse models into dense models. We distill SwitchBase versions, sweeping over an increasing number of experts, which corresponds to varying between 1.1B to 14.7B parameters. Through distillation, we can preserve 37% of the quality gain of the 1.1B parameter model while compressing 82%. At the extreme, where we compress the model 99%, we are still able to maintain 28% of the teacher's model quality improvement. + +Distilling a fine-tuned model. We conclude this with a study of distilling a finetuned sparse model into a dense model. Table 8 shows results of distilling a 7.4B parameter Switch-Base model, fine-tuned on the SuperGLUE task, into the 223M T5-Base. Similar to our pre-training results, we find we are able to preserve 30% of the gains of the sparse model when distilling into a FLOP matched dense variant. One potential future avenue, not considered here, may examine the specific experts being used for fine-tuning tasks and extracting them to achieve better model compression. + +## 4.3 Multilingual Learning + +In our final set of downstream experiments, we measure the model quality and speed tradeoffs while pre-training on a mixture of 101 different languages. We build and benchmark off the recent work of mT5 (Xue et al., 2020), a multilingual extension to T5. We pre-train on the multilingual variant of the Common Crawl data set (mC4) spanning 101 languages introduced in mT5, but due to script variants within certain languages, the mixture contains 107 tasks. + +In Figure 7 we plot the quality improvement in negative log perplexity for all languages of a FLOP-matched Switch model, mSwitch-Base to the T5 base variant, mT5-Base. After + +| Dense | Sparse | | | | | | +|--------------------------------|----------|--------|--------|--------|--------|--------| +| Parameters | 223M | 1.1B | 2.0B | 3.8B | 7.4B | 14.7B | +| Pre-trained Neg. Log Perp. (↑) | -1.636 | -1.505 | -1.474 | -1.444 | -1.432 | -1.427 | +| Distilled Neg. Log Perp. (↑) | - | -1.587 | -1.585 | -1.579 | -1.582 | -1.578 | +| Percent of Teacher Performance | - | 37% | 32% | 30 % | 27 % | 28 % | +| Compression Percent | - | 82 % | 90 % | 95 % | 97 % | 99 % | + +Table 7: Distillation compression rates. We measure the quality when distilling large sparse +models into a dense baseline. Our baseline, T5-Base, has a -1.636 Neg. Log Perp. +quality. In the right columns, we then distill increasingly large sparse models into this same architecture. 
Through a combination of weight-initialization and a mixture of hard and soft losses, we can shrink our sparse teachers by 95%+ while preserving 30% of the quality gain. However, for significantly better and larger pre-trained teachers, we expect that larger student models would be necessary to achieve these compression rates.

| Model | Parameters | FLOPS | SuperGLUE (↑) |
|-------------------|--------------|---------|-----------------|
| T5-Base | 223M | 124B | 74.6 |
| Switch-Base | 7410M | 124B | 81.3 |
| Distilled T5-Base | 223M | 124B | (30%) 76.6 |

Table 8: Distilling a fine-tuned SuperGLUE model. We distill a Switch-Base model fine-tuned on the SuperGLUE tasks into a T5-Base model. We observe that on smaller data sets our large sparse model can be an effective teacher for distillation. We find that we again achieve 30% of the teacher's performance on a 97% compressed model.

pre-training both versions for 1M steps, we find that on all 101 languages considered, Switch Transformer increases the final negative log perplexity over the baseline. In Figure 8, we present a different view and histogram the per-step *speed-up* of using Switch Transformer over the mT5-Base. We find a mean speed-up over mT5-Base of 5x, and that 91% of languages achieve at least a 4x speedup. This presents evidence that Switch Transformers are effective multi-task and multilingual learners.

![18_image_0.png](18_image_0.png)

Figure 7: Multilingual pre-training on 101 languages. Improvements of the Switch T5 Base model over the dense baseline when multi-task training on 101 languages. We observe Switch Transformers to do quite well in the multi-task training setup, yielding improvements on all 101 languages.

![18_image_1.png](18_image_1.png)

Figure 8: Multilingual pre-training on 101 languages. For each language, we histogram the step speedup of Switch Transformers over the FLOP-matched T5 dense baseline to reach the same quality. Over all 101 languages, we achieve a mean step speedup over mT5-Base of 5x and, for 91% of languages, we record a 4x or greater speedup to reach the final perplexity of mT5-Base.

## 5. Designing Models With Data, Model, And Expert-Parallelism

Arbitrarily increasing the number of experts is subject to diminishing returns (Figure 4). Here we describe *complementary* scaling strategies. The common way to scale a Transformer is to increase dimensions in tandem, like d_model or d_ff. This increases both the parameters and the computation performed, and is ultimately limited by the memory per accelerator. Once it exceeds the size of the accelerator's memory, single program multiple data (SPMD) model-parallelism can be employed. This section studies the trade-offs of combining data, model, and expert-parallelism.

Reviewing the Feed-Forward Network (FFN) Layer. We use the FFN layer as an example of how data, model and expert-parallelism work in Mesh TensorFlow (Shazeer et al., 2018) and review it briefly here. We assume B tokens in the batch, each of dimension d_model. Both the input (x) and output (y) of the FFN are of size [B, d_model], and the intermediate (h) is of size [B, d_ff], where d_ff is typically several times larger than d_model. In the FFN, the intermediate is h = x W_in and the output of the layer is y = ReLU(h) W_out. Thus W_in and W_out are applied independently to each token and have sizes [d_model, d_ff] and [d_ff, d_model].
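As a concrete reference for these shapes, the dense FFN just described can be sketched in a few lines of NumPy (a sketch only; the weights below are random placeholders rather than trained parameters):

```
import numpy as np

B, d_model, d_ff = 8, 768, 2048            # batch of B tokens, shapes from the text

rng = np.random.default_rng(0)
x = rng.standard_normal((B, d_model))      # input  [B, d_model]
W_in = rng.standard_normal((d_model, d_ff))
W_out = rng.standard_normal((d_ff, d_model))

h = x @ W_in                               # intermediate [B, d_ff]
y = np.maximum(h, 0.0) @ W_out             # ReLU, then project back to [B, d_model]

assert y.shape == (B, d_model)
```

Data, model and expert-parallelism then differ only in how the x, W_in and W_out tensors are sharded across cores, which the following subsections walk through.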
+ +We describe two aspects of partitioning: how the *weights* and *batches of data* divide over cores, depicted in Figure 9. We denote all cores available as N which Mesh Tensorflow may then remap into a logical multidimensional mesh of processors. Here we create a two-dimensional logical mesh, with one dimension representing the number of ways for data-parallel sharding (n) and the other, the model-parallel sharding (m). The total cores must equal the ways to shard across both data and model-parallelism, e.g. N = n × m. + +To shard the layer across cores, the tensors containing that batch of B tokens are sharded across n data-parallel cores, so each core contains B/n tokens. Tensors and variables with df f are then sharded across m model-parallel cores. For the variants with experts-layers, we consider E experts, each of which can process up to C tokens. + +| Term | Description | +|--------|-------------------------------------------------| +| B | Number of tokens in the batch. | +| N | Number of total cores. | +| n | Number of ways for data-parallelism sharding. | +| m | Number of ways for model-parallelism sharding. | +| E | Number of experts in Switch layers. | +| C | Expert capacity, the batch size of each expert. | + +## 5.1 Data Parallelism + +When training data parallel models, which is the standard for distributed training, then all cores are allocated to the data-parallel dimension or n = *N, m* = 1. This has the advantage that no communication is needed until the entire forward and backward pass is finished and the gradients need to be then aggregated across all cores. This corresponds to the left-most column of Figure 9. + +## 5.2 Model Parallelism + +We now consider a scenario where all cores are allocated exclusively to the model-parallel dimension and so n = 1, m = N. Now all cores must keep the full B tokens and each core will contain a unique slice of the weights. For each forward and backward pass, a communication cost is now incurred. Each core sends a tensor of [B, d*model*] to compute the second matrix multiplication ReLU(h)Wout because the df f dimension is partitioned and must be summed over. As a general rule, whenever a dimension that is partitioned across cores must be summed, then an all-reduce operation is added for both the forward and backward pass. This contrasts with pure data parallelism where an all-reduce only occurs at the end of the entire forward and backward pass. + +![20_image_0.png](20_image_0.png) + +![20_image_1.png](20_image_1.png) + +Figure 9: Data and weight partitioning strategies. Each 4×4 dotted-line grid represents 16 +cores and the shaded squares are the data contained on that core (either model weights or batch of tokens). We illustrate both how the model weights and the +data tensors are split for each strategy. First Row: illustration of how *model* weights are split across the cores. Shapes of different sizes in this row represent +larger weight matrices in the Feed Forward Network (FFN) layers (e.g larger df f +sizes). Each color of the shaded squares identifies a unique weight matrix. The +number of parameters *per core* is fixed, but larger weight matrices will apply more computation to each token. Second Row: illustration of how the data batch is split across cores. Each core holds the same number of tokens which maintains a fixed memory usage across all strategies. 
The partitioning strategies differ in whether each core holds the same tokens or different tokens across cores, which is what the different colors symbolize.

## 5.3 Model And Data Parallelism

It is common to mix both model and data parallelism for large scale models, which was done in the largest T5 models (Raffel et al., 2019; Xue et al., 2020) and in GPT-3 (Brown et al., 2020). With a total of N = n × m cores, each core will now be responsible for B/n tokens and d_ff/m of both the weights and the intermediate activation. In the forward and backward pass each core communicates a tensor of size [B/n, d_model] in an all-reduce operation.

## 5.4 Expert And Data Parallelism

Next we describe the partitioning strategy for expert and data parallelism. Switch Transformers allocate all of their cores to the data partitioning dimension n, which also corresponds to the number of experts in the model. For each token per core, a router locally computes assignments to the experts. The output is a binary matrix of size [n, B/n, E, C], which is partitioned across the first dimension and determines expert assignment. This binary matrix is used to do a gather via matrix multiplication with the input tensor of size [n, B/n, d_model],

einsum([n, B/n, d_model], [n, B/n, E, C], dimension = [B/n])    (7)

resulting in the final tensor of shape [n, E, C, d_model], which is sharded across the first dimension. Because each core has its own expert, we do an all-to-all communication of size [E, C, d_model] to now shard the E dimension instead of the n dimension. There are additional communication costs of bfloat16 tensors of size E × C × d_model in the forward pass to analogously receive the tokens from each expert located on a different core. See Appendix F for a detailed analysis of the expert partitioning code.

## 5.5 Expert, Model And Data Parallelism

In the design of our best model, we seek to balance the FLOPS per token and the parameter count. When we scale the number of experts, we increase the number of parameters, but do not change the FLOPs per token. In order to increase FLOPs, we must also increase the d_ff dimension (which also increases parameters, but at a slower rate). This presents a trade-off: as we increase d_ff we will run out of memory per core, which then necessitates increasing m. But since we have a fixed number of cores N, and N = n × m, we must decrease n, which forces the use of a smaller batch size (in order to hold tokens per core constant).

When combining both model and expert-parallelism, we will have all-to-all communication costs from routing the tokens to the correct experts, along with the internal all-reduce communications from the model parallelism. Balancing the FLOPS, communication costs and memory per core becomes quite complex when combining all three methods, and the best mapping is determined empirically. See our further analysis in Section 5.6 for how the number of experts affects downstream performance as well.

## 5.6 Towards Trillion Parameter Models

Combining expert, model and data parallelism, we design two large Switch Transformer models, one with 395 billion parameters and one with 1.6 trillion parameters. We study how these models perform on both upstream pre-training as language models and downstream fine-tuning. The parameters, FLOPs per sequence and hyper-parameters of the two different models are listed below in Table 9.
Standard hyper-parameters of the Transformer, including dmodel, df f , dkv, number of heads and number of layers are described, as well as a less common feature, F F N*GEGLU* , which refers to a variation of the FFN layer where the expansion matrix is substituted with two sets of weights which are non-linearly combined (Shazeer, 2020). + +The Switch-C model is designed using only expert-parallelism, and no model-parallelism, as described earlier in Section 5.4. As a result, the hyper-parameters controlling the width, + +| Model | Parameters | FLOPs/seq | dmodel | F F NGEGLU | df f | dkv | Num. Heads | +|--------------|--------------|-------------|-------------|----------------------|-----------------------|-------|--------------| +| T5-Base | 0.2B | 124B | 768 | X | 2048 | 64 | 12 | +| T5-Large | 0.7B | 425B | 1024 | X | 2816 | 64 | 16 | +| T5-XXL | 11B | 6.3T | 4096 | X | 10240 | 64 | 64 | +| Switch-Base | 7B | 124B | 768 | X | 2048 | 64 | 12 | +| Switch-Large | 26B | 425B | 1024 | X | 2816 | 64 | 16 | +| Switch-XXL | 395B | 6.3T | 4096 | X | 10240 | 64 | 64 | +| Switch-C | 1571B | 890B | 2080 | 6144 | 64 | 32 | | +| Model | Expert Freq. | Num. Layers | Num Experts | Neg. Log Perp. @250k | Neg. Log Perp. @ 500k | | | +| T5-Base | - | 12 | - | -1.599 | -1.556 | | | +| T5-Large | - | 24 | - | -1.402 | -1.350 | | | +| T5-XXL | - | 24 | - | -1.147 | -1.095 | | | +| Switch-Base | 1/2 | 12 | 128 | -1.370 | -1.306 | | | +| Switch-Large | 1/2 | 24 | 128 | -1.248 | -1.177 | | | +| Switch-XXL | 1/2 | 24 | 64 | -1.086 | -1.008 | | | +| Switch-C | 1 | 15 | 2048 | -1.096 | -1.043 | | | + +Table 9: Switch model design and pre-training performance. We compare the hyperparameters and pre-training performance of the T5 models to our Switch Transformer variants. The last two columns record the pre-training model quality on the C4 data set after 250k and 500k steps, respectively. We observe that the SwitchC Transformer variant is 4x faster to a fixed perplexity (with the same compute budget) than the T5-XXL model, with the gap increasing as training progresses. + +depth, number of heads, and so on, are all much smaller than the T5-XXL model. In contrast, the Switch-XXL is FLOP-matched to the T5-XXL model, which allows for larger dimensions of the hyper-parameters, but at the expense of additional communication costs induced by model-parallelism (see Section 5.5 for more details). + +Sample efficiency versus T5-XXL. In the final two columns of Table 9 we record the negative log perplexity on the C4 corpus after 250k and 500k steps, respectively. After 250k steps, we find both Switch Transformer variants to improve over the T5-XXL version's negative log perplexity by over 0.061.10 To contextualize the significance of a gap of 0.061, we note that the T5-XXL model had to train for an *additional* 250k steps to increase 0.052. The gap continues to increase with additional training, with the Switch-XXL model out-performing the T5-XXL by 0.087 by 500k steps. + +Training instability. However, as described in the introduction, large sparse models can be unstable, and as we increase the scale, we encounter some sporadic issues. We find that the larger Switch-C model, with 1.6T parameters and 2048 experts, exhibits no training instability at all. Instead, the Switch XXL version, with nearly 10x larger FLOPs per sequence, is sometimes unstable. As a result, though this is our better model on a step-basis, we do not pre-train for a full 1M steps, in-line with the final reported results of T5 (Raffel et al., 2019). 
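For reference, the FFN_GEGLU variant noted in Table 9 replaces the single expansion matrix of the standard FFN with two expansion matrices whose outputs are combined through a gated nonlinearity, following the GLU-variant formulation of Shazeer (2020). A minimal NumPy sketch (the dimensions and names here are illustrative placeholders, not the configuration of any model in Table 9):

```
import numpy as np

def gelu(x):
    # tanh approximation of the GELU nonlinearity
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn_geglu(x, W, V, W_out):
    """GEGLU feed-forward: gate GELU(xW) against the linear branch xV, then project."""
    return (gelu(x @ W) * (x @ V)) @ W_out

rng = np.random.default_rng(0)
tokens, d_model, d_ff = 2, 512, 1024
x = rng.standard_normal((tokens, d_model))
W = rng.standard_normal((d_model, d_ff))
V = rng.standard_normal((d_model, d_ff))
W_out = rng.standard_normal((d_ff, d_model))

y = ffn_geglu(x, W, V, W_out)   # shape [tokens, d_model]
```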
+ +Reasoning fine-tuning performance. As a preliminary assessment of the model quality, we use a Switch-XXL model partially pre-trained on 503B tokens, or approximately half the text used by the T5-XXL model. Using this checkpoint, we conduct multi-task training for efficiency, where all tasks are learned jointly, rather than individually fine-tuned. + +We find that SQuAD accuracy on the validation set increases to 89.7 versus state-of-the-art of 91.3. Next, the average SuperGLUE test score is recorded at 87.5 versus the T5 version obtaining a score of 89.3 compared to the state-of-the-art of 90.0 (Wang et al., 2019). On ANLI (Nie et al., 2019), Switch XXL improves over the prior state-of-the-art to get a 65.7 accuracy versus the prior best of 49.4 (Yang et al., 2020). We note that while the SwitchXXL has state-of-the-art Neg. Log Perp. on the upstream pre-training task, its gains have not yet fully translated to SOTA downstream performance. We study this issue more in Appendix E. + +Knowledge-based fine-tuning performance. Finally, we also conduct an early examination of the model's knowledge with three closed-book knowledge-based tasks: Natural Questions, WebQuestions and TriviaQA, without additional pre-training using Salient Span Masking (Guu et al., 2020). In all three cases, we observe improvements over the prior stateof-the-art T5-XXL model (without SSM). Natural Questions exact match increases to 34.4 versus the prior best of 32.8, Web Questions increases to 41.0 over 37.2, and TriviaQA increases to 47.5 versus 42.9. + +Summing up, despite training on less than half the data of other models, we already find comparable, and sometimes state-of-the-art, model quality. Currently, the Switch Transformer translates substantial upstream gains better to knowledge-based tasks, than reasoning-tasks (see Appendix E). Extracting stronger fine-tuning performance from large expert models is an active research question, and the pre-training perplexity indicates future improvements should be possible. + +## 6. Related Work + +The importance of scale in neural networks is widely recognized and several approaches have been proposed. Recent works have scaled models to billions of parameters through using model parallelism (e.g. splitting weights and tensors across multiple cores) (Shazeer et al., +2018; Rajbhandari et al., 2019; Raffel et al., 2019; Brown et al., 2020; Shoeybi et al., 2019). Alternatively, Harlap et al. (2018); Huang et al. (2019) propose using pipeline based model parallelism, where different layers are split across devices and micro-batches are *pipelined* to the different layers. Finally, Product Key networks (Lample et al., 2019) were proposed to scale up the capacity of neural networks by doing a lookup for learnable embeddings based on the incoming token representations to a given layer. + +Our work studies a specific model in a class of methods that do *conditional* computation, where computation decisions are made dynamically based on the input. Cho and Bengio (2014) proposed adaptively selecting weights based on certain bit patterns occuring in the model hidden-states. Eigen et al. (2013) built stacked expert layers with dense matrix multiplications and ReLU activations and showed promising results on jittered MNIST and monotone speech. In computer vision Puigcerver et al. (2020) manually route tokens based on semantic classes during upstream pre-training and then select the relevant experts to be used according to the downstream task. 
Mixture of Experts (MoE), in the context of modern deep learning architectures, was proven effective in Shazeer et al. (2017). That work added an MoE layer which was stacked between LSTM (Hochreiter and Schmidhuber, 1997) layers, and tokens were separately routed to combinations of experts. This resulted in state-of-the-art results on language modeling and machine translation benchmarks. The MoE layer was reintroduced into the Transformer architecture by the Mesh TensorFlow library (Shazeer et al., 2018), where MoE layers were introduced as a substitute for the FFN layers; however, there were no accompanying NLP results. More recently, through advances in machine learning infrastructure, GShard (Lepikhin et al., 2020), which extended the XLA compiler, used the MoE Transformer to dramatically improve machine translation across 100 languages. Finally, Fan et al. (2021) choose a different deterministic MoE strategy to split the model parameters into non-overlapping groups of languages.

Sparsity along the sequence length dimension (L) in the Transformer *attention patterns* has been a successful technique to reduce the attention complexity from O(L^2) (Child et al., 2019; Correia et al., 2019; Sukhbaatar et al., 2019; Kitaev et al., 2020; Zaheer et al., 2020; Beltagy et al., 2020). This has enabled learning longer sequences than previously possible. This version of the Switch Transformer does not employ attention sparsity, but these techniques are complementary and, as future work, could be combined to potentially improve learning on tasks requiring long contexts.

## 7. Discussion

We pose and discuss questions about the Switch Transformer, and sparse expert models generally, where sparsity refers to weights, not to attention patterns.

Isn't Switch Transformer better due to sheer parameter count? Yes, and by design! Parameters, independent of the total FLOPs used, are a useful axis along which to scale neural language models. Large models have been exhaustively shown to perform better (Kaplan et al., 2020). But in this case, our model is also more sample efficient and faster while using the same computational resources.

I don't have access to a supercomputer; is this still useful for me? Though this work has focused on extremely large models, we also find that models with as few as two experts improve performance while easily fitting within the memory constraints of commonly available GPUs or TPUs (details in Appendix D). We therefore believe our techniques are useful in small-scale settings.

Do sparse models outperform dense models on the speed-accuracy Pareto curve? Yes. Across a wide variety of model sizes, sparse models outperform dense models per step and on wall-clock time. Our controlled experiments show that for a fixed amount of computation and training time, sparse models outperform dense models.

I can't deploy a trillion parameter model; can we shrink these models? We cannot fully preserve the model quality, but compression rates of 10 to 100x are achievable by distilling our sparse models into dense models while retaining approximately 30% of the quality gain of the expert model.

Why use Switch Transformer instead of a model-parallel dense model? On a time basis, Switch Transformers can be far more efficient than dense models with sharded parameters (Figure 6). Also, we point out that this decision is not mutually exclusive: we can, and do, use model-parallelism in Switch Transformers, increasing the FLOPs per token but incurring the slowdown of conventional model-parallelism.
+ +Why aren't sparse models widely used already? The motivation to try sparse models has been stymied by the massive success of scaling dense models (the success of which is partially driven by co-adaptation with deep learning hardware as argued in Hooker (2020)). Further, sparse models have been subject to multiple issues including (1) model complexity, (2) training difficulties, and (3) communication costs. Switch Transformer makes strides to alleviate these issues. + +## 8. Future Work + +This paper lays out a simplified architecture, improved training procedures, and a study of how sparse models scale. However, there remain many open future directions which we briefly describe here: + +1. A significant challenge is further improving training stability for the largest models. +While our stability techniques were effective for our Switch-Base, Switch-Large and Switch-C models (no observed instability), they were not sufficient for Switch-XXL. +We have taken early steps towards stabilizing these models, which we think may be generally useful for large models, including using regularizers for improving stability +and adapted forms of gradient clipping, but this remains unsolved. +2. Generally we find that improved pre-training quality leads to better downstream results (Appendix E), though we sometimes encounter striking anomalies. For instance, +despite similar perplexities modeling the C4 data set, the 1.6T parameter Switch-C +achieves only an 87.7 exact match score in SQuAD, which compares unfavorably to +89.6 for the smaller Switch-XXL model. One notable difference is that the SwitchXXL model applies ≈10x the FLOPS per token than the Switch-C model, even though +it has ≈4x less unique parameters (395B vs 1.6T). This suggests a poorly understood +dependence between fine-tuning quality, *FLOPS per token* and *number of parameters*. +3. Perform a comprehensive study of scaling relationships to guide the design of architectures blending data, model and expert-parallelism. Ideally, given the specs of a hardware configuration (computation, memory, communication) one could more rapidly design an optimal model. And, vice versa, this may also help in the design of future hardware. + +4. Our work falls within the family of adaptive computation algorithms. Our approach always used identical, homogeneous experts, but future designs (facilitated by more flexible infrastructure) could support *heterogeneous* experts. This would enable more flexible adaptation by routing to larger experts when more computation is desired— perhaps for harder examples. + +5. Investigating expert layers outside the FFN layer of the Transformer. We find preliminary evidence that this similarly can improve model quality. In Appendix A, +we report quality improvement adding these inside Self-Attention layers, where our +layer replaces the weight matrices which produce Q, K, V. However, due to training instabilities with the bfloat16 format, we instead leave this as an area for future work. + +6. Examining Switch Transformer in new and across different modalities. We have thus +far only considered language, but we believe that model sparsity can similarly provide advantages in new modalities, as well as multi-modal networks. +This list could easily be extended, but we hope this gives a flavor for the types of challenges that we are thinking about and what we suspect are promising future directions. + +## 9. Conclusion + +Switch Transformers are scalable and effective natural language learners. 
We simplify Mixture of Experts to produce an architecture that is easy to understand, stable to train and vastly more sample efficient than equivalently-sized dense models. We find that these models excel across a diverse set of natural language tasks and in different training regimes, including pre-training, fine-tuning and multi-task training. These advances make it possible to train models with hundreds of billion to trillion parameters and which achieve substantial speedups relative to dense T5 baselines. We hope our work motivates sparse models as an effective architecture and that this encourages researchers and practitioners to consider these flexible models in natural language tasks, and beyond. + +## Acknowledgments + +The authors would like to thank Margaret Li who provided months of key insights into algorithmic improvements and suggestions for empirical studies. Hugo Larochelle for sage advising and clarifying comments on the draft, Irwan Bello for detailed comments and careful revisions, Colin Raffel and Adam Roberts for timely advice on neural language models and the T5 code-base, Yoshua Bengio for advising and encouragement on research in adaptive computation, Jascha Sohl-dickstein for interesting new directions for stabilizing new large scale models and paper revisions, and the Google Brain Team for useful discussions on the paper. Blake Hechtman who provided invaluable help in profiling and improving the training performance of our models. + +## A. Switch For Attention + +Shazeer et al. (2018); Lepikhin et al. (2020) designed MoE Transformers (Shazeer et al., 2017) by adding MoE layers into the dense feedfoward network (FFN) computations of the Transformer. Similarly, our work also replaced the FFN layer in the Transformer, but we briefly explore here an alternate design. We add Switch layers into the Transformer Self-Attention layers. To do so, we replace the trainable weight matrices that produce the queries, keys and values with Switch layers as seen in Figure 10. + +Table 10 records the quality after a fixed number of steps as well as training time for several variants. Though we find improvements, we also found these layers to be more unstable when using bfloat16 precision and thus we did not include them in the final variant. + +![27_image_0.png](27_image_0.png) + +| Model | Precision | Quality | Quality | Speed | +|------------------------|-------------|--------------|------------|---------| +| @100k Steps (↑) | @16H (↑) | (ex/sec) (↑) | | | +| Experts FF | float32 | -1.548 | -1.614 | 1480 | +| Expert Attention | float32 | -1.524 | -1.606 | 1330 | +| Expert Attention | bfloat16 | [diverges] | [diverges] | - | +| Experts FF + Attention | float32 | -1.513 | -1.607 | 1240 | +| Expert FF + Attention | bfloat16 | [diverges] | [diverges] | - | + +However, when these layers do train stably, we believe the preliminary positive results suggests a future promising direction. + +Table 10: Switch attention layer results. All models have 32 experts and train with 524k tokens per batch. Experts FF is when experts replace the FFN in the Transformer, which is our standard setup throughout the paper. Experts FF + Attention is when experts are used to replace both the FFN and the Self-Attention layers. When training with bfloat16 precision the models that have experts attention diverge. + +## B. Preventing Token Dropping With No-Token-Left-Behind + +Due to software constraints on TPU accelerators, the shapes of our Tensors must be statically sized. 
As a result, each expert has a finite and fixed capacity to process token representations. This, however, presents an issue for our model which dynamically routes tokens at run-time that may result in an uneven distribution over experts. If the number of tokens sent to an expert is less than the expert capacity, then the computation may simply be padded - an inefficient use of the hardware, but mathematically correct. However, when the number of tokens sent to an expert is larger than its capacity (expert overflow), a protocol is needed to handle this. Lepikhin et al. (2020) adapts a Mixture-of-Expert model and addresses expert overflow by passing its representation to the next layer without processing through a residual connection which we also follow. + +We suspected that having no computation applied to tokens could be very wasteful, especially since if there is overflow on one expert, that means another expert will have extra capacity. With this intuition we create *No-Token-Left-Behind*, which iteratively reroutes any tokens that are at first routed to an expert that is overflowing. Figure 11 shows a graphical description of this method, which will allow us to guarantee almost no tokens will be dropped during training and inference. We hypothesised that this could improve performance and further stabilize training, but we found no empirical benefits. We suspect that once the network learns associations between different tokens and experts, if this association is changed (e.g. sending a token to its second highest expert) then performance could be degraded. + +## C. Encouraging Exploration Across Experts + +At each expert-layer, the router determines to which expert to send the token. This is a discrete decision over the available experts, conditioned on information about the token's representation. Based on the incoming token representation, the router determines the best expert, however, it receives no counterfactual information about how well it would have done selecting an alternate expert. As in reinforcement learning, a classic explorationexploitation dilemma arises (Sutton and Barto, 2018). These issues have been similarly noted and addressed differently by Rosenbaum et al. (2017) which demonstrated success in multi-task learning. This particular setting most closely matches that of a contextual bandit (Robbins, 1952). Deterministically selecting the top expert always amounts to an exploitative strategy - we consider balancing exploration to seek better expert assignment. + +To introduce exploration, we consider several approaches: 1) deterministic or argmax 2) +sampling from the softmax distribution 3) input dropout on the incoming representation 4) multiplicative jitter noise on the incoming representation. The resulting impact on model quality is reported in Table 11. Throughout this work, we use input jitter to inject noise as we have found it to empirically perform the best. + +## D. Switch Transformers In Lower Compute Regimes + +Switch Transformer is also an effective architecture at small scales as well as in regimes with thousands of cores and trillions of parameters. Many of our prior experiments were + +![29_image_0.png](29_image_0.png) + +Figure 11: Diagram of the *No-Token-Left-Behind Routing*. Stage 1 is equivalent to Switch routing where tokens are routed to the expert with the highest probability from the router. In Stage 2 we look at all tokens that have overflowed and route them to the expert with which has the second highest probability. 
Tokens can still be overflowed if their second highest expert has too many tokens, but this allows most of the tokens to be routed. This process can be iterated to guarantee virtually no tokens are dropped at all. + +| Model | Quality (Neg. Log Perp.) (↑) | +|----------------|--------------------------------| +| Argmax | -1.471 | +| Sample softmax | -1.570 | +| Input dropout | -1.480 | +| Input jitter | -1.468 | + +at the scale of 10B+ parameter models, but we show in Figure 12 as few as 2 experts produce compelling gains over a FLOP-matched counterpart. Even if a super computer is not readily available, training Switch Transformers with 2, 4, or 8 experts (as we typically recommend one expert per core) results in solid improvements over T5 dense baselines. + +![30_image_0.png](30_image_0.png) + +## E. Relation Of Upstream To Downstream Model Performance + +There is no guarantee that a model's quality on a pre-training objective will translate to downstream task results. Figure 13 presents the correlation of the upstream model quality, for both dense and Switch models, on the C4 pre-training task with two downstream task measures: average SuperGLUE performance and TriviaQA score. We choose these two tasks as one probes the model's reasoning and the other factual knowledge. + +![31_image_0.png](31_image_0.png) + +Figure 13: Upstream pre-trained quality to downstream model quality. We correlate the upstream performance with downstream quality on both SuperGLUE and TriviaQA (SOTA recorded without SSM), reasoning and knowledge-heavy benchmarks, respectively (validation sets). We find that, as with the baseline, the Switch model scales with improvements in the upstream pre-training task. For SuperGLUE, we find a loosely linear relation between negative log perplexity and the average SuperGLUE score. However, the dense model often performs better for a fixed perplexity, particularly in the large-scale regime. Conversely, on the knowledge-heavy task, TriviaQA, we find that the Switch Transformer may follow an improved scaling relationship - for a given upstream perplexity, it does better than a dense counterpart. Further statistics (expensive to collect and left to future work) would be necessary to confirm these observations. + +We find a consistent correlation, indicating that for both baseline and Switch models, improved pre-training leads to better downstream results. Additionally, for a fixed upstream perplexity we find that both Switch and dense models perform similarly in the small to medium model size regime. However, in the largest model regime (T5-11B/T5-XXL) +our largest Switch models, as mentioned in Section 5.6, do not always translate their upstream perplexity well to downstream fine-tuning on the SuperGLUE task. This warrants future investigation and study to fully realize the potential of sparse models. Understanding the fine-tuning dynamics with expert-models is very complicated and is dependent on regularization, load-balancing, and fine-tuning hyper-parameters. + +## F. Pseudo Code For Switch Transformers + +Pseudocode for Switch Transformers in Mesh Tensorflow (Shazeer et al., 2018). No model parallelism is being used for the below code (see 5.4 for more details). + +import mesh tensorflow as mtf + +``` +def load balance loss(router probs, expert mask): + """Calculate load−balancing loss to ensure diverse expert routing.""" + # router probs is the probability assigned for each expert per token. 
+ # router probs shape: [num cores, tokens per core, num experts] + # expert index contains the expert with the highest router probability in one−hot format. + # expert mask shape: [num cores, tokens per core, num experts] + # For each core, get the fraction of tokens routed to each expert. + # density 1 shape: [num cores, num experts] + density 1 = mtf.reduce mean(expert mask, reduced dim=tokens per core) + # For each core, get fraction of probability mass assigned to each expert + # from the router across all tokens. + # density 1 proxy shape: [num cores, num experts] + density 1 proxy = mtf.reduce mean(router probs, reduced dim=tokens per core) + # density l for a single core: vector of length num experts that sums to 1. + # density l proxy for a single core: vector of length num experts that sums to 1. + # Want both vectors to have uniform allocation (1/num experts) across all num expert elements. + # The two vectors will be pushed towards uniform allocation when the dot product is minimized. + loss = mtf.reduce mean(density 1 proxy ∗ density 1) ∗ (num experts ˆ 2) + return loss + +``` + +Figure 14: Pseudo code for the load balance loss for Switch Transformers in Mesh Tensorflow. + +import mesh tensorflow as mtf + +``` +def router(inputs, capacity factor): + """Produce the combine and dispatch tensors used for sending and + receiving tokens from their highest probability expert. """ + # Core layout is split across num cores for all tensors and operations. + # inputs shape: [num cores, tokens per core, d model] + router weights = mtf.Variable(shape=[d model, num experts]) + # router logits shape: [num cores, tokens per core, num experts] + router logits = mtf.einsum([inputs, router weights], reduced dim=d model) + if is training: + # Add noise for exploration across experts. + router logits += mtf.random uniform(shape=router logits.shape, minval=1−eps, maxval=1+eps) + # Convert input to softmax operation from bfloat16 to float32 for stability. + router logits = mtf.to float32(router logits) + # Probabilities for each token of what expert it should be sent to. + router probs = mtf.softmax(router logits, axis=−1) + # Get the top−1 expert for each token. expert gate is the top−1 probability + # from the router for each token. expert index is what expert each token + # is going to be routed to. + # expert gate shape: [num cores, tokens per core] + # expert index shape: [num cores, tokens per core] + expert gate, expert index = mtf.top 1(router probs, reduced dim=num experts) + # expert mask shape: [num cores, tokens per core, num experts] + expert mask = mtf.one hot(expert index, dimension=num experts) + # Compute load balancing loss. + aux loss = load balance loss(router probs, expert mask) + # Experts have a fixed capacity, ensure we do not exceed it. Construct + # the batch indices, to each expert, with position in expert + # make sure that not more that expert capacity examples can be routed to + # each expert. + position in expert = mtf.cumsum(expert mask, dimension=tokens per core) ∗ expert mask + # Keep only tokens that fit within expert capacity. + expert mask ∗= mtf.less(position in expert, expert capacity) + expert mask flat = mtf.reduce sum(expert mask, reduced dim=experts dim) + # Mask out the experts that have overflowed the expert capacity. + expert gate ∗= expert mask flat + # combine tensor used for combining expert outputs and scaling with router probability. 
+ # combine tensor shape: [num cores, tokens per core, num experts, expert capacity] + combine tensor = ( + expert gate ∗ expert mask flat ∗ + mtf.one hot(expert index, dimension=num experts) ∗ + mtf.one hot(position in expert, dimension=expert capacity)) + # Cast back outputs to bfloat16 for the rest of the layer. + combine tensor = mtf.to bfloat16(combine tensor) + # Create binary dispatch tensor that is 1 if the token gets routed to the corresponding expert. + # dispatch tensor shape: [num cores, tokens per core, num experts, expert capacity] + dispatch tensor = mtf.cast(combine tensor, tf.bool) + return dispatch tensor, combine tensor, aux loss + +``` + +Figure 15: Pseudo code for the router for Switch Transformers in Mesh Tensorflow. + +import mesh tensorflow as mtf + +``` +def switch layer(inputs, n, capacity factor, num experts): + """Distributed switch transformer feed−forward layer.""" + # num cores (n) = total cores for training the model (scalar). + # d model = model hidden size (scalar). + # num experts = total number of experts. + # capacity factor = extra buffer for each expert. + # inputs shape: [batch, seq len, d model] + batch, seq len, d model = inputs.get shape() + # Each core will route tokens per core tokens to the correct experts. + tokens per core = batch ∗ seq len / num cores + # Each expert will have shape [num cores, expert capacity, d model]. + # Each core is responsible for sending expert capacity tokens + # to each expert. + expert capacity = tokens per core ∗ capacity factor / num experts + # Reshape to setup per core expert dispatching. + # shape: [batch, seq len, d model] −> [num cores, tokens per core, d model] + # Core layout: [n, 1, 1] −> [n, 1, 1] + inputs = mtf.reshape(inputs, [num cores, tokens per core, d model]) + # Core Layout: [n, 1, 1] −> [n, 1, 1, 1], [n, 1, 1, 1] + # dispatch tensor (boolean) shape: [num cores, tokens per core, num experts, expert capacity] + # dispatch tensor is used for routing tokens to the correct expert. + # combine tensor (float) shape: [num cores, tokens per core, num experts, expert capacity] + # combine tensor used for combining expert outputs and scaling with router + # probability. + dispatch tensor, combine tensor, aux loss = router(inputs, expert capacity) + # Matmul with large boolean tensor to assign tokens to the correct expert. + # Core Layout: [n, 1, 1], −> [1, n, 1, 1] + # expert inputs shape: [num experts, num cores, expert capacity, d model] + expert inputs = mtf.einsum([inputs, dispatch tensor], reduce dims=[tokens per core]) + # All−to−All communication. Cores split across num cores and now we want to split + # across num experts. This sends tokens, routed locally, to the correct expert now + # split across different cores. + # Core layout: [1, n, 1, 1] −> [n, 1, 1, 1] + expert inputs = mtf.reshape(expert inputs, [num experts, num cores, expert capacity, d model]) + # Standard feed forward computation, where each expert will have its own + # unique set of parameters. + # Total unique parameters created: num experts ∗ (d model ∗ d ff ∗ 2). + # expert outputs shape: [num experts, num cores, expert capacity, d model] + expert outputs = feed forward(expert inputs) + # All−to−All communication. Cores are currently split across the experts + # dimension, which needs to be switched back to being split across num cores. 
+
+```
+import mesh_tensorflow as mtf
+
+def switch_layer(inputs, n, capacity_factor, num_experts):
+  """Distributed Switch Transformer feed-forward layer."""
+  # num_cores (n) = total cores for training the model (scalar).
+  # d_model = model hidden size (scalar).
+  # num_experts = total number of experts.
+  # capacity_factor = extra buffer for each expert.
+  # inputs shape: [batch, seq_len, d_model]
+  batch, seq_len, d_model = inputs.get_shape()
+  # Each core will route tokens_per_core tokens to the correct experts.
+  tokens_per_core = batch * seq_len / num_cores
+  # Each expert will have shape [num_cores, expert_capacity, d_model].
+  # Each core is responsible for sending expert_capacity tokens to each expert.
+  expert_capacity = tokens_per_core * capacity_factor / num_experts
+  # Reshape to set up per-core expert dispatching.
+  # shape: [batch, seq_len, d_model] -> [num_cores, tokens_per_core, d_model]
+  # Core layout: [n, 1, 1] -> [n, 1, 1]
+  inputs = mtf.reshape(inputs, [num_cores, tokens_per_core, d_model])
+  # Core layout: [n, 1, 1] -> [n, 1, 1, 1], [n, 1, 1, 1]
+  # dispatch_tensor (boolean) shape: [num_cores, tokens_per_core, num_experts, expert_capacity]
+  # dispatch_tensor is used for routing tokens to the correct expert.
+  # combine_tensor (float) shape: [num_cores, tokens_per_core, num_experts, expert_capacity]
+  # combine_tensor is used for combining expert outputs and scaling with the router probability.
+  dispatch_tensor, combine_tensor, aux_loss = router(inputs, expert_capacity)
+  # Matmul with the large boolean tensor to assign tokens to the correct expert.
+  # Core layout: [n, 1, 1] -> [1, n, 1, 1]
+  # expert_inputs shape: [num_experts, num_cores, expert_capacity, d_model]
+  expert_inputs = mtf.einsum([inputs, dispatch_tensor], reduce_dims=[tokens_per_core])
+  # All-to-All communication. Cores are split across num_cores and now we want to split
+  # across num_experts. This sends tokens, routed locally, to the correct expert, now
+  # split across different cores.
+  # Core layout: [1, n, 1, 1] -> [n, 1, 1, 1]
+  expert_inputs = mtf.reshape(expert_inputs, [num_experts, num_cores, expert_capacity, d_model])
+  # Standard feed-forward computation, where each expert has its own unique set of parameters.
+  # Total unique parameters created: num_experts * (d_model * d_ff * 2).
+  # expert_outputs shape: [num_experts, num_cores, expert_capacity, d_model]
+  expert_outputs = feed_forward(expert_inputs)
+  # All-to-All communication. Cores are currently split across the experts
+  # dimension, which needs to be switched back to being split across num_cores.
+  # Core layout: [n, 1, 1, 1] -> [1, n, 1, 1]
+  expert_outputs = mtf.reshape(expert_outputs, [num_experts, num_cores, expert_capacity, d_model])
+  # Convert back to the input shape and multiply the expert outputs by the routing probability.
+  # expert_outputs shape: [num_experts, num_cores, tokens_per_core, d_model]
+  # expert_outputs_combined shape: [num_cores, tokens_per_core, d_model]
+  # Core layout: [1, n, 1, 1] -> [n, 1, 1]
+  expert_outputs_combined = mtf.einsum([expert_outputs, combine_tensor], reduce_dims=[tokens_per_core])
+  # Remove the tokens_per_core shape used for local routing dispatching to match the input shape.
+  # Core layout: [n, 1, 1] -> [n, 1, 1]
+  outputs = mtf.reshape(expert_outputs_combined, [batch, seq_len, d_model])
+  return outputs, aux_loss
+
+```
+
+Figure 16: Pseudo code of the Switch Transformer layer in Mesh Tensorflow.
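+
+A detail of switch_layer that is easy to miss from the pseudo code alone is what the two einsums actually do to the tensor shapes. The toy NumPy sketch below mirrors the dispatch and combine contractions; the sizes, the hand-written routing, and the identity "expert" are purely illustrative. Routed tokens round-trip through the per-expert buffers, while a token left out of the dispatch tensor comes back as zeros.
+
+```
+import numpy as np
+
+# Toy sizes, purely illustrative. With capacity_factor = 1.0 each (core, expert) pair
+# gets tokens_per_core / num_experts = 2 buffer slots.
+num_cores, tokens_per_core, d_model = 2, 4, 3
+num_experts, expert_capacity = 2, 2
+
+inputs = np.random.randn(num_cores, tokens_per_core, d_model)
+
+# Binary dispatch: on every core, token 0 -> expert 0 slot 0, token 1 -> expert 1 slot 0,
+# token 2 -> expert 0 slot 1; token 3 is not dispatched (e.g. it overflowed its expert).
+dispatch = np.zeros((num_cores, tokens_per_core, num_experts, expert_capacity))
+dispatch[:, 0, 0, 0] = 1.0
+dispatch[:, 1, 1, 0] = 1.0
+dispatch[:, 2, 0, 1] = 1.0
+
+# Counterpart of mtf.einsum([inputs, dispatch_tensor], ...): gather tokens into per-expert buffers.
+expert_inputs = np.einsum('ctm,ctep->ecpm', inputs, dispatch)    # [experts, cores, capacity, d_model]
+
+expert_outputs = expert_inputs   # pretend every expert is the identity function
+
+# Counterpart of mtf.einsum([expert_outputs, combine_tensor], ...); here combine == dispatch,
+# i.e. a routing weight of 1.0, so routed tokens should come back unchanged.
+outputs = np.einsum('ecpm,ctep->ctm', expert_outputs, dispatch)  # [cores, tokens, d_model]
+
+print(np.allclose(outputs[:, :3], inputs[:, :3]))   # True: dispatched tokens round-trip
+print(np.allclose(outputs[:, 3], 0.0))              # True: the undispatched token returns zeros
+```
+
+In the actual layer the combine tensor additionally carries the router probability of the selected expert, so each returning token is scaled by its gate value rather than by 1.0 as in this toy.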
+
+## References
+
+Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In *12th USENIX symposium on operating systems design and implementation (OSDI 16)*, pages 265–283, 2016.
+
+Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. *arXiv preprint arXiv:2004.05150*, 2020.
+
+Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533–1544, 2013.
+
+Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *arXiv preprint arXiv:2005.14165*, 2020.
+
+Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. *arXiv preprint arXiv:1904.10509*, 2019.
+
+Kyunghyun Cho and Yoshua Bengio. Exponentially increasing the capacity-to-computation ratio for conditional computation in deep learning. *arXiv preprint arXiv:1406.7362*, 2014.
+
+Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. *arXiv preprint arXiv:1803.05457*, 2018.
+
+Gonçalo M Correia, Vlad Niculae, and André FT Martins. Adaptively sparse transformers. *arXiv preprint arXiv:1909.00015*, 2019.
+
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pretraining of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.
+
+David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. *arXiv preprint arXiv:1312.4314*, 2013.
+
+Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. Beyond english-centric multilingual machine translation. *Journal of Machine Learning Research*, 22(107):1–48, 2021.
+
+William Fedus, Ian Goodfellow, and Andrew M Dai. Maskgan: Better text generation via filling in the ______. *arXiv preprint arXiv:1801.07736*, 2018.
+
+Trevor Gale, Matei Zaharia, Cliff Young, and Erich Elsen. Sparse gpu kernels for deep learning. *arXiv preprint arXiv:2006.10901*, 2020.
+
+Scott Gray, Alec Radford, and Diederik P Kingma. Gpu kernels for block-sparse weights. https://openai.com/blog/block-sparse-gpu-kernels/, 2017.
+
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Realm: Retrieval-augmented language model pre-training. *arXiv preprint arXiv:2002.08909*, 2020.
+
+Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil Devanur, Greg Ganger, and Phil Gibbons. Pipedream: Fast and efficient pipeline parallel dnn training. *arXiv preprint arXiv:1806.03377*, 2018.
+
+Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, *Advances in Neural Information Processing Systems*, volume 28, pages 1693–1701. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/afdec7005cc9f14302cd0474fd0f3c96-Paper.pdf.
+
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. *arXiv preprint arXiv:1503.02531*, 2015.
+
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997.
+
+Sara Hooker. The hardware lottery. *arXiv preprint arXiv:2009.06489*, 2020.
+
+Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In *Advances in Neural Information Processing Systems*, pages 103–112, 2019.
+
+Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. *Neural computation*, 3(1):79–87, 1991.
+
+Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the EM algorithm. *Neural computation*, 6(2):181–214, 1994.
+
+Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. *arXiv preprint arXiv:1705.03551*, 2017.
+
+Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. *arXiv preprint arXiv:2001.08361*, 2020.
+
+Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. *arXiv preprint arXiv:2001.04451*, 2020.
+
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. *Transactions of the Association for Computational Linguistics*, 7:453–466, 2019.
+
+Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Large memory layers with product keys. In *Advances in Neural Information Processing Systems*, pages 8548–8559, 2019.
+
+Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. *arXiv preprint arXiv:2107.06499*, 2021.
+
+Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. *arXiv preprint arXiv:2006.16668*, 2020.
+
+Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. *arXiv preprint arXiv:1710.03740*, 2017.
+
+Shashi Narayan, Shay B Cohen, and Mirella Lapata. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. *arXiv preprint arXiv:1808.08745*, 2018.
+
+Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial NLI: A new benchmark for natural language understanding. *arXiv preprint arXiv:1910.14599*, 2019.
+
+Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Cedric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. Scalable transfer learning with expert models. *arXiv preprint arXiv:2009.13239*, 2020.
+
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018.
+
+Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. *arXiv preprint arXiv:1910.10683*, 2019.
+
+Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimizations toward training trillion parameter models. *arXiv preprint arXiv:1910.02054*, 2019.
+
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. *arXiv preprint arXiv:1606.05250*, 2016.
+
+Prajit Ramachandran and Quoc V Le. Diversity and depth in per-example routing models. In *International Conference on Learning Representations*, 2018.
+
+Herbert Robbins. Some aspects of the sequential design of experiments. *Bulletin of the American Mathematical Society*, 58(5):527–535, 1952.
+
+Adam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? *arXiv preprint arXiv:2002.08910*, 2020.
+
+Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. Routing networks: Adaptive selection of non-linear functions for multi-task learning. *arXiv preprint arXiv:1711.01239*, 2017.
+
+Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732–8740, 2020.
+
+Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter, 2019.
+
+Noam Shazeer. Glu variants improve transformer, 2020.
+
+Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. *arXiv preprint arXiv:1701.06538*, 2017.
+
+Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, et al. Mesh-tensorflow: Deep learning for supercomputers. In *Advances in Neural Information Processing Systems*, pages 10414–10423, 2018.
+
+Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. *arXiv preprint arXiv:1909.08053*, 2019.
+
+Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. *Journal of Machine Learning Research*, 15(1):1929–1958, 2014. URL http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf.
+
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. *arXiv preprint arXiv:1906.02243*, 2019.
+
+Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. *arXiv preprint arXiv:1905.07799*, 2019.
+
+Rich Sutton. The Bitter Lesson. http://www.incompleteideas.net/IncIdeas/BitterLesson.html, 2019.
+
+Richard S Sutton and Andrew G Barto. *Reinforcement learning: An introduction*. MIT Press, 2018.
+
+Wilson L Taylor. "Cloze procedure": A new tool for measuring readability. *Journalism Quarterly*, 30(4):415–433, 1953.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in Neural Information Processing Systems*, pages 5998–6008, 2017.
+
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. *arXiv preprint arXiv:1804.07461*, 2018.
+
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In *Advances in Neural Information Processing Systems*, pages 3266–3280, 2019.
+
+Shibo Wang and Pankaj Kanwar. Bfloat16: The secret to high performance on Cloud TPUs. Google Cloud Blog, 2019.
+
+Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. mt5: A massively multilingual pre-trained text-to-text transformer. *arXiv preprint arXiv:2010.11934*, 2020.
+
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. Xlnet: Generalized autoregressive pretraining for language understanding, 2020.
+
+Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. Big bird: Transformers for longer sequences. *arXiv preprint arXiv:2007.14062*, 2020.
\ No newline at end of file