<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# BigBirdPegasus

## Overview
The BigBird model was proposed in [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention-based
transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.
The abstract from the paper is the following:

*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.*
Tips:

- For an in-detail explanation of how BigBird's attention works, see [this blog post](https://huggingface.co/blog/big-bird).
- BigBird comes with two implementations: **original_full** & **block_sparse**. For sequence lengths < 1024, using
  **original_full** is advised, as there is no benefit in using **block_sparse** attention (see the configuration sketch
  after this list).
- The code currently uses a window size of 3 blocks and 2 global blocks.
- The sequence length must be divisible by the block size.
- The current implementation supports only **ITC**.
- The current implementation doesn't support **num_random_blocks = 0**.
- BigBirdPegasus uses the [PegasusTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/pegasus/tokenization_pegasus.py).
- BigBird is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than
  the left.
The original code can be found [here](https://github.com/google-research/bigbird).
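As a quick end-to-end illustration, here is a minimal summarization sketch. It uses the publicly released `google/bigbird-pegasus-large-arxiv` checkpoint; any other BigBirdPegasus checkpoint would work the same way.

```python
from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")

# Replace this placeholder with a long document; BigBirdPegasus is designed
# for inputs far longer than what BERT-style models can handle.
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks ..."
inputs = tokenizer(text, return_tensors="pt")

summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```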
## Documentation resources

- [Text classification task guide](../tasks/sequence_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Translation task guide](../tasks/translation)
- [Summarization task guide](../tasks/summarization)
## BigBirdPegasusConfig

[[autodoc]] BigBirdPegasusConfig
    - all

## BigBirdPegasusModel

[[autodoc]] BigBirdPegasusModel
    - forward

## BigBirdPegasusForConditionalGeneration

[[autodoc]] BigBirdPegasusForConditionalGeneration
    - forward

## BigBirdPegasusForSequenceClassification

[[autodoc]] BigBirdPegasusForSequenceClassification
    - forward

## BigBirdPegasusForQuestionAnswering

[[autodoc]] BigBirdPegasusForQuestionAnswering
    - forward

## BigBirdPegasusForCausalLM

[[autodoc]] BigBirdPegasusForCausalLM
    - forward