Commit: f27b190
Parent(s): 0dd7bcc

Add note that this is the smallest version of the model (#18)

- Add note that this is the smallest version of the model (611838ef095a5bb35bf2027d05e1194b7c9d37ac)

Co-authored-by: helen <mathemakitten@users.noreply.huggingface.co>
README.md CHANGED

@@ -34,6 +34,10 @@ This way, the model learns an inner representation of the English language that
 useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
 prompt.
 
+This is the **smallest** version of GPT-2, with 124M parameters.
+
+**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
+
 ## Intended uses & limitations
 
 You can use the raw model for text generation or fine-tune it to a downstream task. See the
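For context on the README text touched by this hunk ("You can use the raw model for text generation or fine-tune it to a downstream task"), a minimal sketch of loading the 124M-parameter checkpoint for text generation with the Hugging Face `transformers` pipeline might look like the following. The model id `gpt2`, the prompt, and the generation settings are illustrative assumptions, not part of this commit:

```python
# Sketch only, not part of this commit. Assumes the `transformers` library is
# installed and that this README describes the base `gpt2` checkpoint
# (124M parameters), alongside gpt2-medium, gpt2-large, and gpt2-xl.
from transformers import pipeline, set_seed

# Build a text-generation pipeline around the smallest GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations reproducible

# Generate two short continuations of an example prompt.
for out in generator("Hello, I'm a language model,", max_length=30, num_return_sequences=2):
    print(out["generated_text"])
```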