Update README.md
    	
README.md CHANGED
    
@@ -26,22 +26,7 @@ inference: false
 
 This repository provides large language models developed by the [Research and Development Center for Large Language Models](https://llmc.nii.ac.jp/) at the [National Institute of Informatics](https://www.nii.ac.jp/en/).
 
-
-| Model Variants |
-| :--- |
-| [llm-jp-3-1.8b](https://huggingface.co/llm-jp/llm-jp-3-1.8b) |
-| [llm-jp-3-1.8b-instruct](https://huggingface.co/llm-jp/llm-jp-3-1.8b-instruct) |
-| [llm-jp-3-3.7b](https://huggingface.co/llm-jp/llm-jp-3-3.7b) |
-| [llm-jp-3-3.7b-instruct](https://huggingface.co/llm-jp/llm-jp-3-3.7b-instruct) |
-| [llm-jp-3-7.2b](https://huggingface.co/llm-jp/llm-jp-3-7.2b) |
-| [llm-jp-3-7.2b-instruct](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct) |
-| [llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) |
-| [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) |
-| [llm-jp-3-172b-beta1](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1) |
-| [llm-jp-3-172b-beta1-instruct](https://huggingface.co/llm-jp/llm-jp-3-172b-beta1-instruct) |
-| [llm-jp-3-172b-beta2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2) |
-| [llm-jp-3-172b-beta2-instruct2](https://huggingface.co/llm-jp/llm-jp-3-172b-beta2-instruct2) |
-
+For LLM-jp-3 models with different parameters, please refer to [LLM-jp-3 Pre-trained Models](https://huggingface.co/collections/llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa) and [LLM-jp-3 Fine-tuned Models](https://huggingface.co/collections/llm-jp/llm-jp-3-fine-tuned-models-672c621db852a01eae939731).
 
 Checkpoints format: Hugging Face Transformers
 
@@ -166,7 +151,6 @@ We evaluated the models using `gpt-4-0613`. Please see the [codes](https://githu
 | [llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) | 6.47 | 3.15 | 7.05 | 9.15 | 3.75 | 5.40 | 8.30 | 7.50 | 7.45 |
 
 
-
 ## Risks and Limitations
 
 The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.
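The added README line replaces the inline variant table with links to Hugging Face collections. As a minimal sketch, assuming the `huggingface_hub` library and its `get_collection` helper, and assuming the collection slug is the path segment after `/collections/` in the URL added by this commit, the variants could also be listed programmatically:

```python
# Minimal sketch: enumerate the LLM-jp-3 model variants from the pre-trained
# collection linked in the updated README. Assumes `huggingface_hub` is installed;
# the slug below is taken from the collection URL added in this commit.
from huggingface_hub import get_collection

collection = get_collection("llm-jp/llm-jp-3-pre-trained-models-672c6096472b65839d76a1fa")
for item in collection.items:
    if item.item_type == "model":
        print(item.item_id)  # repository id, e.g. "llm-jp/llm-jp-3-1.8b"
```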
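Since the README states that the checkpoints are in Hugging Face Transformers format, a short loading sketch may be useful. It assumes `transformers` and `torch` are installed and uses `llm-jp/llm-jp-3-1.8b`, one of the variants linked above, purely as an example; any other listed repository id would work the same way.

```python
# Minimal sketch: load an LLM-jp-3 checkpoint with Hugging Face Transformers and
# generate a short continuation. The model id is one of the variants from the
# removed table / linked collections; swap in any other listed id as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/llm-jp-3-1.8b"  # example variant, chosen here only for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Japanese prompt: "Natural language processing is ..."
inputs = tokenizer("自然言語処理とは", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the instruct variants, the tokenizer's chat template (if one is bundled with the checkpoint) can be applied via `tokenizer.apply_chat_template` before generation.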
