Update src/tasks_content.py
src/tasks_content.py (+1 -1)
@@ -91,7 +91,7 @@ TASKS_DESCRIPTIONS = {
 The model is required to generate such a description, given the relevant context code and the intent behind the documentation.
 
 We use a novel metric for evaluation:
-* `CompScore`:
+* `CompScore`: a new metric that uses an LLM as an assessor, proposed for this task. Our approach feeds the LLM the relevant code and two versions of the documentation: the ground truth and the model-generated text. More details on how it is calculated can be found in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines/blob/main/module_summarization/README.md).
 
 For further details on the dataset and the baselines from the 🏟️ Long Code Arena team, refer to the `module_summarization` directory in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines/blob/main/module_summarization/).
 """,
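For intuition, here is a minimal sketch of how an LLM-as-assessor comparison metric of this kind can be computed. This is an illustration only: the prompt wording, the A/B answer format, the order-swapping loop, and the `judge` callable are all assumptions of this sketch, not the actual `CompScore` implementation; the exact calculation is described in the baselines repository README linked above.

```python
import random
from typing import Callable

# Hypothetical judge: takes a prompt and returns "A" or "B" for the
# preferred documentation. In practice this would wrap an LLM call.
Judge = Callable[[str], str]

# Illustrative prompt; the real assessor prompt lives in the baselines repo.
PROMPT_TEMPLATE = """You are assessing module documentation quality.

Relevant code:
{code}

Documentation A:
{doc_a}

Documentation B:
{doc_b}

Answer with a single letter, A or B: which documentation better describes the code?"""


def comp_score(code: str, gold_doc: str, generated_doc: str,
               judge: Judge, n_trials: int = 10) -> float:
    """Fraction of trials in which the judge prefers the generated docs."""
    wins = 0
    for i in range(n_trials):
        # Alternate which document appears first to reduce position bias.
        if i % 2 == 0:
            prompt = PROMPT_TEMPLATE.format(code=code, doc_a=gold_doc,
                                            doc_b=generated_doc)
            generated_label = "B"
        else:
            prompt = PROMPT_TEMPLATE.format(code=code, doc_a=generated_doc,
                                            doc_b=gold_doc)
            generated_label = "A"
        if judge(prompt).strip().upper().startswith(generated_label):
            wins += 1
    return wins / n_trials


if __name__ == "__main__":
    # Dummy judge for a smoke test; replace with a real LLM-backed judge.
    dummy_judge = lambda prompt: random.choice(["A", "B"])
    print(comp_score("def add(a, b): return a + b",
                     "Adds two numbers.",
                     "Returns the sum of a and b.",
                     dummy_judge))
```

Swapping the order of the two documents across trials mitigates the judge's position bias; under this scheme, a score near 0.5 would mean the generated documentation is indistinguishable from the ground truth to the judge.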