loodvanniekerkginkgo committed on
Commit 89d69bf · 1 Parent(s): 10e69e7

Finished most of FAQs

Files changed (3)
  1. about.py +26 -13
  2. constants.py +2 -2
  3. utils.py +2 -1
about.py CHANGED
@@ -11,11 +11,20 @@ Here we show 5 of these properties and invite the community to submit and develo

 **How to submit?**

- TODO
+ 1. Download the [GDPa1 dataset](https://huggingface.co/datasets/ginkgo-datapoints/GDPa1)
+ 2. Make predictions for all the antibody sequences in the list for your property of interest.
+ 3. Submit a CSV file containing the `"antibody_name"` column and a column per property you are predicting (e.g. `"antibody_name,Titer"` if you are predicting Titer).
+ There is an example submission file on the "✉️ Submit" tab.
+
+ For the cross-validation metrics (if training only on the GDPa1 dataset), use the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column to split the dataset into folds and make predictions for each of the folds.
+ Submit a CSV file in the same format but also containing the `"hierarchical_cluster_IgG_isotype_stratified_fold"` column.
+ There is also an example cross-validation submission file on the "✉️ Submit" tab, and we will be releasing a full code tutorial shortly.

 **How to evaluate?**

- TODO
+ You can calculate the Spearman correlation coefficient on the GDPa1 dataset yourself before uploading to the leaderboard.
+ Simply use the `spearmanr(predictions, targets, nan_policy='omit')` function from `scipy.stats`.
+ For the held-out private set, we will calculate these results privately at the end of the competition (and possibly at other points throughout the competition), but there will not be rolling results on the private test set.

 **How to contribute?**
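Pending the full code tutorial mentioned above, here is a minimal sketch of what the two submission files could look like. It assumes pandas, a local copy of GDPa1 saved as `GDPa1.csv`, a guessed sequence column name, and a dummy predictor; only `antibody_name`, `Titer` and the fold column come from the instructions above.

```python
import pandas as pd

# Assumes GDPa1 has been downloaded and saved locally as GDPa1.csv with an
# "antibody_name" column, a sequence column (name guessed here) and the
# "hierarchical_cluster_IgG_isotype_stratified_fold" column described above.
gdpa1 = pd.read_csv("GDPa1.csv")

# Dummy predictor: sequence length stands in for a real Titer model.
gdpa1["Titer"] = gdpa1["sequence"].str.len().astype(float)

# Plain submission: "antibody_name" plus one column per predicted property.
gdpa1[["antibody_name", "Titer"]].to_csv("submission.csv", index=False)

# Cross-validation submission: same format plus the fold column; in a real
# submission each fold is predicted by a model trained on the remaining folds.
fold_col = "hierarchical_cluster_IgG_isotype_stratified_fold"
gdpa1[["antibody_name", "Titer", fold_col]].to_csv("cv_submission.csv", index=False)
```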
 
@@ -39,34 +48,38 @@ FAQS = {
 "The sequences in the dataset are quite diverse as measured by pairwise sequence identity."
 ),
 "Do I need to design new proteins?": (
- "No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. "
- "There may be a generative round in the future."
+ "No. This is just a predictive competition, which will be judged according to the correlation between predictions and experimental values. There may be a generative round in the future."
 ),
 "Can I participate anonymously?": (
- "Yes! Please create an anonymous Hugging Face account so that we can uniquely associate submissions. "
- "Note that top participants will be contacted to identify themselves at the end of the tournament."
+ "Yes! Please create an anonymous Hugging Face account so that we can uniquely associate submissions. Note that top participants will be contacted to identify themselves at the end of the tournament."
 ),
 "How is intellectual property handled?": (
 "Participants retain IP rights to the methods they use and develop during the tournament. Read more details in our terms here [link]."
 ),
 "Do I need to submit my code / methods in order to participate?": (
- "No, there are no requirements to submit code / methods and submitted predictions remain private."
+ "No, there are no requirements to submit code / methods and submitted predictions remain private. "
 "We also have an optional field for including a short model description. "
 "Top performing participants will be requested to identify themselves at the end of the tournament. "
 "There will be one prize for the best open-source model, which will require code / methods to be available."
 ),
+ "How often does the leaderboard update?": (
+ "The leaderboard should reflect new submissions within a minute of submitting. Note that the leaderboard will not show the results on the private test set; these will be calculated once at the end of the tournament (and possibly on another occasion before that)."
+ ),
+ "How many submissions can I make?": (
+ "You can currently make unlimited submissions, but we may choose to limit the number of possible submissions per user. For the private test set evaluation, the latest submission will be used."
+ ),
 "How are winners determined?": (
- "There will be 6 prizes (one for each of the assay properties plus an open-source prize). "
- "For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. "
- "For the open-source prize”, this will be determined by the highest average Spearman across all properties. "
+ 'There will be 6 prizes (one for each of the assay properties plus an "open-source" prize). '
+ 'For the property-specific prizes, winners will be determined by the submission with the highest Spearman rank correlation coefficient on the private holdout set. '
+ 'For the "open-source" prize, this will be determined by the highest average Spearman across all properties. '
 "We reserve the right to award the open-source prize to a predictor with competitive results for a subset of properties (e.g. a top polyreactivity model)."
 ),
 "How does the open-source prize work?": (
 "Participants who open-source their code and methods will be eligible for the open-source prize (as well as the other prizes)."
 ),
 "What do I need to submit?": (
- "There is a '✉️ Submit' tab on the Hugging Face competition page to upload predictions for datasets - for each dataset participants need to submit a CSV containing a column for each property they would like to predict (e.g. called HIC), "
- "and a row with the sequence matching the sequence in the input file. These predictions are then evaluated in the backend using the Spearman rank correlation between predictions and experimental values, "
- "and these metrics are then added to the leaderboard. Predictions remain private and are not seen by other contestants."
+ 'There is a "✉️ Submit" tab on the Hugging Face competition page to upload predictions. For each dataset, participants need to submit a CSV containing a column for each property they would like to predict (e.g. called "HIC"), '
+ 'and a row for each sequence, matching the sequences in the input file. These predictions are then evaluated in the backend using the Spearman rank correlation between predictions and experimental values, and these metrics are then added to the leaderboard. '
+ 'Predictions remain private and are not seen by other contestants.'
 ),
 }
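To sanity-check a submission locally in the way the "How to evaluate?" text describes, a rough sketch using `spearmanr` from `scipy.stats` follows. File names, the `antibody_name` join key, and the `Titer`/`HIC` property columns are illustrative; the averaging at the end mirrors how the "open-source" prize is described.

```python
import pandas as pd
from scipy.stats import spearmanr

preds = pd.read_csv("submission.csv")  # antibody_name + one column per predicted property
truth = pd.read_csv("GDPa1.csv")       # assumed to hold the measured assay values

merged = preds.merge(truth, on="antibody_name", suffixes=("_pred", "_true"))

scores = {}
for prop in ["Titer", "HIC"]:  # whichever properties you predicted
    rho, _ = spearmanr(merged[f"{prop}_pred"], merged[f"{prop}_true"], nan_policy="omit")
    scores[prop] = rho

print(scores)
# Average Spearman across all predicted properties (as used for the "open-source" prize).
print(sum(scores.values()) / len(scores))
```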
constants.py CHANGED
@@ -60,5 +60,5 @@ SUBMISSIONS_REPO = f"{ORGANIZATION}/abdev-bench-submissions"
 RESULTS_REPO = f"{ORGANIZATION}/abdev-bench-results"

 # Leaderboard dataframes
- LEADERBOARD_RESULTS_COLUMNS = ["model", "assay", "spearman", "dataset", "user"] # The columns expected from the results dataset
- LEADERBOARD_DISPLAY_COLUMNS = ["model", "property", "spearman", "dataset", "user"] # After changing assay to property (pretty formatting)
+ LEADERBOARD_RESULTS_COLUMNS = ["model", "assay", "spearman", "dataset", "user", "submission_time"] # The columns expected from the results dataset
+ LEADERBOARD_DISPLAY_COLUMNS = ["model", "property", "spearman", "dataset", "user", "submission_time"] # After changing assay to property (pretty formatting)
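As a toy illustration of the two column lists (all values made up; `assay_rename` is a stand-in for the `ASSAY_RENAME` mapping used in `utils.py`):

```python
import pandas as pd

# One made-up results row containing every expected column, including the new submission_time.
results = pd.DataFrame([{
    "model": "my-model", "assay": "HIC", "spearman": 0.42,
    "dataset": "GDPa1", "user": "anon-user",
    "submission_time": "2025-01-01T12:00:00",
}])

# Stand-in for ASSAY_RENAME ("pretty formatting" of assay -> property).
assay_rename = {"HIC": "HIC (pretty name)"}
results["property"] = results["assay"].map(assay_rename)

# Reorder to the display columns.
print(results[["model", "property", "spearman", "dataset", "user", "submission_time"]])
```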
utils.py CHANGED
@@ -30,7 +30,8 @@ def fetch_hf_results():
 RESULTS_REPO, data_files="auto_submissions/metrics_all.csv",
 )["train"].to_pandas()
 assert all(col in df.columns for col in LEADERBOARD_RESULTS_COLUMNS), f"Expected columns {LEADERBOARD_RESULTS_COLUMNS} not found in {df.columns}. Missing columns: {set(LEADERBOARD_COLUMNS) - set(df.columns)}"
- df = df.drop_duplicates(subset=["model", "assay"])
+ # Show latest submission only
+ df = df.sort_values("submission_time", ascending=False).drop_duplicates(subset=["model", "assay"], keep="first")
 df["property"] = df["assay"].map(ASSAY_RENAME)
 print(df.head())
 return df
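The keep-latest behaviour introduced above can be seen on a toy frame (values are made up; ISO-8601 timestamps sort correctly as plain strings):

```python
import pandas as pd

# Two submissions of the same (model, assay) pair; only the later one should be shown.
df = pd.DataFrame({
    "model": ["my-model", "my-model"],
    "assay": ["HIC", "HIC"],
    "spearman": [0.35, 0.42],
    "submission_time": ["2025-01-01T10:00:00", "2025-01-02T10:00:00"],
})

# Sort newest-first, then keep the first (i.e. latest) row per (model, assay).
latest = (
    df.sort_values("submission_time", ascending=False)
      .drop_duplicates(subset=["model", "assay"], keep="first")
)
print(latest)  # keeps the 0.42 row from 2025-01-02
```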