Nora Petrova committed
Commit d7b5c85 · 1 Parent(s): 3d48265

Make feedback dataset primary, reorder documentation

Files changed (1):
  1. README.md +20 -20
README.md CHANGED
@@ -14,11 +14,11 @@ pretty_name: HUMAINE Human-AI Interaction Evaluation Dataset
 size_categories:
 - 100K<n<1M
 configs:
-- config_name: conversations_metadata
-  data_files: conversations_metadata_dataset.parquet
-  default: true
 - config_name: feedback_comparisons
   data_files: feedback_dataset.parquet
+  default: true
+- config_name: conversations_metadata
+  data_files: conversations_metadata_dataset.parquet
 ---

 # HUMAINE: Human-AI Interaction Evaluation Dataset
@@ -30,8 +30,8 @@ configs:
 The HUMAINE dataset contains human evaluations of AI model interactions across diverse demographic groups and conversation contexts. This dataset powers the [HUMAINE Leaderboard](https://huggingface.co/spaces/ProlificAI/humaine-leaderboard), providing insights into how different AI models perform across various user populations and use cases.

 The dataset consists of two main components:
-- **Conversations Metadata**: 40,332 conversations with task complexity, achievement, and engagement scores
 - **Feedback Comparisons**: 105,220 pairwise model comparisons across multiple evaluation metrics
+- **Conversations Metadata**: 40,332 conversations with task complexity, achievement, and engagement scores

 **Note**: There may be a slight discrepancy between the numbers in this dataset and the leaderboard app due to changes in consent related to data release and the post-processing steps involved in preparing this dataset.
@@ -49,25 +49,15 @@ The dataset consists of two main components:

 The dataset contains two CSV files:

-1. **`conversations_metadata_dataset.csv`** (40,332 rows)
-   - Metadata about individual conversations between users and AI models
-   - Includes task types, domains, and performance scores
-
-2. **`feedback_dataset.csv`** (105,220 rows)
+1. **`feedback_dataset.csv`** (105,220 rows)
    - Pairwise comparisons between different AI models
    - Includes demographic information and preference choices

-### Data Fields
+2. **`conversations_metadata_dataset.csv`** (40,332 rows)
+   - Metadata about individual conversations between users and AI models
+   - Includes task types, domains, and performance scores

-#### Conversations Metadata
-- `conversation_id`: Unique identifier for the conversation
-- `model_name`: Name of the AI model used
-- `task_type`: Type of task (information_seeking, technical_assistance, etc.)
-- `domain`: Domain of the conversation (health_medical, technology, travel, etc.)
-- `task_complexity_score`: Complexity rating (1-5)
-- `goal_achievement_score`: How well the goal was achieved (1-5)
-- `user_engagement_score`: User engagement level (1-5)
-- `total_messages`: Total number of messages in the conversation
+### Data Fields

 #### Feedback Comparisons
 - `conversation_id`: Unique identifier linking to conversation metadata
@@ -80,11 +70,21 @@ The dataset contains two CSV files:
 - `political_affilation`: Political affiliation of the evaluator
 - `country_of_residence`: Country of residence of the evaluator

+#### Conversations Metadata
+- `conversation_id`: Unique identifier for the conversation
+- `model_name`: Name of the AI model used
+- `task_type`: Type of task (information_seeking, technical_assistance, etc.)
+- `domain`: Domain of the conversation (health_medical, technology, travel, etc.)
+- `task_complexity_score`: Complexity rating (1-5)
+- `goal_achievement_score`: How well the goal was achieved (1-5)
+- `user_engagement_score`: User engagement level (1-5)
+- `total_messages`: Total number of messages in the conversation
+
 ## Usage

 This dataset contains two CSV files that can be joined on the `conversation_id` field:
+- `feedback_dataset.csv`: Pairwise model comparisons with demographic information (primary dataset)
 - `conversations_metadata_dataset.csv`: Metadata about each conversation
-- `feedback_dataset.csv`: Pairwise model comparisons with demographic information

 Both files are included in this single dataset repository and can be accessed using HuggingFace's dataset loading utilities.
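
The Usage section above says the two components can be loaded with HuggingFace's dataset loading utilities and joined on `conversation_id`. A minimal sketch of what that looks like after this commit, assuming the config names from the YAML front matter; the repository id is a placeholder and the pandas merge is illustrative, not part of the README:

from datasets import load_dataset

# Placeholder repo id -- substitute the actual path of this dataset repository.
REPO_ID = "ProlificAI/HUMAINE"  # assumed, not stated in the diff

# Config names come from the YAML front matter after this commit;
# "feedback_comparisons" is now the default config.
feedback = load_dataset(REPO_ID, "feedback_comparisons", split="train").to_pandas()
metadata = load_dataset(REPO_ID, "conversations_metadata", split="train").to_pandas()

# Join pairwise comparisons with per-conversation metadata on the shared key.
merged = feedback.merge(metadata, on="conversation_id", how="left")
print(merged.head())

Because this commit sets `default: true` on `feedback_comparisons`, calling `load_dataset(REPO_ID)` without a config name should now return the feedback comparisons rather than the conversations metadata.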