vikramvasudevan committed on
Commit 78bdd49 · verified · 1 Parent(s): f3f0477

Upload folder using huggingface_hub

Files changed (2):
  1. app.py +13 -1
  2. graph_helper.py +31 -19
app.py CHANGED
@@ -295,6 +295,18 @@ async def chat_streaming(message, history, thread_id):
                 logger.debug(
                     f"Partial arguments for tool {accumulated_tool_call['name']}: {accumulated_tool_call['args_str']}"
                 )
+    except asyncio.CancelledError:
+        logger.warning("⚠️ Request cancelled by user")
+        node_tree = end_node_tree(node_tree=node_tree)
+        yield (
+            f"### {' → '.join(node_tree)}"
+            "\n⚠️⚠️⚠️ Request cancelled by user"
+            "\nhere is what I got so far ...\n"
+            f"\n{streamed_response}"
+        )
+        # Important: re-raise if you want upstream to also know
+        # raise
+        return
     except Exception as e:
         logger.error("❌❌❌ Error processing request: %s", e)
         traceback.print_exc()
@@ -305,7 +317,7 @@ async def chat_streaming(message, history, thread_id):
             "\nhere is what I got so far ...\n"
             f"\n{streamed_response}"
         )
-
+        return

     # UI Elements
     thread_id = gr.State(init_session)
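The change above catches `asyncio.CancelledError` inside a streaming async generator, yields one final message containing the partial output, and returns instead of re-raising. A minimal, self-contained sketch of that pattern (the token source and messages here are hypothetical stand-ins, not the app's real streaming loop):

```python
import asyncio


async def chat_streaming_sketch():
    """Sketch of the diff's pattern: on cancellation, surface the partial output."""
    streamed_response = ""
    try:
        for token in ["partial", " answer"]:
            await asyncio.sleep(0)  # stand-in for awaiting the model stream
            streamed_response += token
            yield streamed_response
    except asyncio.CancelledError:
        # As in the diff: report what was streamed so far, then return
        # (re-raise instead if the caller must also observe the cancellation).
        yield f"⚠️ cancelled\nhere is what I got so far ...\n{streamed_response}"
        return


async def main():
    agen = chat_streaming_sketch()
    first = await agen.__anext__()
    # Simulate the user cancelling mid-stream by throwing into the generator
    # at its suspended yield point.
    final = await agen.athrow(asyncio.CancelledError())
    await agen.aclose()
    return first, final


first, final = asyncio.run(main())
print(first)
print(final)
```

Swallowing the cancellation (the commented-out `raise`) keeps the UI update, but it means the caller's `await` completes normally; re-raising is the usual choice when upstream code also needs to clean up.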
graph_helper.py CHANGED
@@ -80,23 +80,36 @@ Your tasks:
 1. Compare the original user query to the LLM’s answer.
 2. Identify the scripture context (e.g., Divya Prabandham, Bhagavad Gita, Upanishads, Ramayana, etc.).
 3. Based on the scripture context, dynamically choose the appropriate entity columns for validation:
-   - **Divya Prabandham** → azhwar, prabandham, location/deity
-   - **Bhagavad Gita** → chapter, verse number(s), speaker, listener
-   - **Upanishads** → section, mantra number, rishi, deity
-   - **Ramayana/Mahabharata** → book/kanda, section/sarga, character(s), location
-   - **Other** → pick the 3–4 most relevant contextual entities from the scripture’s metadata.
+   - **Divya Prabandham** → azhwar, prabandham, location/deity
+   - **Bhagavad Gita** → chapter, verse number(s), speaker, listener
+   - **Upanishads** → section, mantra number, rishi, deity
+   - **Ramayana/Mahabharata** → book/kanda, section/sarga, character(s), location
+   - **Other** → pick the 3–4 most relevant contextual entities from the scripture’s metadata.
 4. Verify (from `original_llm_response`):
-   - Correct verse number(s)
-   - Keyword/context match
-   - All scripture-specific entity fields
-   - Native verse text quality
+   - Correct verse number(s)
+   - Keyword/context match
+   - All scripture-specific entity fields
+   - Native verse text quality
 5. **Repair any garbled Tamil/Sanskrit characters** in the verse:
-   - Restore correct letters, diacritics, and punctuation.
-   - Replace broken Unicode with proper characters.
-   - Correct vowel signs, consonants, and pulli markers.
-   - Preserve original spacing and line breaks.
-   The repaired version is `fixed_llm_response`.
-6. Output in this exact order:
+   - Restore correct letters, diacritics, and punctuation.
+   - Replace broken Unicode with proper characters.
+   - Correct vowel signs, consonants, and pulli markers.
+   - Preserve original spacing and line breaks.
+   The repaired version is `fixed_llm_response`.
+
+6. Confidence-based display rule:
+   - If `Confidence` ≥ 75:
+     1. Show **Step 1** and **Step 2** as described below.
+   - If `Confidence` < 75:
+     - Show only:
+       ```
+       I got a response for your query, but I am not confident enough in its accuracy to display it in full.
+       Confidence score: {{Confidence}}%
+       ```
+     - Do not include the repaired LLM response or the validation table in this case.
+
+7. If showing the full output (Confidence ≥ 75), follow this format exactly:
+
 ---
 <!-- **Step 1 – Repaired LLM Response in Markdown:** -->
 <!-- BEGIN_MARKDOWN -->
@@ -136,15 +149,14 @@ fixed_llm_response
 <b>Confidence score:</b> {{Confidence}}% – {{Justification}}<br>
 <span style="background-color:{{badge_color_code}}; color:white; padding:2px 6px; border-radius:4px;">{{badge_emoji}}</span></p>
 </div>
-
 ---

 Where:
 - `{{dynamic_entity_rows}}` is context-specific entity rows.
-- `{{cleaned_native_text}}` must be taken from the repaired `fixed_llm_response`.
+- `{{cleaned_native_text}}` must be from the repaired `fixed_llm_response`.
 - ✅, ❌, ⚠️ remain for matches.
-- Hidden markers (`<!-- BEGIN_MARKDOWN -->`) make it parseable without showing literal marker text.
-
+- Hidden markers (`<!-- BEGIN_MARKDOWN -->`) prevent them from rendering as visible text.
+- Always wrap verse text so it doesn’t overflow horizontally.

 """
 )
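The new step 6 asks the LLM itself to gate its output on the confidence score. If the same rule were instead enforced in code after the LLM responds, it might look like this hypothetical helper (the 75% threshold and the fallback message mirror the prompt; the function and its parameters are illustrative, not part of the repo):

```python
def render_response(confidence: int, fixed_llm_response: str, validation_table: str) -> str:
    """Apply the prompt's confidence-based display rule outside the LLM."""
    if confidence >= 75:
        # Full output: repaired response (Step 1) plus validation table (Step 2).
        return f"{fixed_llm_response}\n\n---\n\n{validation_table}"
    # Low confidence: show only the fallback notice, never the repaired
    # response or the validation table.
    return (
        "I got a response for your query, but I am not confident enough "
        "in its accuracy to display it in full.\n"
        f"Confidence score: {confidence}%"
    )


print(render_response(80, "repaired verse", "| entity | match |"))
print(render_response(60, "repaired verse", "| entity | match |"))
```

Doing the gating in code would make the threshold deterministic, whereas the prompt-side rule depends on the model following the instruction.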