HeTalksInMaths committed on
Commit
8345a77
·
1 Parent(s): 310c773

Fix chat: Format tool results directly for reliability

Files changed (2)
  1. QUICK_PUSH.txt +37 -0
  2. app_combined.py +11 -8
QUICK_PUSH.txt ADDED
@@ -0,0 +1,37 @@
+ ═══════════════════════════════════════════════
+ FIXED: Chat Integration Now Works!
+ ═══════════════════════════════════════════════
+
+ WHAT WAS FIXED:
+ - Chat now properly formats tool results
+ - No more failed second LLM call
+ - Direct tool result formatting for reliability
+
+ PUSH THIS UPDATE:
+
+ cd /Users/hetalksinmaths/togmal/Togmal-demo
+ git add app_combined.py QUICK_PUSH.txt
+ git commit -m "Fix chat: Format tool results directly without second LLM call"
+ git push origin main
+
+ ═══════════════════════════════════════════════
+ WHAT CHANGED
+ ═══════════════════════════════════════════════
+
+ Before:
+ 1. User asks question
+ 2. LLM calls tool ✅
+ 3. Tool returns data ✅
+ 4. LLM tries to format (FAILS ❌ - HF API issues)
+ 5. User sees no response
+
+ After:
+ 1. User asks question
+ 2. LLM calls tool ✅
+ 3. Tool returns data ✅
+ 4. Direct formatting ✅
+ 5. User sees formatted response ✅
+
+ ═══════════════════════════════════════════════
+
+ The chat will now work reliably!
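The five "After" steps above can be sketched in miniature. This is a minimal sketch, not the app's actual code: the real `execute_tool()` and `format_tool_result()` live in app_combined.py and are not shown here, so the stubs below (and the tool name `analyze_prompt`) are assumptions for illustration only.

```python
# Sketch of the "After" flow: tool call -> direct formatting -> chat history.
import json

def execute_tool(tool_name, tool_args):
    # Stand-in for the real tool, which analyzes the user's prompt.
    return {"tool": tool_name, "difficulty": "moderate"}

def format_tool_result(tool_name, tool_result):
    # Deterministic formatting: no second LLM call that can fail.
    return f"**{tool_name}** result:\n```json\n{json.dumps(tool_result, indent=2)}\n```"

def chat_turn(message, history):
    # Steps 2-5: call the tool, format the result directly, append to history.
    tool_result = execute_tool("analyze_prompt", {"prompt": message})
    response_text = format_tool_result("analyze_prompt", tool_result)
    history.append((message, response_text))
    return history
```

The design point is that every step after the tool call is deterministic, so a flaky inference API can no longer leave the user with an empty response.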
app_combined.py CHANGED
@@ -346,14 +346,17 @@ def chat(message: str, history: List[Tuple[str, str]]) -> Tuple[List[Tuple[str,
         tool_result = execute_tool(tool_name, tool_args)
         tool_status += f"**Result:**\n```json\n{json.dumps(tool_result, indent=2)}\n```\n\n"

-        messages.append({"role": "system", "content": f"Tool {tool_name} returned: {json.dumps(tool_result)}"})
-
-        final_response, _ = call_llm_with_tools(messages, AVAILABLE_TOOLS)
-
-        if final_response:
-            response_text = final_response
-        else:
-            response_text = format_tool_result(tool_name, tool_result)
+        # Instead of calling LLM again (which often fails on free tier),
+        # directly format the tool result into a nice response
+        response_text = format_tool_result(tool_name, tool_result)
+
+        # If no tool was called and no response, provide helpful message
+        if not response_text:
+            response_text = """I'm ToGMAL Assistant. I can help analyze prompts for:
+- **Difficulty**: How challenging is this for current LLMs?
+- **Safety**: Are there any safety concerns?
+
+Try asking me to analyze a prompt!"""

         history.append((message, response_text))
         return history, tool_status
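The new code leans entirely on `format_tool_result`, whose body is not part of this diff. A plausible minimal version might look like the sketch below; the field names and markdown layout are assumptions, not the actual implementation in app_combined.py.

```python
import json

def format_tool_result(tool_name: str, tool_result: dict) -> str:
    """Hypothetical sketch: render a tool's JSON result as markdown.

    Returns an empty string for an empty result, which lets the caller
    fall back to the default assistant message (see the diff above).
    """
    if not tool_result:
        return ""
    lines = [f"### Result from `{tool_name}`", ""]
    for key, value in tool_result.items():
        # Pretty-print nested structures; inline simple scalar values.
        if isinstance(value, (dict, list)):
            lines.append(f"- **{key}**:\n```json\n{json.dumps(value, indent=2)}\n```")
        else:
            lines.append(f"- **{key}**: {value}")
    return "\n".join(lines)
```

Returning `""` for an empty result matters here, because the patched `chat()` uses the falsiness of `response_text` to decide whether to show the default help message.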