I'll create a modern chatbot application with Gradio that includes streaming responses, message history, and a clean interface.

```python
import gradio as gr
import random
import time
from typing import List, Dict, Any, Generator

def generate_response(message: str, history: List[Dict[str, Any]]) -> Generator[str, None, None]:
    """Generate a streaming response based on the user message and chat history."""
    # Simulate thinking time
    time.sleep(0.5)
    
    # Simple response generation based on keywords
    responses = [
        "That's an interesting point! Tell me more about it.",
        "I understand what you're saying. How does that make you feel?",
        "Thanks for sharing that with me. What would you like to explore next?",
        "That's a great question! Let me think about that...",
        "I appreciate your input. Have you considered other perspectives?",
        "Fascinating! Could you elaborate on that idea?",
        "I see what you mean. What are your thoughts on this?",
        "That's quite insightful! What led you to that conclusion?",
    ]
    
    # Check for specific keywords to provide more contextual responses.
    # Match "hi" as a whole word so it doesn't trigger on words like "this".
    message_lower = message.lower()
    if "hello" in message_lower or "hi" in message_lower.split():
        response = "Hello! It's great to chat with you today. How are you feeling?"
    elif "how are you" in message_lower:
        response = "I'm doing well, thank you for asking! I'm here to help and chat with you."
    elif "weather" in message_lower:
        response = "I don't have access to current weather data, but I hope it's pleasant wherever you are!"
    elif "help" in message_lower:
        response = "I'm here to help! Feel free to ask me anything or just chat about whatever's on your mind."
    elif "bye" in message_lower or "goodbye" in message_lower:
        response = "Goodbye! It was nice chatting with you. Feel free to come back anytime!"
    else:
        response = random.choice(responses)
    
    # Stream the response word by word
    words = response.split()
    partial_response = ""
    for word in words:
        partial_response += word + " "
        yield partial_response
        time.sleep(0.1)  # Simulate typing delay

def user_input(user_message: Any, history: List[Dict[str, Any]]) -> tuple[Dict[str, Any], List[Dict[str, Any]]]:
    """Process user input and add it to the history."""
    # gr.MultimodalTextbox submits a dict like {"text": ..., "files": [...]}
    if isinstance(user_message, dict):
        text = (user_message.get("text") or "").strip()
        files = user_message.get("files") or []
        if files:
            text = f"{text} [πŸ“Ž Attachment: {len(files)} file(s)]".strip()
    else:
        text = (user_message or "").strip()

    if text:
        history.append({"role": "user", "content": text})

    # Return an empty multimodal value to clear the input box
    return {"text": "", "files": []}, history

def bot_response(history: List[Dict[str, Any]]) -> Generator[List[Dict[str, Any]], None, None]:
    """Generate the bot response and stream it into the history."""
    # Nothing to respond to unless the last message came from the user
    if not history or history[-1]["role"] != "user":
        yield history
        return

    last_message = history[-1]["content"]

    # Add an empty assistant message that will be filled in progressively
    history.append({"role": "assistant", "content": ""})

    # Stream the response into that last (assistant) message
    for partial_response in generate_response(last_message, history[:-1]):
        history[-1]["content"] = partial_response
        yield history

def clear_chat() -> List[Dict[str, Any]]:
    """Clear the chat history."""
    return []

def retry_last_response(history: List[Dict[str, Any]]) -> Generator[List[Dict[str, Any]], None, None]:
    """Regenerate the last assistant response."""
    if len(history) < 2 or history[-1]["role"] != "assistant":
        yield history
        return

    # Drop the previous assistant response in place so the State stays in sync
    history.pop()
    user_message = history[-1]["content"]

    # Stream a fresh response into a new assistant message
    history.append({"role": "assistant", "content": ""})
    for partial_response in generate_response(user_message, history[:-1]):
        history[-1]["content"] = partial_response
        yield history

# Create the Gradio interface
with gr.Blocks(
    title="AI Chatbot",
    theme=gr.themes.Soft(),
    css="""
    .gradio-container {
        max-width: 800px !important;
        margin: auto !important;
    }
    .message.user {
        background-color: #e3f2fd !important;
    }
    .message.assistant {
        background-color: #f5f5f5 !important;
    }
    """
) as demo:
    gr.HTML("""
    <div style="text-align: center; margin-bottom: 20px;">
        <h1>πŸ€– AI Chatbot</h1>
        <p>Built with <a href="https://huggingface.co/spaces/akhaliq/anycoder" target="_blank">anycoder</a></p>
    </div>
    """)
    
    # Chat history state
    chat_history = gr.State(value=[])
    
    # Chatbot component
    with gr.Row():
        chatbot = gr.Chatbot(
            type="messages",
            height=500,
            show_copy_button=True,
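            # Note: bubble_full_width is deprecated in recent Gradio releases and may be ignored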
            bubble_full_width=False,
            avatar_images=(
                None,  # User avatar (default)
                "https://www.gradio.app/_app/immutable/assets/logo.1c311d4a.svg"  # Bot avatar
            ),
        )
    
    # Input section
    with gr.Row():
        with gr.Column(scale=4):
            message_input = gr.MultimodalTextbox(
                placeholder="Type your message here...",
                show_label=False,
                lines=1,
                max_lines=5,
                file_types=["image"],
                file_count="single",
            )
        
        with gr.Column(scale=1):
            with gr.Row():
                submit_btn = gr.Button("Send", variant="primary", size="sm")
                clear_btn = gr.Button("Clear", size="sm")
    
    # Additional controls
    with gr.Row():
        retry_btn = gr.Button("πŸ”„ Retry Last Response", size="sm", variant="secondary")
        gr.HTML("""
        <div style="text-align: center; color: #666; font-size: 0.9em; margin-top: 10px;">
            πŸ’‘ Tip: Try asking about different topics or just have a casual conversation!
        </div>
        """)
    
    # Example prompts
    gr.HTML("""
    <div style="margin-top: 20px; padding: 15px; background-color: #f8f9fa; border-radius: 8px;">
        <h3>πŸ’¬ Example Prompts:</h3>
        <ul style="margin: 10px 0;">
            <li>"Tell me something interesting"</li>
            <li>"How does AI work?"</li>
            <li>"What's your favorite book?"</li>
            <li>"Can you help me with a problem?"</li>
        </ul>
    </div>
    """)
    
    # Event handlers
    msg_event = message_input.submit(
        user_input,
        [message_input, chat_history],
        [message_input, chat_history],
        queue=False
    ).then(
        bot_response,
        chat_history,
        chatbot,
        queue=True
    )
    
    submit_event = submit_btn.click(
        user_input,
        [message_input, chat_history],
        [message_input, chat_history],
        queue=False
    ).then(
        bot_response,
        chat_history,
        chatbot,
        queue=True
    )
    
    clear_btn.click(
        clear_chat,
        outputs=chatbot,
        queue=False
    ).then(
        lambda: [],
        outputs=chat_history,
        queue=False
    )
    
    retry_btn.click(
        retry_last_response,
        chat_history,
        chatbot,
        queue=True
    )
    
    # Handle file uploads in multimodal input
    def handle_multimodal_input(data: Dict[str, Any], history: List[Dict[str, Any]]) -> tuple[str, List[Dict[str, Any]]]:
        """Handle multimodal input including text and files."""
        if isinstance(data, dict):
            text = data.get("text", "")
            files = data.get("files", [])
            
            if files:
                # If there are files, acknowledge them
                if text:
                    message = f"{text} [πŸ“Ž Attachment: {len(files)} file(s)]"
                else:
                    message = f"[πŸ“Ž Sent {len(files)} file(s)]"
            else:
                message = text
        else:
            message = str(data) if data else ""
        
        if message.strip():
            history.append({"role": "user", "content": message})
        
        return "", history
    
    # Not every Gradio release exposes an .upload() listener on MultimodalTextbox;
    # attachments are already handled by the .submit() path above, so this extra
    # wiring is attached only when the listener is available.
    if hasattr(message_input, "upload"):
        message_input.upload(
            handle_multimodal_input,
            [message_input, chat_history],
            [message_input, chat_history],
            queue=False
        ).then(
            bot_response,
            chat_history,
            chatbot,
            queue=True
        )

# Launch the demo
if __name__ == "__main__":
    demo.launch(
        share=False,
        server_name="0.0.0.0",
        server_port=7860,
        show_api=True,
        show_error=True,
    )
```

This chatbot application features:

🎯 **Core Features:**
- Streaming responses with realistic typing delays (a quick way to exercise the generator is sketched after this list)
- Message history persistence
- Clean, modern UI with avatar support
- Multimodal input support (text + file uploads)
- Contextual responses based on keywords
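
To see the streaming behaviour without launching the UI, the generator can be exercised directly. A minimal sketch, assuming the app above is saved as `app.py` (importing it only builds the interface, since `launch()` sits behind the `__main__` guard):

```python
# Consume the streaming generator directly; assumes the app above is saved as app.py.
from app import generate_response

for partial in generate_response("hello there", history=[]):
    print(partial)  # prints the growing reply, word by word
```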

🎨 **UI Elements:**
- Professional header with "Built with anycoder" link
- Responsive chat interface
- Send, Clear, and Retry buttons
- Example prompts for users
- Styled message bubbles

⚑ **Interactive Components:**
- Real-time message streaming
- Retry last response functionality
- Clear chat history
- File upload support
- Copy message functionality

πŸ”§ **Technical Features:**
- State management for chat history
- Event-driven architecture
- Queue management for smooth streaming (an optional tuning sketch follows this list)
- Guard clauses against empty input and malformed history
- Centered, width-constrained layout via custom CSS
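
The per-event `queue=True/False` flags above rely on Gradio's built-in queue, which is enabled by default in recent releases. If streaming feels sluggish under load, the queue can be tuned before launching; a small sketch with purely illustrative values:

```python
# Optional queue tuning; call this before demo.launch(). The numbers are illustrative.
demo.queue(
    max_size=32,                  # cap on events waiting in the queue
    default_concurrency_limit=4,  # events processed concurrently per worker
)
```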

The chatbot provides engaging conversations with contextual responses and a polished user experience, making it a good fit for demonstrations or as a foundation for more advanced AI chat applications; the sketch below shows one way to swap the canned responses for a real model.
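
To turn the demo into a real assistant, only `generate_response` needs to change: it already receives the latest message plus the chat history as OpenAI-style `{"role", "content"}` dicts and yields growing partial strings. A minimal sketch using `huggingface_hub`'s `InferenceClient`; the model name is a placeholder and authentication (for example an `HF_TOKEN` environment variable) is assumed to be configured separately:

```python
from typing import Any, Dict, Generator, List

from huggingface_hub import InferenceClient

# Placeholder model name; use any chat model you have access to.
client = InferenceClient(model="meta-llama/Llama-3.1-8B-Instruct")

def generate_response(message: str, history: List[Dict[str, Any]]) -> Generator[str, None, None]:
    """Drop-in replacement that streams tokens from a hosted chat model."""
    # history already ends with the latest user message (see bot_response above)
    messages = [{"role": "system", "content": "You are a friendly assistant."}] + history
    partial = ""
    for chunk in client.chat_completion(messages=messages, max_tokens=512, stream=True):
        partial += chunk.choices[0].delta.content or ""
        yield partial
```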