Recursive Implementation Example
This document provides a detailed example of how the recursive cognitive architecture works in Multi-Agent Debate. We'll walk through the complete lifecycle of a market decision, from data ingestion to trade execution, highlighting the recursive patterns and interpretability mechanisms at each stage.
Overview
In this example, we'll follow a complete decision cycle focused on analyzing Tesla (TSLA) stock, showing how multiple philosophical agents evaluate the same data through different lenses, form consensus, and generate a final decision with full attribution.
1. Data Ingestion
The process begins with market data ingestion from Yahoo Finance:
from multi_agent_debate.market.environment import MarketEnvironment
# Initialize market environment
market = MarketEnvironment(data_source="yahoo", tickers=["TSLA"])
# Get current market data
market_data = market.get_current_market_data()
The market data includes:
- Price history
- Volume data
- Fundamental metrics
- Recent news sentiment
- Technical indicators
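The exact payload depends on the data source; as an illustrative sketch of the shape (the field names here are assumptions for this example, not the library's guaranteed schema), the returned structure might look like:

```python
# Illustrative shape of the data returned for one ticker.
# Field names are assumptions for this example, not a guaranteed schema.
market_data = {
    "TSLA": {
        "price_history": [240.10, 241.75, 242.15],   # recent closes
        "volume": 98_500_000,
        "fundamentals": {"eps": 3.12, "book_value_per_share": 20.19},
        "news_sentiment": 0.12,                       # -1 (bearish) .. 1 (bullish)
        "technical_indicators": {"rsi_14": 61.3, "sma_50": 228.40},
    },
    "economic_indicators": {"aaa_bond_yield": 0.045},
}

current_price = market_data["TSLA"]["price_history"][-1]
print(current_price)  # 242.15
```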
2. Agent-Specific Processing
Each philosophical agent processes this data through its unique cognitive lens. Let's look at three agents:
Graham (Value) Agent
# Graham Agent processing
graham_agent = GrahamAgent(reasoning_depth=3)
graham_processed = graham_agent.process_market_data(market_data)
The Graham agent focuses on intrinsic value calculation:
# Internal implementation of Graham's intrinsic value calculation
def _calculate_intrinsic_value(self, fundamentals, ticker_data):
eps = fundamentals.get('eps', 0)
book_value = fundamentals.get('book_value_per_share', 0)
growth_rate = fundamentals.get('growth_rate', 0)
# Graham's formula: IV = EPS * (8.5 + 2g) * 4.4 / Y
bond_yield = ticker_data.get('economic_indicators', {}).get('aaa_bond_yield', 0.045) * 100  # convert to percent: Y is e.g. 4.5, not 0.045
bond_factor = 4.4 / max(bond_yield, 0.01)
growth_adjusted_pe = 8.5 + (2 * growth_rate)
earnings_value = eps * growth_adjusted_pe * bond_factor if eps > 0 else 0
# Calculate book value with margin
book_value_margin = book_value * 1.5
# Use the lower of the two values for conservatism
if earnings_value > 0 and book_value_margin > 0:
intrinsic_value = min(earnings_value, book_value_margin)
else:
intrinsic_value = earnings_value if earnings_value > 0 else book_value_margin
return max(intrinsic_value, 0)
The agent calculates a margin of safety:
Ticker: TSLA
Current Price: $242.15
Intrinsic Value: $180.32
Margin of Safety: -34.3% (negative margin indicates overvaluation)
Analysis: TSLA appears overvalued compared to traditional value metrics.
Recommendation: SELL
Confidence: 0.78
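The Graham numbers above can be reproduced with a standalone version of the formula (the `graham_intrinsic_value` helper and its inputs are illustrative, not the agent's exact internals; note that Graham's Y is the AAA bond yield expressed in percent, and the margin of safety here is measured relative to intrinsic value):

```python
def graham_intrinsic_value(eps, book_value, growth_rate, bond_yield_pct=4.5):
    """Graham's revised formula: IV = EPS * (8.5 + 2g) * 4.4 / Y,
    taking the lower of earnings value and 1.5x book value for conservatism."""
    bond_factor = 4.4 / max(bond_yield_pct, 0.01)
    earnings_value = eps * (8.5 + 2 * growth_rate) * bond_factor if eps > 0 else 0
    book_value_margin = book_value * 1.5
    if earnings_value > 0 and book_value_margin > 0:
        return min(earnings_value, book_value_margin)
    return max(earnings_value if earnings_value > 0 else book_value_margin, 0)

def margin_of_safety(intrinsic_value, price):
    """Negative when the market price exceeds intrinsic value."""
    return (intrinsic_value - price) / intrinsic_value

# Margin of safety for the intrinsic value and price quoted above:
print(f"{margin_of_safety(180.32, 242.15):.1%}")  # -34.3%
```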
Wood (Innovation) Agent
# Wood Agent processing
wood_agent = WoodAgent(reasoning_depth=4)
wood_processed = wood_agent.process_market_data(market_data)
The Wood agent focuses on disruptive innovation and growth potential:
# Internal implementation of growth potential analysis
def _analyze_growth_potential(self, ticker_data, market_context):
# Analyze innovation factors
innovation_score = self._calculate_innovation_score(ticker_data)
# Analyze addressable market
tam = self._calculate_total_addressable_market(ticker_data, market_context)
# Project future growth
growth_projection = self._project_exponential_growth(
ticker_data, innovation_score, tam
)
return {
"innovation_score": innovation_score,
"total_addressable_market": tam,
"growth_projection": growth_projection,
}
The agent's analysis shows:
Ticker: TSLA
Innovation Score: 0.87
Total Addressable Market: $4.2T
5-Year CAGR Projection: 28.3%
Analysis: TSLA is well-positioned in multiple disruptive fields including EVs, energy storage, AI, and robotics.
Recommendation: BUY
Confidence: 0.82
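The 5-year CAGR projection translates into a revenue multiple as follows (a minimal sketch of the compounding arithmetic, not the agent's actual projection model):

```python
def project_growth(cagr: float, years: int) -> float:
    """Revenue multiple implied by compounding at `cagr` for `years` years."""
    return (1 + cagr) ** years

multiple = project_growth(0.283, 5)
print(f"{multiple:.2f}x")  # 3.48x
```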
Dalio (Macro) Agent
# Dalio Agent processing
dalio_agent = DalioAgent(reasoning_depth=3)
dalio_processed = dalio_agent.process_market_data(market_data)
The Dalio agent examines macroeconomic factors:
# Internal implementation of macroeconomic analysis
def _analyze_macro_environment(self, ticker_data, economic_indicators):
# Analyze interest rate impact
interest_impact = self._calculate_interest_sensitivity(ticker_data, economic_indicators)
# Analyze inflation impact
inflation_impact = self._calculate_inflation_impact(ticker_data, economic_indicators)
# Analyze growth cycle position
cycle_position = self._determine_economic_cycle_position(economic_indicators)
# Assess geopolitical risks
geopolitical_risk = self._assess_geopolitical_risk(economic_indicators)
return {
"interest_impact": interest_impact,
"inflation_impact": inflation_impact,
"cycle_position": cycle_position,
"geopolitical_risk": geopolitical_risk,
}
The agent's analysis shows:
Ticker: TSLA
Interest Rate Sensitivity: -0.65 (high negative sensitivity)
Inflation Impact: -0.32 (moderate negative impact)
Economic Cycle Position: Late Expansion
Analysis: TSLA will face headwinds from high interest rates and potential economic slowdown.
Recommendation: HOLD
Confidence: 0.65
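The interest and inflation impacts above can be folded into a single headwind score; the 0.6/0.4 weighting below is purely illustrative, not the Dalio agent's actual aggregation:

```python
def macro_headwind_score(interest_impact, inflation_impact,
                         interest_weight=0.6, inflation_weight=0.4):
    """Combine macro impact factors into one headwind score in [-1, 1].
    The 0.6/0.4 split is an illustrative assumption."""
    return interest_weight * interest_impact + inflation_weight * inflation_impact

score = macro_headwind_score(-0.65, -0.32)
print(round(score, 3))  # -0.518
```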
3. Reasoning Graph Execution
Each agent's reasoning process is executed via a LangGraph reasoning structure. Here's a simplified view of the Wood agent's reasoning graph:
def _configure_reasoning_graph(self) -> None:
"""Configure the reasoning graph for disruptive innovation analysis."""
# Add custom reasoning nodes
self.reasoning_graph.add_node(
"innovation_analysis",
self._innovation_analysis
)
self.reasoning_graph.add_node(
"growth_projection",
self._growth_projection
)
self.reasoning_graph.add_node(
"competition_analysis",
self._competition_analysis
)
self.reasoning_graph.add_node(
"valuation_adjustment",
self._valuation_adjustment
)
# Configure reasoning flow
self.reasoning_graph.set_entry_point("innovation_analysis")
self.reasoning_graph.add_edge("innovation_analysis", "growth_projection")
self.reasoning_graph.add_edge("growth_projection", "competition_analysis")
self.reasoning_graph.add_edge("competition_analysis", "valuation_adjustment")
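The node-and-edge structure above can be approximated in plain Python as a sequential pipeline that threads a state dict through each node (a simplification of what the LangGraph graph does at runtime; the node functions here are stand-ins):

```python
from typing import Any, Callable, Dict, List

State = Dict[str, Any]
Node = Callable[[State], State]

def run_reasoning_graph(entry_state: State, nodes: List[Node]) -> State:
    """Execute nodes in order, each reading and extending the shared state."""
    state = entry_state
    for node in nodes:
        state = node(state)
    return state

# Stand-in implementations of the Wood agent's four stages.
def innovation_analysis(s):  return {**s, "innovation_score": 0.87}
def growth_projection(s):    return {**s, "cagr": 0.283}
def competition_analysis(s): return {**s, "competitive_moat": "strong"}
def valuation_adjustment(s): return {**s, "adjusted_valuation": "growth-justified"}

result = run_reasoning_graph(
    {"ticker": "TSLA"},
    [innovation_analysis, growth_projection, competition_analysis, valuation_adjustment],
)
print(sorted(result))  # state accumulates one key per node
```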
Each reasoning node executes and passes state to the next node, building up a complete reasoning trace:
Step 1: Innovation Analysis
- Assessed disruptive potential in key markets
- Analyzed R&D pipeline and technological moats
- Identified 4 significant innovation vectors
Step 2: Growth Projection
- Projected TAM expansion in core markets
- Calculated penetration rates and growth curves
- Estimated revenue CAGR of 28.3% over 5 years
Step 3: Competition Analysis
- Assessed competitive positioning in EV market
- Analyzed first-mover advantages in energy storage
- Identified emerging threats in autonomous driving
Step 4: Valuation Adjustment
- Applied growth-adjusted valuation metrics
- Discounted future cash flows with risk adjustment
- Compared valuation to traditional metrics
4. Signal Generation
Each agent generates investment signals based on its reasoning:
# Generate signals from each agent
graham_signals = graham_agent.generate_signals(graham_processed)
wood_signals = wood_agent.generate_signals(wood_processed)
dalio_signals = dalio_agent.generate_signals(dalio_processed)
Each signal includes:
- Action recommendation (buy/sell/hold)
- Confidence level
- Quantity recommendation
- Complete reasoning chain
- Value basis (philosophical foundation)
- Attribution trace (causal links to evidence)
Example of Wood agent's signal:
{
"ticker": "TSLA",
"action": "buy",
"confidence": 0.82,
"quantity": 41,
"reasoning": "Tesla shows strong innovation potential across multiple verticals including EVs, energy storage, AI, and robotics. Their R&D pipeline demonstrates continued technological leadership with high growth potential in the coming decade.",
"intent": "Capitalize on long-term disruptive innovation growth",
"value_basis": "Disruptive innovation creates exponential growth and market expansion that traditional metrics fail to capture",
"attribution_trace": {
"innovation_score": 0.35,
"growth_projection": 0.25,
"competition_analysis": 0.20,
"valuation_adjustment": 0.20
},
"drift_signature": {
"interest_rates": -0.05,
"regulation": -0.03,
"competition": -0.02
}
}
5. Meta-Agent Arbitration
The portfolio meta-agent receives signals from all philosophical agents:
# Create portfolio manager (meta-agent)
portfolio = PortfolioManager(
agents=[graham_agent, wood_agent, dalio_agent],
arbitration_depth=2,
show_trace=True
)
# Process market data through meta-agent
meta_result = portfolio.process_market_data(market_data)
Consensus Formation
The meta-agent first attempts to find consensus on non-conflicting signals:
def _consensus_formation(self, state) -> Dict[str, Any]:
"""Form consensus from agent signals."""
# Extract signals by ticker
ticker_signals = state.context.get("ticker_signals", {})
# Form consensus for each ticker
consensus_decisions = []
for ticker, signals in ticker_signals.items():
# Collect buy/sell/hold signals
buy_signals = []
sell_signals = []
hold_signals = []
for item in signals:
signal = item.get("signal", {})
action = signal.action.lower()
if action == "buy":
buy_signals.append((item, signal))
elif action == "sell":
sell_signals.append((item, signal))
elif action == "hold":
hold_signals.append((item, signal))
# Skip if conflicting signals (handle in conflict resolution)
if (buy_signals and sell_signals) or (not buy_signals and not sell_signals and not hold_signals):
continue
# Form consensus for non-conflicting signals
if buy_signals:
# Form buy consensus
consensus = self._form_action_consensus(ticker, "buy", buy_signals)
if consensus:
consensus_decisions.append(consensus)
elif sell_signals:
# Form sell consensus
consensus = self._form_action_consensus(ticker, "sell", sell_signals)
if consensus:
consensus_decisions.append(consensus)
return {
"context": {
**state.context,
"consensus_decisions": consensus_decisions,
"consensus_tickers": [decision.get("ticker") for decision in consensus_decisions],
},
"output": {
"consensus_decisions": consensus_decisions,
}
}
Conflict Resolution
For TSLA, we have a conflict: Graham (SELL) vs. Wood (BUY) vs. Dalio (HOLD). The meta-agent resolves this conflict:
def _resolve_ticker_conflict(self, ticker: str, action_signals: Dict[str, List[Tuple[Dict[str, Any], Any]]]) -> Optional[Dict[str, Any]]:
"""Resolve conflict for a specific ticker."""
# Calculate total weight for each action
action_weights = {}
action_confidences = {}
for action, signals in action_signals.items():
total_weight = 0.0
weighted_confidence = 0.0
for item, signal in signals:
agent_id = item.get("agent_id", "")
# Skip if missing agent ID
if not agent_id:
continue
# Get agent weight
agent_weight = self.agent_weights.get(agent_id, 0)
# Add to weighted confidence
weighted_confidence += signal.confidence * agent_weight
total_weight += agent_weight
# Store action weight and confidence
if total_weight > 0:
action_weights[action] = total_weight
action_confidences[action] = weighted_confidence / total_weight
# Choose action with highest weight
if not action_weights:
return None
best_action = max(action_weights.items(), key=lambda x: x[1])[0]
# Check confidence threshold
if action_confidences.get(best_action, 0) < self.consensus_threshold:
return None
# Get signals for best action
best_signals = action_signals.get(best_action, [])
# Form consensus for best action
return self._form_action_consensus(ticker, best_action, best_signals)
In our case, applying the current agent weights (based on historical performance):
- Graham agent: 0.25 (weight) × 0.78 (confidence) = 0.195 (weighted confidence)
- Wood agent: 0.40 (weight) × 0.82 (confidence) = 0.328 (weighted confidence)
- Dalio agent: 0.35 (weight) × 0.65 (confidence) = 0.228 (weighted confidence)
The Wood agent's BUY signal has the highest weighted confidence, so the meta-agent forms consensus around it.
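The weighted-confidence comparison in the bullets above can be reproduced directly (weights and confidences taken from the example; this mirrors the bullet arithmetic rather than the full `_resolve_ticker_conflict` implementation):

```python
weights     = {"graham": 0.25, "wood": 0.40, "dalio": 0.35}
confidences = {"graham": 0.78, "wood": 0.82, "dalio": 0.65}
actions     = {"graham": "sell", "wood": "buy", "dalio": "hold"}

# Weighted confidence per agent: weight x confidence
weighted = {agent: weights[agent] * confidences[agent] for agent in weights}
winner = max(weighted, key=weighted.get)
print(winner, actions[winner], round(weighted[winner], 3))  # wood buy 0.328
```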
Position Sizing
The meta-agent determines position size based on confidence and attribution:
def _calculate_position_size(self, ticker: str, action: str, confidence: float,
attribution: Dict[str, float], portfolio_value: float) -> float:
"""Calculate position size based on confidence and attribution."""
# Base position size as percentage of portfolio
base_size = self.min_position_size + (confidence * (self.max_position_size - self.min_position_size))
# Calculate attribution-weighted size
if attribution:
# Calculate agent performance scores
performance_scores = {}
for agent_id, weight in attribution.items():
# Find agent
agent = None
for a in self.agents:
if a.id == agent_id:
agent = a
break
if agent:
# Use consistency score as proxy for performance
performance_score = agent.state.consistency_score
performance_scores[agent_id] = performance_score
# Calculate weighted performance score
weighted_score = 0
total_weight = 0
for agent_id, weight in attribution.items():
if agent_id in performance_scores:
weighted_score += performance_scores[agent_id] * weight
total_weight += weight
# Adjust base size by performance
if total_weight > 0:
performance_factor = weighted_score / total_weight
base_size *= (0.5 + (0.5 * performance_factor))
# Calculate currency amount
target_size = portfolio_value * base_size
return target_size
For TSLA:
- Base position size: 0.01 + (0.82 × (0.20 - 0.01)) = 0.166 (16.6% of portfolio)
- Adjusted for agent performance: 16.6% × 1.1 ≈ 18.2% of portfolio
- For a $1,000,000 portfolio: $182,000 position size
- At current price of $242.15: 751 shares
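The sizing arithmetic in the bullets above can be checked end-to-end (constants taken from the worked example; the performance-adjustment step is skipped here in favor of the $182,000 target it produces):

```python
import math

# Constants from the worked example above.
min_size, max_size = 0.01, 0.20   # 1% floor, 20% cap on position size
confidence = 0.82

base_size = min_size + confidence * (max_size - min_size)
print(round(base_size, 3))        # 0.166, i.e. ~16.6% of the portfolio

price = 242.15
target_dollars = 182_000          # 18.2% of the $1,000,000 portfolio
shares = math.floor(target_dollars / price)
cost = shares * price
print(shares, round(cost, 2))     # 751 shares
```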
Meta Reflection
The meta-agent performs a final reflection on its decision process:
def _meta_reflection(self, state) -> Dict[str, Any]:
"""Perform meta-reflection on decision process."""
# Extract decisions
sized_decisions = state.context.get("sized_decisions", [])
# Update meta state with arbitration record
arbitration_record = {
"id": str(uuid.uuid4()),
"decisions": sized_decisions,
"timestamp": datetime.datetime.now().isoformat(),
}
self.meta_state["arbitration_history"].append(arbitration_record)
# Update agent weights based on performance
self._update_agent_weights()
# Calculate meta-confidence
meta_confidence = sum(decision.get("confidence", 0) for decision in sized_decisions) / len(sized_decisions) if sized_decisions else 0.5
# Return final output
return {
"output": {
"consensus_decisions": sized_decisions,
"meta_confidence": meta_confidence,
"agent_weights": self.agent_weights,
"timestamp": datetime.datetime.now().isoformat(),
},
"confidence": meta_confidence,
}
The meta-agent final reflection includes:
- Consensus tracking
- Agent weight adjustment
- Meta-confidence calculation
- Temporal memory update
6. Trade Execution
The final step is trade execution:
# Execute trades based on consensus decisions
consensus_decisions = meta_result.get("meta_agent", {}).get("consensus_decisions", [])
execution_results = portfolio.execute_trades(consensus_decisions)
The execution includes:
- Position sizing
- Order placement
- Confirmation handling
- Portfolio state update
{
"trades": [
{
"ticker": "TSLA",
"action": "buy",
"quantity": 751,
"price": 242.15,
"cost": 181854.65,
"timestamp": "2024-04-17T14:23:45.123456"
}
],
"errors": [],
"portfolio_update": {
"timestamp": "2024-04-17T14:23:45.654321",
"portfolio_value": 1000000.00,
"cash": 818145.35,
"positions": {
"TSLA": {
"ticker": "TSLA",
"quantity": 751,
"entry_price": 242.15,
"current_price": 242.15,
"market_value": 181854.65,
"allocation": 0.182,
"unrealized_gain": 0.0,
"entry_date": "2024-04-17T14:23:45.123456"
}
},
"returns": {
"total_return": 0.0,
"daily_return": 0.0
},
"allocation": {
"cash": 0.818,
"TSLA": 0.182
}
}
}
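A useful sanity check on any `portfolio_update` payload is that cash plus position market values equals total portfolio value. A generic validation sketch with illustrative round numbers (not the payload above):

```python
def check_portfolio_consistency(update, tolerance=0.01):
    """Verify cash + sum(position market values) == portfolio_value."""
    positions_total = sum(p["market_value"] for p in update["positions"].values())
    return abs(update["cash"] + positions_total - update["portfolio_value"]) <= tolerance

sample_update = {
    "portfolio_value": 1_000_000.00,
    "cash": 800_000.00,
    "positions": {"TSLA": {"market_value": 200_000.00}},
}
print(check_portfolio_consistency(sample_update))  # True
```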
7. Attribution Tracing
Throughout this process, complete attribution tracing is maintained:
# Generate attribution report
attribution_report = portfolio.tracer.generate_attribution_report(meta_result.get("meta_agent", {}).get("consensus_decisions", []))
The attribution report shows the complete decision provenance:
{
"agent_name": "PortfolioMetaAgent",
"timestamp": "2024-04-17T14:23:46.123456",
"signals": 1,
"attribution_summary": {
"Wood": 0.45,
"Dalio": 0.35,
"Graham": 0.20
},
"confidence_summary": {
"mean": 0.82,
"median": 0.82,
"min": 0.82,
"max": 0.82
},
"top_factors": [
{
"source": "innovation_score",
"weight": 0.35
},
{
"source": "growth_projection",
"weight": 0.25
},
{
"source": "economic_cycle_position",
"weight": 0.15
},
{
"source": "competition_analysis",
"weight": 0.10
},
{
"source": "intrinsic_value_calculation",
"weight": 0.10
}
],
"shell_patterns": [
{
"pattern": "v07 CIRCUIT-FRAGMENT",
"count": 1,
"frequency": 1.0
}
]
}
8. Visualization
The system provides multiple visualization tools for interpretability:
Consensus Graph
# Generate consensus graph
consensus_graph = portfolio.visualize_consensus_graph()
The consensus graph shows the flow of influence between agents and decisions:
{
"nodes": [
{
"id": "meta",
"label": "Portfolio Meta-Agent",
"type": "meta",
"size": 20
},
{
"id": "agent-1",
"label": "Graham Agent",
"type": "agent",
"philosophy": "Value investing focused on margin of safety",
"size": 15,
"weight": 0.20
},
{
"id": "agent-2",
"label": "Wood Agent",
"type": "agent",
"philosophy": "Disruptive innovation investing",
"size": 15,
"weight": 0.45
},
{
"id": "agent-3",
"label": "Dalio Agent",
"type": "agent",
"philosophy": "Macroeconomic-based investing",
"size": 15,
"weight": 0.35
},
{
"id": "position-TSLA",
"label": "TSLA",
"type": "position",
"size": 10,
"value": 181854.65
}
],
"links": [
{
"source": "agent-1",
"target": "meta",
"value": 0.20,
"type": "influence"
},
{
"source": "agent-2",
"target": "meta",
"value": 0.45,
"type": "influence"
},
{
"source": "agent-3",
"target": "meta",
"value": 0.35,
"type": "influence"
},
{
"source": "meta",
"target": "position-TSLA",
"value": 1.0,
"type": "allocation"
},
{
"source": "agent-2",
"target": "position-TSLA",
"value": 0.45,
"type": "attribution"
},
{
"source": "agent-3",
"target": "position-TSLA",
"value": 0.35,
"type": "attribution"
},
{
"source": "agent-1",
"target": "position-TSLA",
"value": 0.20,
"type": "attribution"
}
],
"timestamp": "2024-04-17T14:23:46.987654"
}
Agent Conflict Map
# Generate agent conflict map
conflict_map = portfolio.visualize_agent_conflict_map()
The conflict map visualizes the specific disagreements between agents:
{
"nodes": [
{
"id": "agent-1",
"label": "Graham Agent",
"type": "agent",
"philosophy": "Value investing focused on margin of safety",
"size": 15
},
{
"id": "agent-2",
"label": "Wood Agent",
"type": "agent",
"philosophy": "Disruptive innovation investing",
"size": 15
},
{
"id": "agent-3",
"label": "Dalio Agent",
"type": "agent",
"philosophy": "Macroeconomic-based investing",
"size": 15
},
{
"id": "position-TSLA",
"label": "TSLA",
"type": "position",
"size": 10
}
],
"links": [
{
"source": "agent-1",
"target": "agent-2",
"value": 1.0,
"type": "conflict",
"ticker": "TSLA"
},
{
"source": "agent-2",
"target": "agent-3",
"value": 1.0,
"type": "conflict",
"ticker": "TSLA"
},
{
"source": "agent-1",
"target": "agent-3",
"value": 1.0,
"type": "conflict",
"ticker": "TSLA"
}
],
"conflict_zones": [
{
"id": "conflict-1",
"ticker": "TSLA",
"agents": ["agent-1", "agent-2", "agent-3"],
"resolution": "resolved",
"timestamp": "2024-04-17T14:23:44.567890"
}
],
"timestamp": "2024-04-17T14:23:47.654321"
}
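The three conflict links in the map follow directly from pairwise disagreement between the agents' recommended actions; a minimal sketch of how such links could be derived:

```python
from itertools import combinations

# Each agent's recommended action for TSLA, from the example above.
recommendations = {"agent-1": "sell", "agent-2": "buy", "agent-3": "hold"}

# A conflict link exists between any pair of agents whose actions differ.
conflicts = [
    {"source": a, "target": b, "type": "conflict", "ticker": "TSLA"}
    for a, b in combinations(sorted(recommendations), 2)
    if recommendations[a] != recommendations[b]
]
print(len(conflicts))  # 3: all three pairs disagree
```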
Shell Failure Map
# Create shell diagnostics
shell_diagnostics = ShellDiagnostics(
agent_id="portfolio",
agent_name="Portfolio",
tracing_tools=TracingTools(
agent_id="portfolio",
agent_name="Portfolio",
tracing_mode=TracingMode.DETAILED,
)
)
# Create shell failure map
failure_map = ShellFailureMap()
# Analyze each agent's state for shell failures
for agent in [graham_agent, wood_agent, dalio_agent]:
agent_state = agent.get_state_report()
# Simulate shell failures based on agent state
for shell_pattern in [ShellPattern.CIRCUIT_FRAGMENT, ShellPattern.META_FAILURE]:
failure_data = shell_diagnostics.simulate_shell_failure(
shell_pattern=shell_pattern,
context=agent_state,
)
# Add to failure map
failure_map.add_failure(
agent_id=agent.id,
agent_name=agent.name,
shell_pattern=shell_pattern,
failure_data=failure_data,
)
# Generate visualization
shell_failure_viz = failure_map.generate_failure_map_visualization()
The shell failure map visualizes interpretability patterns detected in the agents:
{
"nodes": [
{
"id": "agent-1",
"label": "Graham Agent",
"type": "agent",
"size": 15,
"failure_count": 1
},
{
"id": "agent-2",
"label": "Wood Agent",
"type": "agent",
"size": 15,
"failure_count": 2
},
{
"id": "agent-3",
"label": "Dalio Agent",
"type": "agent",
"size": 15,
"failure_count": 1
},
{
"id": "v07 CIRCUIT-FRAGMENT",
"label": "CIRCUIT-FRAGMENT",
"type": "pattern",
"size": 10,
"failure_count": 3
},
{
"id": "v10 META-FAILURE",
"label": "META-FAILURE",
"type": "pattern",
"size": 10,
"failure_count": 1
},
{
"id": "failure-1",
"label": "Failure 3f4a9c",
"type": "failure",
"size": 5,
"timestamp": "2024-04-17T14:23:48.123456"
},
{
"id": "failure-2",
"label": "Failure b7d5e2",
"type": "failure",
"size": 5,
"timestamp": "2024-04-17T14:23:48.234567"
},
{
"id": "failure-3",
"label": "Failure 9c6f1a",
"type": "failure",
"size": 5,
"timestamp": "2024-04-17T14:23:48.345678"
},
{
"id": "failure-4",
"label": "Failure 2e8d7f",
"type": "failure",
"size": 5,
"timestamp": "2024-04-17T14:23:48.456789"
}
],
"links": [
{
"source": "agent-1",
"target": "failure-1",
"type": "agent_failure"
},
{
"source": "v07 CIRCUIT-FRAGMENT",
"target": "failure-1",
"type": "pattern_failure"
},
{
"source": "agent-2",
"target": "failure-2",
"type": "agent_failure"
},
{
"source": "v07 CIRCUIT-FRAGMENT",
"target": "failure-2",
"type": "pattern_failure"
},
{
"source": "agent-2",
"target": "failure-3",
"type": "agent_failure"
},
{
"source": "v10 META-FAILURE",
"target": "failure-3",
"type": "pattern_failure"
},
{
"source": "agent-3",
"target": "failure-4",
"type": "agent_failure"
},
{
"source": "v07 CIRCUIT-FRAGMENT",
"target": "failure-4",
"type": "pattern_failure"
}
],
"timestamp": "2024-04-17T14:23:49.000000"
}
9. Agent Memory & Learning
After each trading cycle, agents update their internal state:
# Update agent states based on market feedback
market_feedback = {
'portfolio_value': execution_results['portfolio_update']['portfolio_value'],
'performance': {'TSLA': 0.02}, # Example: 2% return
'decisions': consensus_decisions,
'avg_confidence': 0.82,
}
# Update each agent's state
for agent in [graham_agent, wood_agent, dalio_agent]:
agent.update_state(market_feedback)
Each agent processes the feedback differently based on its philosophy:
Wood Agent Memory Update
def _update_beliefs(self, market_feedback: Dict[str, Any]) -> None:
"""Update agent's belief state based on market feedback."""
# Extract relevant signals
if 'performance' in market_feedback:
performance = market_feedback['performance']
# Record decision outcomes
if 'decisions' in market_feedback:
for decision in market_feedback['decisions']:
self.state.decision_history.append({
'decision': decision,
'outcome': performance.get(decision.get('ticker'), 0),
'timestamp': datetime.datetime.now()
})
# For Wood Agent, reinforce innovation beliefs on positive outcomes
if performance.get(decision.get('ticker'), 0) > 0:
ticker = decision.get('ticker')
# Strengthen innovation belief
current_belief = self.state.belief_state.get(f"{ticker}_innovation", 0.5)
self.state.belief_state[f"{ticker}_innovation"] = min(1.0, current_belief + 0.05)
# Update industry trend belief
industry = self._get_ticker_industry(ticker)
if industry:
industry_belief = self.state.belief_state.get(f"{industry}_trend", 0.5)
self.state.belief_state[f"{industry}_trend"] = min(1.0, industry_belief + 0.03)
# Update general belief state based on performance
for ticker, perf in performance.items():
general_belief_key = f"{ticker}_general"
current_belief = self.state.belief_state.get(general_belief_key, 0.5)
# Wood Agent weights positive outcomes more heavily for innovative companies
if self._is_innovative_company(ticker):
update_weight = 0.3 # Higher weight for innovative companies
else:
update_weight = 0.1 # Lower weight for traditional companies
# Update belief
updated_belief = current_belief * (1 - update_weight) + np.tanh(perf) * update_weight
self.state.belief_state[general_belief_key] = updated_belief
# Track belief drift
if general_belief_key in self.state.belief_state:
drift = updated_belief - current_belief
self.state.drift_vector[general_belief_key] = drift
# Wood Agent's drift pattern analysis
self._analyze_drift_pattern(ticker, drift)
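The core of the general-belief update is an exponential moving average toward `tanh(performance)`, with a larger step for innovative names. With `math.tanh` standing in for `np.tanh`, the arithmetic works out as:

```python
import math

def update_belief(current_belief, performance, innovative=True):
    """EMA step toward tanh(performance); innovative names use a larger step.
    Mirrors the general-belief branch of _update_beliefs above."""
    weight = 0.3 if innovative else 0.1
    return current_belief * (1 - weight) + math.tanh(performance) * weight

# A 2% return pulls the belief from 0.5 toward tanh(0.02) ~= 0.02.
updated = update_belief(0.5, 0.02, innovative=True)
print(round(updated, 3), round(updated - 0.5, 4))  # 0.356 -0.144
```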
10. Command Interface
Throughout the system, the symbolic command interface enables deeper introspection:
# Get a reflection trace from the Graham agent
reflection_trace = graham_agent.execute_command(
command="reflect.trace",
depth=3
)
# Generate signals from alternative sources
alt_signals = wood_agent.execute_command(
command="fork.signal",
source="beliefs"
)
# Check for decision collapse
collapse_check = dalio_agent.execute_command(
command="collapse.detect",
threshold=0.7,
reason="consistency"
)
# Attribute weight to a justification
attribution = portfolio.execute_command(
command="attribute.weight",
justification="Tesla's innovation in AI and robotics represents a paradigm shift that traditional valuation metrics fail to capture."
)
# Track belief drift
drift_observation = wood_agent.execute_command(
command="drift.observe",
vector={"TSLA_innovation": 0.05, "AI_trend": 0.03, "EV_market": 0.02},
bias=0.01
)
These commands form the foundation of the system's interpretability architecture, enabling detailed tracing and analysis of decision processes.
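One way such a command interface can be wired is a simple name-to-handler dispatch; this is a sketch of the pattern, not the project's actual `execute_command` implementation:

```python
from typing import Any, Callable, Dict

class CommandInterface:
    """Minimal symbolic command dispatcher: maps dotted command names to handlers."""
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[..., Any]] = {}

    def register(self, command: str, handler: Callable[..., Any]) -> None:
        self._handlers[command] = handler

    def execute_command(self, command: str, **kwargs: Any) -> Any:
        if command not in self._handlers:
            raise ValueError(f"Unknown command: {command}")
        return self._handlers[command](**kwargs)

iface = CommandInterface()
iface.register("reflect.trace", lambda depth=1: {"command": "reflect.trace", "depth": depth})

trace = iface.execute_command("reflect.trace", depth=3)
print(trace["depth"])  # 3
```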
Conclusion
This example demonstrates the recursive cognitive architecture of Multi-Agent Debate in action. From market data ingestion to trade execution, the system maintains complete transparency and interpretability through:
- Agent-specific cognitive lenses
- Recursive reasoning graphs
- Attribution tracing
- Meta-agent arbitration
- Position sizing
- Trade execution
- Memory and learning
Each component is designed to enable deeper introspection into the decision-making process, creating a transparent, interpretable multi-agent market cognition system.
The symbolic command interface and visualization tools provide multiple ways to understand and analyze the system's behavior, making it both effective and explainable.