Troubleshooting Guide¶
This document provides solutions to common issues encountered when using the Pure Python Pipeline.
Common Issues¶
Issue: All Holdings Have Identical Scores¶
Symptom:
Every holding in the analysis results receives the same composite score and grade, regardless of ticker.
Cause:
The QuantitativeAnalysisTool is returning default values instead of real market data for all tickers.
Diagnosis:
- Check if API keys are configured:
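A quick check can be sketched as follows; the key names shown are placeholders, so substitute the environment variables your market-data provider actually requires:

```python
import os

def missing_api_keys(required):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

# Placeholder key names -- replace with your provider's variables.
missing = missing_api_keys(["ALPHA_VANTAGE_API_KEY", "FINNHUB_API_KEY"])
if missing:
    print(f"Missing API keys: {', '.join(missing)}")
```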
- Verify ticker symbols are valid:
Python
from finwiz.tools.ticker_validation_tool import TickerValidationTool
validator = TickerValidationTool()
result = validator._run(ticker="AAPL")
print(result)
- Check network connectivity:
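One way to test connectivity from Python (the hostname below is only an example; point it at your market-data provider's endpoint):

```python
import socket

def can_reach(host, port=443, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example host -- substitute your data provider's endpoint.
print("Network OK" if can_reach("www.google.com") else "Network unreachable")
```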
Solutions:
- Configure API keys:
- Verify ticker format:
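Tickers should be uppercase with no stray whitespace. A small normalizer (a sketch, not part of the pipeline) can catch malformed input before it reaches the data layer:

```python
def normalize_ticker(raw):
    """Strip whitespace, uppercase, and reject symbols with unexpected characters."""
    ticker = raw.strip().upper()
    if not ticker or not all(c.isalnum() or c in ".-" for c in ticker):
        raise ValueError(f"Invalid ticker: {raw!r}")
    return ticker

print(normalize_ticker(" aapl "))  # AAPL
print(normalize_ticker("brk.b"))   # BRK.B
```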
- Check QuantitativeAnalysisTool logs:
Python
import logging
logging.basicConfig(level=logging.DEBUG)
# Run analysis and check logs
results = analyze_portfolio_with_python(holdings, session_id)
- Test data fetching directly:
Python
from finwiz.tools.quantitative_analysis_tool import QuantitativeAnalysisTool
tool = QuantitativeAnalysisTool()
data = tool._run(symbol="AAPL", asset_class="stock", analysis_type="performance")
print(data)
Issue: No A+ Opportunities Found¶
Symptom:
Python
discovery_results["has_a_plus_analysis"] == False
discovery_results["total_opportunities_found"] == 0
Possible Causes:
- Deep analysis not completed successfully
- No holdings achieved A+ or A grade
- JSON files not in expected directories
- Session ID mismatch between analysis and discovery
Diagnosis:
- Check deep analysis results:
Python
print(f"Successful: {analysis_results['successful_analyses']}")
print(f"Failed: {analysis_results['failed_analyses']}")
- Check grades in analysis results:
Python
for ticker, result in analysis_results["deep_analysis_results"].items():
    print(f"{ticker}: Grade {result.grade}, Score {result.composite_score:.3f}")
- Verify output directory structure:
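A directory listing like the following (a sketch assuming the default `output/` layout) shows at a glance which asset-class directories exist and what they contain:

```python
from pathlib import Path

def exported_files(output_dir="output"):
    """Map each existing asset-class subdirectory to its JSON file names."""
    root = Path(output_dir)
    return {
        name: sorted(p.name for p in (root / name).glob("*.json"))
        for name in ["stock", "etf", "crypto"]
        if (root / name).is_dir()
    }

for asset_class, files in exported_files().items():
    print(f"{asset_class}: {len(files)} file(s)")
```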
- Check session ID consistency:
Python
print(f"Analysis session: {session_id}")
print(f"Discovery session: {session_id}") # Should match
Solutions:
- Verify deep analysis completed:
Python
if analysis_results["successful_analyses"] == 0:
    print("❌ Deep analysis failed - check logs")
- Check analysis grades:
Python
# Look for A+ or A grades
aplus_count = sum(
    1 for r in analysis_results["deep_analysis_results"].values()
    if r.grade in ["A+", "A"]
)
print(f"Found {aplus_count} A+/A grade holdings")
- Verify directory structure:
Python
from pathlib import Path
output_dir = Path("output")
for asset_class in ["stock", "etf", "crypto"]:
    dir_path = output_dir / asset_class
    if not dir_path.exists():
        print(f"❌ Missing directory: {dir_path}")
        dir_path.mkdir(parents=True, exist_ok=True)
- Use consistent session ID:
Python
# Generate once, use everywhere
session_id = f"analysis_{int(time.time())}"
analysis_results = analyze_portfolio_with_python(holdings, session_id)
discovery_results = integrate_aplus_discovery_with_deep_analysis(session_id)
Issue: Backtesting Not Executing¶
Symptom:
Python
backtesting_results["backtesting_executed"] == False
backtesting_results["reason"] == "No A+ candidates available"
Possible Causes:
- A+ discovery found no opportunities
- Discovery integration failed
- Candidate list is empty
Diagnosis:
- Check discovery results:
Python
print(f"Has A+ analysis: {discovery_results['has_a_plus_analysis']}")
print(f"Opportunities: {discovery_results['total_opportunities_found']}")
- Check candidate list:
Python
if discovery_results["has_a_plus_analysis"]:
    for holding in discovery_results["aplus_holdings"]:
        print(f"Candidate: {holding['ticker']} (Grade {holding['grade']})")
- Verify discovery JSON exists:
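Assuming the export convention used elsewhere in this guide (`output/aplus_discovery_<session_id>.json`), a quick existence check:

```python
from pathlib import Path

def discovery_json_exists(session_id, output_dir="output"):
    """Return True if the A+ discovery JSON for this session has been written."""
    return (Path(output_dir) / f"aplus_discovery_{session_id}.json").is_file()
```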
Solutions:
- Run A+ discovery first:
Python
# Ensure discovery runs before backtesting
discovery_results = integrate_aplus_discovery_with_deep_analysis(session_id)
if discovery_results["has_a_plus_analysis"]:
    backtesting_results = connect_backtesting_to_discovery_results(session_id)
else:
    print("ℹ️ No A+ opportunities - skipping backtesting")
- Check discovery results:
Python
if not discovery_results["has_a_plus_analysis"]:
    print("No A+ opportunities found")
    print(f"Total analyzed: {discovery_results['total_analyzed']}")
- Verify discovery JSON:
Python
from pathlib import Path
import json
discovery_file = Path(f"output/aplus_discovery_{session_id}.json")
if discovery_file.exists():
    with open(discovery_file) as f:
        data = json.load(f)
    print(f"Candidates: {len(data.get('aplus_holdings', []))}")
Issue: JSON Export Files Not Found¶
Symptom:
Text Only
FileNotFoundError: [Errno 2] No such file or directory: 'output/stock/AAPL_session_123.json'
Possible Causes:
- Deep analysis failed to export files
- Incorrect output directory path
- File permissions issue
- Session ID mismatch
Diagnosis:
- Check if files were created:
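Assuming exports are named `<TICKER>_<session_id>.json` as in the error above, a recursive glob lists whatever was actually written for the session:

```python
from pathlib import Path

def files_for_session(session_id, output_dir="output"):
    """Recursively list every exported JSON file matching the session id."""
    return sorted(str(p) for p in Path(output_dir).rglob(f"*_{session_id}.json"))
```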
- Check file permissions:
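`os.access` gives a quick read/write report for any path:

```python
import os

def access_report(path):
    """Report whether the current process can read and write the given path."""
    return {
        "exists": os.path.exists(path),
        "readable": os.access(path, os.R_OK),
        "writable": os.access(path, os.W_OK),
    }
```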
- Check export logs:
Python
# Look for export messages in logs
# "đ Exported AAPL analysis to output/stock/AAPL_session_123.json"
Solutions:
- Verify export completed:
Python
if "export_info" in analysis_results:
print(f"Exported files: {analysis_results['export_info']['exported_files']}")
- Check directory permissions:
- Create directories if missing:
Python
from pathlib import Path
for dir_name in ["stock", "etf", "crypto"]:
    dir_path = Path("output") / dir_name
    dir_path.mkdir(parents=True, exist_ok=True)
Issue: Report Generation Fails¶
Symptom:
Report generation raises an exception, or the generated report is empty or incomplete.
Possible Causes:
- Missing template file
- Invalid data structure
- Jinja2 template error
- File write permissions
Diagnosis:
- Check template exists:
- Validate input data:
- Check template rendering:
Python
from finwiz.reporting.python_report_generator import PythonReportGenerator
generator = PythonReportGenerator()
# Check for template errors
Solutions:
- Verify template exists:
Python
from pathlib import Path
template_path = Path("src/finwiz/templates/report.html.j2")
if not template_path.exists():
    print(f"❌ Template not found: {template_path}")
- Validate data structure:
Python
from finwiz.schemas.portfolio_review import PortfolioReview
# Ensure portfolio_review is valid
assert isinstance(portfolio_review, PortfolioReview)
assert len(portfolio_review.holdings) > 0
- Check output permissions:
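The most reliable permission check is simply attempting a write; this sketch creates and removes a throwaway probe file:

```python
from pathlib import Path

def can_write_to(directory):
    """Return True if a probe file can be created (and removed) in directory."""
    probe = Path(directory) / ".write_probe"
    try:
        probe.touch()
        probe.unlink()
        return True
    except OSError:
        return False
```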
Performance Issues¶
Issue: Slow Analysis Execution¶
Symptom:
Analysis takes longer than expected (> 2 seconds per holding).
Possible Causes:
- Network latency for API calls
- API rate limiting
- Large portfolio size
- Inefficient data fetching
Solutions:
- Enable caching:
Python
# Cache market data to avoid redundant API calls
from functools import lru_cache
@lru_cache(maxsize=1000)
def get_cached_data(ticker, date):
    return fetch_market_data(ticker, date)
- Batch processing:
Python
# Process holdings in batches
BATCH_SIZE = 10
for i in range(0, len(holdings), BATCH_SIZE):
    batch = holdings[i:i + BATCH_SIZE]
    process_batch(batch)
- Monitor API calls:
Python
import logging
import time

logger = logging.getLogger(__name__)

start = time.time()
data = fetch_data(ticker)
elapsed = time.time() - start
if elapsed > 1.0:
    logger.warning(f"Slow API call for {ticker}: {elapsed:.2f}s")
Issue: High Memory Usage¶
Symptom:
Memory usage increases significantly during analysis.
Solutions:
- Process in batches:
Python
# Clear memory after each batch
for batch in batches:
    results = process_batch(batch)
    export_results(results)
    del results  # Free memory
- Use generators:
Python
def analyze_holdings_generator(holdings):
    for holding in holdings:
        yield analyze_holding(holding)

# Process one at a time
for result in analyze_holdings_generator(holdings):
    export_result(result)
- Monitor memory:
Python
import psutil
process = psutil.Process()
memory_mb = process.memory_info().rss / 1024 / 1024
print(f"Memory usage: {memory_mb:.1f} MB")
Debugging Tips¶
Enable Debug Logging¶
Python
import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
# Run analysis with debug logging
results = analyze_portfolio_with_python(holdings, session_id)
Inspect Intermediate Results¶
Python
# Check analysis results
for ticker, result in analysis_results["deep_analysis_results"].items():
    print(f"{ticker}:")
    print(f"  Grade: {result.grade}")
    print(f"  Score: {result.composite_score:.3f}")
    print(f"  Fundamental: {result.fundamental_score:.3f}")
    print(f"  Technical: {result.technical_score:.3f}")
    print(f"  Risk: {result.risk_score:.3f}")
Validate JSON Exports¶
Python
import json
from pathlib import Path
# Read and validate JSON export
export_file = Path(f"output/stock/AAPL_{session_id}.json")
with open(export_file) as f:
    data = json.load(f)

# Validate structure
assert "ticker" in data
assert "composite_score" in data
assert "grade" in data
print("✅ JSON export valid")
Getting Help¶
If you continue to experience issues:
- Check logs: Review application logs for error messages
- Verify configuration: Ensure all environment variables are set
- Test components individually: Isolate the failing component
- Review documentation: Check component-specific documentation
- Report issues: Create a GitHub issue with:
    - Error message
    - Steps to reproduce
    - Environment details
    - Log output
Related Documentation¶
- Components - Component documentation
- Best Practices - Implementation guidelines
- How-to Guide - Usage instructions
- API Reference - API documentation