Visualization & Reporting

Overview

The portfolio management system includes a comprehensive visualization framework for analyzing backtest results. The reporting.visualization module provides data preparation functions that transform backtest outputs into chart-ready formats for common financial visualizations.

Key Features:

  • Equity curve normalization and percentage changes
  • Drawdown analysis with underwater periods
  • Comprehensive performance summaries
  • Portfolio allocation tracking
  • Multi-strategy comparison tables
  • Transaction cost analysis
  • Return distribution preparation
  • Monthly returns heatmaps
  • Rolling performance metrics
  • Trade-level analysis

Module Structure:

portfolio_management/reporting/visualization/
├── equity_curves.py      # Equity curve normalization
├── drawdowns.py          # Drawdown calculations
├── summary.py            # Performance summary aggregation
├── allocations.py        # Portfolio allocation history
├── comparison.py         # Multi-strategy comparison
├── costs.py              # Transaction cost analysis
├── distributions.py      # Return distribution data
├── heatmaps.py           # Monthly returns heatmaps
├── metrics.py            # Rolling performance metrics
└── trade_analysis.py     # Trade-level statistics

Workflow Overview

graph LR
    A[Backtest outputs<br/>equity, trades, metrics]
    B[Equity curve module]
    C[Drawdown module]
    D[Summary module]
    E[Allocations / comparison / costs / distributions / heatmaps / metrics]
    F[Visualization-ready CSVs + dictionaries]

    A --> B
    A --> C
    A --> D
    A --> E
    B --> F
    C --> F
    D --> F
    E --> F

Each visualization helper consumes the backtest data and emits chart-ready artifacts for dashboards or reporting.


Equity Curves

prepare_equity_curve(equity_df: pd.DataFrame) -> pd.DataFrame

Normalizes equity curves to a base value of 100 and calculates percentage changes for plotting.

Purpose:

  • Compare strategies with different starting capital
  • Visualize relative performance over time
  • Generate line charts showing portfolio growth

Arguments:

  • equity_df: DataFrame with 'equity' column indexed by date

Returns:

  • DataFrame with:
    • equity_normalized: Values scaled to start at 100
    • equity_pct_change: Daily percentage changes

Example:

from portfolio_management.reporting.visualization import prepare_equity_curve

# After running backtest
equity_data = backtest_result.equity
chart_data = prepare_equity_curve(equity_data)

# Plot with matplotlib
import matplotlib.pyplot as plt
plt.plot(chart_data.index, chart_data['equity_normalized'])
plt.title('Portfolio Growth (Base 100)')
plt.ylabel('Portfolio Value')
plt.show()

Use Cases:

  • Comparing multiple strategies on same scale
  • Identifying growth trends and volatility patterns
  • Creating normalized performance charts for reports
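
The normalization itself is a two-line pandas transformation. A minimal sketch of what the helper computes, shown on synthetic data (this illustrates the standard base-100 formula, not necessarily the module's exact implementation):

```python
import pandas as pd

# Synthetic equity curve standing in for backtest_result.equity
equity_df = pd.DataFrame(
    {"equity": [100_000.0, 101_000.0, 99_500.0, 102_000.0]},
    index=pd.date_range("2024-01-01", periods=4, freq="B"),
)

# Scale so the first observation equals 100
equity_df["equity_normalized"] = equity_df["equity"] / equity_df["equity"].iloc[0] * 100

# Day-over-day percentage change (first row is NaN by construction)
equity_df["equity_pct_change"] = equity_df["equity"].pct_change()
```

Because every curve starts at 100, two strategies with different starting capital plot directly on the same axis.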

Drawdowns

prepare_drawdown_series(equity_df: pd.DataFrame) -> pd.DataFrame

Calculates drawdown percentages and identifies underwater periods (time spent in drawdown).

Purpose:

  • Visualize portfolio losses from peak values
  • Identify recovery periods after drawdowns
  • Assess downside risk exposure

Arguments:

  • equity_df: DataFrame with 'equity' column indexed by date

Returns:

  • DataFrame with:
    • drawdown_pct: Percentage below running maximum
    • running_max: Highest portfolio value to date
    • underwater_days: Consecutive days in drawdown

Example:

from portfolio_management.reporting.visualization import prepare_drawdown_series

drawdown_data = prepare_drawdown_series(backtest_result.equity)

# Find maximum drawdown
max_dd = drawdown_data['drawdown_pct'].min()
print(f"Maximum Drawdown: {max_dd:.2%}")

# Find longest underwater period
max_underwater = drawdown_data['underwater_days'].max()
print(f"Longest Recovery: {max_underwater} days")

# Plot drawdown chart
import matplotlib.pyplot as plt
plt.fill_between(
    drawdown_data.index,
    drawdown_data['drawdown_pct'],
    0,
    alpha=0.3,
    color='red'
)
plt.title('Portfolio Drawdowns')
plt.ylabel('Drawdown %')
plt.show()

Use Cases:

  • Risk assessment and downside analysis
  • Recovery time evaluation
  • Comparing drawdown profiles across strategies
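
The three returned columns follow from the standard drawdown definitions; a self-contained sketch on synthetic data (an illustration of the usual formulas, not the module's actual code):

```python
import pandas as pd

equity = pd.Series(
    [100.0, 110.0, 104.5, 99.0, 112.0],
    index=pd.date_range("2024-01-01", periods=5, freq="B"),
)

# running_max: highest portfolio value seen so far
running_max = equity.cummax()

# drawdown_pct: fraction below the running peak (0 at new highs, negative otherwise)
drawdown_pct = equity / running_max - 1.0

# underwater_days: consecutive observations spent below the prior peak
in_drawdown = drawdown_pct < 0
underwater_days = in_drawdown.astype(int).groupby((~in_drawdown).cumsum()).cumsum()
```

Here the deepest drawdown is -10% and the longest underwater stretch lasts two observations before the new high on the final day.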

Performance Summary

create_summary_report(equity_df: pd.DataFrame, rebalance_events: list[RebalanceEvent]) -> dict

Aggregates all performance, risk, trading, and portfolio metrics into a comprehensive summary dictionary.

Purpose:

  • Generate complete backtest summary statistics
  • Create executive summary reports
  • Compare strategy performance across key metrics

Arguments:

  • equity_df: DataFrame with portfolio equity values
  • rebalance_events: List of all rebalancing events from backtest

Returns:

  • Dictionary with four major sections:

    • Performance Metrics:
      • total_return: Cumulative return over backtest period
      • annualized_return: Geometric average annual return
      • sharpe_ratio: Risk-adjusted return metric
      • sortino_ratio: Downside risk-adjusted return
      • calmar_ratio: Return/max drawdown ratio

    • Risk Metrics:
      • annualized_volatility: Standard deviation of returns (annualized)
      • max_drawdown: Largest peak-to-trough decline
      • expected_shortfall: Average return in worst 5% of cases (CVaR)
      • win_rate: Percentage of profitable periods
      • avg_win: Average gain on winning periods
      • avg_loss: Average loss on losing periods

    • Trading Activity:
      • num_rebalances: Total portfolio rebalancing events
      • total_costs: Sum of all transaction costs
      • avg_cost_per_rebalance: Mean cost per rebalancing event
      • turnover: Portfolio turnover rate

    • Portfolio Evolution:
      • initial_value: Starting portfolio value
      • final_value: Ending portfolio value
      • peak_value: Highest portfolio value reached

Example:

from portfolio_management.reporting.visualization import create_summary_report

summary = create_summary_report(
    equity_df=backtest_result.equity,
    rebalance_events=backtest_result.rebalance_events
)

# Print formatted summary
print("=== Backtest Summary ===")
print(f"Total Return: {summary['performance']['total_return']:.2%}")
print(f"Sharpe Ratio: {summary['performance']['sharpe_ratio']:.2f}")
print(f"Max Drawdown: {summary['risk']['max_drawdown']:.2%}")
print(f"Win Rate: {summary['risk']['win_rate']:.2%}")
print(f"Total Costs: ${summary['trading']['total_costs']:,.2f}")
print(f"Num Rebalances: {summary['trading']['num_rebalances']}")

Use Cases:

  • Generating executive summary reports
  • Quick performance assessment
  • Multi-strategy comparison input
  • Automated reporting pipelines
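
Because the summary is a plain nested dictionary, feeding it into an automated pipeline is straightforward. A sketch that flattens the sections into a single tabular row (the dictionary below is illustrative; field names follow the Returns list above):

```python
import pandas as pd

# Illustrative summary in the nested shape create_summary_report returns
summary = {
    "performance": {"total_return": 0.4523, "sharpe_ratio": 1.25},
    "risk": {"max_drawdown": -0.182, "win_rate": 0.54},
    "trading": {"num_rebalances": 24, "total_costs": 1_850.0},
}

# Flatten to "section.metric" keys, e.g. for appending to a results log
flat = {
    f"{section}.{metric}": value
    for section, metrics in summary.items()
    for metric, value in metrics.items()
}
report_row = pd.DataFrame([flat])
```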

Portfolio Allocations

prepare_allocation_history(rebalance_events: list[RebalanceEvent]) -> pd.DataFrame

Extracts portfolio allocation percentages over time for stacked area charts and allocation tracking.

Purpose:

  • Visualize portfolio composition evolution
  • Track cash vs. holdings percentages
  • Identify rebalancing triggers and patterns

Arguments:

  • rebalance_events: List of rebalancing events from backtest

Returns:

  • DataFrame with:
    • cash_pct: Cash percentage of portfolio
    • holdings_pct: Holdings percentage of portfolio
    • trigger: Rebalancing trigger type (e.g., 'periodic', 'tolerance')

Example:

from portfolio_management.reporting.visualization import prepare_allocation_history

allocation_data = prepare_allocation_history(backtest_result.rebalance_events)

# Plot stacked area chart
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.stackplot(
    allocation_data.index,
    allocation_data['cash_pct'],
    allocation_data['holdings_pct'],
    labels=['Cash', 'Holdings']
)
ax.legend(loc='upper left')
ax.set_ylabel('Allocation %')
ax.set_title('Portfolio Allocation Over Time')
plt.show()

Use Cases:

  • Tracking cash drag on performance
  • Visualizing rebalancing frequency
  • Understanding portfolio composition changes

Multi-Strategy Comparison

prepare_metrics_comparison(metrics_list: list[tuple[str, PerformanceMetrics]]) -> pd.DataFrame

Creates a comparison table for multiple strategies showing key performance and risk metrics side-by-side.

Purpose:

  • Compare strategy performance across key metrics
  • Generate strategy ranking tables
  • Support multi-strategy portfolio decisions

Arguments:

  • metrics_list: List of tuples containing (strategy_name, PerformanceMetrics)

Returns:

  • DataFrame with strategies as rows and metrics as columns:
    • Total Return %
    • Annual Return %
    • Volatility %
    • Sharpe Ratio
    • Sortino Ratio
    • Max Drawdown %
    • Calmar Ratio
    • Win Rate %
    • Total Costs $
    • Num Rebalances

Example:

from portfolio_management.reporting.visualization import prepare_metrics_comparison

# Run multiple strategies
momentum_result = run_backtest(momentum_config)
lowvol_result = run_backtest(lowvol_config)
multifactor_result = run_backtest(multifactor_config)

# Prepare comparison
comparison = prepare_metrics_comparison([
    ("Momentum", momentum_result.metrics),
    ("Low Volatility", lowvol_result.metrics),
    ("Multi-Factor", multifactor_result.metrics)
])

print(comparison)
# Output:
#                  Total Return %  Annual Return %  Sharpe Ratio  ...
# Momentum               45.23           12.34          1.25    ...
# Low Volatility         38.67            9.87          1.42    ...
# Multi-Factor           52.11           13.89          1.38    ...

# Sort by Sharpe ratio
best_strategy = comparison.sort_values('Sharpe Ratio', ascending=False).index[0]
print(f"Best Risk-Adjusted Strategy: {best_strategy}")

Use Cases:

  • Strategy selection and evaluation
  • Multi-strategy portfolio construction
  • Performance reporting and presentations
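
If you only need the ranking logic, the table construction is plain pandas. A sketch with hypothetical metric values (the real function reads these from PerformanceMetrics objects):

```python
import pandas as pd

# Hypothetical per-strategy metrics standing in for PerformanceMetrics fields
rows = [
    ("Momentum", {"Sharpe Ratio": 1.25, "Max Drawdown %": -18.2}),
    ("Low Volatility", {"Sharpe Ratio": 1.42, "Max Drawdown %": -11.7}),
    ("Multi-Factor", {"Sharpe Ratio": 1.38, "Max Drawdown %": -14.9}),
]

# Strategies as rows, metrics as columns
comparison = pd.DataFrame({name: metrics for name, metrics in rows}).T

# Rank by risk-adjusted return
best = comparison.sort_values("Sharpe Ratio", ascending=False).index[0]
```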

Transaction Costs

prepare_transaction_costs_summary(rebalance_events: list[RebalanceEvent]) -> pd.DataFrame

Aggregates transaction costs over time for cost analysis and visualization.

Purpose:

  • Analyze transaction cost impact on performance
  • Identify cost spikes and patterns
  • Optimize rebalancing frequency

Arguments:

  • rebalance_events: List of rebalancing events from backtest

Returns:

  • DataFrame with:
    • date: Rebalancing date
    • costs: Transaction costs for that rebalancing event
    • cumulative_costs: Running total of all costs
    • trigger: Rebalancing trigger type

Example:

from portfolio_management.reporting.visualization import prepare_transaction_costs_summary

cost_data = prepare_transaction_costs_summary(backtest_result.rebalance_events)

# Express total costs as a percentage of starting capital (cost drag)
total_costs = cost_data['cumulative_costs'].iloc[-1]
initial_equity = backtest_result.equity['equity'].iloc[0]
cost_drag = (total_costs / initial_equity) * 100

print(f"Total Costs: ${total_costs:,.2f}")
print(f"Cost Drag: {cost_drag:.2f}%")

# Plot cumulative costs
import matplotlib.pyplot as plt
plt.plot(cost_data['date'], cost_data['cumulative_costs'])
plt.title('Cumulative Transaction Costs')
plt.ylabel('Total Costs ($)')
plt.show()

Use Cases:

  • Cost optimization and rebalancing frequency tuning
  • Transaction cost impact assessment
  • Strategy cost efficiency comparison
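
Since the returned DataFrame carries the trigger column, you can also attribute costs to trigger types, which helps when tuning tolerance bands against calendar rebalancing. A sketch on synthetic data in the documented shape:

```python
import pandas as pd

# Synthetic cost data in the shape documented above
cost_data = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-31", "2024-02-15", "2024-02-29"]),
    "costs": [120.0, 45.0, 130.0],
    "trigger": ["periodic", "tolerance", "periodic"],
})
cost_data["cumulative_costs"] = cost_data["costs"].cumsum()

# Which trigger type generates the most cost?
by_trigger = cost_data.groupby("trigger")["costs"].agg(["count", "sum", "mean"])
```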

Return Distributions

prepare_returns_distribution(equity_df: pd.DataFrame, bins: int = 50) -> tuple[np.ndarray, np.ndarray]

Prepares daily return distribution data for histogram visualization.

Purpose:

  • Visualize return distribution characteristics
  • Assess normality assumptions
  • Identify fat tails and skewness

Arguments:

  • equity_df: DataFrame with equity values
  • bins: Number of histogram bins (default: 50)

Returns:

  • Tuple of (bin_edges, frequencies) for histogram plotting

Example:

from portfolio_management.reporting.visualization import prepare_returns_distribution
import numpy as np
import matplotlib.pyplot as plt

bin_edges, frequencies = prepare_returns_distribution(backtest_result.equity)

plt.bar(bin_edges[:-1], frequencies, width=np.diff(bin_edges), alpha=0.7)
plt.title('Daily Returns Distribution')
plt.xlabel('Daily Return %')
plt.ylabel('Frequency')
plt.axvline(0, color='r', linestyle='--', label='Zero Return')
plt.legend()
plt.show()

Use Cases:

  • Return distribution analysis
  • Risk modeling (identifying fat tails)
  • Strategy behavior assessment
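
Under the hood this is essentially a histogram over daily returns. A self-contained sketch of the same preparation using NumPy directly (synthetic data; note that np.histogram returns frequencies before edges, whereas the helper documents the opposite order):

```python
import numpy as np
import pandas as pd

# Synthetic equity curve: 252 days of random returns
rng = np.random.default_rng(42)
equity = pd.Series(100_000 * np.cumprod(1 + rng.normal(0.0005, 0.01, 252)))

daily_returns = equity.pct_change().dropna()

# np.histogram returns (frequencies, bin_edges); len(bin_edges) == bins + 1
frequencies, bin_edges = np.histogram(daily_returns, bins=50)
```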

Monthly Returns Heatmap

prepare_monthly_returns_heatmap(equity_df: pd.DataFrame) -> pd.DataFrame

Calculates monthly returns organized in a calendar format for heatmap visualization.

Purpose:

  • Identify seasonal patterns in returns
  • Visualize return consistency across months/years
  • Spot problematic periods quickly

Arguments:

  • equity_df: DataFrame with equity values indexed by date

Returns:

  • DataFrame with years as rows, months as columns, values as monthly returns (%)

Example:

from portfolio_management.reporting.visualization import prepare_monthly_returns_heatmap
import seaborn as sns
import matplotlib.pyplot as plt

monthly_returns = prepare_monthly_returns_heatmap(backtest_result.equity)

# Create heatmap
plt.figure(figsize=(12, 6))
sns.heatmap(
    monthly_returns,
    annot=True,
    fmt='.1f',
    center=0,
    cmap='RdYlGn',
    cbar_kws={'label': 'Return %'}
)
plt.title('Monthly Returns Heatmap')
plt.ylabel('Year')
plt.xlabel('Month')
plt.show()

Use Cases:

  • Identifying seasonal patterns
  • Quick visual performance assessment
  • Spotting problematic periods

Rolling Performance Metrics

prepare_rolling_metrics(equity_df: pd.DataFrame, window: int = 60) -> pd.DataFrame

Calculates rolling performance metrics for time-varying performance analysis.

Purpose:

  • Track strategy performance stability over time
  • Identify regime changes and adaptation
  • Assess metric consistency

Arguments:

  • equity_df: DataFrame with equity values
  • window: Rolling window size in days (default: 60)

Returns:

  • DataFrame with:
    • rolling_return_annual: Annualized rolling returns
    • rolling_volatility_annual: Annualized rolling volatility
    • rolling_sharpe: Rolling Sharpe ratio
    • rolling_max_drawdown: Rolling maximum drawdown

Example:

from portfolio_management.reporting.visualization import prepare_rolling_metrics
import matplotlib.pyplot as plt

rolling = prepare_rolling_metrics(backtest_result.equity, window=60)

# Plot rolling Sharpe ratio
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))

ax1.plot(rolling.index, rolling['rolling_sharpe'])
ax1.set_title('Rolling 60-Day Sharpe Ratio')
ax1.set_ylabel('Sharpe Ratio')
ax1.axhline(0, color='r', linestyle='--', alpha=0.3)

ax2.plot(rolling.index, rolling['rolling_max_drawdown'])
ax2.set_title('Rolling 60-Day Maximum Drawdown')
ax2.set_ylabel('Drawdown %')
ax2.fill_between(rolling.index, rolling['rolling_max_drawdown'], 0, alpha=0.3)

plt.tight_layout()
plt.show()

Use Cases:

  • Regime detection and strategy adaptation analysis
  • Performance stability assessment
  • Dynamic risk monitoring
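
The rolling columns follow directly from pandas' rolling windows. A sketch of the Sharpe-related columns on synthetic data (assumes 252 trading days per year and a zero risk-free rate; the module's conventions may differ):

```python
import numpy as np
import pandas as pd

# Synthetic equity from 500 days of random returns
rng = np.random.default_rng(0)
equity = pd.Series(100_000 * np.cumprod(1 + rng.normal(0.0005, 0.01, 500)))

window = 60
daily = equity.pct_change()

# Annualize the rolling mean return and volatility
rolling_return_annual = daily.rolling(window).mean() * 252
rolling_volatility_annual = daily.rolling(window).std() * np.sqrt(252)

# Rolling Sharpe ratio (risk-free rate taken as zero in this sketch)
rolling_sharpe = rolling_return_annual / rolling_volatility_annual
```

The first `window` observations are NaN, so plots of these series start one window into the backtest.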

Trade Analysis

prepare_trade_analysis(rebalance_events: list[RebalanceEvent]) -> dict

Analyzes individual trades to provide trade-level statistics and insights.

Purpose:

  • Understand trade characteristics and patterns
  • Identify most/least profitable trades
  • Optimize trade sizing and timing

Arguments:

  • rebalance_events: List of rebalancing events from backtest

Returns:

  • Dictionary with:
    • num_trades: Total number of individual trades
    • avg_trade_size: Average trade size ($)
    • trade_size_distribution: Statistics on trade sizing
    • most_traded_assets: Assets with highest trading frequency
    • trade_profitability: Per-trade profit/loss distribution

Example:

from portfolio_management.reporting.visualization import prepare_trade_analysis

trade_stats = prepare_trade_analysis(backtest_result.rebalance_events)

print(f"Total Trades: {trade_stats['num_trades']}")
print(f"Avg Trade Size: ${trade_stats['avg_trade_size']:,.2f}")
print("\nMost Traded Assets:")
for asset, count in trade_stats['most_traded_assets'][:5]:
    print(f"  {asset}: {count} trades")

Use Cases:

  • Trade execution analysis
  • Position sizing optimization
  • Transaction cost reduction strategies

Complete Visualization Workflow

Example: Comprehensive Backtest Report

from portfolio_management.backtesting import BacktestEngine, BacktestConfig
from portfolio_management.reporting.visualization import (
    prepare_equity_curve,
    prepare_drawdown_series,
    create_summary_report,
    prepare_allocation_history,
    prepare_metrics_comparison,
    prepare_rolling_metrics,
)
import matplotlib.pyplot as plt

# Run backtest
config = BacktestConfig.from_yaml("config/momentum_strategy_config.yaml")
engine = BacktestEngine(config)
result = engine.run()

# 1. Equity Curve
equity_chart = prepare_equity_curve(result.equity)
plt.figure(figsize=(12, 6))
plt.plot(equity_chart.index, equity_chart['equity_normalized'])
plt.title('Portfolio Growth (Base 100)')
plt.ylabel('Value')
plt.savefig('equity_curve.png')
plt.close()

# 2. Drawdowns
drawdown_chart = prepare_drawdown_series(result.equity)
plt.figure(figsize=(12, 6))
plt.fill_between(
    drawdown_chart.index,
    drawdown_chart['drawdown_pct'],
    0,
    alpha=0.3,
    color='red'
)
plt.title('Portfolio Drawdowns')
plt.ylabel('Drawdown %')
plt.savefig('drawdowns.png')
plt.close()

# 3. Summary Report
summary = create_summary_report(result.equity, result.rebalance_events)
print("=== Performance Summary ===")
for category, metrics in summary.items():
    print(f"\n{category.upper()}:")
    for key, value in metrics.items():
        print(f"  {key}: {value}")

# 4. Rolling Metrics
rolling = prepare_rolling_metrics(result.equity, window=60)
fig, ax = plt.subplots(2, 1, figsize=(12, 8))
ax[0].plot(rolling.index, rolling['rolling_sharpe'])
ax[0].set_title('Rolling 60-Day Sharpe Ratio')
ax[1].plot(rolling.index, rolling['rolling_volatility_annual'])
ax[1].set_title('Rolling 60-Day Volatility (Annual)')
plt.tight_layout()
plt.savefig('rolling_metrics.png')
plt.close()

# 5. Allocation History
allocation = prepare_allocation_history(result.rebalance_events)
plt.figure(figsize=(12, 6))
plt.stackplot(
    allocation.index,
    allocation['cash_pct'],
    allocation['holdings_pct'],
    labels=['Cash', 'Holdings']
)
plt.legend()
plt.title('Portfolio Allocation Over Time')
plt.ylabel('Allocation %')
plt.savefig('allocations.png')
plt.close()

print("\nAll charts saved successfully!")

Integration with Web Dashboards

The visualization module functions are designed to integrate seamlessly with web-based dashboards and reporting tools:

Streamlit Integration

import streamlit as st
from portfolio_management.reporting.visualization import (
    create_summary_report,
    prepare_equity_curve,
    prepare_drawdown_series,
)

st.title("Portfolio Backtest Dashboard")

# Load backtest result
result = load_backtest_result()

# Display metrics
summary = create_summary_report(result.equity, result.rebalance_events)
col1, col2, col3 = st.columns(3)
col1.metric("Total Return", f"{summary['performance']['total_return']:.2%}")
col2.metric("Sharpe Ratio", f"{summary['performance']['sharpe_ratio']:.2f}")
col3.metric("Max Drawdown", f"{summary['risk']['max_drawdown']:.2%}")

# Interactive charts
equity_data = prepare_equity_curve(result.equity)
st.line_chart(equity_data['equity_normalized'])

drawdown_data = prepare_drawdown_series(result.equity)
st.area_chart(drawdown_data['drawdown_pct'])

Plotly Integration

import plotly.graph_objects as go
from portfolio_management.reporting.visualization import prepare_equity_curve

equity_data = prepare_equity_curve(result.equity)

fig = go.Figure()
fig.add_trace(go.Scatter(
    x=equity_data.index,
    y=equity_data['equity_normalized'],
    mode='lines',
    name='Portfolio Value',
    line=dict(color='blue', width=2)
))

fig.update_layout(
    title='Portfolio Growth (Normalized to 100)',
    xaxis_title='Date',
    yaxis_title='Value',
    hovermode='x unified'
)

fig.show()

Best Practices

  1. Consistent Date Ranges: Ensure all visualization functions receive data from the same backtest run with consistent date ranges.

  2. Performance Considerations: For large backtests (>10 years daily data), consider:

  3. Downsampling equity curves for chart display

  4. Using monthly data for heatmaps instead of daily
  5. Caching prepared visualization data

  6. Multiple Strategies: When comparing strategies:

  7. Use prepare_metrics_comparison() for tabular comparisons

  8. Overlay equity curves using prepare_equity_curve() for each strategy
  9. Compare rolling metrics to understand time-varying performance differences

  10. Interactive Exploration: The data preparation functions return standard pandas DataFrames and dictionaries, making them compatible with:

  11. Jupyter notebooks for interactive exploration

  12. Streamlit/Dash for web dashboards
  13. Matplotlib/Plotly/Seaborn for static reports

  14. Error Handling: All functions handle edge cases gracefully:

  15. Empty rebalance event lists return empty DataFrames

  16. Insufficient data for metrics returns NaN or zero values
  17. Functions validate input data types and raise clear exceptions
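
For the downsampling recommendation in point 2, resampling the equity curve before plotting is usually enough. A sketch (weekly frequency is an arbitrary choice; pick whatever your chart resolution needs):

```python
import pandas as pd

# Ten years of synthetic daily equity (~2,520 business days)
idx = pd.date_range("2014-01-01", periods=2520, freq="B")
equity_df = pd.DataFrame(
    {"equity": [100_000.0 + 10 * i for i in range(2520)]}, index=idx
)

# Keep one observation per week for chart display
weekly = equity_df.resample("W").last().dropna()
```

The final data point is preserved, so summary figures quoted from the chart still match the full-resolution backtest.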

See Also