Scaling Guide

This guide covers strategies for scaling the VPN Exit Controller system to handle increased load and geographic expansion.

Overview

The VPN Exit Controller is designed to scale both vertically (more resources) and horizontally (more nodes). This guide covers both approaches and when to use each.

Vertical Scaling

Resource Upgrades

Increase resources for the existing LXC container:

# On Proxmox host
# Increase CPU cores
pct set 201 --cores 8

# Increase memory
pct set 201 --memory 16384

# Increase disk
pct resize 201 rootfs +20G
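
After applying the changes, the new allocation can be confirmed from the Proxmox host:

# Verify the updated allocation
pct config 201 | grep -E 'cores|memory|rootfs'

# Confirm the container is still running
pct status 201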

Container Limits

Adjust Docker resource limits:

# In docker-compose.yml
services:
  vpn-exit-us:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
        reservations:
          cpus: '1.0'
          memory: 1G
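
The new limits take effect when the service is recreated. A quick way to apply and spot-check them, assuming Docker Compose v2 (which honors deploy.resources limits outside swarm mode):

# Recreate the service so the new limits apply
docker compose up -d vpn-exit-us

# Spot-check usage against the new limits
docker stats --no-stream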

Horizontal Scaling

Adding More VPN Nodes

  1. Add new country nodes:

    # Use the API to start new nodes
    curl -X POST -u admin:password \
      https://exit.idlegaming.org/api/nodes/br/start
    

  2. Configure load balancing (see the reload sketch below):

    # Add to HAProxy configuration
    server vpn-br-1 100.73.33.20:3128 check
    server vpn-br-2 100.73.33.21:3128 check
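
After adding servers to the backend, validate the configuration and reload HAProxy gracefully rather than restarting it. A minimal sketch, assuming the container is named haproxy-default (as in the capacity-planning examples later in this guide) and uses the stock image's default config path:

# Check the edited configuration for syntax errors
docker exec haproxy-default \
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# Graceful reload without dropping existing connections
docker kill --signal=HUP haproxy-default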
    

Multi-Region Deployment

Deploy additional VPN controllers in different regions (a container-creation sketch for step 1 follows the list):

  1. Create new LXC container
  2. Install VPN controller
  3. Configure geo-routing
  4. Update DNS for regional access
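
A minimal sketch of step 1 on the new region's Proxmox host; the container ID, template file, storage pool, and bridge below are examples and will differ per environment:

# Create and start an LXC container for the regional controller
pct create 301 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname vpn-exit-eu \
    --cores 4 --memory 8192 \
    --rootfs local-lvm:20 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --unprivileged 1

pct start 301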

Auto-Scaling

Connection-Based Scaling

Monitor and scale based on connections:

# Auto-scaling logic; thresholds are tunable configuration values
SCALE_UP_THRESHOLD = 100     # example: connections per country before adding a node
SCALE_DOWN_THRESHOLD = 10    # example: connections per country before stopping a spare node

async def check_scaling_needed():
    """Compare per-country connection counts against the scaling thresholds."""
    # get_connection_stats / start_additional_node / stop_extra_node are controller helpers
    stats = await get_connection_stats()

    for country, connections in stats.items():
        if connections > SCALE_UP_THRESHOLD:
            await start_additional_node(country)
        elif connections < SCALE_DOWN_THRESHOLD:
            await stop_extra_node(country)

Performance-Based Scaling

Scale based on response times:

# Monitor response times
for country in us uk de jp; do
    response_time=$(curl -w "%{time_total}" -o /dev/null -s \
        -x http://proxy-$country.exit.idlegaming.org:3128 \
        http://httpbin.org/ip)

    if (( $(echo "$response_time > 2.0" | bc -l) )); then
        echo "Scaling up $country - slow response"
    fi
done

Load Distribution

Geographic Distribution

Distribute load across regions:

# Nginx geo-based routing
geo $proxy_pool {
    default us;

    # Europe
    2.0.0.0/8 eu;
    5.0.0.0/8 eu;

    # Asia
    1.0.0.0/8 asia;
    14.0.0.0/8 asia;

    # Americas
    8.0.0.0/8 us;
    24.0.0.0/8 us;
}

# The $proxy_pool variable can then feed a map or proxy_pass to select the regional upstream

Load Balancing Strategies

Configure different strategies:

# Set load balancing strategy
curl -X POST -u admin:password \
    https://exit.idlegaming.org/api/lb/strategy/health_score

Available strategies:

  • round_robin - Equal distribution
  • least_connections - Fewest active connections
  • health_score - Best performing nodes
  • random - Random selection
  • ip_hash - Consistent hashing

Capacity Planning

Monitoring Metrics

Track key metrics for capacity planning:

# Connection count per node
docker exec haproxy-default sh -c \
    'echo "show stat" | socat stdio /var/run/haproxy/admin.sock' | \
    awk -F',' '{print $2, $5}'   # server name, current sessions

# Bandwidth usage
vnstat -i eth0 -h

# Resource utilization
docker stats --no-stream --format '{{json .}}' | \
    jq -r '.Name, .CPUPerc, .MemUsage'

Growth Projections

Plan for growth:

  1. Track daily peaks: Monitor busiest hours
  2. Calculate growth rate: Month-over-month increase (see the sketch after this list)
  3. Plan ahead: Scale before hitting limits
  4. Budget resources: CPU, memory, bandwidth
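
A rough back-of-the-envelope sketch for the growth-rate step, assuming roughly linear growth; the peak connection counts and per-node capacity are placeholder numbers:

# Placeholder inputs
last_peak=800        # last month's peak concurrent connections
this_peak=920        # this month's peak concurrent connections
node_capacity=500    # connections one node handles comfortably
node_count=3

# Month-over-month growth in percent
growth=$(echo "scale=1; ($this_peak - $last_peak) * 100 / $last_peak" | bc)

# Months of headroom before current capacity is exhausted
headroom=$(echo "scale=1; ($node_capacity * $node_count - $this_peak) / ($this_peak - $last_peak)" | bc)

echo "Growth: ${growth}% per month, about ${headroom} months of headroom"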

Optimization

Container Optimization

Optimize container performance:

# Optimized Dockerfile
FROM alpine:latest
RUN apk add --no-cache \
    openvpn \
    squid \
    dante-server

# Reduce layers: chain configuration and cleanup steps in a single RUN
RUN mkdir -p /etc/openvpn /etc/squid /etc/sockd && \
    rm -rf /tmp/* /var/tmp/*

Network Optimization

Improve network performance:

# Increase network buffers
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
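
These sysctl changes are lost on reboot; to persist them, write the same values to a drop-in file (the filename is arbitrary) and reload:

# Persist the tuning across reboots
cat > /etc/sysctl.d/99-vpn-tuning.conf <<'EOF'
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
EOF

sysctl --system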

Scaling Checklist

Work through this checklist for every scaling operation:

  • Monitor current utilization
  • Identify bottlenecks
  • Plan scaling approach
  • Test in staging environment
  • Prepare rollback plan
  • Schedule during low traffic
  • Monitor after scaling
  • Document changes

Best Practices

  1. Scale gradually: Add resources incrementally
  2. Monitor impact: Watch metrics after changes
  3. Automate where possible: Use scripts for common tasks
  4. Plan for failures: Have redundancy
  5. Document everything: Keep scaling playbooks

Cost Optimization

Right-Sizing

Ensure efficient resource usage:

# Analyze container usage over time
for container in $(docker ps --format "{{.Names}}" | grep vpn-); do
    echo "=== $container ==="
    docker stats $container --no-stream
done

Idle Resource Management

Stop unused resources:

# Stop idle backup nodes during off-peak hours
# low_traffic_countries is a placeholder for whatever reports your low-traffic regions
for country in $(low_traffic_countries); do
    docker stop "vpn-exit-$country-backup"
done
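
To run this automatically, wrap the loop in a script and schedule it with cron; the script paths and times below are placeholders:

# Example crontab entries: stop backup nodes at 02:00, bring them back at 07:00
0 2 * * * /usr/local/bin/stop-idle-backup-nodes.sh
0 7 * * * /usr/local/bin/start-idle-backup-nodes.sh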

Scaling Considerations

Always test scaling changes in a staging environment first. Monitor closely after any scaling operations to ensure system stability.