Configuration

Complete configuration reference for ProxLB

Configuration File

ProxLB uses a YAML configuration file located at /etc/proxlb/proxlb.yaml.
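To get started before consulting the full option table, a minimal configuration only needs the API connection details and an enabled balancer. The following sketch uses placeholder hostnames and credentials:

```yaml
# Minimal /etc/proxlb/proxlb.yaml sketch (placeholder host and credentials)
proxmox_api:
  hosts: ['virt01.example.com']
  user: root@pam
  pass: FooBar
  ssl_verification: True

balancing:
  enable: True
```

All remaining options fall back to their defaults, which are documented in the table below.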

Options

The following options can be set in the configuration file:

| Section | Option | Sub Option | Example | Type | Description |
|---------|--------|------------|---------|------|-------------|
| proxmox_api | hosts | | ['virt01.example.com', '10.10.10.10'] | List | List of Proxmox nodes. Can be IPv4, IPv6 or mixed. Custom ports can be specified. |
| proxmox_api | user | | root@pam | Str | Username for the API. |
| proxmox_api | pass | | FooBar | Str | Password for the API. (Recommended: use API token authorization!) |
| proxmox_api | token_id | | proxlb | Str | Token ID of the user for the API. |
| proxmox_api | token_secret | | 430e308f-1337-1337-beef-1337beefcafe | Str | Secret of the token ID for the API. |
| proxmox_api | ssl_verification | | True | Bool | Validate SSL certificates (True) or ignore them (False). [values: True (default), False] |
| proxmox_api | timeout | | 10 | Int | Timeout for the Proxmox API in seconds. |
| proxmox_api | retries | | 1 | Int | Number of connection attempts to the defined API host. |
| proxmox_api | wait_time | | 1 | Int | Seconds to wait before performing another connection attempt to the API host. |
| proxmox_cluster | maintenance_nodes | | ['virt66.example.com'] | List | A list of Proxmox nodes that are defined to be in maintenance. |
| proxmox_cluster | ignore_nodes | | [] | List | A list of Proxmox nodes that should be ignored. |
| proxmox_cluster | overprovisioning | | False | Bool | Avoids balancing when nodes would become overprovisioned. |
| balancing | enable | | True | Bool | Enables guest balancing. |
| balancing | enforce_affinity | | False | Bool | Enforces affinity/anti-affinity rules, even if balancing becomes worse. |
| balancing | enforce_pinning | | False | Bool | Enforces pinning rules, even if balancing becomes worse. |
| balancing | parallel | | False | Bool | Whether guests should be moved in parallel or sequentially. |
| balancing | parallel_jobs | | 5 | Int | Number of parallel jobs when migrating guests. (default: 5) |
| balancing | live | | True | Bool | Whether guests should be migrated live or shut down. |
| balancing | with_local_disks | | True | Bool | Whether balancing should include guests with local disks. |
| balancing | with_conntrack_state | | True | Bool | Whether balancing should include the conntrack state. |
| balancing | balance_types | | ['vm', 'ct'] | List | Defines the types of guests that should be honored. [values: vm, ct] |
| balancing | max_job_validation | | 1800 | Int | Maximum duration of a job validation in seconds. (default: 1800) |
| balancing | balanciness | | 10 | Int | Maximum delta of resource usage between the nodes with the highest and lowest usage. |
| balancing | memory_threshold | | 75 | Int | Memory threshold (in percent) that must be reached before balancing actions are performed. (Optional) |
| balancing | cpu_threshold | | 75 | Int | CPU threshold (in percent) that must be reached before balancing actions are performed. (Optional) |
| balancing | method | | memory | Str | The balancing method. [values: memory (default), cpu, disk] |
| balancing | mode | | used | Str | The balancing mode. [values: used (default), assigned, psi (pressure)] |
| balancing | balance_larger_guests_first | | False | Bool | Whether larger guests should be balanced first. |
| balancing | node_resource_reserve | | | Dict | A dict of resource reservations per node (in GB). |
| balancing | psi | | | Dict | A dict of PSI-based thresholds for nodes and guests. |
| balancing | pools | | | Dict | A dict of pool names and their type for creating affinity/anti-affinity rules. |
| service | daemon | | True | Bool | Whether daemon mode should be activated. |
| service | schedule | interval | 12 | Int | How often rebalancing should occur in daemon mode. |
| service | schedule | format | hours | Str | Sets the time format. [values: hours (default), minutes] |
| service | delay | enable | False | Bool | Whether a delay time should be applied. |
| service | delay | time | 1 | Int | Delay time before the service starts after the initial execution. |
| service | delay | format | hours | Str | Sets the time format. [values: hours (default), minutes] |
| service | log_level | | INFO | Str | Defines the default log level. [values: INFO (default), WARNING, CRITICAL, DEBUG] |

Complete Example

```yaml
proxmox_api:
  hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe']
  user: root@pam
  pass: crazyPassw0rd!
  # API Token method
  # token_id: proxlb
  # token_secret: 430e308f-1337-1337-beef-1337beefcafe
  ssl_verification: True
  timeout: 10

proxmox_cluster:
  maintenance_nodes: ['virt66.example.com']
  ignore_nodes: []
  overprovisioning: True

balancing:
  enable: True
  enforce_affinity: False
  enforce_pinning: False
  parallel: False
  parallel_jobs: 1
  live: True
  with_local_disks: True
  with_conntrack_state: True
  balance_types: ['vm', 'ct']
  max_job_validation: 1800
  memory_threshold: 75
  cpu_threshold: 75
  balanciness: 5
  method: memory
  mode: used
  balance_larger_guests_first: False
  node_resource_reserve:
    defaults:
      memory: 4
    node01:
      memory: 6
  pools:
    dev:
      type: affinity
    de-nbg01-db:
      type: anti-affinity
      pin:
        - virt66
        - virt77
      strict: False

service:
  daemon: True
  schedule:
    interval: 12
    format: hours
  delay:
    enable: False
    time: 1
    format: hours
  log_level: INFO
```

Command Line Options

| Option | Long Option | Description | Default |
|--------|-------------|-------------|---------|
| -c | --config | Path to a config file. | /etc/proxlb/proxlb.yaml |
| -d | --dry-run | Performs a dry-run without doing any actions. | False |
| -j | --json | Returns a JSON of the VM movement. | False |
| -b | --best-node | Returns the best next node for a VM/CT placement. | False |
| -v | --version | Returns the ProxLB version on stdout. | False |
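For example, a one-off dry run and a machine-readable preview could look like this (assuming the proxlb binary is on the PATH):

```shell
# Preview planned migrations without performing any actions (dry run)
proxlb -c /etc/proxlb/proxlb.yaml -d

# Print the calculated VM movements as JSON, e.g. for automation pipelines
proxlb -c /etc/proxlb/proxlb.yaml -j
```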

Affinity & Anti-Affinity Rules

ProxLB provides an advanced mechanism to define affinity and anti-affinity rules, enabling precise control over virtual machine placement. These rules help manage resource distribution, improve high availability configurations, and optimize performance within a Proxmox VE cluster.

Affinity Rules

Affinity rules group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication.

By Tags

Assign a tag with the prefix plb_affinity_ to your guests in the Proxmox web interface:

plb_affinity_webservers

ProxLB will attempt to place all VMs with the same affinity tag on the same host.
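Tags can be assigned in the web interface or, on a node's shell, with the qm tool; the VM ID 101 below is a placeholder:

```shell
# Attach the affinity tag to VM 101 (hypothetical VMID)
qm set 101 --tags plb_affinity_webservers
```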

By Pools

Define affinity rules using Proxmox pools in the configuration file:

```yaml
balancing:
  pools:
    dev:
      type: affinity
      pin:
        - virt77
```

Anti-Affinity Rules

Anti-affinity rules ensure that designated VMs do not run on the same physical host. This is useful for high-availability setups where redundancy is crucial.

By Tags

Assign a tag with the prefix plb_anti_affinity_:

plb_anti_affinity_ntp

ProxLB will try to place these VMs on different hosts.
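For containers, the pct tool provides the same tagging option; the container IDs 201 and 202 below are placeholders:

```shell
# Spread two NTP containers across different hosts (hypothetical CTIDs)
pct set 201 --tags plb_anti_affinity_ntp
pct set 202 --tags plb_anti_affinity_ntp
```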

By Pools

```yaml
balancing:
  pools:
    de-nbg01-db:
      type: anti-affinity
```

Ignore VMs

Guests can be excluded from any migration by assigning a tag with the prefix plb_ignore_:

plb_ignore_dev

Ignored guests are still evaluated for resource calculations but will not be migrated, even when affinity rules are enforced.

Pin VMs to Specific Nodes

By Tags

Pin a guest to a specific cluster node by assigning a tag with the prefix plb_pin_:

plb_pin_node03

You can repeat this for multiple node names to create a group of allowed hosts. ProxLB picks the node with the lowest resource usage from the group.
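Multiple pin tags on the same guest form the group of allowed hosts. Proxmox separates multiple tags with semicolons; VM 101 is again a placeholder:

```shell
# Allow VM 101 (hypothetical VMID) to run on node03 or node04 only
qm set 101 --tags "plb_pin_node03;plb_pin_node04"
```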

By Pools

```yaml
balancing:
  pools:
    dev:
      type: affinity
      pin:
        - virt77
```

Maintenance Mode

The maintenance_nodes option allows operators to designate one or more Proxmox nodes for maintenance. All existing workloads will be migrated away from maintenance nodes while respecting affinity rules and resource availability.

```yaml
proxmox_cluster:
  maintenance_nodes: ['virt66.example.com']
```

Proxmox HA Integration

When Proxmox HA (High Availability) groups are used together with ProxLB, conflicts can arise: ProxLB may redistribute VMs in a way that does not align with the HA group constraints, and the HA manager may then move them back.

Due to these potential conflicts, it is currently not recommended to use both HA groups and ProxLB simultaneously.

See also: #65: Host groups: Honour HA groups.