# Configuration
## Configuration File
ProxLB uses a YAML configuration file located at `/etc/proxlb/proxlb.yaml`.
### Options
The following options can be set in the configuration file:
| Section | Option | Sub Option | Example | Type | Description |
|---|---|---|---|---|---|
| proxmox_api | hosts | | ['virt01.example.com', '10.10.10.10'] | List | List of Proxmox nodes. Can be IPv4, IPv6 or mixed. You can specify custom ports. |
| | user | | root@pam | Str | Username for the API. |
| | pass | | FooBar | Str | Password for the API. (Recommended: use API token authorization!) |
| | token_id | | proxlb | Str | Token ID of the user for the API. |
| | token_secret | | 430e308f-1337-1337-beef-1337beefcafe | Str | Secret of the token ID for the API. |
| | ssl_verification | | True | Bool | Validate SSL certificates (True) or ignore them (False). [values: True (default), False] |
| | timeout | | 10 | Int | Timeout for the Proxmox API in seconds. |
| | retries | | 1 | Int | How many connection attempts to the defined API host should be performed. |
| | wait_time | | 1 | Int | How many seconds to wait before performing another connection attempt to the API host. |
| proxmox_cluster | maintenance_nodes | | ['virt66.example.com'] | List | A list of Proxmox nodes that are defined to be in maintenance. |
| | ignore_nodes | | [] | List | A list of Proxmox nodes that should be ignored. |
| | overprovisioning | | False | Bool | Avoids balancing when nodes would become overprovisioned. |
| balancing | enable | | True | Bool | Enables guest balancing. |
| | enforce_affinity | | False | Bool | Enforces affinity/anti-affinity rules; balancing might become worse. |
| | enforce_pinning | | False | Bool | Enforces pinning rules; balancing might become worse. |
| | parallel | | False | Bool | Whether guests should be moved in parallel or sequentially. |
| | parallel_jobs | | 5 | Int | The number of parallel jobs when migrating guests. (default: 5) |
| | live | | True | Bool | Whether guests should be moved live or shut down first. |
| | with_local_disks | | True | Bool | Whether balancing of guests should include local disks. |
| | with_conntrack_state | | True | Bool | Whether balancing of guests should include the conntrack state. |
| | balance_types | | ['vm', 'ct'] | List | Defines the types of guests that should be honored. [values: vm, ct] |
| | max_job_validation | | 1800 | Int | How long a job validation may take, in seconds. (default: 1800) |
| | balanciness | | 10 | Int | The maximum delta of resource usage between the nodes with the highest and lowest usage. |
| | memory_threshold | | 75 | Int | The threshold (in percent) that must be hit before balancing actions are performed. (Optional) |
| | cpu_threshold | | 75 | Int | The threshold (in percent) that must be hit before balancing actions are performed. (Optional) |
| | method | | memory | Str | The balancing method. [values: memory (default), cpu, disk] |
| | mode | | used | Str | The balancing mode. [values: used (default), assigned, psi (pressure)] |
| | balance_larger_guests_first | | False | Bool | Option to prefer larger/smaller guests first. |
| | node_resource_reserve | | | Dict | A dict of resource reservations per node (in GB). |
| | psi | | | Dict | A dict of PSI-based thresholds for nodes and guests. |
| | pools | | | Dict | A dict of pool names and their type for creating affinity/anti-affinity rules. |
| service | daemon | | True | Bool | Whether daemon mode should be activated. |
| | schedule | interval | 12 | Int | How often rebalancing should occur in daemon mode. |
| | schedule | format | hours | Str | Sets the time format. [values: hours (default), minutes] |
| | delay | enable | False | Bool | Whether a delay time should be validated. |
| | delay | time | 1 | Int | Delay time until the service starts after the initial execution. |
| | delay | format | hours | Str | Sets the time format. [values: hours (default), minutes] |
| | log_level | | INFO | Str | Defines the default log level. [values: INFO (default), WARNING, CRITICAL, DEBUG] |
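To illustrate how `balanciness` and the optional thresholds interact, here is a hypothetical sketch (not ProxLB's actual implementation, and the function name is invented): balancing is only worthwhile when the usage spread between the busiest and least busy node exceeds `balanciness`, and, if a threshold is configured, when the busiest node has actually hit it.

```python
def should_balance(node_usage_pct, balanciness=10, threshold=None):
    """Illustrative check: decide whether balancing is worthwhile
    given per-node usage percentages."""
    highest = max(node_usage_pct.values())
    lowest = min(node_usage_pct.values())
    # Optional threshold: the busiest node must reach it first.
    if threshold is not None and highest < threshold:
        return False
    # Only balance when the usage spread exceeds the balanciness delta.
    return (highest - lowest) > balanciness

usage = {"virt01": 82, "virt02": 55, "virt03": 60}
print(should_balance(usage))                # delta of 27 exceeds the default 10
print(should_balance(usage, threshold=90))  # busiest node is below the threshold
```

A lower `balanciness` therefore makes ProxLB more aggressive, while a threshold prevents migrations on an otherwise idle cluster.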
## Complete Example
```yaml
proxmox_api:
  hosts: ['virt01.example.com', '10.10.10.10', 'fe01:bad:code::cafe']
  user: root@pam
  pass: crazyPassw0rd!
  # API Token method
  # token_id: proxlb
  # token_secret: 430e308f-1337-1337-beef-1337beefcafe
  ssl_verification: True
  timeout: 10

proxmox_cluster:
  maintenance_nodes: ['virt66.example.com']
  ignore_nodes: []
  overprovisioning: True

balancing:
  enable: True
  enforce_affinity: False
  enforce_pinning: False
  parallel: False
  parallel_jobs: 1
  live: True
  with_local_disks: True
  with_conntrack_state: True
  balance_types: ['vm', 'ct']
  max_job_validation: 1800
  memory_threshold: 75
  cpu_threshold: 75
  balanciness: 5
  method: memory
  mode: used
  balance_larger_guests_first: False
  node_resource_reserve:
    defaults:
      memory: 4
    node01:
      memory: 6
  pools:
    dev:
      type: affinity
    de-nbg01-db:
      type: anti-affinity
      pin:
        - virt66
        - virt77
      strict: False

service:
  daemon: True
  schedule:
    interval: 12
    format: hours
  delay:
    enable: False
    time: 1
    format: hours
  log_level: INFO
```
## Command Line Options
| Option | Long Option | Description | Default |
|---|---|---|---|
| -c | --config | Path to a config file. | /etc/proxlb/proxlb.yaml |
| -d | --dry-run | Performs a dry run without taking any actions. | False |
| -j | --json | Returns the VM movements as JSON. | False |
| -b | --best-node | Returns the best next node for a VM/CT placement. | False |
| -v | --version | Prints the ProxLB version to stdout. | False |
## Affinity & Anti-Affinity Rules
ProxLB provides an advanced mechanism to define affinity and anti-affinity rules, enabling precise control over virtual machine placement. These rules help manage resource distribution, improve high availability configurations, and optimize performance within a Proxmox VE cluster.
### Affinity Rules
Affinity rules group certain VMs together, ensuring that they run on the same host whenever possible. This can be beneficial for workloads requiring low-latency communication.
#### By Tags
Assign a tag with the prefix `plb_affinity_` to your guests in the Proxmox web interface:

```
plb_affinity_webservers
```
ProxLB will attempt to place all VMs with the same affinity tag on the same host.
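As a hypothetical sketch of this grouping step (not ProxLB's actual code; the function and sample guests are invented), guests can be bucketed by the suffix of their `plb_affinity_*` tags, so each bucket can later be placed on one host:

```python
AFFINITY_PREFIX = "plb_affinity_"

def affinity_groups(guests):
    """Illustrative sketch: group guest names by their plb_affinity_* tags.
    `guests` maps guest name -> list of Proxmox tags."""
    groups = {}
    for name, tags in guests.items():
        for tag in tags:
            if tag.startswith(AFFINITY_PREFIX):
                groups.setdefault(tag[len(AFFINITY_PREFIX):], []).append(name)
    return groups

guests = {
    "web01": ["plb_affinity_webservers"],
    "web02": ["plb_affinity_webservers", "prod"],
    "db01": ["prod"],
}
print(affinity_groups(guests))  # {'webservers': ['web01', 'web02']}
```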
#### By Pools
Define affinity rules using Proxmox pools in the configuration file:
```yaml
balancing:
  pools:
    dev:
      type: affinity
      pin:
        - virt77
```
### Anti-Affinity Rules
Anti-affinity rules ensure that designated VMs do not run on the same physical host. This is useful for high-availability setups where redundancy is crucial.
#### By Tags
Assign a tag with the prefix `plb_anti_affinity_`:

```
plb_anti_affinity_ntp
```
ProxLB will try to place these VMs on different hosts.
#### By Pools
```yaml
balancing:
  pools:
    de-nbg01-db:
      type: anti-affinity
```
If you have more guests attached to a group than nodes in the cluster, ProxLB will select the node with the most free resources for the remaining guests.
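The overflow behavior above can be sketched as follows (a hypothetical illustration, not ProxLB's real placement code; node names and resource figures are invented): each guest first claims a node that holds no other group member, and once every node is used, the remaining guests go to whichever node currently has the most free resources:

```python
def spread_guests(guests, free_resources):
    """Illustrative sketch: place anti-affinity guests on distinct nodes;
    overflow guests go to the node with the most free resources."""
    placement = {}
    free = dict(free_resources)  # e.g. free memory in GB per node
    unused = set(free)
    for guest, demand in guests:
        # Prefer a node with no guest from this group yet.
        candidates = unused if unused else set(free)
        node = max(candidates, key=lambda n: free[n])
        placement[guest] = node
        unused.discard(node)
        free[node] -= demand
    return placement

free = {"virt01": 32, "virt02": 24}
guests = [("ntp01", 2), ("ntp02", 2), ("ntp03", 2)]
print(spread_guests(guests, free))
# {'ntp01': 'virt01', 'ntp02': 'virt02', 'ntp03': 'virt01'}
```

With two nodes and three guests, the third guest lands on `virt01` because it still has the most free resources after the first round of placement.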
## Ignore VMs
Guests can be excluded from any migration by assigning a tag with the prefix `plb_ignore_`:

```
plb_ignore_dev
```
Ignored guests are still evaluated for resource calculations but will not be migrated, even when affinity rules are enforced.
## Pin VMs to Specific Nodes
### By Tags
Pin a guest to a specific cluster node by assigning a tag with the prefix `plb_pin_`:

```
plb_pin_node03
```
You can repeat this for multiple node names to create a group of allowed hosts. ProxLB picks the node with the lowest resource usage from the group.
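A hypothetical sketch of this selection (not ProxLB's actual code; the function name and usage numbers are invented): resolve the `plb_pin_*` tags to an allowed node set, validate it against the cluster, and pick the allowed node with the lowest usage, falling back to regular balancing when nothing matches:

```python
PIN_PREFIX = "plb_pin_"

def pick_pinned_node(tags, cluster_usage):
    """Illustrative sketch: choose the least-used node named by
    plb_pin_* tags, or None to fall back to regular balancing."""
    allowed = {t[len(PIN_PREFIX):] for t in tags if t.startswith(PIN_PREFIX)}
    valid = allowed & set(cluster_usage)  # drop invalid/unavailable node names
    if not valid:
        return None
    return min(valid, key=lambda n: cluster_usage[n])

usage = {"node01": 70, "node02": 40, "node03": 55}
print(pick_pinned_node(["plb_pin_node02", "plb_pin_node03"], usage))  # node02
print(pick_pinned_node(["plb_pin_node99"], usage))                    # None
```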
### By Pools
```yaml
balancing:
  pools:
    dev:
      type: affinity
      pin:
        - virt77
```
Node names from tags are validated against the cluster. If a node name is invalid or unavailable, ProxLB falls back to regular balancing.
## Maintenance Mode
The `maintenance_nodes` option allows operators to designate one or more Proxmox nodes for maintenance. All existing workloads will be migrated away from maintenance nodes while respecting affinity rules and resource availability.
```yaml
proxmox_cluster:
  maintenance_nodes: ['virt66.example.com']
```
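The drain behavior can be sketched like this (a hypothetical illustration, not ProxLB's real migration planner; node names, the resource model, and the fixed per-guest footprint are invented simplifications):

```python
def drain_maintenance(guest_nodes, maintenance, free_resources):
    """Illustrative sketch: plan moves for every guest sitting on a
    maintenance node, targeting the eligible node with the most free
    resources."""
    moves = {}
    # Maintenance nodes are never eligible targets.
    free = {n: r for n, r in free_resources.items() if n not in maintenance}
    for guest, node in guest_nodes.items():
        if node in maintenance:
            target = max(free, key=free.get)
            moves[guest] = target
            free[target] -= 1  # crude placeholder for the guest's footprint
    return moves

guests = {"vm101": "virt66", "vm102": "virt01", "vm103": "virt66"}
moves = drain_maintenance(guests, {"virt66"},
                          {"virt66": 10, "virt01": 8, "virt02": 12})
print(moves)  # {'vm101': 'virt02', 'vm103': 'virt02'}
```

Note that `vm102` stays put, since only guests on maintenance nodes are drained.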
## Proxmox HA Integration
When using Proxmox HA (High Availability) groups together with ProxLB, conflicts can arise. ProxLB may redistribute VMs in a way that does not align with HA group constraints.
Due to these potential conflicts, it is currently not recommended to use both HA groups and ProxLB simultaneously.
See also: #65: Host groups: Honour HA groups.