backend.daemons.vm_master

class backend.daemons.vm_master.VmMaster(opts, vmm, spawner, checker)[source]

Spawns and terminates VMs for builder processes.
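
As a rough mental model of the run() / do_cycle() / terminate() trio listed below, here is a toy re-implementation of the daemon loop. This is not the real class; the real daemon also wires in the spawner and health checker, and the cycle length is an assumption:

    import threading
    import time

    class MiniVmMaster:
        """Toy model of the VmMaster loop, not the real implementation."""

        def __init__(self, cycle_seconds=0.1):
            self.cycle_seconds = cycle_seconds
            self.kill_received = False

        def do_cycle(self):
            # The real daemon removes dirty/dead VMs, runs health checks,
            # spawns new VMs, and retries terminations here.
            print("one maintenance cycle")

        def run(self):
            while not self.kill_received:
                self.do_cycle()
                time.sleep(self.cycle_seconds)

        def terminate(self):
            self.kill_received = True

    master = MiniVmMaster()
    worker = threading.Thread(target=master.run)
    worker.start()
    time.sleep(0.3)
    master.terminate()
    worker.join()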

remove_old_dirty_vms()[source]
check_one_vm_for_dead_builder(vmd)[source]
remove_vm_with_dead_builder()[source]
check_vms_health()[source]
start_vm_check(vm_name)[source]

Start a VM health-check sub-process if the current VM state allows it.
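
A minimal sketch of that gating. The blocked-state names are borrowed from states mentioned elsewhere in these docs, but the exact allowed-state set and the health probe are assumptions:

    import multiprocessing

    def health_check(vm_name):
        # Placeholder for the real probe (e.g. an ssh liveness check).
        print("checking", vm_name)

    def start_vm_check(vm_states, vm_name):
        # Only launch a checker if the VM is not already being checked
        # or terminated.
        if vm_states[vm_name] not in ("check_health", "terminating"):
            proc = multiprocessing.Process(target=health_check, args=(vm_name,))
            vm_states[vm_name] = "check_health"
            proc.start()
            return proc
        return None

    if __name__ == "__main__":
        print(start_vm_check({"builder-1": "ready"}, "builder-1"))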

_check_total_running_vm_limit(group)[source]

Checks that the number of VMs in any state except Terminating, plus the number of running spawn processes, is below the threshold defined by BackendConfig.build_group[group]["max_vm_total"].
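
In sketch form, with plain lists and ints standing in for the real VM descriptors and spawner state:

    def under_total_running_vm_limit(vm_states, running_spawn_procs, max_vm_total):
        # Everything except terminating VMs counts against the limit,
        # as do spawns that are already in flight.
        active = sum(1 for state in vm_states if state != "terminating")
        return active + running_spawn_procs < max_vm_total

    states = ["ready", "in_use", "terminating"]
    print(under_total_running_vm_limit(states, running_spawn_procs=1, max_vm_total=4))
    # True: 2 active VMs + 1 spawn in flight < 4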

_check_elapsed_time_after_spawn(group)[source]

Checks that the time elapsed since the latest VM spawn attempt is greater than the threshold defined by BackendConfig.build_group[group]["vm_spawn_min_interval"].
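
Sketched with assumed bookkeeping (a per-group timestamp of the last spawn attempt):

    import time

    def spawn_interval_elapsed(build_groups, last_spawn_ts, group):
        # Rate-limit spawning: require a minimum gap between attempts.
        min_interval = build_groups[group]["vm_spawn_min_interval"]
        return time.time() - last_spawn_ts[group] > min_interval

    build_groups = {0: {"vm_spawn_min_interval": 30}}
    last_spawn_ts = {0: time.time() - 60}
    print(spawn_interval_elapsed(build_groups, last_spawn_ts, 0))  # True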

_check_number_of_running_spawn_processes(group)[source]

Checks that the number of running spawn processes is below the threshold defined by BackendConfig.build_group[group]["max_spawn_processes"].
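
The same idea in sketch form, with the spawner's children represented as a plain list:

    def spawn_slot_available(spawn_processes, max_spawn_processes):
        # Refuse to start another spawn if too many are already running.
        return len(spawn_processes) < max_spawn_processes

    print(spawn_slot_available(["spawn-1", "spawn-2"], max_spawn_processes=3))  # True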

_check_total_vm_limit(group)[source]

Checks that the number of running spawn processes is below the threshold defined by BackendConfig.build_group[group]["max_spawn_processes"].

try_spawn_one(group)[source]

Starts a spawning process if all conditions are satisfied.
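
A sketch of the decision flow, composing predicates like the private _check_* helpers above. The check and spawn callables here are stand-ins, not the real backend objects:

    def try_spawn_one(group, checks, spawn):
        # Spawn at most one VM, and only when every precondition holds.
        if all(check(group) for check in checks):
            spawn(group)
            return True
        return False

    checks = [lambda g: True, lambda g: True, lambda g: True]
    try_spawn_one(0, checks, spawn=lambda g: print("spawning one VM in group", g))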

start_spawn_if_required()[source]
do_cycle()[source]
run()[source]
terminate()[source]
finalize_long_health_checks()[source]

After a server crash it's possible that some VMs will remain in the check_health state. Here we look for such records and mark them with the check_health_failed state.
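
A sketch of that sweep; the record shape and the timeout value are assumptions:

    import time

    HEALTH_CHECK_TIMEOUT = 120  # assumed threshold, in seconds

    def finalize_long_health_checks(vm_records):
        # A VM stuck in check_health past the timeout most likely belongs
        # to a checker that died with the server; fail it so it gets recycled.
        now = time.time()
        for vmd in vm_records:
            if (vmd["state"] == "check_health"
                    and now - vmd["check_started"] > HEALTH_CHECK_TIMEOUT):
                vmd["state"] = "check_health_failed"

    records = [{"state": "check_health", "check_started": time.time() - 600}]
    finalize_long_health_checks(records)
    print(records[0]["state"])  # check_health_failed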

terminate_again()[source]

If we failed to terminate an instance, request termination once more. A non-terminated instance is detected as a VM in the Terminating state with

time.time() - terminating since > threshold

It's also possible that the VM was in fact terminated but the termination process never received confirmation from the VM provider, while we have already got a new VM with the same IP; in that case it's safe to remove the old VM from the pool.
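
Both cases in sketch form; the field names, the threshold value, and the IP-reuse test are assumptions modeled on the description above:

    import time

    TERMINATING_TIMEOUT = 600  # assumed threshold, in seconds

    def terminate_again(vm_pool, request_termination):
        now = time.time()
        for vmd in list(vm_pool):
            if vmd["state"] != "terminating":
                continue
            if now - vmd["terminating_since"] <= TERMINATING_TIMEOUT:
                continue
            other_ips = {v["ip"] for v in vm_pool if v is not vmd}
            if vmd["ip"] in other_ips:
                # A newer VM already holds this IP, so the old instance
                # must be gone; just drop the stale record from the pool.
                vm_pool.remove(vmd)
            else:
                # Termination was never confirmed: ask the provider again.
                request_termination(vmd)

    pool = [
        {"ip": "10.0.0.5", "state": "terminating",
         "terminating_since": time.time() - 3600},
        {"ip": "10.0.0.5", "state": "ready"},
    ]
    terminate_again(pool, request_termination=print)
    print(len(pool))  # 1: the stale record was removed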