Ansible management for standalone VMware ESXi hosts
Ansible has some great modules for VMware vCenter (especially in 2.5), but none for managing standalone ESXi hosts. In many cases a full vCenter infrastructure is not required and the web-based Host UI is quite enough for routine administrative tasks.

The modules, roles and playbooks presented here allow managing standalone ESXi hosts (hosts under vCenter work too) over a direct SSH connection, usually with transparent key-based authentication.
Repository layout:

- `roles/hostconf-esxi/`: role covering standalone ESXi host configuration
- `vm_deploy/`: VM deployment playbooks:
  - `upload_clone`: copy a template VM from a source host to a target host
  - `clone_local`: make customized clones of a local template VM
- `library/` modules: `esxi_vm_info`, `esxi_autostart`, `esxi_vib`
- filter plugins:
  - `split`: split a string into a list
  - `todict`: convert a list of records into a dictionary, using a specified field as the key
- `update_esxi.yaml`
- `get_vault_pass.esxi.sh`
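As an illustration, the `todict` filter could be sketched in Python like this (a hypothetical minimal version; the actual plugin in this repository may handle more cases):

```python
# Hypothetical sketch of the "todict" filter plugin described above;
# the actual implementation in this repo may differ.

def todict(records, key):
    """Convert a list of dicts into a dict keyed by the given field."""
    return {rec[key]: rec for rec in records}

class FilterModule(object):
    """Standard Ansible filter plugin entry point."""
    def filters(self):
        return {"todict": todict}
```

For example, `[{'name': 'a'}, {'name': 'b'}] | todict('name')` would yield `{'a': {'name': 'a'}, 'b': {'name': 'b'}}`.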
hostconf-esxi role

This role takes care of many aspects of standalone ESXi server configuration, like:
- local user management, with generated credentials stored under `creds/`
- vMotion interface creation (`create_vmotion_iface` in role defaults)
- lowering the logging level of `vpxa` and other noisy components from the default verbose to info
- SLP daemon control (set `disable_slpd: true` in host vars to turn it off)

The only requirement is a correctly configured network (especially uplinks) and reachability over SSH with the root password. ESXi must be reasonably recent (6.0+, although some newer builds of 5.5 have a working Python 2.7 too).
- `ansible.cfg`: specify the remote user, inventory path etc.; specify the vault pass method if using one for certificate private key encryption
- `group_vars/all.yaml`: specify global parameters like NTP and syslog servers there
- `group_vars/<site>.yaml`: set specific params for each `<site>` in the inventory
- `host_vars/<host>.yaml`: override global and group values with e.g. a host-specific user list or datastore config
- `roles/hostconf-esxi/files/id_rsa.<user>@<keyname>.pub`: put public SSH keys here for referencing them later in the user list in `host_vars` or `group_vars`
Settings available in `(group|host)_vars`:

- serial number to assign, usually set in the global `group_vars/all.yaml`; it does not get changed if not set:

```yaml
esxi_serial: "XXXXX-XXXXX-XXXX-XXXXX-XXXXX"
```
- general network environment, usually set in `group_vars/<site>.yaml`:

```yaml
dns_domain: "m0.maxidom.ru"
name_servers:
  - 10.0.128.1
  - 10.0.128.2
ntp_servers:
  - 10.1.131.1
  - 10.1.131.2
# defaults: "log." + dns_domain
# syslog_host: log.m0.maxidom.ru
```
- user configuration: these users are created (if not present) and assigned random passwords (printed out and stored in `creds/<user>.<host>.pass.out`), have SSH keys assigned to them (persistently), are restricted to the specified hosts (plus the global list in `permit_ssh_from`), and are granted administrative rights and access to the console:
```yaml
esxi_local_users:
  "<user>":
    desc: "<user description>"
    pubkeys:
      - name: "<keyname>"
        hosts: "1.2.3.4,some-host.com"
```
Users that are not in this list (except root) are removed from the host, so be careful.
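The random-password step could look roughly like the following sketch (illustrative only; `generate_password` and `creds_path` are made-up helpers, not the role's actual code, though the file naming follows the `creds/<user>.<host>.pass.out` convention described above):

```python
# Illustrative sketch of random-password generation for a local ESXi user;
# the real role may do this differently (e.g. via Ansible's password lookup).
import secrets
import string

def generate_password(length=16):
    """Generate a random alphanumeric password."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def creds_path(user, host):
    """Where the generated password would be recorded."""
    return "creds/{0}.{1}.pass.out".format(user, host)
```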
- network configuration: the portgroup list in `esxi_portgroups` is exhaustive, i.e. those and only those portgroups (with exactly matching tags) should be present on the host after a playbook run (missing ones are created, wrong names are fixed, extra ones are removed if not in use):
```yaml
esxi_portgroups:
  all-tagged: { tag: 4095 }
  adm-srv: { tag: 210 }
  srv-netinf: { tag: 131 }
  pvt-netinf: { tag: 199 }
  # could also specify vSwitch (default is vSwitch0)
  adm-stor: { tag: 21, vswitch: vSwitch1 }
```
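The exhaustive-list semantics boil down to a simple set reconciliation, which can be sketched as follows (`plan_portgroups` is a made-up helper for illustration, not the module's actual code):

```python
# Illustrative reconciliation of desired vs. actual portgroups
# (name -> VLAN tag maps); not the actual module's code.
def plan_portgroups(desired, actual):
    """Return (create, fix_tag, remove) plans for one vSwitch."""
    create = {n: t for n, t in desired.items() if n not in actual}
    fix_tag = {n: t for n, t in desired.items()
               if n in actual and actual[n] != t}
    remove = [n for n in actual if n not in desired]
    return create, fix_tag, remove
```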
- datastore configuration: datastores will be created on these devices if missing and `create_datastores` is set; existing datastores will be renamed to match the specified name if `rename_datastores` is set and they are empty:
```yaml
local_datastores:
  "vmhba0:C0:T0:L1": "nest-test-sys"
  "vmhba0:C0:T0:L2": "nest-test-apps"
```
- VIBs to install or update (like the latest esx-ui Host Client fling):
```yaml
vib_list:
  - name: esx-ui
    url: "http://www-distr.m1.maxidom.ru/suse_distr/iso/esxui-signed-6360286.vib"
```
- autostart configuration: the listed VMs are added to the ESXi auto-start list, in the specified order if `order` is present, else in arbitrary order; if `autostart_only_listed` is set, only these VMs will be autostarted on the host, and extra VMs are removed from autostart:
```yaml
vms_to_autostart:
  eagle-m0:
    order: 1
  hawk-m0:
    order: 2
  falcon-u1:
```
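The ordering rule can be sketched like this (`autostart_sequence` is a hypothetical helper; the real module may sequence unordered VMs differently):

```python
# Hypothetical helper mirroring the autostart ordering described above:
# VMs with an explicit 'order' come first, sorted by it; the rest follow
# (here in name order -- the real module may sequence them arbitrarily).
def autostart_sequence(vms):
    """vms: dict of name -> options dict (or None), as in vms_to_autostart."""
    explicit = sorted((n for n in vms if vms[n] and "order" in vms[n]),
                      key=lambda n: vms[n]["order"])
    rest = sorted(n for n in vms if n not in set(explicit))
    return explicit + rest
```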
To add a new host:

- add it to `inventory.esxi`
- put its SSL certificate into `files/<host>.rui.crt` and its private key into `files/<host>.key.vault` (and encrypt the vault)
- create `host_vars/hostname.yaml`
For the initial config only the "root" user is available, so run the playbook like this:

```shell
ansible-playbook all.yaml -l new-host -u root -k --tags hostconf --diff
```
After local users are configured (and SSH key auth is in place), just use `remote_user` from `ansible.cfg` and run it like:

```shell
ansible-playbook all.yaml -l host-or-group --tags hostconf --diff
```
VM deployment

Only the default vSwitch (`vSwitch0`) is currently supported.

There are two playbooks in the `vm_deploy/` subdir:

- `upload_clone` is for copying a template VM from a source host to a new target
- `clone_local` is for making custom clones of a local template VM

See the playbook source and the comments at the top for a list of parameters; some are mentioned below.
These playbooks require the `netaddr` and `dnspython` Python modules.
upload_clone

This playbook is mostly used to upload an initial "template" VM to a target host (to be, in turn, the template for further local cloning). The source template VM usually resides on another ESXi host, and there are three modes of copy (see `ansible-deploy.cfg` for the tmp configuration). There are no options for customization here, only src and dst params like datastore, and a usual invocation looks like:
```shell
ansible-playbook upload_clone.yaml -l nest2-k1 \
  -e 'src_vm_name=phoenix11-1-k1 src_vm_vol=nest1-sys src_vm_server=nest1-k1' \
  -e 'dst_vm_name=phoenix11-2-k1' \
  -e 'direct_scp=true'
```
clone_local

This playbook is used to produce a new VM from a local template source, optionally customize parameters like datastore, network and disks, and optionally power it on. An invocation to create a new machine (with an additional network card and disk) and power it on looks like:
```shell
ansible-playbook clone_local.yaml -l nest1-mf1 -e 'vm_name=files-mf1-vm \
  vm_desc="samba file server" vm_net2=srv-smb vm_disk2=100G' \
  -e 'do_power_on=true'
```
To simplify cloning, it is better to set the local template VM name in `host_vars` (as `src_vm_name`).
Modules

The modules (in `library/`) are documented with the usual Ansible docs. They can be used stand-alone, like:

```shell
ansible -m esxi_vm_list -a 'get_power_state=true get_start_state=true' esxi-name
```

to get a list of the host's VMs together with their autostart state and current run state.