Motivation
Picking up from Part 1, where we left off with a PoC single LXC container deployed from Terraform. My setup right now uses the Ansible execution environment we built in the earlier post, running from my laptop against a workstation where we will bring up a bunch of containers for hands-on labbing. Since I mainly work on my laptop, where I have all my dev environment set up (terminal QoL, editor, LSP, etc.), it's easier to drive terraform apply and friends directly from an Ansible playbook (stupid me didn't think of searching for a Terraform Ansible module; it turns out there is a community.general.terraform module that can do complex enough stuff for my case).
Tasks
From the last part, I didn't finish the hostnames part, so we will explore more cloud-init config directives to do that
Complete the Terraform template and have Ansible deploy it
/etc/hosts
If we check the cloud-init docs, there is a write_files directive, but it seems to write out a whole new file rather than appending to an existing one. We can update the cloud-init user-data file as below:
write_files:
  - content: |
      127.0.0.1 localhost
      10.188.251.100 control
      10.188.251.101 worker1
      10.188.251.102 worker2
      # The following lines are desirable for IPv6 capable hosts
      ::1 ip6-localhost ip6-loopback
      fe00::0 ip6-localnet
      ff00::0 ip6-mcastprefix
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      ff02::3 ip6-allhosts
    path: /etc/hosts
    permissions: '0644'

users:
  - name: mgmt
    groups: admin
    ssh_authorized_keys:
One thing I noticed is that YAML has several flavours of multi-line block scalars; using just "|" works here, because YAML strips the indentation shared by the whole block, so the resulting /etc/hosts does not end up with leading spaces.
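To make that concrete, here is a toy write_files snippet (the path and content are just placeholders): with "|", the indentation common to the whole block is stripped, while any extra indentation inside the block is kept.

write_files:
  - path: /tmp/scalar-demo   # placeholder path, only to illustrate the block scalar
    content: |
      first line
        second line, indented two extra spaces

The file /tmp/scalar-demo then contains the two lines starting at column 0, with only the two extra spaces kept on the second line; the six spaces of YAML indentation are gone.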
Terraform template
I think there should be a better way of doing this, especially if, say, I need 3 control nodes and 100 worker nodes, but for the purpose here I will stick with spelling the instances out (pretty sure I will need to come back to this later, as I intend to explore Talos for building out K8s clusters); there is a rough for_each sketch of the idea further below.
locals {
  gateway4 = "10.188.251.1"
}

resource "lxd_instance" "control" {
  name  = "control"
  image = "ubuntu:22.04"

  config = {
    "cloud-init.user-data" = file("./my-user-data")
    #"cloud-init.network-config" = file("./my-network-config")
    "cloud-init.network-config" = templatefile("./netplan.tfpl",
      {
        ipv4_addr = "10.188.251.100/24"
        gateway4  = local.gateway4
      })
  }
}

resource "lxd_instance" "worker1" {
  name  = "worker1"
  image = "ubuntu:22.04"

  config = {
    "cloud-init.user-data" = file("./my-user-data")
    #"cloud-init.network-config" = file("./my-network-config")
    "cloud-init.network-config" = templatefile("./netplan.tfpl",
      {
        ipv4_addr = "10.188.251.101/24"
        gateway4  = local.gateway4
      })
  }
}
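The netplan.tfpl referenced above is not reproduced in full here; roughly, it is a templated cloud-init network-config (v2, netplan-style) along these lines, with eth0 being an assumed interface name for the container:

# netplan.tfpl -- sketch only; eth0 is an assumption, not from the original file
version: 2
ethernets:
  eth0:
    addresses:
      - ${ipv4_addr}
    gateway4: ${gateway4}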
In the Terraform config above, I use a locals block for simplicity, and for the ipv4_addr values I also opted to put them directly in main.tf rather than in separate variables files (which could then be fed in by the Ansible module), to avoid spamming these values all over the place.
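On the scaling point from earlier: a for_each over a map of nodes would cut most of the repetition. A rough, untested sketch of that idea, reusing the same names and IPs as above:

locals {
  gateway4 = "10.188.251.1"
  nodes = {
    control = "10.188.251.100/24"
    worker1 = "10.188.251.101/24"
    worker2 = "10.188.251.102/24"
  }
}

resource "lxd_instance" "node" {
  for_each = local.nodes

  # each.key is the node name, each.value its address
  name  = each.key
  image = "ubuntu:22.04"

  config = {
    "cloud-init.user-data" = file("./my-user-data")
    "cloud-init.network-config" = templatefile("./netplan.tfpl",
      {
        ipv4_addr = each.value
        gateway4  = local.gateway4
      })
  }
}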
Finally, the Ansible playbook is simple enough: copy the files to the remote host and then have the module run the apply.
- name: Copy TF files to remote
  ansible.builtin.copy:
    src: files/terraform
    dest: /home/mgmt/iac
    owner: mgmt
    group: mgmt
    mode: '0755'

- name: Basic deploy of a service
  community.general.terraform:
    project_path: /home/mgmt/iac/terraform
    state: present
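As an aside on the variables point above: if the IPs did live in Terraform variables instead of main.tf, the community.general.terraform module can pass them in through its variables (or variables_files) option. A hedged sketch, where control_ipv4 is a made-up variable name, not one defined in the post's main.tf:

- name: Basic deploy of a service
  community.general.terraform:
    project_path: /home/mgmt/iac/terraform
    state: present
    variables:
      # control_ipv4 is a hypothetical Terraform variable for illustration only
      control_ipv4: 10.188.251.100/24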
Finally…
In the Ansible Execution Environment post, I left out how we should get the SSH keys into the EE. In the documentation we see that you can copy files into the image, but somehow you end up with a file-not-found error; I see others hitting the same issue on SO. I suspect it has something to do with the unprivileged container, but that needs more digging. I ended up doing a bind mount via a command-line argument:
ansible-navigator run base-install.yml \
--execution-environment-image my_ee:test \
--ll=debug \
-i inventories/staging \
-m stdout -vv --ep \
--eev "/home/alfred/.ssh:/runner/.ssh:Z" \
-K
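For reference, the copy-files-into-the-image route described in the ansible-builder docs (the one I could not get working above) looks roughly like this in a version 3 execution-environment.yml; the file names here are placeholders taken from the docs' ansible.cfg example, not my actual key setup:

additional_build_files:
  # files placed here end up under _build/<dest> in the build context
  - src: files/ansible.cfg
    dest: configs
additional_build_steps:
  prepend_final:
    - COPY _build/configs/ansible.cfg /etc/ansible/ansible.cfg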