Terraform, Proxmox, and Cloud-Init

Setting up Terraform

Download Terraform 1.6.6, the latest version as of early 2024:

wget https://releases.hashicorp.com/terraform/1.6.6/terraform_1.6.6_linux_amd64.zip

Unpack it:

unzip terraform_1.6.6_linux_amd64.zip

Move the unpacked binary into a directory on the PATH:

sudo mv terraform /usr/local/bin/

Verify that Terraform works:

$ terraform -v
Terraform v1.6.6
on linux_amd64

It is also a good idea to install autocompletion:

terraform -install-autocomplete

This lets us complete terraform commands with the Tab key.
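Under the hood this writes a completion hook into your shell profile; on bash it should be a line like the following (the path assumes the install location used above):

complete -C /usr/local/bin/terraform terraform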

Installing the Terraform Proxmox provider

# Clone the repository
$ git clone https://github.com/Telmate/terraform-provider-proxmox

# Build and install the provider and provisioner (requires a working Go installation)
$ cd terraform-provider-proxmox

# Run the Makefile
$ make

# Create the Terraform plugins directory and copy the built plugin into it
$ mkdir -p ~/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/2.9.14/linux_amd64/
$ cp bin/terraform-provider-proxmox ~/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/2.9.14/linux_amd64/

Configuring the API

On the Proxmox host, create a Terraform user, a role, and an authentication token that Terraform will use to connect to Proxmox and manage guests.

When generating the token, the --privsep=0 flag is essential: it disables privilege separation, so the token inherits the user's permissions. Without it, token-based authentication will not work.

# pveum role add terraform-role -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt SDN.Use"

# pveum user add terraform@pve

# pveum aclmod / -user terraform@pve -role terraform-role

# pveum user token add terraform@pve terraform-token --privsep=0
┌──────────────┬──────────────────────────────────────┐
│ key          │ value                                │
╞══════════════╪══════════════════════════════════════╡
│ full-tokenid │ terraform@pve!terraform-token        │
├──────────────┼──────────────────────────────────────┤
│ info         │ {"privsep":"0"}                      │
├──────────────┼──────────────────────────────────────┤
│ value        │ 474a4cea-68d5-4b31-8d3c-09b28b4b7430 │
└──────────────┴──────────────────────────────────────┘

Important! Write the token down now: it cannot be retrieved later.
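The token can be smoke-tested against the API right away; a quick check using the header format Proxmox expects (the host address is the one used later in vars.tf, and -k skips TLS verification for the self-signed certificate):

curl -k -H "Authorization: PVEAPIToken=terraform@pve!terraform-token=474a4cea-68d5-4b31-8d3c-09b28b4b7430" \
    https://192.168.0.21:8006/api2/json/version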

On the machine running Terraform, add environment variables for authentication:

nano ~/.bashrc
# or ~/.zshrc if you use zsh
export PM_API_TOKEN_ID="terraform@pve!terraform-token"
export PM_API_TOKEN_SECRET="474a4cea-68d5-4b31-8d3c-09b28b4b7430"

Load the variables into the current environment:

source ~/.bashrc
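A quick check that both variables are exported:

env | grep PM_API_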

We are now ready to start working with Terraform itself.

Create a working directory:

mkdir -p ~/terraform/proxmox/ && cd ~/terraform/proxmox/

Create a provider.tf file, which tells Terraform which providers are in use.

nano ~/terraform/proxmox/provider.tf
terraform {
    required_providers {
        proxmox = {
            source  = "telmate/proxmox"
            version = "2.9.14"
        }
    }
}

vars.tf stores the variables used in the other Terraform files. It is also where we define the parameters of our virtual machines.

nano ~/terraform/proxmox/vars.tf
variable "proxmox_host" {
    default = "192.168.0.21"
}

variable "ssh_key" {
  default = "ssh-ed25519 AAAAC........"
}

variable "virtual_machines" {
    default = {
        "swarm-test-01" = {
            hostname = "swarm-manager1"
            ip_address = "192.168.0.31/24"
            gateway = "192.168.0.1",
#            vlan_tag = 100,
            target_node = "srv-pve1",
            cpu_cores = 2,
            cpu_sockets = 1,
            memory = "2048",
            hdd_size = "20G",
            vm_template = "ubuntu23.04-Template",
        },
        "swarm-test-02" = {
            hostname = "swarm-manager2"
            ip_address = "192.168.0.32/24"
            gateway = "192.168.0.1",
#            vlan_tag = 100,
            target_node = "srv-pve2",
            cpu_cores = 2,
            cpu_sockets = 1,
            memory = "2048",
            hdd_size = "20G",
            vm_template = "ubuntu23.04-Template",
        },
        "swarm-test-03" = {
            hostname = "swarm-worker1"
            ip_address = "192.168.0.33/24"
            gateway = "192.168.0.1",
#            vlan_tag = 100,
            target_node = "srv-pve1",
            cpu_cores = 4,
            cpu_sockets = 1,
            memory = "20480",
            hdd_size = "20G",
            vm_template = "ubuntu23.04-Template",
        },
        "swarm-test-04" = {
            hostname = "swarm-worker2"
            ip_address = "192.168.0.34/24"
            gateway = "192.168.0.1",
#            vlan_tag = 100,
            target_node = "srv-pve2",
            cpu_cores = 4,
            cpu_sockets = 1,
            memory = "20480",
            hdd_size = "20G",
            vm_template = "ubuntu23.04-Template",
        },
        "swarm-test-05" = {
            hostname = "swarm-worker3"
            ip_address = "192.168.0.35/24"
            gateway = "192.168.0.1",
#            vlan_tag = 100,
            target_node = "srv-pve3",
            cpu_cores = 4,
            cpu_sockets = 1,
            memory = "20480",
            hdd_size = "20G",
            vm_template = "ubuntu23.04-Template",
        },
    }
}
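Optionally, the variable can be given an explicit type constraint, so a missing or misspelled attribute in any VM entry fails at plan time rather than mid-apply. A sketch of the shape (not part of the original file):

variable "virtual_machines" {
    type = map(object({
        hostname    = string
        ip_address  = string
        gateway     = string
        target_node = string
        cpu_cores   = number
        cpu_sockets = number
        memory      = string
        hdd_size    = string
        vm_template = string
    }))
    default = {
        # ... the same entries as above ...
    }
}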

main.tf defines the resources to be provisioned. The file below uses for_each to iterate over the virtual_machines variable from vars.tf and create multiple resources.

nano ~/terraform/proxmox/main.tf
provider "proxmox" {
    pm_api_url = "https://${var.proxmox_host}:8006/api2/json"
    pm_tls_insecure = true

    # Uncomment for debugging.
    # pm_log_enable = true
    # pm_log_file = "terraform-plugin-proxmox.log"
    # pm_debug = true
    # pm_log_levels = {
    # _default = "debug"
    # _capturelog = ""
    # }
}

resource "proxmox_vm_qemu" "virtual_machines" {
    for_each = var.virtual_machines

    name = each.value.hostname
    target_node = each.value.target_node
    clone = each.value.vm_template
    # Activate QEMU agent for this VM
    agent = "1"
    # HA
    hastate     = "started"
    hagroup     = "HA" # Name of the HA group
    # Start on boot
    onboot      = true

    os_type = "cloud-init"
    cores = each.value.cpu_cores
    sockets = each.value.cpu_sockets
    cpu = "host"
    memory = each.value.memory
    scsihw = "virtio-scsi-pci"
    bootdisk = "scsi0"
    disk {
        slot = 0
        size = each.value.hdd_size
        type = "scsi"
        storage = "cephpool01"
        iothread = 1
    }

    network {
        model = "virtio"
        bridge = "vmbr0"
#           tag = each.value.vlan_tag
    }

    # Ignore changes to the network block after creation: Proxmox assigns
    # the MAC address, and without this every plan would show a spurious diff.
    lifecycle {
        ignore_changes = [
        network,
        ]
    }

    # Cloud-init config
    ipconfig0 = "ip=${each.value.ip_address},gw=${each.value.gateway}"
    sshkeys = var.ssh_key
}

output "vm_ipv4_addresses" {
  value = {
      for instance in proxmox_vm_qemu.virtual_machines:
      instance.name => instance.default_ipv4_address
  }
}
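The clone source, ubuntu23.04-Template, must already exist as a cloud-init-ready template on each target node. A minimal sketch of preparing one on a Proxmox host (the VMID 9000 and the local-lvm storage name are assumptions; adjust to your environment):

# On the Proxmox host: turn the Ubuntu 23.04 (lunar) cloud image into a template
wget https://cloud-images.ubuntu.com/lunar/current/lunar-server-cloudimg-amd64.img
qm create 9000 --name ubuntu23.04-Template --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm importdisk 9000 lunar-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0    # attach the imported disk
qm set 9000 --ide2 local-lvm:cloudinit          # cloud-init config drive
qm set 9000 --boot order=scsi0 --agent enabled=1
qm template 9000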

Now initialize the working directory so Terraform picks up the locally installed Proxmox provider.

┌─( daffin@srv-ssh ) - ( 33 files, 12K ) - ( ~/terraform/proxmox )
└─> terraform init

Initializing the backend...

Initializing provider plugins...
- Finding telmate/proxmox versions matching "2.9.14"...
- Installing telmate/proxmox v2.9.14...
- Installed telmate/proxmox v2.9.14 (unauthenticated)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

│ Warning: Incomplete lock file information for providers
│ Due to your customized provider installation methods, Terraform was forced to calculate lock file checksums locally for the following providers:
│   - telmate/proxmox
│ The current .terraform.lock.hcl file only includes checksums for linux_amd64, so Terraform running on another platform will fail to install these providers.
│ To calculate additional checksums for another platform, run:
│   terraform providers lock -platform=linux_amd64
(where linux_amd64 is the platform to generate)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Ask Terraform to plan the changes to be rolled out. This is a dry run; nothing will be applied.

┌─( daffin@srv-ssh ) - ( 8 files, 16K ) - ( ~/terraform/proxmox )
└─> terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.virtual_machines["swarm-test-01"] will be created
  + resource "proxmox_vm_qemu" "virtual_machines" {
      + additional_wait           = 5
      + agent                     = 1
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = (known after apply)
      + bootdisk                  = "scsi0"
      + clone                     = "ubuntu23.04-Template"
      + clone_wait                = 10
      + cores                     = 2
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.31/24,gw=192.168.0.1"
      + kvm                       = true
      + memory                    = 2048
      + name                      = "swarm-manager1"
      + nameserver                = (known after apply)
      + onboot                    = true
      + oncreate                  = false
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "virtio-scsi-pci"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + sshkeys                   = "ssh-ed25519 AAAAC"
      + tablet                    = true
      + target_node               = "srv-pve1"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vm_state                  = "running"
      + vmid                      = (known after apply)

      + disk {
          + backup             = true
          + cache              = "none"
          + file               = (known after apply)
          + format             = (known after apply)
          + iops               = 0
          + iops_max           = 0
          + iops_max_length    = 0
          + iops_rd            = 0
          + iops_rd_max        = 0
          + iops_rd_max_length = 0
          + iops_wr            = 0
          + iops_wr_max        = 0
          + iops_wr_max_length = 0
          + iothread           = 1
          + mbps               = 0
          + mbps_rd            = 0
          + mbps_rd_max        = 0
          + mbps_wr            = 0
          + mbps_wr_max        = 0
          + media              = (known after apply)
          + replicate          = 0
          + size               = "20G"
          + slot               = 0
          + ssd                = 0
          + storage            = "cephpool01"
          + storage_type       = (known after apply)
          + type               = "scsi"
          + volume             = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }
    }
...

Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + vm_ipv4_addresses = {
      + swarm-manager1 = (known after apply)
      + swarm-manager2 = (known after apply)
      + swarm-worker1  = (known after apply)
      + swarm-worker2  = (known after apply)
      + swarm-worker3  = (known after apply)
    }

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
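As the note suggests, the plan can be written to a file and then applied verbatim, guaranteeing that only the reviewed actions run:

terraform plan -out=tfplan
terraform apply tfplan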

If everything looks right, apply the changes.

terraform apply

...
Plan: 5 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + vm_ipv4_addresses = {
      + swarm-manager1 = (known after apply)
      + swarm-manager2 = (known after apply)
      + swarm-worker1  = (known after apply)
      + swarm-worker2  = (known after apply)
      + swarm-worker3  = (known after apply)
    }

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_vm_qemu.virtual_machines["swarm-test-05"]: Creating...
proxmox_vm_qemu.virtual_machines["swarm-test-01"]: Creating...
proxmox_vm_qemu.virtual_machines["swarm-test-04"]: Creating...
proxmox_vm_qemu.virtual_machines["swarm-test-02"]: Creating...
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Creating...
proxmox_vm_qemu.virtual_machines["swarm-test-05"]: Still creating... [10s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-01"]: Still creating... [10s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-02"]: Still creating... [10s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-04"]: Still creating... [10s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Still creating... [10s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-05"]: Still creating... [20s elapsed]
...
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Still creating... [3m40s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-01"]: Creation complete after 3m41s [id=srv-pve1/qemu/104]
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Still creating... [3m50s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Still creating... [4m0s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Still creating... [4m10s elapsed]
proxmox_vm_qemu.virtual_machines["swarm-test-03"]: Creation complete after 4m17s [id=srv-pve1/qemu/107]

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Outputs:

vm_ipv4_addresses = {
  "swarm-manager1" = "192.168.0.31"
  "swarm-manager2" = "192.168.0.32"
  "swarm-worker1" = "192.168.0.33"
  "swarm-worker2" = "192.168.0.34"
  "swarm-worker3" = "192.168.0.35"
}

To accept the SSH host key fingerprints automatically, use the ssh-keyscan tool:

┌─( daffin@srv-ssh ) - ( 33 files, 56K ) - ( ~/ansible )
└─> ssh-keyscan -f swarm_host >> ~/.ssh/known_hosts
# 192.168.0.31:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.31:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.31:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.32:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.32:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.32:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.33:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.33:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.33:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.34:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.34:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.34:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.35:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.35:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7
# 192.168.0.35:22 SSH-2.0-OpenSSH_9.0p1 Ubuntu-1ubuntu8.7

where swarm_host is a file with the list of IP addresses:

cat swarm_host
192.168.0.31
192.168.0.32
192.168.0.33
192.168.0.34
192.168.0.35
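
Rather than maintaining swarm_host by hand, it can be generated from the Terraform output (a sketch; assumes jq is installed and the directories used earlier):

cd ~/terraform/proxmox/
terraform output -json vm_ipv4_addresses | jq -r '.[]' > ~/ansible/swarm_host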

Now, during frequent experiments with terraform apply and terraform destroy, you will not need to accept a new fingerprint every time.
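
Alternatively, strict host key checking can be relaxed for this lab range only via ~/.ssh/config (convenient for throwaway VMs, not for anything production):

# ~/.ssh/config (lab address range only)
Host 192.168.0.3?
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null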
